Search Results (4,361)

Search Parameters:
Keywords = hyperspectral image

43 pages, 15122 KB  
Article
CloudAHSI: A Hyperspectral Dataset for Cloud Segmentation from GF-5 AHSI
by Yuanyuan Jia, Siwei Zhao, Xuanbin Liu and Yinnian Liu
Remote Sens. 2026, 18(9), 1269; https://doi.org/10.3390/rs18091269 (registering DOI) - 22 Apr 2026
Abstract
Cloud detection is essential for optical remote sensing data preprocessing. However, hyperspectral cloud detection datasets remain scarce, suffering from issues such as limited spectral coverage, small annotation scales, and a lack of scene diversity, which hinders the development of hyperspectral cloud detection algorithms. To address this, this paper constructs CloudAHSI—a multi-source hyperspectral cloud detection dataset for global complex scenes—based on the Advanced Hyperspectral Imager (AHSI) aboard the GF-5 01 satellite. The dataset comprises 45 original scenes and enhanced sub-scenes, achieving full-spectrum coverage from 400 to 2500 nm. Through a semi-supervised annotation framework combining “spectral prior-based rough labeling and manual refinement,” the dataset provides pixel-level labels for thick clouds, thin clouds, and non-cloud areas, with scenes further categorized by cloud coverage and primary land cover types. Experiments demonstrate that CloudAHSI effectively supports deep learning models in cloud detection tasks over complex surface backgrounds, particularly showing significant data value in the detection and evaluation of thin clouds, thereby meeting multi-level cloud detection requirements ranging from pixel segmentation to scene understanding. The release of this dataset provides a critical data foundation for overcoming spectral confusion bottlenecks in hyperspectral cloud detection and advancing the utilization of full-spectrum remote sensing information. Full article
(This article belongs to the Section Remote Sensing Image Processing)
25 pages, 11923 KB  
Article
CADR-BL: Class-Adaptive Dictionary Reconstruction with Broad Learning for Few-Shot Hyperspectral Image Classification
by Ziwei Li, Jiali Guo, Weizhen Zhang, Mengya Han, Zhenqiang Xu, Baowei Zhang, Ning Li, Weiran Luo, Menglei Xie and Jianzhong Guo
Remote Sens. 2026, 18(9), 1263; https://doi.org/10.3390/rs18091263 - 22 Apr 2026
Abstract
Hyperspectral image (HSI) classification in few-shot scenarios faces two core challenges. Limited samples and high spectral similarity lead to insufficient inter-class feature discriminability, and commonly used deep models suffer from the risk of overfitting. To address these problems, this paper proposes a Class-Adaptive Dictionary Reconstruction with Broad Learning (CADR-BL) method. Specifically, the method constructs an exclusive adaptive dictionary for each category and adopts an alternating minimization strategy to achieve sparse reconstruction of intra-class pixels, thereby enhancing intra-class spectral consistency and suppressing inter-class interference. On this basis, an improved Hyperspectral Broad Learning (HS-BL) model is introduced to efficiently classify the reconstructed features. Random feature mapping and closed-form solutions of output weights are utilized to alleviate overfitting in few-shot learning. Experiments conducted on three benchmark datasets, namely Indian Pines, Salinas, and WHU-Hi-HanChuan, show that CADR-BL outperforms several mainstream few-shot classification methods in terms of overall accuracy, average accuracy, and Kappa coefficient. Notably, CADR-BL maintains robust performance even with extremely limited training samples, and is less sensitive to variations in sample size than other comparative methods, demonstrating strong generalization ability. The proposed method provides a reliable technical reference for few-shot HSI classification in applications such as precision agriculture, environmental monitoring, and resource exploration. Full article
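The "closed-form solutions of output weights" mentioned above are, in broad learning systems, typically a ridge-regularised least-squares readout over a fixed random feature mapping. Below is a minimal numpy sketch of that idea; the toy data and the `closed_form_output_weights` helper are illustrative assumptions, not the paper's CADR-BL code.

```python
import numpy as np

def closed_form_output_weights(A, Y, lam=1e-3):
    """Ridge-regularised least squares: W = (A^T A + lam*I)^-1 A^T Y.

    A: (n_samples, n_features) random feature map of the inputs
    Y: (n_samples, n_classes) one-hot labels
    """
    d = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ Y)

# Toy problem: two well-separated classes, a fixed random tanh feature map.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
Y = np.repeat(np.eye(2), 20, axis=0)           # one-hot labels
A = np.tanh(X @ rng.normal(size=(2, 16)))      # random feature mapping (no training)
W = closed_form_output_weights(A, Y)           # closed form, no gradient descent
accuracy = ((A @ W).argmax(axis=1) == Y.argmax(axis=1)).mean()
```

Because only `W` is solved for, there is no iterative optimisation to overfit with, which is the overfitting-mitigation argument the abstract makes for few-shot settings.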
23 pages, 7993 KB  
Article
A Pyramid-Enhanced Swin Transformer for Robust Hyperspectral–Multispectral Image Fusion and Super-Resolution
by Yu Lu, Lin Hu, Jiankai Hu, Shu Gan, Xiping Yuan, Wang Li and Hailong Zhao
Remote Sens. 2026, 18(8), 1255; https://doi.org/10.3390/rs18081255 - 21 Apr 2026
Abstract
Due to the inherent limitations of both hyperspectral and multispectral imagery, balancing high spatial resolution with high spectral fidelity has become one of the fundamental challenges in remote sensing image processing. A prevailing strategy is to fuse these two types of data to reconstruct images that jointly preserve their respective advantages. However, existing reconstruction approaches still suffer from complex coupling between spatial and spectral information, and limited feature extraction capabilities. To address these issues, this study proposes PMSwinNet (Pyramid Multi-scale Swin Transformer Network), a novel architecture that integrates pyramid-based feature enhancement with Transformer mechanisms. The PMSwinNet incorporates multi-scale pyramid feature fusion and window-based self-attention. Through a progressive multi-stage design and three complementary components—feature extraction and reconstruction modules—the Transformer branch leverages window partitioning and shifting operations to capture long-range spatial dependencies and local contextual cues, while the pyramid features extract both global and local information across multiple spatial scales. In addition, a high-frequency branch is introduced, which employs lightweight convolutions to enhance edges, textures, and other high-frequency details, effectively suppressing blurring and artifacts during reconstruction. Experimental evaluations on multiple public hyperspectral datasets demonstrate that the PMSwinNet outperforms state-of-the-art methods, particularly in terms of detail preservation, spectral distortion suppression, and robustness. Full article
19 pages, 5438 KB  
Article
Chlorophyll-a Retrieval in Turbid Inland Waters Using BC-1A Multispectral Observations: A Case Study of Taihu Lake
by Wen Jiang, Qiyun Guo, Chen Cao and Shijie Liu
Sensors 2026, 26(8), 2535; https://doi.org/10.3390/s26082535 - 20 Apr 2026
Abstract
Turbid Class II inland waters such as Taihu Lake exhibit a “spectral uplift” effect driven by suspended particulate matter (SPM) scattering and colored dissolved organic matter (CDOM) absorption, which can obscure chlorophyll-a (Chl-a) signals in the visible–red-edge region and challenge retrieval under small-sample, collinear feature settings. Using multispectral observations from the BC-1A satellite (carrying the Lightweight Hyperspectral Remote Sensing Imager, LHRSI) and synchronous satellite–ground in situ measurements acquired over Taihu Lake in late autumn, this study proposes Chl-a-oriented PCA–RF (COP-RF), a leakage-safe inversion framework integrating correlation screening, principal component analysis (PCA), and random forest (RF) regression. Candidate band-combination features are generated, and PCA is applied for orthogonal compression to mitigate collinearity before RF learning. A stratified five-fold cross-validation based on Chl-a quantile bins is adopted, with screening, standardization, and PCA fitted only on training folds. COP-RF achieves stable performance under the current dataset (R² = 0.671, RMSE = 1.80 μg/L, MAE = 1.25 μg/L). Spatial inversion shows higher Chl-a near shores and bays and lower values in the lake center, consistent with Sentinel-2 hotspot ranks. Full article
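The leakage-safe protocol described above, in which standardization and PCA are fitted only on training folds, can be sketched as follows. This is an illustrative numpy reconstruction under assumed data shapes, not the COP-RF implementation.

```python
import numpy as np

def fit_pca(X_train, n_components):
    """Fit PCA on the training fold only, so no test-fold statistics leak in."""
    mean = X_train.mean(axis=0)
    # Rows of Vt are the principal axes, ordered by explained variance.
    _, _, Vt = np.linalg.svd(X_train - mean, full_matrices=False)
    return mean, Vt[:n_components]

def apply_pca(X, mean, components):
    """Project any fold using statistics estimated from the training fold."""
    return (X - mean) @ components.T

# Hypothetical collinear band-combination features for train/test folds.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(40, 10))
X_test = rng.normal(size=(10, 10))
mean, comps = fit_pca(X_train, n_components=3)
Z_train = apply_pca(X_train, mean, comps)
Z_test = apply_pca(X_test, mean, comps)   # transformed with train-fold statistics only
```

The orthogonal components decorrelate the compressed features, which is what mitigates the collinearity the abstract describes before the RF regressor is trained.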
(This article belongs to the Section Remote Sensors)
17 pages, 5384 KB  
Review
Hyperspectral Sensing Enabled by Optics-Free Sensor Architectures
by Yicheng Wang, Xueyi Wang, Xintong Guo and Yining Mu
Nanomanufacturing 2026, 6(2), 8; https://doi.org/10.3390/nanomanufacturing6020008 - 20 Apr 2026
Abstract
Hyperspectral sensing allows for the capture of spatially resolved spectral data, a capability critical for applications spanning from remote sensing to biomedical diagnostics. Nevertheless, the widespread adoption of this technology is hindered by the bulk and complexity of traditional systems based on diffractive optics. To overcome these hurdles, substantial research efforts have been dedicated to system miniaturization via component scaling and computational imaging. This review outlines the technological progression of compact hyperspectral imaging, ranging from miniaturized dispersive elements and tunable filters to computational snapshot designs using optical multiplexing. Although these approaches decrease system volume, they generally treat the sensor as a passive intensity recorder requiring external encoding. Therefore, we focus here on the rising paradigm of sensor-level integration made possible by nanomanufacturing. We examine optics-free architectures where spectral discrimination is embedded directly into the pixel, distinguishing between pixel-level nanophotonic filtering and intrinsic material-based selectivity. We specifically highlight emerging platforms such as compositionally engineered and cavity-enhanced perovskites, as well as electrically tunable organic or two-dimensional (2D) material heterostructures. To conclude, this review discusses persistent challenges regarding fabrication uniformity and stability, providing an outlook on the future of scalable and fully integrated hyperspectral vision systems. Full article
26 pages, 16144 KB  
Article
Temperature Determination and Scene Change Artifact Mitigation When Using Fourier-Transform Spectroscopy on Targets with Time-Varying Temperature
by Kody A. Wilson, Michael L. Dexter, Benjamin F. Akers and Anthony L. Franz
Sensors 2026, 26(8), 2512; https://doi.org/10.3390/s26082512 - 18 Apr 2026
Abstract
Fourier-transform spectroscopy is a widely used technique for determining the spectral and thermal properties of a target. However, target temperature variations during measurement can compromise the spectral accuracy: temperature fluctuations induce oscillations superimposed on the target spectrum, and these oscillations, referred to as scene-change artifacts, degrade the spectral accuracy. The literature is divided, with theoretical predictions suggesting negligible artifacts and growing experimental evidence reporting significant artifacts. This paper presents a theory and experimental validation of scene-change artifacts originating from target temperature variations. Traditionally, the interferogram offset is assumed to be constant, an invalid assumption for a changing scene. The error is subsequently Fourier-transformed, producing scene-change artifacts. Accurately estimating the truth spectrum is often challenging. To address this, we propose the signal-to-scene-change-artifact ratio, a metric that quantifies the impact of scene-change artifacts without knowledge of the truth spectrum. The artifacts are eliminated by estimating the interferogram offset using smooth offset correction. Furthermore, the interferogram offset enables determination of the target’s temperature with greater accuracy and increased temporal resolution compared to using the spectra. These results demonstrate that a smooth offset correction can improve the spectrum and temperature accuracy on thermally variant targets when measured with a Fourier-transform spectrometer. Full article
(This article belongs to the Section Sensing and Imaging)
23 pages, 5622 KB  
Article
Principal Component-Based Spectral Standardization for Optical Spectrometers
by Qiguang Yang, Xu Liu, Wan Wu, Rajendra Bhatt, Yolanda Shea, Xiaozhen Xiong, Ming Zhao, Paul Smith, Greg Kopp and Peter Pilewskie
Remote Sens. 2026, 18(8), 1209; https://doi.org/10.3390/rs18081209 - 17 Apr 2026
Abstract
A Principal Component-Based Spectral Standardization (PCSS) method was developed to standardize hyperspectral radiance spectra onto a fixed wavelength grid. This enables the direct comparison of radiance or reflectance spectra across different spatial pixels of an imaging spectrometer or between different instruments. The method was validated using simulated Climate Absolute Radiance and Refractivity Observatory (CLARREO) Pathfinder (CPF) spectra. The PCSS approach demonstrated high accuracy: the average root-mean-square uncertainty across all CPF channels remained below 0.07%, with maximum individual-channel uncertainties under 1%. Compared to methods based on spectral interpolation, PCSS produced significantly lower biases with tighter error distributions, particularly in spectrally rich regions. Measured Hyper Spectral Imager for Climate Science (HySICS) balloon data provided further validation. PCSS successfully estimated wavelength shifts that closely matched measured data, even when utilizing approximated Jacobians, demonstrating the method’s robustness. Because it relies on a pre-computed lookup table for model parameters, PCSS bypasses the need for intensive radiative transfer calculations, making it highly computationally efficient. Beyond CPF, this method can easily be adapted for other hyperspectral sensors by substituting their respective wavelength grids and instrument line shape functions, offering a powerful tool to improve cross-calibration between different satellite sensors. Full article
(This article belongs to the Section Atmospheric Remote Sensing)
23 pages, 16273 KB  
Article
Design of a High Dynamic Range Acquisition System for Airborne VNIR Push-Broom Hyperspectral Camera
by Haoyang Feng, Yueming Wang, Daogang He, Changxing Zhang and Chunlai Li
Sensors 2026, 26(8), 2474; https://doi.org/10.3390/s26082474 - 17 Apr 2026
Abstract
Achieving a high frame rate and high dynamic range (HDR) under complex illumination remains a significant challenge for airborne push-broom visible-near-infrared (VNIR) hyperspectral cameras. Problematic scenarios typically include high-contrast scenes, such as ocean whitecaps alongside deep water or concurrently sunlit and shadowed urban surfaces. To address this, a real-time HDR acquisition system based on a dual-gain complementary metal–oxide–semiconductor (CMOS) image sensor is proposed. Specifically, a four-pixel HDR fusion method is developed, utilizing an optical calibration setup to accurately determine the fusion parameters and configure the spectral region of interest (ROI) for reduced data volume. The complete workflow, encompassing spectral–spatial four-pixel binning and piecewise dual-gain fusion, is implemented on a field-programmable gate array (FPGA) using a dual-port RAM-based buffering strategy and a low-latency five-stage pipeline. Experimental results demonstrate a minimal processing latency of 0.0183 ms and a maximum frame rate of 290 frames/s. By extending the output bit depth from 11 to 15 bits, the system achieves a digital dynamic range of the final output of 2.03 × 104:1, representing a 9.58-fold improvement over the original low-gain data. The fused HDR data maintain high linearity and good spectral fidelity, with spectral angle mapper (SAM) values at the 10−3 level. Featuring a compact and low-power design, this system provides a practical engineering solution for efficient airborne VNIR hyperspectral acquisition. Full article
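The piecewise dual-gain fusion can be illustrated with a scalar sketch: below the high-gain channel's saturation point the high-gain sample is kept for its better SNR, and above it the low-gain sample is rescaled into the same radiometric units, extending the dynamic range. The real system derives its fusion parameters from optical calibration and runs in an FPGA pipeline; the gain ratio, saturation threshold, and `fuse_dual_gain` helper below are hypothetical.

```python
def fuse_dual_gain(low, high, gain_ratio, high_sat=2047):
    """Piecewise dual-gain HDR fusion for one pixel (illustrative, 11-bit inputs).

    low, high:  the same pixel read out through the low- and high-gain channels
    gain_ratio: calibrated ratio between the two channels' responsivities
    high_sat:   saturation code of the 11-bit high-gain channel
    """
    if high < high_sat:
        return high                 # high-gain channel still linear: keep it
    return low * gain_ratio         # saturated: rescaled low-gain takes over

# Hypothetical gain ratio of 16 between the two channels.
print(fuse_dual_gain(low=100, high=1600, gain_ratio=16))   # high-gain value kept
print(fuse_dual_gain(low=500, high=2047, gain_ratio=16))   # rescaled low-gain value
```

Rescaling by the gain ratio is what pushes the fused output beyond the native 11-bit code range, consistent with the 11-to-15-bit extension reported above.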
(This article belongs to the Section Sensing and Imaging)
25 pages, 18342 KB  
Article
Parameter- and Compute-Efficient Spatial–Spectral Transformer Framework for Pixel-Level Classification of Foreign Plastic Objects on Broiler Meat Using NIR–Hyperspectral Imaging
by Zirak Khan, Seung-Chul Yoon and Suchendra M. Bhandarkar
Sensors 2026, 26(8), 2459; https://doi.org/10.3390/s26082459 - 16 Apr 2026
Abstract
Foreign plastic objects (FPOs) in poultry products present significant food safety risks and cause economic losses for the industry. Conventional detection methods, including X-rays and color imaging, often struggle to identify small or low-density plastics. Hyperspectral imaging (HSI) offers both spatial and spectral information but suffers from high computational cost when applied for FPO identification in industrial environments. This study introduces a parameter-efficient and computationally efficient spatial–spectral transformer framework for pixel-level classification of FPOs on broiler meat using NIR-HSI (1000–1700 nm). The framework integrates three innovations: (1) center-focused linear attention (CFLA) to reduce computational complexity from O(n²) to O(n); (2) patch-local mixed-axis 2D rotary position embedding to preserve geometric relationships within hyperspectral patches; and (3) low-rank factorized projection (LRP) matrices to reduce parameters by approximately 50% within projection weight matrices. The framework was trained and evaluated on a dataset of 52 chicken fillets, comprising 295,340 labeled target hyperspectral pixels from 12 common polymer types and 1 fillet class. The model achieved 99.39% overall accuracy, 99.57% average accuracy, and a 99.31 Kappa coefficient across 248,540 test pixels. Per-class precision, recall, and F1-score exceeded 98.05%, 98.59%, and 98.76%, respectively, across all classes. Efficiency analyses showed an 83% reduction in multiply–accumulate operations (MACs), a 22% reduction in trainable parameters, and a model size reduction from 1.72 MB to 1.35 MB relative to the baseline configuration. These gains also translated into practical inference benefits, with the final model achieving a throughput of 212,971.5 hyperspectral patch cubes/s and a 4.19× speedup over the baseline. These results demonstrate that the proposed framework combines strong classification performance with high efficiency, supporting high-throughput inference for real-time monitoring and enabling contamination source traceability and preventive quality control in industrial poultry processing. The approach provides a benchmark for applying transformer-based models to food safety inspection tasks. Full article
(This article belongs to the Section Sensing and Imaging)
31 pages, 7470 KB  
Article
Improved Quantification of Methane Point-Source Emissions from Hyperspectral Imagery Using a Spectrally Corrected Levenberg–Marquardt Matched Filter
by Zhuo He, Yan Ma, Zhengqiang Li, Ying Zhang, Cheng Fan, Lili Qie, Zihan Zhang, Zheng Shi, Tong Lu, Yuanyuan Gao, Xingyu Yao, Xiaofan Li, Chenwei Lan and Qian Yao
Remote Sens. 2026, 18(8), 1195; https://doi.org/10.3390/rs18081195 - 16 Apr 2026
Abstract
Spaceborne hyperspectral imaging spectrometers enable refined retrieval and quantification of methane point-source emissions. However, the conventional matched filter (MF) systematically underestimates methane enhancements under high-concentration conditions and remains sensitive to spectral inconsistencies across varying observation scenarios. To address these limitations, we improve MF-based retrieval from two aspects: the observation model and the unit absorption spectrum (UAS) representation. First, a Levenberg–Marquardt matched filter (LMMF) is developed by extending the MF framework to a nonlinear retrieval formulation while retaining its data-driven and background-statistics-based characteristics. Specifically, the exponential absorption term is preserved, and methane enhancement is iteratively solved in the nonlinear domain, enabling a more physically consistent retrieval without requiring precise external prior knowledge. Building upon this framework, a spectrally corrected LMMF (SC-LMMF) is further proposed by introducing a lookup-table-based dynamic UAS correction to account for variations in observation geometry, surface elevation, and atmospheric state. Comprehensive validation using idealized and noise-perturbed simulations, end-to-end simulations, and controlled-release experiments demonstrates that the LMMF mitigates high-concentration underestimation relative to the MF. The SC-LMMF further reduces cross-scene systematic biases, shifting retrievals toward a near 1:1 relationship. In controlled-release experiments, the SC-LMMF increased the coefficient of determination (R2) by approximately 50% while reducing the root mean square error (RMSE) and mean absolute error (MAE) by approximately 70% relative to the MF. Overall, the proposed framework enhances the robustness and quantitative consistency of methane point-source retrievals across multisource hyperspectral satellite observations. Full article
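The conventional matched filter (MF) that this work extends has a compact closed form: each pixel's enhancement is the projection of its background-subtracted spectrum onto the covariance-whitened target signature. The numpy sketch below shows the textbook MF baseline on synthetic data, not the proposed LMMF or SC-LMMF; the data shapes and the regularisation constant are assumptions.

```python
import numpy as np

def matched_filter(X, uas):
    """Classical per-pixel matched-filter gas-enhancement estimate.

    X:   (n_pixels, n_bands) radiance samples
    uas: (n_bands,) unit absorption spectrum (UAS) of the gas
    """
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-8 * np.eye(X.shape[1])  # regularised
    t = mu * uas                     # target signature: background scaled by the UAS
    w = np.linalg.solve(cov, t)      # covariance-whitened filter direction
    return (X - mu) @ w / (t @ w)

# Synthetic check: a flat background plus one pixel with an injected plume.
rng = np.random.default_rng(2)
n_pix, n_bands = 500, 20
mu = np.full(n_bands, 1.0)
uas = rng.uniform(0.05, 0.15, n_bands)
X = mu + rng.normal(0, 0.01, (n_pix, n_bands))
X[0] += 0.5 * mu * uas               # inject an enhancement of 0.5 into pixel 0
alpha = matched_filter(X, uas)
```

Note the linearised signature `mu * uas`: because the true absorption term is exponential, this linear model is what produces the high-concentration underestimation the abstract attributes to the MF.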
19 pages, 1655 KB  
Article
Development of a Method for Detecting Responses of Different Oat Cultivars to Fusarium Head Blight Infection in Greenhouse Conditions Using Hyperspectral Image Analysis
by Maksims Fiļipovičs, Jevgenija Ņečajeva, Pāvels Suskis and Jūratė Ramanauskienė
Agriculture 2026, 16(8), 878; https://doi.org/10.3390/agriculture16080878 - 15 Apr 2026
Abstract
Hyperspectral (HS) analysis was used to measure the dynamics of Fusarium head blight (FHB) disease severity on panicles of three oat cultivars, ‘Husky’, ‘Ivory’, and ‘Lelde’, under greenhouse conditions. Inoculation with Fusarium spp. spore material was conducted (i) on the seeds and (ii) on plants at the mid-flowering stage (BBCH 65). Disease development on oat panicles was assessed visually and imaged with an HS camera from the end of the flowering stage (BBCH 69) to the early–middle ripe stage (BBCH 83–85). To verify that FHB symptoms were caused by Fusarium spp. pathogens, a microbiological test was performed. At the end of the trial, mycotoxin analysis of the kernels was conducted. The collected HS data from diseased and control plant panicles were used to estimate the head blight index (HBI). Python-based software was developed to assess HBI at the pixel level. Both visual assessment and HS analysis confirmed statistically significant differences in disease severity between all treatment options. The highest disease severity results were obtained in the last disease assessment run (BBCH 83–85) for the inoculated head treatment. Microbiological test results confirmed that FHB symptoms in oat kernels were mostly caused by F. sporotrichioides. The correlation coefficient between the visually assessed FHB disease severity results and HS analysis results was 0.969. The correlation coefficient between T-2/HT-2 mycotoxins and HS disease severity results was 0.971, which suggests the potential for using HS analysis in field monitoring for mycotoxin content detection. Full article
(This article belongs to the Section Crop Protection, Diseases, Pests and Weeds)
34 pages, 6876 KB  
Article
A NIST-Traceable Lab-to-Sky Spectral and Radiometric Calibration for NASA’s High-Altitude Airborne Hyperspectral Pushbroom Imager for Cloud and Aerosol Research and Development (PICARD)
by Gary D. Hoffmann, Thomas Ellis, Haiping Su, Alok Shrestha, Julia A. Barsi, Roseanne Dominguez, Eric Fraim, James Jacobson, Steven Platnick, G. Thomas Arnold, Kerry Meyer and Jessica L. McCarty
Remote Sens. 2026, 18(8), 1168; https://doi.org/10.3390/rs18081168 - 14 Apr 2026
Abstract
The Pushbroom Imager for Cloud and Aerosol Research and Development (PICARD) visible through shortwave infrared imaging spectrometer was developed to carry a calibration laboratory environment to high altitudes, while also providing high-dynamic-range bright cloud-top radiance measurements across a field of view just under 50 degrees. The in-flight performance of this new spectroradiometer was validated in comparison to multiple reference data sources and targets using imagery collected aboard NASA’s ER-2 high-altitude aircraft during the Western Diversity Time Series (WDTS) airborne science campaign in April 2023 and the September 2024 Plankton, Aerosol, Cloud, and ocean Ecosystem (PACE) Postlaunch Airborne eXperiment (PACE-PAX), both operating out of southern California. PICARD measurements from flights over Railroad Valley Playa, Nevada, USA, were compared to high-resolution radiance spectra of the dry lakebed provided by the Radiometric Calibration Network (RadCalNet) Working Group. Direct comparison to satellite cloud radiometry was enabled by the ER-2 flying in coordination with simultaneous overpasses of the Terra, Aqua, and NOAA-20 Earth-observing satellites during WDTS and with the PACE observatory during PACE-PAX. To account for large spectral differences between incandescent laboratory sources and solar illumination, PICARD calibration relies on measurements using the Goddard Laser for Absolute Measurements of Radiance (GLAMR) to characterize and minimize spectral stray light from the instrument’s twin Offner grating spectrometers. Good agreement in comparison to reference measurements demonstrates PICARD’s ability to provide imagery for environmental science or for testing new sensor designs and retrieval algorithms for cloud and aerosol research with verified laboratory calibrations at high altitudes. Full article
19 pages, 2505 KB  
Article
Automated Label-Free Classification of Circulating Tumor Cells and White Blood Cells Using Hyperspectral Imaging and Deep Learning on Microfluidic SACA Chip System
by Shun-Chi Wu, Jon-Nan Chiu, Yi-Wen Chen, Chen-Hsi Hung, Mang Ou-Yang and Fan-Gang Tseng
Micromachines 2026, 17(4), 472; https://doi.org/10.3390/mi17040472 - 14 Apr 2026
Abstract
Circulating tumor cells (CTCs) are essential biomarkers for cancer prognosis, yet their extreme rarity and biological heterogeneity pose significant challenges for label-free detection. This study presents an automated, non-invasive classification framework integrating a self-assembly cell array (SACA) microfluidic chip with hyperspectral imaging (HSI) and deep learning. By utilizing the SACA chip’s 5 µm gap design, patient-derived blood samples were organized into a flattened monolayer, ensuring high-purity spectral acquisition by minimizing cell overlapping. We implemented two deep-learning pipelines: an Attention-Based Adaptive Spectral–Spatial Kernel ResNet (A2S2K-ResNet) for pixel-level feature extraction and a modified ResNet50 for structural image analysis. While spectral classification achieved ~80% accuracy for cultured cell lines, its performance on patient-derived CTCs was hindered by subtle spectral overlap with white blood cells (WBCs). To overcome this, a multi-band ensemble strategy using majority voting across seven optimized spectral bands (470–900 nm) was developed. This hybrid approach significantly enhanced detection robustness, achieving an overall accuracy of >93.5% and precision exceeding 92%. These results demonstrate that combining microfluidic spatial control with multi-band deep learning offers a reliable, label-free pipeline for clinical liquid biopsy and real-time cancer monitoring. Full article
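The multi-band majority-voting ensemble described above can be sketched in a few lines: each of the seven band-specific classifiers casts a vote, and the most common label wins. The class names and vote pattern below are hypothetical.

```python
from collections import Counter

def majority_vote(band_predictions):
    """Return the label predicted most often across per-band classifiers."""
    return Counter(band_predictions).most_common(1)[0][0]

# Seven hypothetical band-wise predictions (470-900 nm bands) for one cell.
print(majority_vote(["CTC", "WBC", "CTC", "CTC", "WBC", "CTC", "CTC"]))  # CTC
```

A single band with spectral overlap between CTCs and WBCs can flip one vote without flipping the ensemble, which is the robustness argument for voting across bands.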
(This article belongs to the Special Issue Microfluidic Chips for Biomedical Applications)
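The multi-band ensemble in this abstract reduces, per cell, seven independent band-level predictions to a single label by majority vote. A minimal sketch of that fusion step; the function name, label encoding (0 = WBC, 1 = CTC), and toy predictions are illustrative assumptions, not taken from the paper:

```python
from collections import Counter

def majority_vote(band_predictions):
    """Fuse per-band class labels by majority vote.

    band_predictions: one list of integer labels per spectral band
    (e.g. 0 = white blood cell, 1 = CTC), one label per cell.
    Returns the winning label for each cell.
    """
    n_cells = len(band_predictions[0])
    fused = []
    for cell in range(n_cells):
        votes = Counter(band[cell] for band in band_predictions)
        fused.append(votes.most_common(1)[0][0])
    return fused

# Seven hypothetical per-band classifiers voting on three cells:
preds = [
    [1, 0, 0],
    [1, 0, 1],
    [1, 1, 0],
    [0, 0, 0],
    [1, 0, 0],
    [1, 1, 0],
    [1, 0, 0],
]
print(majority_vote(preds))  # [1, 0, 0]
```

Because each band votes independently, a borderline spectrum misclassified in one or two bands is outvoted by the remaining bands, which is the robustness gain the abstract reports.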
23 pages, 3481 KB  
Article
Segment-Based Spectral Characterisation of Municipal Solid Waste in African Landfills Using HISUI Hyperspectral Imagery
by Leeme Arther Baruti, Yasuhiro Sugisaki, Hirofumi Nakayama and Takayuki Shimaoka
Remote Sens. 2026, 18(8), 1156; https://doi.org/10.3390/rs18081156 - 13 Apr 2026
Abstract
Municipal solid waste management remains a major environmental challenge across Africa, where rapid urbanisation has outpaced formal waste infrastructure and routine landfill monitoring is often absent. Rather than proposing a classification algorithm, this study investigates whether spaceborne hyperspectral imagery can reveal robust spectral fingerprints of landfill surfaces suitable for automated detection. Eight landfill sites across seven African countries were analysed using Hyperspectral Imager Suite (HISUI) data (400–2500 nm, 20 m resolution). A segment-based framework was applied after masking low signal-to-noise regions, combining brightness analysis, L2-normalised spectral shape comparison using Spectral Contrast Angle (SCA), and derivative spectroscopy across 109,275 pixels from six land-cover classes. Brightness-based discrimination exhibited strong inter-site variability, limiting its general applicability. In contrast, shape-based metrics revealed consistent separability between landfill-active surfaces and soil or urban classes in the shortwave infrared (SWIR), particularly within the 1538–1750 nm and 2075–2474 nm regions. Derivative analysis further identified stable extrema near approximately 1700 nm and 2200–2300 nm across all sites, indicating reproducible curvature-based fingerprints associated with exposed municipal solid waste. These results demonstrate that landfill surfaces exhibit intrinsic SWIR spectral characteristics that persist across diverse African environments. This study establishes the first multi-site hyperspectral library of African landfill surfaces, providing a physical basis for developing generalised landfill detection frameworks. Full article
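The L2-normalised shape comparison described here amounts to measuring the angle between spectra treated as unit vectors. A minimal sketch of that computation (exact SCA definitions vary slightly in the literature; this shows the basic brightness-invariant angle, and the reflectance values are hypothetical):

```python
import math

def spectral_contrast_angle(s1, s2):
    """Angle (radians) between two spectra after L2 normalisation.

    Because both spectra are reduced to unit vectors, the angle
    reflects spectral *shape* only and is insensitive to the overall
    brightness differences that vary strongly between sites.
    """
    n1 = math.sqrt(sum(x * x for x in s1))
    n2 = math.sqrt(sum(x * x for x in s2))
    dot = sum(a * b for a, b in zip(s1, s2))
    # Clamp for numerical safety before taking the arccosine.
    cos_angle = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.acos(cos_angle)

# A brightness-scaled copy has identical shape, so the angle is ~0:
soil = [0.12, 0.18, 0.25, 0.31]
brighter_soil = [2.0 * x for x in soil]
print(spectral_contrast_angle(soil, brighter_soil))  # ~0.0
```

This invariance is why shape-based comparison generalised across the eight sites while raw brightness thresholds did not.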
20 pages, 5303 KB  
Article
LGDAF-Net: A Lightweight CNN–Transformer Framework for Cross-Domain Few-Shot Hyperspectral Image Classification
by Guang Yang, Jiaoli Fang, Daming Zhu and Xiaoqing Zuo
Electronics 2026, 15(8), 1606; https://doi.org/10.3390/electronics15081606 - 12 Apr 2026
Abstract
Cross-domain few-shot hyperspectral image (HSI) classification is challenging due to limited labeled samples and distribution shifts across sensors and acquisition scenes, which often degrade feature representation and classification performance. This study proposes a lightweight hierarchical CNN–Transformer framework, termed LGDAF-Net (Lightweight Global and Local Dual Attention Fusion Network), for effective cross-domain few-shot HSI classification. The framework progressively enhances spectral–spatial representation through three stages: spectral–spatial feature recalibration, local spatial structure perception, and global contextual modeling. Specifically, a spectral–spatial dual-attention enhancement module (SESA) is introduced to emphasize informative spectral responses and suppress redundancy. A Local Attention Spatial Perception Module (LASPM) is designed to capture fine-grained spatial structures, while a lightweight Transformer-based Global Attention Context Modeling Module (GACM) models long-range spatial dependencies. In addition, kernel triplet loss and domain adversarial learning are incorporated to improve feature discrimination and promote cross-domain feature alignment. Experimental results on three benchmark datasets demonstrate that the proposed method achieves competitive performance compared with existing methods. Full article
(This article belongs to the Special Issue AI-Driven Image Processing: Theory, Methods, and Applications)
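The kernel triplet loss mentioned in this abstract builds on the standard margin-based triplet objective. A plain (non-kernel) sketch of that base form, assuming squared-Euclidean distances and illustrative 2-D embeddings; the paper's kernel variant instead computes distances in a kernel-induced feature space:

```python
def triplet_loss(anchor, positive, negative, margin=1.0):
    """Margin-based triplet loss on embedding vectors.

    Pulls the anchor toward a same-class (positive) embedding and
    pushes it at least `margin` away from a different-class
    (negative) one, sharpening class separation with few labels.
    """
    def sq_dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return max(0.0, sq_dist(anchor, positive) - sq_dist(anchor, negative) + margin)

# Well-separated triplet -> zero loss; a hard negative -> positive loss:
print(triplet_loss([0.0, 0.0], [0.1, 0.0], [2.0, 0.0]))  # 0.0
print(triplet_loss([0.0, 0.0], [0.1, 0.0], [0.5, 0.0]))  # ~0.76
```

Only triplets whose negative falls inside the margin contribute gradient, which concentrates training on the hardest cross-domain confusions.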