
Search Results (843)

Search Parameters:
Keywords = hyperspectral imaging system

17 pages, 5384 KB  
Review
Hyperspectral Sensing Enabled by Optics-Free Sensor Architectures
by Yicheng Wang, Xueyi Wang, Xintong Guo and Yining Mu
Nanomanufacturing 2026, 6(2), 8; https://doi.org/10.3390/nanomanufacturing6020008 - 20 Apr 2026
Abstract
Hyperspectral sensing allows for the capture of spatially resolved spectral data, a capability critical for applications spanning from remote sensing to biomedical diagnostics. Nevertheless, the widespread adoption of this technology is hindered by the bulk and complexity of traditional systems based on diffractive optics. To overcome these hurdles, substantial research efforts have been dedicated to system miniaturization via component scaling and computational imaging. This review outlines the technological progression of compact hyperspectral imaging, ranging from miniaturized dispersive elements and tunable filters to computational snapshot designs using optical multiplexing. Although these approaches decrease system volume, they generally treat the sensor as a passive intensity recorder requiring external encoding. Therefore, we focus here on the rising paradigm of sensor-level integration made possible by nanomanufacturing. We examine optics-free architectures where spectral discrimination is embedded directly into the pixel, distinguishing between pixel-level nanophotonic filtering and intrinsic material-based selectivity. We specifically highlight emerging platforms such as compositionally engineered and cavity-enhanced perovskites, as well as electrically tunable organic or two-dimensional (2D) material heterostructures. To conclude, this review discusses persistent challenges regarding fabrication uniformity and stability, providing an outlook on the future of scalable and fully integrated hyperspectral vision systems. Full article

23 pages, 16273 KB  
Article
Design of a High Dynamic Range Acquisition System for Airborne VNIR Push-Broom Hyperspectral Camera
by Haoyang Feng, Yueming Wang, Daogang He, Changxing Zhang and Chunlai Li
Sensors 2026, 26(8), 2474; https://doi.org/10.3390/s26082474 - 17 Apr 2026
Viewed by 89
Abstract
Achieving a high frame rate and high dynamic range (HDR) under complex illumination remains a significant challenge for airborne push-broom visible-near-infrared (VNIR) hyperspectral cameras. Problematic scenarios typically include high-contrast scenes, such as ocean whitecaps alongside deep water or concurrently sunlit and shadowed urban surfaces. To address this, a real-time HDR acquisition system based on a dual-gain complementary metal–oxide–semiconductor (CMOS) image sensor is proposed. Specifically, a four-pixel HDR fusion method is developed, utilizing an optical calibration setup to accurately determine the fusion parameters and configure the spectral region of interest (ROI) for reduced data volume. The complete workflow, encompassing spectral–spatial four-pixel binning and piecewise dual-gain fusion, is implemented on a field-programmable gate array (FPGA) using a dual-port RAM-based buffering strategy and a low-latency five-stage pipeline. Experimental results demonstrate a minimal processing latency of 0.0183 ms and a maximum frame rate of 290 frames/s. By extending the output bit depth from 11 to 15 bits, the system achieves a digital dynamic range of the final output of 2.03 × 10⁴:1, representing a 9.58-fold improvement over the original low-gain data. The fused HDR data maintain high linearity and good spectral fidelity, with spectral angle mapper (SAM) values at the 10⁻³ level. Featuring a compact and low-power design, this system provides a practical engineering solution for efficient airborne VNIR hyperspectral acquisition. Full article
(This article belongs to the Section Sensing and Imaging)
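The spectral fidelity figure quoted above uses the spectral angle mapper (SAM). As a point of reference, SAM between two spectra is simply the angle between them treated as vectors; the sketch below is illustrative (hypothetical spectra, not the authors' FPGA pipeline):

```python
import numpy as np

def spectral_angle(a: np.ndarray, b: np.ndarray) -> float:
    """Spectral Angle Mapper: angle (radians) between two spectra
    treated as vectors; insensitive to overall brightness scaling."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

ref = np.array([0.12, 0.34, 0.55, 0.41])            # hypothetical spectrum
fused = 1.8 * ref + np.array([0.0, 1e-4, -1e-4, 0.0])  # gain-scaled, tiny distortion
angle = spectral_angle(ref, fused)                  # small angle -> high fidelity
```

A pure gain change gives a zero angle, which is why SAM is a natural fidelity metric for dual-gain fusion: it isolates spectral-shape distortion from brightness rescaling.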
19 pages, 2505 KB  
Article
Automated Label-Free Classification of Circulating Tumor Cells and White Blood Cells Using Hyperspectral Imaging and Deep Learning on Microfluidic SACA Chip System
by Shun-Chi Wu, Jon-Nan Chiu, Yi-Wen Chen, Chen-Hsi Hung, Mang Ou-Yang and Fan-Gang Tseng
Micromachines 2026, 17(4), 472; https://doi.org/10.3390/mi17040472 - 14 Apr 2026
Viewed by 266
Abstract
Circulating tumor cells (CTCs) are essential biomarkers for cancer prognosis, yet their extreme rarity and biological heterogeneity pose significant challenges for label-free detection. This study presents an automated, non-invasive classification framework integrating a self-assembly cell array (SACA) microfluidic chip with hyperspectral imaging (HSI) and deep learning. By utilizing the SACA chip’s 5 µm gap design, patient-derived blood samples were organized into a flattened monolayer, ensuring high-purity spectral acquisition by minimizing cell overlapping. We implemented two deep-learning pipelines: an Attention-Based Adaptive Spectral–Spatial Kernel ResNet (A2S2K-ResNet) for pixel-level feature extraction and a modified ResNet50 for structural image analysis. While spectral classification achieved ~80% accuracy for cultured cell lines, its performance on patient-derived CTCs was hindered by subtle spectral overlap with white blood cells (WBCs). To overcome this, a multi-band ensemble strategy using majority voting across seven optimized spectral bands (470–900 nm) was developed. This hybrid approach significantly enhanced detection robustness, achieving an overall accuracy of >93.5% and precision exceeding 92%. These results demonstrate that combining microfluidic spatial control with multi-band deep learning offers a reliable, label-free pipeline for clinical liquid biopsy and real-time cancer monitoring. Full article
(This article belongs to the Special Issue Microfluidic Chips for Biomedical Applications)
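The multi-band ensemble described above fuses per-band classifier outputs by majority voting. A minimal sketch of that combination rule (the labels and per-band decisions below are hypothetical, not drawn from the paper's data):

```python
from collections import Counter

def majority_vote(per_band_labels):
    """Fuse one cell's per-band class predictions by majority vote."""
    return Counter(per_band_labels).most_common(1)[0][0]

# Seven hypothetical per-band decisions for a single cell,
# one per optimized spectral band:
votes = ["CTC", "WBC", "CTC", "CTC", "WBC", "CTC", "CTC"]
label = majority_vote(votes)
```

Using an odd number of bands (seven here) avoids exact ties in a two-class vote, so a single noisy band cannot flip the fused decision.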

23 pages, 4041 KB  
Article
Detection of Phosphorus Deficiency Using Hyperspectral Imaging for Early Characterization of Asymptomatic Growth and Photosynthetic Symptoms in Maize
by Sutee Kiddee, Chalongrat Daengngam, Surachet Wongarrayapanich, Jing Yi Lau, Acga Cheng and Lompong Klinnawee
Agronomy 2026, 16(8), 772; https://doi.org/10.3390/agronomy16080772 - 8 Apr 2026
Viewed by 1314
Abstract
Phosphorus (P) deficiency severely limits maize growth and yield, yet early detection remains challenging, as visible symptoms appear only after prolonged starvation. This study evaluated the capability of hyperspectral imaging (HSI) combined with machine learning to detect P deficiency in maize seedlings at both symptomatic and pre-symptomatic stages. Two greenhouse experiments were conducted: a long-term pot system under high and low P conditions and a short-term hydroponic experiment with three P concentrations of 500, 100, and 0 μmol/L phosphate (Pi). After long-term P deficiency, significant reductions in shoot biomass and Pi content were observed, while root biomass increased and nutrient profiles were altered. Hyperspectral signatures revealed distinct wavelength-specific differences across visible, red-edge, and near-infrared (NIR) regions, with P-deficient leaves showing lower reflectance in green and NIR regions but higher reflectance in the red band. A multilayer perceptron machine learning model achieved 99.65% accuracy in discriminating between P treatments. In the short-term experiment, P deficiency significantly reduced tissue Pi content within one week without affecting pigment composition or photosynthetic parameters. Despite the absence of visible symptoms, hyperspectral measurements detected subtle spectral changes, particularly in older leaves, enabling classification accuracies of 80.71–84.56% in the first week and 85.88–90.98% in the second week of P treatment. Conventional vegetation indices showed weak correlations with Pi content and failed to detect early P deficiency. These findings demonstrate that HSI combined with machine learning can effectively detect P deficiency before visible symptoms emerge, offering a non-destructive, rapid diagnostic tool for precision nutrient management in maize production systems. Full article
(This article belongs to the Special Issue Nutrient Enrichment and Crop Quality in Sustainable Agriculture)

14 pages, 2627 KB  
Article
Comparative Assessment of Hyperspectral Image Segmentation Algorithms for Fruit Defect Detection Under Different Illumination Conditions
by Anastasia Zolotukhina, Anton Sudarev, Georgiy Nesterov and Demid Khokhlov
J. Imaging 2026, 12(4), 160; https://doi.org/10.3390/jimaging12040160 - 8 Apr 2026
Viewed by 297
Abstract
This study presents a comparative analysis of hyperspectral image segmentation algorithms for fruit defect detection under different illumination conditions. The research evaluates the performance of four segmentation methods (Spectral Angle Mapper, Random Forest, Support Vector Machine, and Neural Network) using three distinct illumination modes (local, simultaneous and sequential). The experimental setup employed hyperspectral imaging to assess tomato fruit samples, with data acquisition performed across the 450–850 nm spectral range. Quantitative metrics, including accuracy, error rate, precision, recall, F1-score, and Intersection over Union (IoU), were used to evaluate algorithm performance. Key findings indicate that Random Forest demonstrated superior performance across most metrics, particularly under simultaneous illumination conditions. The highest accuracy was achieved by Random Forest under sequential illumination (0.9971), while the best combination of segmentation metrics was obtained under simultaneous illumination, with an F1-score of 0.8996 and an IoU of 0.8176. The Neural Network showed competitive results. The Spectral Angle Mapper proved sensitive to illumination variations but excelled in specific scenarios requiring minimal memory usage. By demonstrating that acquisition protocol optimization can substantially improve segmentation performance, our results support the development of accurate, non-contact, high-throughput inspection systems and contribute to reducing postharvest losses and improving supply chain quality control. Full article
(This article belongs to the Section Color, Multi-spectral, and Hyperspectral Imaging)
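For readers comparing the reported scores: precision, recall, F1 and IoU all follow from the same per-class confusion counts, and F1 and IoU are monotonically related. The counts below are made up for illustration, not taken from the study:

```python
def segmentation_scores(tp: int, fp: int, fn: int):
    """Per-class precision, recall, F1 and IoU from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)   # note the identity F1 = 2*IoU / (1 + IoU)
    return precision, recall, f1, iou

p, r, f1, iou = segmentation_scores(tp=900, fp=100, fn=100)
```

The identity in the comment explains why an F1 of 0.8996 pairs with an IoU of 0.8176 in the results above: the two metrics rank segmentations identically and differ only in scale.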

24 pages, 32520 KB  
Article
A UAV-Based Dual-Spectroradiometer Method for Hyperspectral Reflectance Measurement
by Haoheng Mi, Yu Zhang, Hong Guan, Kang Jiang and Yongchao Zhao
Remote Sens. 2026, 18(7), 1093; https://doi.org/10.3390/rs18071093 - 5 Apr 2026
Viewed by 397
Abstract
Unmanned aerial vehicles (UAVs) provide a flexible platform for surface reflectance measurement at spatial scales between ground observations and satellite remote sensing. This study develops a UAV-based spectroradiometric system for surface reflectance retrieval under natural illumination conditions using non-imaging hyperspectral sensors. The system integrates two stabilized spectroradiometers mounted on a UAV to simultaneously measure hemispherical downwelling irradiance and upwelling surface radiance at flight altitude, enabling reflectance retrieval through a radiance–irradiance ratio framework without relying on ground calibration targets or radiative transfer model inversion. Field experiments were conducted over agricultural plots, and the UAV-derived reflectance was quantitatively validated against ground-based dual-spectroradiometer measurements. The results demonstrate stable irradiance measurements during flight and good agreement between UAV- and ground-derived reflectance across the 400–900 nm spectral range. The proposed system offers a practical and reliable solution for hyperspectral reflectance retrieval using UAV platforms. Full article
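Under a Lambertian assumption, the radiance–irradiance ratio framework described above reduces to the standard reflectance-factor relation R = πL/E per wavelength band. A minimal sketch (values and units are hypothetical; this is the textbook relation, not the authors' processing chain):

```python
import math

def reflectance_factor(radiance: float, irradiance: float) -> float:
    """Hemispherical-directional reflectance factor under a Lambertian
    assumption: R = pi * L / E, evaluated per wavelength band."""
    return math.pi * radiance / irradiance

E = 1.2          # downwelling irradiance at flight altitude (hypothetical units)
L = E / math.pi  # a perfect white Lambertian panel would radiate L = E / pi
R = reflectance_factor(L, E)   # ≈ 1.0
```

Because both L and E are measured simultaneously at flight altitude, the ratio cancels illumination variability, which is what removes the need for ground calibration targets.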

33 pages, 2402 KB  
Review
Toward Advanced Sensing and Data-Driven Approaches for Maturity Assessment of Indeterminate Peanut Cropping Systems: Review of Current State and Prospects
by Sathish Raymond Emmanuel Sahayaraj, Abhilash K. Chandel, Pius Jjagwe, Ranadheer Reddy Vennam, Maria Balota and Arunachalam Manimozhian
Sensors 2026, 26(7), 2208; https://doi.org/10.3390/s26072208 - 2 Apr 2026
Viewed by 605
Abstract
Determining the optimal harvest time is among the most critical economic decisions for peanut (Arachis hypogaea L.) growers, directly influencing yield, quality, and market value. Unlike many other crops, peanuts are indeterminate, continuing to flower and produce pods throughout their life cycle. As a result, pod development and maturation are asynchronous, making harvest timing particularly challenging. Conventional maturity estimation techniques, including the hull scrape method, pod blasting, and visual maturity profiling, are invasive, labor-intensive, time-consuming, and spatially limited. Moreover, differences in cultivar maturity rates and agroclimatic conditions exacerbate inconsistencies in maturity prediction. These challenges highlight the urgent need for scalable, objective, and data-driven methods to support growers in achieving optimal harvest outcomes. This review synthesizes the current understanding of peanut pod maturity and evaluates existing traditional and non-invasive approaches for maturity estimation. It aims to identify the limitations of conventional techniques and explore the integration of advanced sensing technologies, artificial intelligence (AI), and geospatial analytics to enhance precision and scalability in peanut maturity assessment and harvest decision-making. This review examines traditional destructive techniques such as the hull scrape method and pod blasting, followed by emerging non-invasive methods employing proximal and remote sensing platforms. Applications of vegetation indices, multispectral and hyperspectral imaging, and AI-based data analytics are discussed in the context of maturity prediction. Additionally, the potential of multimodal remote sensing data fusion and digital frameworks integrating spatial big data analytics, centralized data management, and cloud-based graphical interfaces is explored as a pathway toward end-to-end decision-support systems. 
Recent advances in non-invasive sensing and AI-assisted modeling have demonstrated significant improvements in scalability, precision, and automation compared with traditional manual approaches. However, their effectiveness remains constrained by the limited inclusion of agroclimatic, phenological, and cultivar-specific variables. Furthermore, the translation of model outputs into actionable, field-level harvest decisions is still underdeveloped, underscoring the need for integrated, user-centric digital infrastructure. Achieving a robust and transferable digital peanut maturity estimation system will require comprehensive ground-truth data across cultivars, regions, and growing seasons. Multidisciplinary collaborations among agronomists, data scientists, growers, and technology providers will be essential for developing practical, field-ready solutions. Integrating AI, multimodal sensing, and geospatial analytics holds immense potential to transform peanut maturity estimation. Such innovations promise to enhance harvest precision, economic returns, and sustainability while reducing manual effort and uncertainty, ultimately improving the efficiency and quality of life for peanut producers worldwide. Full article
(This article belongs to the Special Issue Feature Papers in Smart Agriculture 2026)

63 pages, 1750 KB  
Review
Smart Greenhouses in the Era of IoT and AI: A Comprehensive Review of AI Applications, Spectral Sensing, Multimodal Data Fusion, and Intelligent Systems
by Wiam El Ouaham, Mohamed Sadik, Abdelhadi Ennajih, Youssef Mouzouna, Houda Orchi and Samir Elouaham
Agriculture 2026, 16(7), 761; https://doi.org/10.3390/agriculture16070761 - 30 Mar 2026
Viewed by 658
Abstract
Smart greenhouses (SGHs) are controlled-environment agricultural systems that leverage digital technologies to optimize crop production and resource management. In particular, recent advances in artificial intelligence (AI) and the Internet of Things (IoT) have enabled the development of intelligent monitoring, predictive modeling, and automated decision-support systems within these environments. Against this backdrop, this comprehensive review synthesizes over 130 studies published between 2020 and 2025, with a focus on AI-driven monitoring, predictive modeling, and decision-support frameworks in SGH environments. More specifically, key application domains include microclimate regulation, crop growth assessment, disease and pest detection, yield estimation, and robotic harvesting. Moreover, particular attention is given to the interplay between AI methodologies and their data sources, encompassing IoT sensor networks, RGB, multispectral, and hyperspectral imaging, as well as multimodal data-fusion approaches. In addition, publicly available datasets, model architectures, and performance metrics are consolidated to support reproducibility and cross-study comparison. Nevertheless, persistent challenges are critically discussed, including data heterogeneity, limited model generalization across sites, interpretability constraints, and practical barriers to deployment. Finally, emerging research directions are identified, notably multimodal learning, edge-AI integration, standardized benchmarks, and scalable system architectures, with the overarching objective of guiding the development of robust, sustainable, and operationally feasible AI-enabled SGH systems. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

36 pages, 6199 KB  
Systematic Review
Intelligent and Automated Technologies for Textile Recycling Pre-Processing: A Systematic Literature Review
by Daniel Lopes, Eduardo J. Solteiro Pires, Vítor Filipe, Manuel F. Silva and Luís F. Rocha
Technologies 2026, 14(4), 200; https://doi.org/10.3390/technologies14040200 - 27 Mar 2026
Viewed by 576
Abstract
Textile-to-textile recycling is strongly constrained by upstream pre-processing, where post-consumer clothing must be identified, separated, and prepared under high variability in materials, appearance, and contamination. This paper presents a Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)-guided systematic literature review of intelligent and automated technologies for textile recycling pre-processing covering the period from 2015 to 2025. After screening and quality assessment, 21 primary studies published between 2020 and 2025 were included. The literature is synthesized across three task families: (i) identification of fiber/material, composition, or color; (ii) sorting, considered only when explicit separation strategies are defined to operationalize identification outcomes into routing actions or output streams; and (iii) contaminant detection and/or removal, targeting non-recyclable items. Results show that identification dominates the field (19/21 studies), supported by Red–Green–Blue (RGB) and red–green–blue plus depth (RGB-D) imaging and material-signature sensing, including near-infrared (NIR) spectroscopy, hyperspectral imaging (HSI), and Raman spectroscopy. In contrast, sorting as a defined separation stage is less frequent (4/21), and contaminant-related automation remains sparse (3/21). Most studies are validated in laboratory conditions, with limited semi-industrial evidence, highlighting a persistent perception-to-action gap. Overall, the review indicates that robust separation strategies, representative datasets, and end-to-end system integration remain key bottlenecks for scalable automated textile recycling pre-processing. Full article

22 pages, 3135 KB  
Article
Computational Imaging Method for Thermal Infrared Hyperspectral Imaging Based on a Snapshot Divided-Aperture System
by Tianzhen Ma, Zhijing He, Bin Wu, Yutian Lei, Yijie Wang, Xinze Liu, Bingmei Guo, Jiawei Lu, Bo Cheng, Shikai Zan, Chunlai Li and Liyin Yuan
Sensors 2026, 26(6), 1982; https://doi.org/10.3390/s26061982 - 22 Mar 2026
Viewed by 422
Abstract
To address the technical challenge of simultaneously achieving snapshot imaging capability and high spectral resolution in thermal infrared spectral imaging, this paper proposes a computational imaging method based on a snapshot divided-aperture imaging system. In this method, a self-developed divided-aperture snapshot multispectral camera is utilized to simultaneously capture nine low-spectral-resolution images in a single exposure. The precise registration of the sub-channel images is accomplished via a star-point array calibration method. To construct the spectral reconstruction dataset, a Fourier-transform infrared hyperspectral camera (FTIR HCam) is employed to simultaneously acquire hyperspectral data from real-world scenes. Based on this, a neural network model is applied to reconstruct 127-channel hyperspectral information from the low-dimensional multispectral measurements. Experimental results demonstrate that the proposed method effectively achieves hyperspectral reconstruction while maintaining system compactness and snapshot imaging capability, thus providing a viable technical approach for hyperspectral sensing in dynamic thermal infrared scenarios. Full article
(This article belongs to the Section Sensing and Imaging)

34 pages, 41427 KB  
Article
Weed Species Identification Using Hyperspectral Imaging and Machine Learning
by Rimma M. Ualiyeva, Mariya M. Kaverina, Anastasiya V. Osipova, Nurgul N. Iksat and Sayan B. Zhangazin
Plants 2026, 15(6), 916; https://doi.org/10.3390/plants15060916 - 16 Mar 2026
Viewed by 506
Abstract
Reliable identification of weed species is essential for effective and sustainable weed management. In this study, we explored the use of hyperspectral imaging to distinguish nine weed species based on their spectral signatures. Although the species showed similarities in their spectral curves due to comparable growing conditions, clear differences emerged related to morphological traits and pigment composition. We analysed the spectral data using five classification algorithms: Random Forest, Support Vector Machine, Artificial Neural Network, Maximum Entropy, and SIMCA. Model performance was assessed using per-class and overall accuracy. Random Forest outperformed the other methods, achieving 93.5% accuracy despite limited and imbalanced training data. This work contributes to the development of a spectral library for weed species and demonstrates the value of machine learning for species identification across different crops and environmental conditions. Expanding such spectral databases can enhance the speed and accuracy of weed monitoring, reduce herbicide reliance, and reduce environmental impact. The proposed approach shows strong potential for integration into precision agriculture and agroecological monitoring systems, supporting more efficient and environmentally responsible farmland management. Full article
(This article belongs to the Section Plant Modeling)

26 pages, 4974 KB  
Article
Soil Suborder Discrimination Using Machine Learning Is Improved by SWIR Imaging Compared with Full VIS–NIR–SWIR Spectra
by Daiane de Fatima da Silva Haubert, Nicole Ghinzelli Vedana, Weslei Augusto Mendonça, Karym Mayara de Oliveira, Caio Almeida de Oliveira, João Vitor Ferreira Gonçalves, José Alexandre M. Demattê, Roney Berti de Oliveira, Amanda Silveira Reis, Renan Falcioni and Marcos Rafael Nanni
Remote Sens. 2026, 18(6), 898; https://doi.org/10.3390/rs18060898 - 15 Mar 2026
Viewed by 391
Abstract
Rapid, standardised discrimination of soil taxonomic units remains challenging when relying solely on conventional field descriptions and laboratory analyses, particularly at high sampling densities. This study evaluated whether proximal spectroscopy and hyperspectral imaging can support the classification of Brazilian Soil Classification System (SiBCS) suborders and pedogenetic horizons when surface and subsurface spectra are treated separately. Six intact soil monoliths (0.12 × 1.60 m) were collected in Paraná State, southern Brazil, representing one Organossolo (Ooy), three Latossolos (LVd, LVd1, and LVd2) and two Argissolos (PVAd and PVd). For each monolith, 800 spectra were acquired per sensor with a non-imaging VIS–NIR–SWIR spectroradiometer (350–2500 nm), and 800 spectra per sensor per monolith were extracted from the SWIR hyperspectral images (1200–2450 nm). Principal component analysis (PCA) was used to summarise spectral variability, and supervised classification was performed via k-nearest neighbours, random forest, decision tree and gradient boosting for suborders (10-fold cross-validation), and a neural network was used for within-profile horizon classification. PCA indicated that most of the spectral variance was captured by a dominant axis, with clearer separation among suborders in the SWIR space than in the full VIS–NIR–SWIR range. With respect to suborder classification, subsurface spectra outperformed surface spectra, and SWIR outperformed VIS–NIR–SWIR: the best accuracies were 0.96 for subsurface SWIR (gradient boosting; AUC = 0.99; MCC = 0.95) and 0.89 for surface SWIR (k-nearest neighbours; AUC = 0.98; MCC = 0.87). Within-profile horizon classification via VIS–NIR–SWIR achieved accuracies of 0.84–0.97 with the Neural Network, with most misclassifications occurring between adjacent horizons. 
Overall, subsurface SWIR information provided the most reliable basis for taxonomic discrimination, whereas horizon classification was feasible but reflected gradual spectral transitions along the profile. Full article

36 pages, 3158 KB  
Review
Precision Agriculture for Nutraceutical Crops: A Comprehensive Scientific Review
by Giuseppina Maria Concetta Fasciana, Michele Massimo Mammano, Salvatore Amato, Carlo Greco and Santo Orlando
Agronomy 2026, 16(6), 615; https://doi.org/10.3390/agronomy16060615 - 13 Mar 2026
Viewed by 555
Abstract
Precision Agriculture (PA) is increasingly applied to nutraceutical cropping systems, where agronomic productivity must be integrated with the stabilization of phytochemical quality and environmental sustainability. This structured narrative review synthesizes scientific evidence (primarily 2010–2025) on the use of Unmanned Aerial Vehicle (UAV)-based multispectral and thermal sensing, LiDAR-derived canopy characterization, Internet of Things (IoT) monitoring, and artificial intelligence (AI)-driven analytics in medicinal, aromatic, and functional crops. The literature indicates that PA enhances high-resolution monitoring of crop–environment interactions, supporting site-specific irrigation, nutrient management, and stress detection. Under validated conditions, these interventions are associated with improved yield stability, resource-use efficiency, and modulation of secondary metabolite accumulation. However, reported outcomes vary substantially across species, agroecological contexts, and experimental scales, and most studies remain plot-scale or pilot-scale, limiting large-scale generalization. Moringa oleifera Lam. is examined as a model species for Mediterranean and semi-arid systems. Evidence suggests that integrated spectral, structural, and environmental monitoring can support optimized irrigation scheduling, canopy uniformity, and phytochemical consistency. Nonetheless, genotype-specific calibration, multi-season validation, standardized metabolomic benchmarking, and cross-regional transferability remain significant research gaps. Overall, PA represents a scientifically promising but still maturing framework for nutraceutical agriculture. Future progress will require rigorous multi-site validation, improved model robustness, standardized sustainability metrics, and comprehensive economic assessments to ensure scalability and long-term impact. Full article
(This article belongs to the Collection AI, Sensors and Robotics for Smart Agriculture)
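The UAV multispectral monitoring the review surveys typically rests on band-ratio vegetation indices. As an illustrative sketch (not taken from the article itself), the widely used NDVI can be computed from red and near-infrared reflectance as:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)  # eps guards against division by zero

# Toy 2x2 reflectance patches: healthy canopy reflects strongly in NIR,
# so three pixels score high and the bare-soil-like pixel scores near zero.
nir = np.array([[0.60, 0.55], [0.50, 0.10]])
red = np.array([[0.08, 0.10], [0.12, 0.09]])
print(np.round(ndvi(nir, red), 2))
```

In a site-specific management workflow, per-pixel index maps like this are thresholded or clustered to delineate irrigation and nutrient zones.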

33 pages, 11613 KB  
Article
Full-Link Background Radiation Suppression and Detection Capability Optimization of Mid-Wave Infrared Hyperspectral Remote Sensing in Complex Scenarios
by Yun Wang, Bingqi Qiu, Huairong Kang, Xuanbin Liu, Mengyang Chai, Huijie Han and Yinnian Liu
Photonics 2026, 13(3), 271; https://doi.org/10.3390/photonics13030271 - 11 Mar 2026
Abstract
To address the technical bottlenecks of strong background radiation interference and weak target signals in mid-wave infrared (MWIR) hyperspectral mineral detection over complex terrain, this paper proposes a “full-link background radiation suppression” methodological framework. A coupled illumination-terrain-atmosphere-sensor radiative transfer model is constructed to systematically quantify how multidimensional parameters—such as observation geometry, surface temperature, elevation, aerosol optical depth, and water vapor content—influence the target-background radiation contrast. The findings reveal that daytime observation, lower surface temperature, higher altitude, dry atmosphere, and moderate solar and observation zenith angles are key factors for maximizing the signal-to-noise ratio. Comprehensive optimization analysis demonstrates that observations during midday in autumn and winter achieve optimal performance, with the target-background relative contrast potentially enhanced by up to 6.29 times compared to unfavorable conditions such as summer nights. This work elucidates the physical mechanisms governing MWIR hyperspectral detection efficacy in complex scenarios, provides direct parameter-optimization strategies for intelligent mission planning of spaceborne imaging systems, and holds significant value for advancing mineral remote sensing from “passive acquisition” to “cognitive detection”. Full article
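The reported gains compare a relative-contrast metric across observation conditions. A minimal definitional sketch follows; the paper's exact metric may differ, and the radiance values below are hypothetical illustrations, not data from the study:

```python
import numpy as np

def relative_contrast(l_target, l_background) -> np.ndarray:
    """Per-band relative contrast |L_t - L_b| / L_b (background radiance must be > 0)."""
    l_target = np.asarray(l_target, dtype=np.float64)
    l_background = np.asarray(l_background, dtype=np.float64)
    return np.abs(l_target - l_background) / l_background

# Hypothetical MWIR at-sensor radiances (W·m^-2·sr^-1·µm^-1) in two bands.
favorable = relative_contrast([1.30, 1.45], [1.00, 1.00])    # e.g. cool, dry, midday
unfavorable = relative_contrast([1.05, 1.06], [1.00, 1.00])  # e.g. warm, humid night
print(favorable / unfavorable)  # per-band contrast gain of the favorable condition
```

Comparing such ratios band by band is one way the influence of surface temperature, water vapor, or viewing geometry on detectability can be ranked.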

12 pages, 5741 KB  
Data Descriptor
Hyperspectral Images of Vine Leaves Treated with Antifungal Products
by Ramón Sánchez, Carlos Rad, Carlos Cambra, Rocío Barros and Álvaro Herrero
Data 2026, 11(3), 53; https://doi.org/10.3390/data11030053 - 7 Mar 2026
Abstract
Hyperspectral imagery provides detailed insights for vineyard vegetation assessment, enabling improved pesticide management within precision agriculture. For this reason, the dataset presented here includes hyperspectral images acquired from grapevine leaves treated with two copper-based formulations: ZZ Cuprocol (containing 70% w/v copper oxychloride) and Cuprantol Duo (composed of 14% w/w copper oxychloride and 14% w/w copper hydroxide). In addition, a commonly used contact pesticide in both intensive and traditional viticulture, Folpet—free of copper but containing sulfur and chlorine—was also evaluated in its commercial formulation Vitipec Azul (Cymoxanil 6% w/w, Folpet 37.5% w/w, Ascenza, Portugal). For each product, six different dilution levels were prepared along with a distilled water control. Leaf samples were collected and analyzed during the 2023 growing season from three shoot locations (basal, middle, and apical) and from both orientations of the vine canopy: east and west. Following pesticide treatment, leaf hyperspectral images were captured using a 300-band Pika L camera (Resonon, Bozeman, MT, USA), mounted on a mechanical scanning platform synchronized with the imaging system. Full article
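A 300-band line-scan camera such as the Pika L yields, per leaf, a data cube of shape (rows, columns, bands). As a minimal sketch of working with such a cube (synthetic data and an assumed array layout, not the published dataset's actual file format), a leaf's mean spectrum can be extracted with a foreground mask:

```python
import numpy as np

rng = np.random.default_rng(0)
cube = rng.random((64, 64, 300))       # synthetic (rows, cols, bands) reflectance cube
mask = np.zeros((64, 64), dtype=bool)  # leaf/foreground mask, e.g. from thresholding
mask[16:48, 16:48] = True              # pretend the leaf occupies the central region

# Boolean indexing flattens leaf pixels to (n_pixels, 300); average over pixels.
mean_spectrum = cube[mask].mean(axis=0)
print(mean_spectrum.shape)  # one mean reflectance value per band
```

Per-treatment mean spectra of this kind are the usual starting point for comparing dilution levels or discriminating copper-based from copper-free formulations.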
