Search Results (453)

Search Parameters:
Keywords = hyperspectral UAV

21 pages, 1768 KB  
Review
Evolution of Deep Learning Approaches in UAV-Based Crop Leaf Disease Detection: A Web of Science Review
by Dorijan Radočaj, Petra Radočaj, Ivan Plaščak and Mladen Jurišić
Appl. Sci. 2025, 15(19), 10778; https://doi.org/10.3390/app151910778 - 7 Oct 2025
Abstract
The integration of unmanned aerial vehicles (UAVs) and deep learning (DL) has significantly advanced crop disease detection by enabling scalable, high-resolution, and near real-time monitoring within precision agriculture. This systematic review analyzes peer-reviewed literature indexed in the Web of Science Core Collection as articles or proceedings papers through 2024. The main selection criterion combined “unmanned aerial vehicle*” OR “UAV” OR “drone” with “deep learning”, “agriculture”, and “leaf disease” OR “crop disease”. Results show a marked surge in publications after 2019, with China, the United States, and India leading research contributions. Multirotor UAVs equipped with RGB sensors are predominantly used due to their affordability and spatial resolution, while hyperspectral imaging is gaining traction for its enhanced spectral diagnostic capability. Convolutional neural networks (CNNs), along with emerging transformer-based and hybrid models, demonstrate high detection performance, often achieving F1-scores above 95%. However, critical challenges persist, including limited annotated datasets for rare diseases, high computational costs of hyperspectral data processing, and the absence of standardized evaluation frameworks. Addressing these issues will require the development of lightweight DL architectures optimized for edge computing, improved multimodal data fusion techniques, and the creation of publicly available, annotated benchmark datasets. Advancements in these areas are vital for translating current research into practical, scalable solutions that support sustainable and data-driven agricultural practices worldwide. Full article

31 pages, 1983 KB  
Review
Integrating Remote Sensing and Autonomous Robotics in Precision Agriculture: Current Applications and Workflow Challenges
by Magdalena Łągiewska and Ewa Panek-Chwastyk
Agronomy 2025, 15(10), 2314; https://doi.org/10.3390/agronomy15102314 - 30 Sep 2025
Viewed by 568
Abstract
Remote sensing technologies are increasingly integrated with autonomous robotic platforms to enhance data-driven decision-making in precision agriculture. Rather than replacing conventional platforms such as satellites or UAVs, autonomous ground robots complement them by enabling high-resolution, site-specific observations in real time, especially at the plant level. This review analyzes how remote sensing sensors—including multispectral, hyperspectral, LiDAR, and thermal—are deployed via robotic systems for specific agricultural tasks such as canopy mapping, weed identification, soil moisture monitoring, and precision spraying. Key benefits include higher spatial and temporal resolution, improved monitoring of under-canopy conditions, and enhanced task automation. However, the practical deployment of such systems is constrained by terrain complexity, power demands, and sensor calibration. The integration of artificial intelligence and IoT connectivity emerges as a critical enabler for responsive, scalable solutions. By focusing on how autonomous robots function as mobile sensor platforms, this article contributes to the understanding of their role within modern precision agriculture workflows. The findings support future development pathways aimed at increasing operational efficiency and sustainability across diverse crop systems. Full article

24 pages, 2583 KB  
Review
Every Pixel You Take: Unlocking Urban Vegetation Insights Through High- and Very-High-Resolution Remote Sensing
by Germán Catalán, Carlos Di Bella, Paula Meli, Francisco de la Barrera, Rodrigo Vargas-Gaete, Rosa Reyes-Riveros, Sonia Reyes-Packe and Adison Altamirano
Urban Sci. 2025, 9(9), 385; https://doi.org/10.3390/urbansci9090385 - 22 Sep 2025
Viewed by 413
Abstract
Urban vegetation plays a vital role in mitigating the impacts of urbanization, improving biodiversity, and providing key ecosystem services. However, the spatial distribution, ecological dynamics, and social implications of urban vegetation remain insufficiently understood, particularly in underrepresented regions. This systematic review aims to synthesize global research trends in very-high-resolution (VHR) remote sensing of urban vegetation between 2000 and 2024. A total of 123 peer-reviewed empirical studies were analyzed using bibliometric and thematic approaches, focusing on the spatial resolution (<10 m), sensor type, research objectives, and geographic distribution. The findings reveal a predominance of biophysical studies (72%) over social-focused studies (28%), with major thematic clusters related to urban climate, vegetation structure, and technological applications such as UAVs and machine learning. The research is heavily concentrated in the Global North, particularly China and the United States, while regions like Latin America and Africa remain underrepresented. This review identifies three critical gaps: (1) limited research in the Global South, (2) insufficient integration of ecological and social dimensions, and (3) underuse of advanced technologies such as hyperspectral imaging and AI-driven analysis. Addressing these gaps is essential for promoting equitable, technology-informed urban planning. This review provides a comprehensive overview of the state of the field and offers directions for future interdisciplinary research in urban remote sensing. Full article

29 pages, 35542 KB  
Article
A Novel Remote Sensing Framework Integrating Geostatistical Methods and Machine Learning for Spatial Prediction of Diversity Indices in the Desert Steppe
by Zhaohui Tang, Chuanzhong Xuan, Tao Zhang, Xinyu Gao, Suhui Liu, Yaobang Song and Fang Guo
Agriculture 2025, 15(18), 1926; https://doi.org/10.3390/agriculture15181926 - 11 Sep 2025
Viewed by 435
Abstract
Accurate assessments are vital for the effective conservation of desert steppe ecosystems, which are essential for maintaining biodiversity and ecological balance. Although geostatistical methods are commonly used for spatial modeling, they have limitations in terms of feature extraction and capturing non-linear relationships. This study therefore proposes a novel remote sensing framework that integrates geostatistical methods and machine learning to predict the Shannon–Wiener index in desert steppe. Five models, Kriging interpolation, Random Forest, Support Vector Machine, 3D Convolutional Neural Network and Graph Attention Network, were employed for parameter inversion. The Helmert variance component estimation method was introduced to integrate the model outputs by iteratively evaluating residuals and assigning relative weights, enabling both optimal prediction and model contribution quantification. The ensemble model yielded a high prediction accuracy with an R2 of 0.7609. This integration strategy improves the accuracy of index prediction, and enhances the interpretability of the model regarding weight contributions in space. The proposed framework provides a reliable, scalable solution for biodiversity monitoring and supports scientific decision-making for grassland conservation and ecological restoration. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
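To make the weighting idea in this abstract concrete, here is a minimal sketch of combining several models' predictions with weights derived from their residual variances — a simplified stand-in for the Helmert variance component estimation described above, with synthetic Shannon–Wiener values rather than the study's data:

```python
# Illustrative sketch only: inverse-variance weighting of several model outputs,
# a simplified stand-in for the iterative Helmert variance component estimation
# described in the abstract (the paper's exact procedure is not reproduced here).
import numpy as np

def inverse_variance_ensemble(preds, y_true):
    """preds: (n_models, n_samples) predictions; y_true: (n_samples,) observed index values."""
    residual_var = np.array([np.var(y_true - p) for p in preds])  # per-model residual variance
    weights = (1.0 / residual_var) / np.sum(1.0 / residual_var)   # normalized inverse variances
    return weights, weights @ preds                               # weighted ensemble prediction

# Toy example with synthetic Shannon-Wiener index values (hypothetical numbers).
rng = np.random.default_rng(0)
y = rng.uniform(0.5, 2.5, size=50)
preds = np.stack([y + rng.normal(0, s, size=50) for s in (0.1, 0.2, 0.4)])  # three "models"
w, y_hat = inverse_variance_ensemble(preds, y)
print("weights:", w.round(3), " R2:", 1 - np.var(y - y_hat) / np.var(y))
```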

18 pages, 3048 KB  
Article
Estimation of Wheat Leaf Water Content Based on UAV Hyper-Spectral Remote Sensing and Machine Learning
by Yunlong Wu, Shouqi Yuan, Junjie Zhu, Yue Tang and Lingdi Tang
Agriculture 2025, 15(17), 1898; https://doi.org/10.3390/agriculture15171898 - 7 Sep 2025
Viewed by 492
Abstract
Leaf water content is a critical metric of winter wheat growth and development. Rapid and efficient monitoring of leaf water content in winter wheat is essential for achieving precision irrigation and assessing crop quality. Unmanned aerial vehicle (UAV)-based hyperspectral remote sensing technology has enormous application potential in crop monitoring. In this study, a UAV platform was used to collect canopy hyperspectral data, together with field-measured leaf water content (LWC), on six sampling dates across four growth stages of winter wheat. Six spectral transformations were then applied to the original spectral data and evaluated through correlation analysis with wheat LWC; multiple scattering correction (MSC), standard normal variate (SNV), and first derivative (FD) were selected as the subsequent transformation methods. Additionally, competitive adaptive reweighted sampling (CARS) and the Hilbert–Schmidt independence criterion lasso (HSICLasso) were employed for feature selection to eliminate redundant information from the spectral data. Finally, three machine learning algorithms—partial least squares regression (PLSR), support vector regression (SVR), and random forest (RF)—were combined with the different data preprocessing methods, and 50 random data-partition and model-evaluation experiments were conducted to compare the accuracy of the combination models in assessing wheat LWC. The results showed significant differences in predictive performance among the combination models. Comparing prediction accuracy on the test set, the optimal combinations for the three algorithms were MSC + CARS + SVR (R2 = 0.713, RMSE = 0.793, RPD = 2.097), SNV + CARS + PLSR (R2 = 0.692, RMSE = 0.866, RPD = 2.053), and FD + CARS + RF (R2 = 0.689, RMSE = 0.848, RPD = 2.002). All three models can accurately and stably predict winter wheat LWC; the CARS feature extraction method improves prediction accuracy and model stability, and the SVR algorithm shows the best robustness and generalization ability. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
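As a rough illustration of one preprocessing/modelling combination named in this abstract (SNV transform, band selection, SVR): CARS has no scikit-learn implementation, so the sketch below substitutes SelectKBest purely as a placeholder selector and uses synthetic spectra, not the study's data or tuning:

```python
# Rough sketch of an SNV + feature selection + SVR pipeline; SelectKBest stands in
# for the CARS band selection used in the paper, and all data are synthetic.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler
from sklearn.svm import SVR

def snv(X):
    """Standard normal variate: centre and scale each spectrum (row) individually."""
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

# Synthetic stand-in data: 120 canopy spectra with 200 bands and a water-content target.
rng = np.random.default_rng(1)
X = rng.random((120, 200))
y = X[:, 50] * 2.0 + X[:, 150] + rng.normal(0, 0.05, 120)  # hypothetical relationship

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(FunctionTransformer(snv),
                      SelectKBest(f_regression, k=20),   # placeholder for CARS band selection
                      StandardScaler(),
                      SVR(kernel="rbf", C=10.0))
model.fit(X_tr, y_tr)
print("test R2:", round(model.score(X_te, y_te), 3))
```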

28 pages, 1950 KB  
Review
Remote Sensing Approaches for Water Hyacinth and Water Quality Monitoring: Global Trends, Techniques, and Applications
by Lakachew Y. Alemneh, Daganchew Aklog, Ann van Griensven, Goraw Goshu, Seleshi Yalew, Wubneh B. Abebe, Minychl G. Dersseh, Demesew A. Mhiret, Claire I. Michailovsky, Selamawit Amare and Sisay Asress
Water 2025, 17(17), 2573; https://doi.org/10.3390/w17172573 - 31 Aug 2025
Viewed by 1915
Abstract
Water hyacinth (Eichhornia crassipes), native to South America, is a highly invasive aquatic plant threatening freshwater ecosystems worldwide. Its rapid proliferation negatively impacts water quality, biodiversity, and navigation. Remote sensing offers an effective means to monitor such aquatic environments by providing extensive spatial and temporal coverage with improved resolution. This systematic review examines remote sensing applications for monitoring water hyacinth and water quality in studies published from 2014 to 2024. Seventy-eight peer-reviewed articles were selected from the Web of Science, Scopus, and Google Scholar following strict criteria. The research spans 25 countries across five continents, focusing mainly on lakes (61.5%), rivers (21%), and wetlands (10.3%). Approximately 49% of studies addressed water quality, 42% focused on water hyacinth, and 9% covered both. The Sentinel-2 Multispectral Instrument (MSI) was the most used sensor (35%), followed by the Landsat 8 Operational Land Imager (OLI) (26%). Multi-sensor fusion, especially Sentinel-2 MSI with Unmanned Aerial Vehicles (UAVs), was frequently applied to enhance monitoring capabilities. Detection accuracies ranged from 74% to 98% using statistical, machine learning, and deep learning techniques. Key challenges include limited ground-truth data and inadequate atmospheric correction. The integration of high-resolution sensors with advanced analytics shows strong promise for effective inland water monitoring. Full article
(This article belongs to the Section Ecohydrology)

20 pages, 3795 KB  
Article
Leaf Area Index Estimation of Grassland Based on UAV-Borne Hyperspectral Data and Multiple Machine Learning Models in Hulun Lake Basin
by Dazhou Wu, Saru Bao, Yi Tong, Yifan Fan, Lu Lu, Songtao Liu, Wenjing Li, Mengyong Xue, Bingshuai Cao, Quan Li, Muha Cha, Qian Zhang and Nan Shan
Remote Sens. 2025, 17(16), 2914; https://doi.org/10.3390/rs17162914 - 21 Aug 2025
Viewed by 809
Abstract
Leaf area index (LAI) is a crucial parameter reflecting the crown structure of grassland. Accurately obtaining LAI is of great significance for estimating carbon sinks in grassland ecosystems. However, spectral noise interference and pronounced spatial heterogeneity within vegetation canopies constitute significant impediments to high-precision LAI retrieval. This study used a hyperspectral sensor mounted on an unmanned aerial vehicle (UAV) to estimate LAI in a typical grassland of the Hulun Lake Basin. Multiple machine learning (ML) models were constructed to relate hyperspectral data to grassland LAI using two input datasets, namely spectral transformations and vegetation indices (VIs), while SHAP (SHapley Additive exPlanations) interpretability analysis was further employed to identify high-contribution features in the ML models. The analysis revealed that grassland LAI correlates well with the original spectrum at 550 nm and 750–1000 nm, with the first and second derivatives at 506–574 nm and 649–784 nm, and with vegetation indices including the triangular vegetation index (TVI), enhanced vegetation index 2 (EVI2), and soil-adjusted vegetation index (SAVI). Among the models using spectral transformations and VIs, the random forest (RF) models outperformed the others (testing R2 = 0.89/0.88, RMSE = 0.20/0.21, and RRMSE = 27.34%/28.98%). The prediction error of the random forest model was positively correlated with measured LAI magnitude but inversely related to quadrat-level species richness, quantified by Margalef’s richness index (MRI). We also found that, at the quadrat level, the spectral response curve pattern is influenced by attributes within the quadrat, such as dominant species and vegetation cover, and that LAI is positively related to quadrat vegetation cover. The LAI inversion results were also compared with major LAI products, showing a good correlation (r = 0.71). This study established a high-fidelity inversion framework for hyperspectral-derived LAI estimation in the mid-to-high-latitude grasslands of the Hulun Lake Basin, supporting the spatial refinement of continental-scale carbon sink models at a regional scale. Full article
(This article belongs to the Section Ecological Remote Sensing)
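For orientation, a minimal sketch of the kind of workflow this abstract describes — computing two of the named vegetation indices (SAVI and EVI2, in their standard published forms) and fitting a random-forest regressor to LAI — using purely illustrative band values rather than the study's data:

```python
# Minimal sketch: SAVI and EVI2 (standard formulations) as predictors for a
# random-forest LAI regression. Band reflectances and LAI values are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def savi(nir, red, L=0.5):
    return (1 + L) * (nir - red) / (nir + red + L)

def evi2(nir, red):
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)

rng = np.random.default_rng(2)
red = rng.uniform(0.02, 0.15, 300)                     # synthetic red-band reflectance
nir = rng.uniform(0.20, 0.55, 300)                     # synthetic NIR reflectance
lai = 3.0 * savi(nir, red) + rng.normal(0, 0.1, 300)   # hypothetical LAI signal

X = np.column_stack([savi(nir, red), evi2(nir, red)])
rf = RandomForestRegressor(n_estimators=300, random_state=0)
print("CV R2:", cross_val_score(rf, X, lai, cv=5, scoring="r2").mean().round(3))
```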

23 pages, 14694 KB  
Article
PLCNet: A 3D-CNN-Based Plant-Level Classification Network Hyperspectral Framework for Sweetpotato Virus Disease Detection
by Qiaofeng Zhang, Wei Wang, Han Su, Gaoxiang Yang, Jiawen Xue, Hui Hou, Xiaoyue Geng, Qinghe Cao and Zhen Xu
Remote Sens. 2025, 17(16), 2882; https://doi.org/10.3390/rs17162882 - 19 Aug 2025
Viewed by 716
Abstract
Sweetpotato virus disease (SPVD) poses a significant threat to global sweetpotato production; therefore, early, accurate field-scale detection is necessary. To address the limitations of the currently utilized assays, we propose PLCNet (Plant-Level Classification Network), a rapid, non-destructive SPVD identification framework using UAV-acquired hyperspectral imagery. High-resolution data from early sweetpotato growth stages were processed via three feature selection methods—Random Forest (RF), Minimum Redundancy Maximum Relevance (mRMR), and Local Covariance Matrix (LCM)—in combination with 24 vegetation indices. Variance Inflation Factor (VIF) analysis reduced multicollinearity, yielding an optimized SPVD-sensitive feature set. First, using the RF-selected bands and vegetation indices, we benchmarked four classifiers—Support Vector Machine (SVM), Gradient Boosting Decision Tree (GBDT), Residual Network (ResNet), and 3D Convolutional Neural Network (3D-CNN). Under identical inputs, the 3D-CNN achieved superior performance (OA = 96.55%, Macro F1 = 95.36%, UA_mean = 0.9498, PA_mean = 0.9504), outperforming SVM, GBDT, and ResNet. Second, with the same spectral–spatial features and 3D-CNN backbone, we compared a pixel-level baseline (CropdocNet) against our plant-level PLCNet. CropdocNet exhibited spatial fragmentation and isolated errors, whereas PLCNet’s two-stage pipeline—deep feature extraction followed by connected-component analysis and majority voting—aggregated voxel predictions into coherent whole-plant labels, substantially reducing noise and enhancing biological interpretability. By integrating optimized feature selection, deep learning, and plant-level post-processing, PLCNet delivers a scalable, high-throughput solution for precise SPVD monitoring in agricultural fields. Full article
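The plant-level post-processing step described here (connected-component analysis followed by majority voting over per-pixel predictions) can be sketched generically; the code below is a reconstruction of the general idea with a toy class map, not the PLCNet implementation:

```python
# Sketch of plant-level aggregation: per-pixel class map -> connected plant regions
# -> majority vote per region. Generic reconstruction, not the paper's code.
import numpy as np
from scipy import ndimage

def plant_level_labels(pixel_classes, background=0):
    """pixel_classes: 2D array of per-pixel class IDs (0 = soil/background)."""
    plant_mask = pixel_classes != background
    components, n = ndimage.label(plant_mask)          # connected plant regions
    out = np.zeros_like(pixel_classes)
    for region_id in range(1, n + 1):
        votes = pixel_classes[components == region_id]
        out[components == region_id] = np.bincount(votes).argmax()  # majority vote
    return out

# Toy map: one "SPVD" plant containing a single noisy "healthy" pixel, one healthy plant.
m = np.zeros((8, 8), dtype=int)
m[1:4, 1:4] = 2           # plant 1 classified as SPVD (class 2)
m[2, 2] = 1               # isolated healthy pixel (noise)
m[5:7, 5:8] = 1           # plant 2 healthy (class 1)
print(plant_level_labels(m))
```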

26 pages, 7726 KB  
Article
Multi-Branch Channel-Gated Swin Network for Wetland Hyperspectral Image Classification
by Ruopu Liu, Jie Zhao, Shufang Tian, Guohao Li and Jingshu Chen
Remote Sens. 2025, 17(16), 2862; https://doi.org/10.3390/rs17162862 - 17 Aug 2025
Viewed by 519
Abstract
Hyperspectral classification of wetland environments remains challenging due to high spectral similarity, class imbalance, and blurred boundaries. To address these issues, we propose a novel Multi-Branch Channel-Gated Swin Transformer network (MBCG-SwinNet). In contrast to previous CNN-based designs, our model introduces a Swin Transformer spectral branch to enhance global contextual modeling, enabling improved spectral discrimination. To effectively fuse spatial and spectral features, we design a residual feature interaction chain comprising a Residual Spatial Fusion (RSF) module, a channel-wise gating mechanism, and a multi-scale feature fusion (MFF) module, which together enhance spatial adaptivity and feature integration. Additionally, a DenseCRF-based post-processing step is employed to refine classification boundaries and suppress salt-and-pepper noise. Experimental results on three UAV-based hyperspectral wetland datasets from the Yellow River Delta (Shandong, China)—NC12, NC13, and NC16—demonstrate that MBCG-SwinNet achieves superior classification performance, with overall accuracies of 97.62%, 82.37%, and 97.32%, respectively—surpassing state-of-the-art methods. The proposed architecture offers a robust and scalable solution for hyperspectral image classification in complex ecological settings. Full article
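The abstract names a channel-wise gating mechanism without giving its form; as a generic illustration of what channel gating means (a standard squeeze-and-excitation-style gate, not MBCG-SwinNet's actual module), a small PyTorch sketch:

```python
# Generic channel-gating block for illustration only; the gate used in
# MBCG-SwinNet is not specified in the abstract.
import torch
import torch.nn as nn

class ChannelGate(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (B, C, H, W) fused feature map
        w = self.fc(x.mean(dim=(2, 3)))        # global average pool -> per-channel weight
        return x * w[:, :, None, None]         # re-weight (gate) each channel

feat = torch.randn(2, 64, 32, 32)              # dummy spatial-spectral feature map
print(ChannelGate(64)(feat).shape)             # torch.Size([2, 64, 32, 32])
```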

22 pages, 5692 KB  
Article
RiceStageSeg: A Multimodal Benchmark Dataset for Semantic Segmentation of Rice Growth Stages
by Jianping Zhang, Tailai Chen, Yizhe Li, Qi Meng, Yanying Chen, Jie Deng and Enhong Sun
Remote Sens. 2025, 17(16), 2858; https://doi.org/10.3390/rs17162858 - 16 Aug 2025
Viewed by 855
Abstract
The accurate identification of rice growth stages is critical for precision agriculture, crop management, and yield estimation. Remote sensing technologies, particularly multimodal approaches that integrate high spatial and hyperspectral resolution imagery, have demonstrated great potential in large-scale crop monitoring. Multimodal data fusion offers complementary and enriched spectral–spatial information, providing novel pathways for crop growth stage recognition in complex agricultural scenarios. However, the lack of publicly available multimodal datasets specifically designed for rice growth stage identification remains a significant bottleneck that limits the development and evaluation of relevant methods. To address this gap, we present RiceStageSeg, a multimodal benchmark dataset captured by unmanned aerial vehicles (UAVs), designed to support the development and assessment of segmentation models for rice growth monitoring. RiceStageSeg contains paired centimeter-level RGB and 10-band multispectral (MS) images acquired during several critical rice growth stages, including jointing and heading. Each image is accompanied by fine-grained, pixel-level annotations that distinguish between the different growth stages. We establish baseline experiments using several state-of-the-art semantic segmentation models under both unimodal (RGB-only, MS-only) and multimodal (RGB + MS fusion) settings. The experimental results demonstrate that multimodal feature-level fusion outperforms unimodal approaches in segmentation accuracy. RiceStageSeg offers a standardized benchmark to advance future research in multimodal semantic segmentation for agricultural remote sensing. The dataset will be made publicly available on GitHub v0.11.0 (accessed on 1 August 2025). Full article
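Feature-level RGB + multispectral fusion, which the abstract reports outperforming unimodal inputs, commonly takes the form of separate encoders whose features are concatenated before a segmentation head; the sketch below shows that pattern with purely illustrative layers, not the benchmark models used on RiceStageSeg:

```python
# Minimal feature-level RGB + MS fusion for segmentation; encoders, channel counts,
# and class count are illustrative assumptions, not the paper's baselines.
import torch
import torch.nn as nn

class TinyFusionSegNet(nn.Module):
    def __init__(self, n_classes: int = 4, ms_bands: int = 10):
        super().__init__()
        self.rgb_enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.ms_enc = nn.Sequential(nn.Conv2d(ms_bands, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(32, n_classes, 1)   # classify fused features per pixel

    def forward(self, rgb, ms):
        fused = torch.cat([self.rgb_enc(rgb), self.ms_enc(ms)], dim=1)  # feature-level fusion
        return self.head(fused)

rgb = torch.randn(1, 3, 64, 64)
ms = torch.randn(1, 10, 64, 64)                 # 10-band multispectral patch
print(TinyFusionSegNet()(rgb, ms).shape)        # (1, 4, 64, 64) per-pixel class scores
```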

23 pages, 4597 KB  
Article
High-Throughput UAV Hyperspectral Remote Sensing Pinpoints Bacterial Leaf Streak Resistance in Wheat
by Alireza Sanaeifar, Ruth Dill-Macky, Rebecca D. Curland, Susan Reynolds, Matthew N. Rouse, Shahryar Kianian and Ce Yang
Remote Sens. 2025, 17(16), 2799; https://doi.org/10.3390/rs17162799 - 13 Aug 2025
Viewed by 875
Abstract
Bacterial leaf streak (BLS), caused by Xanthomonas translucens pv. undulosa, has become an intermittent yet economically significant disease of wheat in the Upper Midwest during the last decade. Because chemical and cultural controls remain ineffective, breeders rely on developing resistant varieties, yet visual ratings in inoculated nurseries are labor-intensive, subjective, and time-consuming. To accelerate this process, we combined unmanned-aerial-vehicle hyperspectral imaging (UAV-HSI) with a carefully tuned chemometric workflow that delivers rapid, objective estimates of disease severity. Principal component analysis cleanly separated BLS, leaf rust, and Fusarium head blight, with the first component explaining 97.76% of the spectral variance, demonstrating in-field pathogen discrimination. Pre-processing of the hyperspectral cubes, followed by robust Partial Least Squares (RPLS) regression, improved model reliability by managing outliers and heteroscedastic noise. Four variable-selection strategies—Variable Importance in Projection (VIP), Interval PLS (iPLS), Recursive Weighted PLS (rPLS), and Genetic Algorithm (GA)—were evaluated; rPLS provided the best balance between parsimony and accuracy, trimming the predictor set from 244 to 29 bands. Informative wavelengths clustered in the near-infrared and red-edge regions, which are linked to chlorophyll loss and canopy water stress. The best model, RPLS with optimal preprocessing and variable selection based on the rPLS method, showed high predictive accuracy, achieving a cross-validated R2 of 0.823 and cross-validated RMSE of 7.452, demonstrating its effectiveness for detecting and quantifying BLS. We also explored the spectral overlap with Sentinel-2 bands, showing how UAV-derived maps can nest within satellite mosaics to link plot-level scouting to landscape-scale surveillance. Together, these results lay a practical foundation for breeders to speed the selection of resistant lines and for agronomists to monitor BLS dynamics across multiple spatial scales. Full article
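One of the variable-selection strategies named here, Variable Importance in Projection (VIP), can be computed directly from a fitted PLS model; the sketch below uses the standard VIP formula with synthetic spectra and a hypothetical severity signal, not the study's data or its tuned RPLS model:

```python
# Sketch: PLS regression plus VIP scores (standard formula) for band selection.
# Data and the target relationship are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
X = rng.random((90, 244))                                    # 90 plots x 244 bands (synthetic)
y = 40 * X[:, 120] + 30 * X[:, 200] + rng.normal(0, 2, 90)   # hypothetical severity signal

pls = PLSRegression(n_components=5).fit(X, y)
T = pls.transform(X)                              # latent scores  (n_samples, n_components)
W = pls.x_weights_                                # band weights   (n_bands, n_components)
q = pls.y_loadings_.ravel()                       # y loadings per component

ss = (q ** 2) * np.sum(T ** 2, axis=0)            # y variance explained per component
vip = np.sqrt(X.shape[1] * ((W / np.linalg.norm(W, axis=0)) ** 2 @ ss) / ss.sum())
print("bands with VIP > 1:", np.flatnonzero(vip > 1.0)[:10], "...")
```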

29 pages, 6375 KB  
Article
“Ground–Aerial–Satellite” Atmospheric Correction Method Based on UAV Hyperspectral Data for Coastal Waters
by Xinyuan Su, Jianyong Cui, Jinying Zhang, Jie Guo, Mingming Xu and Wenwen Gao
Remote Sens. 2025, 17(16), 2768; https://doi.org/10.3390/rs17162768 - 9 Aug 2025
Viewed by 833
Abstract
In ocean color remote sensing, most of the radiative energy received by sensors comes from the atmosphere, requiring highly accurate atmospheric correction. Although atmospheric correction models based on ground measurements—especially the Ground-Aerial-Satellite Atmospheric Correction (GASAC) method that integrates multi-scale synchronous data—are theoretically optimal, their application in nearshore areas is limited by the lack of synchronous samples, pixel mismatches, and nonlinear atmospheric effects. This study focuses on Tangdao Bay in Qingdao, Shandong Province, China, and proposes an innovative GASAC method for nearshore waters using synchronized surface spectrometer data and UAV hyperspectral imagery collected during Sentinel-2 satellite overpasses. The method first resolves pixel mismatch issues in UAV data through Pixel-by-Pixel Matching (MPP) and applies the Empirical Line Model (ELM) for high-accuracy ground-aerial atmospheric correction. Then, based on spectrally unified UAV and satellite data, a large amount of high-quality spatial atmospheric reference data is obtained. Finally, a Transformer model optimized by an Exponential-Trigonometric Optimization (ETO) algorithm is used to fit nonlinear atmospheric effects and perform aerial-to-satellite correction, forming a stepwise GASAC framework. The results show that GASAC achieves high accuracy and good generalization in local areas, with predicted remote sensing reflectance reaching R2 = 0.962 and RMSE = 12.54 × 10−4 sr−1, improving by 5.2% and 23.5%, respectively, over the latest deep learning baseline. In addition, the corrected data achieved R2 = 0.866 in a Chl-a retrieval model based on in situ measurements, demonstrating strong application potential. This study offers a precise and generalizable atmospheric correction method for satellite imagery in nearshore water quality monitoring, with important value for coastal aquatic ecological sensing. Full article
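The Empirical Line Model (ELM) step in this workflow is a per-band linear fit between at-sensor values and ground-measured reflectance of reference targets, applied image-wide; a minimal sketch with made-up numbers (not the study's targets or coefficients):

```python
# Minimal Empirical Line Model (ELM) sketch: per-band gain/offset from ground
# reference targets, then broadcast over an image cube. All values are synthetic.
import numpy as np

def fit_elm(dn_targets, refl_targets):
    """Least-squares gain/offset per band from reference targets measured on the ground."""
    gains, offsets = [], []
    for b in range(dn_targets.shape[1]):
        g, o = np.polyfit(dn_targets[:, b], refl_targets[:, b], deg=1)
        gains.append(g); offsets.append(o)
    return np.array(gains), np.array(offsets)

# Three ground reference targets x 4 bands (hypothetical digital numbers / reflectance).
dn = np.array([[820, 900, 1010, 1200], [2100, 2300, 2500, 2700], [3900, 4100, 4300, 4500]], float)
refl = np.array([[0.05, 0.06, 0.07, 0.08], [0.22, 0.24, 0.26, 0.28], [0.45, 0.47, 0.49, 0.51]])

gain, offset = fit_elm(dn, refl)
image = np.full((2, 2, 4), 2100.0)                # tiny dummy image cube (rows, cols, bands)
corrected = image * gain + offset                 # broadcast per-band correction
print(corrected[0, 0].round(3))
```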

19 pages, 4142 KB  
Article
Onboard Real-Time Hyperspectral Image Processing System Design for Unmanned Aerial Vehicles
by Ruifan Yang, Min Huang, Wenhao Zhao, Zixuan Zhang, Yan Sun, Lulu Qian and Zhanchao Wang
Sensors 2025, 25(15), 4822; https://doi.org/10.3390/s25154822 - 5 Aug 2025
Viewed by 1044
Abstract
This study proposes and implements a dual-processor FPGA-ARM architecture to resolve the critical tension between massive data volumes and real-time processing demands in UAV-borne hyperspectral imaging. The integrated system incorporates a shortwave infrared hyperspectral camera, an IMU, a control module, a heterogeneous computing core, and SATA SSD storage. Through hardware-level task partitioning—using the FPGA for high-speed data buffering and the ARM for core computational processing—it achieves a real-time end-to-end acquisition–storage–processing–display pipeline. The compact integrated device weighs merely 6 kg and consumes 40 W, making it suitable for airborne platforms. Experimental validation confirms the system’s ability to store over 200 frames per second (at 640 × 270 resolution, matching the camera’s maximum frame rate), its quick-look imaging capability, and its real-time processing efficacy, demonstrated via relative radiometric correction tasks (processing 5000 image frames within 1000 ms). This framework provides an effective technical solution to hyperspectral data processing bottlenecks on UAV platforms in dynamic scenario applications. Future work includes actual flight deployment to verify performance in operational environments. Full article
(This article belongs to the Section Sensing and Imaging)
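The demonstration workload cited above, relative radiometric correction, is commonly implemented as a per-pixel dark/white reference normalisation; the sketch below shows that generic form in NumPy (the abstract does not describe the onboard FPGA/ARM implementation, and all reference values here are synthetic):

```python
# Generic relative radiometric correction sketch (dark/white reference normalisation).
# Not the onboard implementation; all arrays are synthetic placeholders.
import numpy as np

def relative_radiometric_correction(frames, dark_ref, white_ref):
    """frames: (n_frames, rows, cols) raw data; dark/white refs: (rows, cols)."""
    return (frames - dark_ref) / np.clip(white_ref - dark_ref, 1e-6, None)

rng = np.random.default_rng(4)
dark = rng.uniform(90, 110, (640, 270))           # matches the 640 x 270 frame size cited
white = rng.uniform(3800, 4200, (640, 270))
raw = rng.uniform(100, 4000, (50, 640, 270))      # a short burst of raw frames
print(relative_radiometric_correction(raw, dark, white).shape)
```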

19 pages, 5891 KB  
Article
Potential of Multi-Source Multispectral vs. Hyperspectral Remote Sensing for Winter Wheat Nitrogen Monitoring
by Xiaokai Chen, Yuxin Miao, Krzysztof Kusnierek, Fenling Li, Chao Wang, Botai Shi, Fei Wu, Qingrui Chang and Kang Yu
Remote Sens. 2025, 17(15), 2666; https://doi.org/10.3390/rs17152666 - 1 Aug 2025
Viewed by 673
Abstract
Timely and accurate monitoring of crop nitrogen (N) status is essential for precision agriculture. UAV-based hyperspectral remote sensing offers high-resolution data for estimating plant nitrogen concentration (PNC), but its cost and complexity limit large-scale application. This study compares the performance of UAV hyperspectral data (S185 sensor) with simulated multispectral data from DJI Phantom 4 Multispectral (P4M), PlanetScope (PS), and Sentinel-2A (S2) in estimating winter wheat PNC. Spectral data were collected across six growth stages over two seasons and resampled to match the spectral characteristics of the three multispectral sensors. Three variable selection strategies (one-dimensional (1D) spectral reflectance, optimized two-dimensional (2D), and three-dimensional (3D) spectral indices) were combined with Random Forest Regression (RFR), Support Vector Machine Regression (SVMR), and Partial Least Squares Regression (PLSR) to build PNC prediction models. Results showed that, while hyperspectral data yielded slightly higher accuracy, optimized multispectral indices, particularly from PS and S2, achieved comparable performance. Among models, SVM and RFR showed consistent effectiveness across strategies. These findings highlight the potential of low-cost multispectral platforms for practical crop N monitoring. Future work should validate these models using real satellite imagery and explore multi-source data fusion with advanced learning algorithms. Full article
(This article belongs to the Special Issue Perspectives of Remote Sensing for Precision Agriculture)
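The spectral resampling step described in this abstract — simulating multispectral bands from hyperspectral reflectance — can be approximated by averaging narrow bands within each broad band; the sketch below uses hypothetical band ranges, not the actual P4M/PlanetScope/Sentinel-2 spectral response functions:

```python
# Sketch of resampling hyperspectral spectra to broader multispectral bands by
# simple unweighted averaging; band ranges are hypothetical placeholders.
import numpy as np

def resample_to_bands(wavelengths, reflectance, band_ranges):
    """reflectance: (n_samples, n_wavelengths); band_ranges: list of (lo, hi) in nm."""
    out = []
    for lo, hi in band_ranges:
        mask = (wavelengths >= lo) & (wavelengths <= hi)
        out.append(reflectance[:, mask].mean(axis=1))   # average the narrow bands in range
    return np.column_stack(out)

wl = np.arange(450, 951, 4)                             # synthetic 450-950 nm hyperspectral grid
spectra = np.random.default_rng(5).random((10, wl.size))
broad = resample_to_bands(wl, spectra, [(450, 515), (525, 600), (630, 690), (760, 900)])
print(broad.shape)                                      # (10, 4) simulated multispectral bands
```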

17 pages, 4557 KB  
Article
Potential of LiDAR and Hyperspectral Sensing for Overcoming Challenges in Current Maritime Ballast Tank Corrosion Inspection
by Sergio Pallas Enguita, Jiajun Jiang, Chung-Hao Chen, Samuel Kovacic and Richard Lebel
Electronics 2025, 14(15), 3065; https://doi.org/10.3390/electronics14153065 - 31 Jul 2025
Viewed by 575
Abstract
Corrosion in maritime ballast tanks is a major driver of maintenance costs and operational risks for maritime assets. Inspections are hampered by complex geometries, hazardous conditions, and the limitations of conventional methods, particularly visual assessment, which struggles with subjectivity, accessibility, and early detection, especially under coatings. This paper critically examines these challenges and explores the potential of Light Detection and Ranging (LiDAR) and Hyperspectral Imaging (HSI) to form the basis of improved inspection approaches. We discuss LiDAR’s utility for accurate 3D mapping and for providing a spatial framework, and HSI’s potential for objective material identification and surface characterization based on spectral signatures across the 400–1000 nm wavelength range (visible and near-infrared). Preliminary findings from laboratory tests are presented, demonstrating the basic feasibility of HSI for differentiating surface conditions (corrosion, coatings, bare metal) and relative coating thickness, alongside LiDAR’s capability for detailed geometric capture. Although these results do not represent a deployable system, they highlight how LiDAR and HSI could address key limitations of current practices and suggest promising directions for future research into integrated sensor-based corrosion assessment strategies. Full article
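As an illustration of how 400–1000 nm spectra can be compared against reference signatures to differentiate surface conditions, here is a spectral angle mapper (SAM) sketch with made-up reference shapes; the abstract does not state which analysis the authors actually used:

```python
# Illustrative spectral angle mapper (SAM) sketch; reference signatures and the
# measured spectrum are made up, and SAM is only one common comparison method.
import numpy as np

def spectral_angle(spectrum, reference):
    """Angle (radians) between a measured spectrum and a reference signature."""
    cos = np.dot(spectrum, reference) / (np.linalg.norm(spectrum) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

rng = np.random.default_rng(6)
bands = np.linspace(400, 1000, 150)
refs = {"corrosion": np.exp(-(bands - 900) ** 2 / 4e4),     # made-up reference shapes
        "coating":   np.exp(-(bands - 550) ** 2 / 2e4),
        "bare_metal": np.linspace(0.3, 0.6, bands.size)}
measured = refs["corrosion"] + rng.normal(0, 0.02, bands.size)
angles = {name: spectral_angle(measured, ref) for name, ref in refs.items()}
print(min(angles, key=angles.get), {k: round(v, 3) for k, v in angles.items()})
```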
