Search Results (90)

Search Parameters:
Keywords = UAV-based hyperspectral imagery

19 pages, 5340 KiB  
Article
Potential of Multi-Source Multispectral vs. Hyperspectral Remote Sensing for Winter Wheat Nitrogen Monitoring
by Xiaokai Chen, Yuxin Miao, Krzysztof Kusnierek, Fenling Li, Chao Wang, Botai Shi, Fei Wu, Qingrui Chang and Kang Yu
Remote Sens. 2025, 17(15), 2666; https://doi.org/10.3390/rs17152666 - 1 Aug 2025
Abstract
Timely and accurate monitoring of crop nitrogen (N) status is essential for precision agriculture. UAV-based hyperspectral remote sensing offers high-resolution data for estimating plant nitrogen concentration (PNC), but its cost and complexity limit large-scale application. This study compares the performance of UAV hyperspectral data (S185 sensor) with simulated multispectral data from DJI Phantom 4 Multispectral (P4M), PlanetScope (PS), and Sentinel-2A (S2) in estimating winter wheat PNC. Spectral data were collected across six growth stages over two seasons and resampled to match the spectral characteristics of the three multispectral sensors. Three variable selection strategies (one-dimensional (1D) spectral reflectance, optimized two-dimensional (2D), and three-dimensional (3D) spectral indices) were combined with Random Forest Regression (RFR), Support Vector Machine Regression (SVMR), and Partial Least Squares Regression (PLSR) to build PNC prediction models. Results showed that, while hyperspectral data yielded slightly higher accuracy, optimized multispectral indices, particularly from PS and S2, achieved comparable performance. Among models, SVM and RFR showed consistent effectiveness across strategies. These findings highlight the potential of low-cost multispectral platforms for practical crop N monitoring. Future work should validate these models using real satellite imagery and explore multi-source data fusion with advanced learning algorithms. Full article
(This article belongs to the Special Issue Perspectives of Remote Sensing for Precision Agriculture)
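
The study's optimized two-dimensional spectral indices are found by exhaustively pairing bands and keeping the combination most strongly correlated with PNC, which is then passed to a regressor such as RFR. Below is a minimal sketch of that idea, assuming a normalized-difference index form and randomly generated placeholder data; the band count and sample size are illustrative, not the study's dataset.

```python
import numpy as np
from itertools import combinations
from sklearn.ensemble import RandomForestRegressor

def best_nd_index(R, pnc):
    """Exhaustive search for the band pair whose normalized-difference
    index correlates most strongly with plant N concentration (PNC).
    R: (n_samples, n_bands) reflectance; pnc: (n_samples,)."""
    best_pair, best_r = None, 0.0
    for i, j in combinations(range(R.shape[1]), 2):
        ndsi = (R[:, i] - R[:, j]) / (R[:, i] + R[:, j] + 1e-9)
        r = abs(np.corrcoef(ndsi, pnc)[0, 1])
        if r > best_r:
            best_pair, best_r = (i, j), r
    return best_pair, best_r

# Hypothetical data: 120 plots, 4 multispectral bands
R = np.random.rand(120, 4)
pnc = np.random.rand(120) * 3

(bi, bj), r = best_nd_index(R, pnc)
ndsi = (R[:, bi] - R[:, bj]) / (R[:, bi] + R[:, bj] + 1e-9)
rfr = RandomForestRegressor(n_estimators=300).fit(ndsi.reshape(-1, 1), pnc)
```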
25 pages, 5776 KiB  
Article
Early Detection of Herbicide-Induced Tree Stress Using UAV-Based Multispectral and Hyperspectral Imagery
by Russell Main, Mark Jayson B. Felix, Michael S. Watt and Robin J. L. Hartley
Forests 2025, 16(8), 1240; https://doi.org/10.3390/f16081240 - 28 Jul 2025
Viewed by 296
Abstract
There is growing interest in the use of herbicide for the silvicultural practice of tree thinning (i.e., chemical thinning or e-thinning) in New Zealand. Potential benefits of this approach include improved stability of the standing crop in high winds, and safer and lower-cost operations, particularly in steep or remote terrain. As uptake grows, tools for monitoring treatment effectiveness, particularly during the early stages of stress, will become increasingly important. This study evaluated the use of UAV-based multispectral and hyperspectral imagery to detect early herbicide-induced stress in a nine-year-old radiata pine (Pinus radiata D. Don) plantation, based on temporal changes in crown spectral signatures following treatment with metsulfuron-methyl. A staggered-treatment design was used, in which herbicide was applied to a subset of trees in six blocks over several weeks. This staggered design allowed a single UAV acquisition to capture imagery of trees at varying stages of herbicide response, with treated trees ranging from 13 to 47 days after treatment (DAT). Visual canopy assessments were carried out to validate the onset of visible symptoms. Spectral changes either preceded or coincided with the development of significant visible canopy symptoms, which started at 25 DAT. Classification models developed using narrow band hyperspectral indices (NBHI) allowed robust discrimination of treated and non-treated trees as early as 13 DAT (F1 score = 0.73), with stronger results observed at 18 DAT (F1 score = 0.78). Models that used multispectral indices were able to classify treatments with a similar accuracy from 18 DAT (F1 score = 0.78). Across both sensors, pigment-sensitive indices, particularly variants of the Photochemical Reflectance Index, consistently featured among the top predictors at all time points. These findings address a key knowledge gap by demonstrating practical, remote sensing-based solutions for monitoring and characterising herbicide-induced stress in field-grown radiata pine. The 13-to-18 DAT early detection window provides an operational baseline and a target for future research seeking to refine UAV-based detection of chemical thinning. Full article
(This article belongs to the Section Forest Health)
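
The abstract singles out Photochemical Reflectance Index (PRI) variants as consistently strong predictors of treatment status. A minimal sketch of that workflow follows, assuming crown-mean spectra, the standard PRI form (R531 − R570)/(R531 + R570), and a generic random-forest classifier standing in for the paper's NBHI models; all data below are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def pri(wavelengths, spectrum):
    """Photochemical Reflectance Index: (R531 - R570) / (R531 + R570),
    taken at the bands closest to 531 nm and 570 nm."""
    r531 = spectrum[np.argmin(np.abs(wavelengths - 531))]
    r570 = spectrum[np.argmin(np.abs(wavelengths - 570))]
    return (r531 - r570) / (r531 + r570)

# Synthetic crown-mean spectra: 200 crowns x 150 bands (450-950 nm)
wl = np.linspace(450, 950, 150)
spectra = np.random.rand(200, 150)
labels = np.random.randint(0, 2, 200)           # 1 = herbicide-treated

X = np.array([[pri(wl, s)] for s in spectra])   # single-index feature
Xtr, Xte, ytr, yte = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200).fit(Xtr, ytr)
print("F1:", f1_score(yte, clf.predict(Xte)))
```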

27 pages, 2978 KiB  
Article
Dynamic Monitoring and Precision Fertilization Decision System for Agricultural Soil Nutrients Using UAV Remote Sensing and GIS
by Xiaolong Chen, Hongfeng Zhang and Cora Un In Wong
Agriculture 2025, 15(15), 1627; https://doi.org/10.3390/agriculture15151627 - 27 Jul 2025
Viewed by 311
Abstract
We propose a dynamic monitoring and precision fertilization decision system for agricultural soil nutrients, integrating UAV remote sensing and GIS technologies to address the limitations of traditional soil nutrient assessment methods. The proposed method combines multi-source data fusion, including hyperspectral and multispectral UAV imagery with ground sensor data, to achieve high-resolution spatial and spectral analysis of soil nutrients. Real-time data processing algorithms enable rapid updates of soil nutrient status, while a time-series dynamic model captures seasonal variations and crop growth stage influences, improving prediction accuracy (RMSE reductions of 43–70% for nitrogen, phosphorus, and potassium compared to conventional laboratory-based methods and satellite NDVI approaches). The experimental validation compared the proposed system against two conventional approaches: (1) laboratory soil testing with standardized fertilization recommendations and (2) satellite NDVI-based fertilization. Field trials across three distinct agroecological zones demonstrated that the proposed system reduced fertilizer inputs by 18–27% while increasing crop yields by 4–11%, outperforming both conventional methods. Furthermore, an intelligent fertilization decision model generates tailored fertilization plans by analyzing real-time soil conditions, crop demands, and climate factors, with continuous learning enhancing its precision over time. The system also incorporates GIS-based visualization tools, providing intuitive spatial representations of nutrient distributions and interactive functionalities for detailed insights. Our approach significantly advances precision agriculture by automating the entire workflow from data collection to decision-making, reducing resource waste and optimizing crop yields. The integration of UAV remote sensing, dynamic modeling, and machine learning distinguishes this work from conventional static systems, offering a scalable and adaptive framework for sustainable farming practices. Full article
(This article belongs to the Section Agricultural Soils)

32 pages, 5287 KiB  
Article
UniHSFormer X for Hyperspectral Crop Classification with Prototype-Routed Semantic Structuring
by Zhen Du, Senhao Liu, Yao Liao, Yuanyuan Tang, Yanwen Liu, Huimin Xing, Zhijie Zhang and Donghui Zhang
Agriculture 2025, 15(13), 1427; https://doi.org/10.3390/agriculture15131427 - 2 Jul 2025
Viewed by 347
Abstract
Hyperspectral imaging (HSI) plays a pivotal role in modern agriculture by capturing fine-grained spectral signatures that support crop classification, health assessment, and land-use monitoring. However, the transition from raw spectral data to reliable semantic understanding remains challenging—particularly under fragmented planting patterns, spectral ambiguity, and spatial heterogeneity. To address these limitations, we propose UniHSFormer-X, a unified transformer-based framework that reconstructs agricultural semantics through prototype-guided token routing and hierarchical context modeling. Unlike conventional models that treat spectral–spatial features uniformly, UniHSFormer-X dynamically modulates information flow based on class-aware affinities, enabling precise delineation of field boundaries and robust recognition of spectrally entangled crop types. Evaluated on three UAV-based benchmarks—WHU-Hi-LongKou, HanChuan, and HongHu—the model achieves up to 99.80% overall accuracy and 99.28% average accuracy, outperforming state-of-the-art CNN, ViT, and hybrid architectures across both structured and heterogeneous agricultural scenarios. Ablation studies further reveal the critical role of semantic routing and prototype projection in stabilizing model behavior, while parameter surface analysis demonstrates consistent generalization across diverse configurations. Beyond high performance, UniHSFormer-X offers a semantically interpretable architecture that adapts to the spatial logic and compositional nuance of agricultural imagery, representing a forward step toward robust and scalable crop classification. Full article

24 pages, 41032 KiB  
Article
Multi-Parameter Water Quality Inversion in Heterogeneous Inland Waters Using UAV-Based Hyperspectral Data and Deep Learning Methods
by Hongran Li, Nuo Wang, Zixuan Du, Deyu Huang, Mengjie Shi, Zhaoman Zhong and Dongqing Yuan
Remote Sens. 2025, 17(13), 2191; https://doi.org/10.3390/rs17132191 - 25 Jun 2025
Viewed by 350
Abstract
Water quality monitoring is crucial for ecological protection and water resource management. However, traditional monitoring methods suffer from limitations in temporal, spatial, and spectral resolution, which constrain the effective evaluation of urban rivers and multi-scale aquatic systems. To address challenges such as ecological heterogeneity, multi-scale complexity, and data noise, this paper proposes a deep learning framework, TL-Net, based on unmanned aerial vehicle (UAV) hyperspectral imagery, to estimate four water quality parameters: total nitrogen (TN), dissolved oxygen (DO), total suspended solids (TSS), and chlorophyll a (Chla); and to produce their spatial distribution maps. This framework integrates Transformer and long short-term memory (LSTM) networks, introduces a cross-temporal attention mechanism to enhance feature correlation, and incorporates an adaptive feature fusion module for dynamically weighted integration of local and global information. The experimental results demonstrate that TL-Net markedly outperforms conventional machine learning approaches, delivering consistently high predictive accuracy across all evaluated water quality parameters. Specifically, the model achieves an R2 of 0.9938 for TN, a mean absolute error (MAE) of 0.0728 for DO, a root mean square error (RMSE) of 0.3881 for total TSS, and a mean absolute percentage error (MAPE) as low as 0.2568% for Chla. A spatial analysis reveals significant heterogeneity in water quality distribution across the study area, with natural water bodies exhibiting relatively uniform conditions, while the concentrations of TN and TSS are substantially elevated in aquaculture areas due to aquaculture activities. Overall, TL-Net significantly improves multi-parameter water quality prediction, captures fine-scale spatial variability, and offers a robust and scalable solution for inland aquatic ecosystem monitoring. Full article
(This article belongs to the Section Environmental Remote Sensing)
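
TL-Net couples an LSTM (local, band-sequence features) with attention (global context) and fuses the two with adaptive weighting. The PyTorch sketch below captures only that general pattern, assuming each pixel's spectrum is treated as a sequence; the layer sizes, gating scheme, and data are placeholders rather than the authors' architecture.

```python
import torch
import torch.nn as nn

class HybridSpectralRegressor(nn.Module):
    """LSTM + self-attention with gated feature fusion, regressing one
    water-quality parameter per pixel (a sketch, not TL-Net itself)."""
    def __init__(self, hidden=64, heads=4):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.gate = nn.Linear(2 * hidden, 1)       # adaptive fusion weight
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                          # x: (batch, n_bands)
        seq = x.unsqueeze(-1)                      # bands as a sequence
        local, _ = self.lstm(seq)                  # local spectral features
        glob, _ = self.attn(local, local, local)   # global context
        w = torch.sigmoid(self.gate(torch.cat([local, glob], dim=-1)))
        fused = w * local + (1 - w) * glob         # weighted fusion
        return self.head(fused.mean(dim=1)).squeeze(-1)

model = HybridSpectralRegressor()
tn_pred = model(torch.rand(8, 270))                # 8 pixels, 270 bands
```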

22 pages, 6402 KiB  
Article
A Study on Airborne Hyperspectral Tree Species Classification Based on the Synergistic Integration of Machine Learning and Deep Learning
by Dabing Yang, Jinxiu Song, Chaohua Huang, Fengxin Yang, Yiming Han and Ruirui Wang
Forests 2025, 16(6), 1032; https://doi.org/10.3390/f16061032 - 19 Jun 2025
Viewed by 417
Abstract
Against the backdrop of global climate change and increasing ecological pressure, the refined monitoring of forest resources and accurate tree species identification have become essential tasks for sustainable forest management. Hyperspectral remote sensing, with its high spectral resolution, shows great promise in tree species classification. However, traditional methods face limitations in extracting joint spatial–spectral features, particularly in complex forest environments, due to the “curse of dimensionality” and the scarcity of labeled samples. To address these challenges, this study proposes a synergistic classification approach that combines the spatial feature extraction capabilities of deep learning with the generalization advantages of machine learning. Specifically, a 2D convolutional neural network (2DCNN) is integrated with a support vector machine (SVM) classifier to enhance classification accuracy and model robustness under limited sample conditions. Using UAV-based hyperspectral imagery collected from a typical plantation area in Fuzhou City, Jiangxi Province, and ground-truth data for labeling, a highly imbalanced sample split strategy (1:99) is adopted. The 2DCNN is further evaluated in conjunction with six classifiers—CatBoost, decision tree (DT), k-nearest neighbors (KNN), LightGBM, random forest (RF), and SVM—for comparison. The 2DCNN-SVM combination is identified as the optimal model. In the classification of Masson pine, Chinese fir, and eucalyptus, this method achieves an overall accuracy (OA) of 97.56%, average accuracy (AA) of 97.47%, and a Kappa coefficient of 0.9665, significantly outperforming traditional approaches. The results demonstrate that the 2DCNN-SVM model offers superior feature representation and generalization capabilities in high-dimensional, small-sample scenarios, markedly improving tree species classification accuracy in complex forest settings. This study validates the model’s potential for application in small-sample forest remote sensing and provides theoretical support and technical guidance for high-precision tree species identification and dynamic forest monitoring. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
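
The 2DCNN-SVM pairing works by training a small convolutional network on image patches and then fitting an SVM on the learned features in place of a softmax layer. A minimal sketch follows; the patch size, band count, and architecture are placeholders, and the CNN training loop (normally run before feature extraction) is omitted.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

class PatchCNN(nn.Module):
    """Small 2D CNN over hyperspectral patches; its output vector is used
    as a feature for an SVM classifier."""
    def __init__(self, n_bands, n_feat=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(n_bands, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_feat), nn.ReLU())

    def forward(self, x):
        return self.body(x)

# Synthetic 9x9 patches with 120 bands for 300 labeled pixels (3 species)
patches = torch.rand(300, 120, 9, 9)
labels = np.random.randint(0, 3, 300)

cnn = PatchCNN(n_bands=120).eval()                 # assume already trained
with torch.no_grad():
    feats = cnn(patches).numpy()                   # deep spatial-spectral features
svm = SVC(kernel="rbf", C=10).fit(feats, labels)
```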

26 pages, 4992 KiB  
Article
NDVI and Beyond: Vegetation Indices as Features for Crop Recognition and Segmentation in Hyperspectral Data
by Andreea Nițu, Corneliu Florea, Mihai Ivanovici and Andrei Racoviteanu
Sensors 2025, 25(12), 3817; https://doi.org/10.3390/s25123817 - 18 Jun 2025
Viewed by 537
Abstract
Vegetation indices have long been central to vegetation monitoring through remote sensing. The most popular one is the Normalized Difference Vegetation Index (NDVI), yet many vegetation indices (VIs) exist. In this paper, we investigate their distinctiveness and discriminative power in the context of applications for agriculture based on hyperspectral data. More precisely, this paper merges two complementary perspectives: an unsupervised analysis with PRISMA satellite imagery to explore whether these indices are truly distinct in practice and a supervised classification over UAV hyperspectral data. We assess their discriminative power, statistical correlations, and perceptual similarities. Our findings suggest that while many VIs have a certain correlation with the NDVI, meaningful differences emerge depending on landscape and application context, thus supporting their effectiveness as discriminative features usable in remote crop segmentation and recognition applications. Full article
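
The unsupervised part of the analysis reduces to computing many vegetation indices from the same reflectance cube and measuring how strongly each co-varies with the NDVI. A short sketch of that computation, using a synthetic cube and two common indices (GNDVI and SAVI) as stand-ins for the paper's full index set:

```python
import numpy as np

def nearest(wl, cube, nm):
    """Band slice closest to nm; cube has shape (rows, cols, bands)."""
    return cube[..., np.argmin(np.abs(wl - nm))]

# Synthetic hyperspectral cube and wavelength vector
wl = np.linspace(400, 1000, 200)
cube = np.random.rand(64, 64, 200)

red = nearest(wl, cube, 670)
nir = nearest(wl, cube, 800)
green = nearest(wl, cube, 550)

ndvi = (nir - red) / (nir + red + 1e-9)
gndvi = (nir - green) / (nir + green + 1e-9)
savi = 1.5 * (nir - red) / (nir + red + 0.5)       # soil-adjusted, L = 0.5

for name, vi in [("GNDVI", gndvi), ("SAVI", savi)]:
    r = np.corrcoef(ndvi.ravel(), vi.ravel())[0, 1]
    print(f"Pearson r(NDVI, {name}) = {r:.3f}")
```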

17 pages, 9972 KiB  
Article
Improving Agricultural Efficiency of Dry Farmlands by Integrating Unmanned Aerial Vehicle Monitoring Data and Deep Learning
by Tung-Ching Su, Tsung-Chiang Wu and Hsin-Ju Chen
Land 2025, 14(6), 1179; https://doi.org/10.3390/land14061179 - 29 May 2025
Viewed by 431
Abstract
This study aimed to address the challenge of monitoring and managing soil moisture in dryland agriculture with supplemental irrigation under increasingly extreme climate conditions. Using unmanned aerial vehicles (UAVs) equipped with hyperspectral sensors, we collected imagery of wheat fields on Kinmen Island at various growth stages. The Modified Perpendicular Drought Index (MPDI) was calculated to quantify soil drought conditions. Simultaneously, soil samples were collected to measure the actual soil moisture content. These datasets were used to develop a Gradient Boosting Regression (GBR) model to estimate soil moisture across the entire field. The resulting AI-based model can guide decisions on the timing and scale of supplemental irrigation, ensuring water is applied only when needed during crop growth. Furthermore, MPDI values and wheat spike samples were used to construct another GBR model for yield prediction. When applying MPDI values from multispectral imagery collected at a similar stage in the following year, the model achieved a prediction accuracy of over 90%. The proposed approach offers a reliable solution for enhancing the resilience and productivity of dryland crops under climate stress and demonstrates the potential of integrating remote sensing and machine learning in precision water management. Full article
(This article belongs to the Special Issue Challenges and Future Trends in Land Cover/Use Monitoring)
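
At its core the soil-moisture step is a regression from the MPDI at sampled points to measured moisture, which is then applied to every pixel. A minimal sketch with scikit-learn's Gradient Boosting Regression and synthetic point samples (the real study used field-collected soil moisture):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Synthetic paired samples: MPDI at sampling points vs. measured soil moisture (%)
mpdi = np.random.rand(80, 1)
moisture = 30 - 20 * mpdi.ravel() + np.random.normal(0, 2, 80)

gbr = GradientBoostingRegressor(n_estimators=200, learning_rate=0.05, max_depth=3)
print("CV R^2:", cross_val_score(gbr, mpdi, moisture, cv=5).mean())
gbr.fit(mpdi, moisture)
moisture_map = gbr.predict(mpdi)    # in practice, applied to per-pixel MPDI values
```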

22 pages, 2463 KiB  
Article
Early Detection of Pine Wilt Disease by Combining Pigment and Moisture Content Indices Using UAV-Based Hyperspectral Imagery
by Rui Hou, Biyao Zhang, Guofei Fang, Sihan Yang, Lei Guo, Wenjiang Huang, Jing Yao, Quanjun Jiao, Hong Sun and Jiayu Yan
Remote Sens. 2025, 17(11), 1833; https://doi.org/10.3390/rs17111833 - 23 May 2025
Viewed by 591
Abstract
Pine wilt disease (PWD) is characterized by rapid transmission, high mortality rates, and difficulty in control, resulting in severe and destructive impacts on both the ecological environment and socioeconomic development in China. Due to the lack of significant symptoms in infected trees during the early stages of the disease, improving the accuracy of early detection has become a major challenge in PWD monitoring. In recent years, the rapid advancement of UAV-based hyperspectral remote sensing technology has provided a promising approach for the early detection of PWD. In this study, we selected classic canopy pigment and moisture content indices to construct a set of recognition indicators. The optimal canopy pigment index (CI) and canopy moisture content index (WASCOSBNDI) were then chosen through significance testing and derivative analysis. Based on the asynchronous variations in canopy moisture and pigment content during the development of PWD, the CI, WASCOSBNDI, and CI-WASCOSBNDI models were developed using a multi-threshold segmentation method to identify trees at different stages of infection. The results demonstrate that the CI-WASCOSBNDI model achieved the highest accuracy in detecting infection stages, with an overall classification accuracy of 92.78%. In comparison, the CI and WASCOSBNDI models achieved classification accuracies of 81.34% and 89.84%, respectively. For the early stage infected trees, which are the primary focus of this study, the CI-WASCOSBNDI model exhibited the best performance with an accuracy rate exceeding 70%, significantly outperforming the other models. Furthermore, the timing of infection in early stage trees significantly influenced the model’s detection accuracy, with trees closer to the disease outbreak period being more easily identified. These findings provide a reference for the accurate early monitoring of PWD using UAV hyperspectral imagery. Full article
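
The CI-WASCOSBNDI model assigns each tree to an infection stage by applying thresholds to a pigment index and a moisture index jointly, exploiting the fact that canopy moisture declines before pigments do. The toy rule below only illustrates the multi-threshold idea; the threshold values and stage boundaries are invented for the example, not the paper's calibrated ones.

```python
def infection_stage(pigment_idx, moisture_idx,
                    t_pigment=(0.35, 0.25), t_moisture=(0.60, 0.45)):
    """Toy multi-threshold rule over a pigment index and a moisture index;
    all threshold values here are illustrative placeholders."""
    if pigment_idx >= t_pigment[0] and moisture_idx >= t_moisture[0]:
        return "healthy"
    if moisture_idx < t_moisture[0] and pigment_idx >= t_pigment[1]:
        return "early stage"        # moisture drops before pigments
    if pigment_idx < t_pigment[1] and moisture_idx >= t_moisture[1]:
        return "middle stage"
    return "late stage"

print(infection_stage(0.40, 0.65))  # healthy
print(infection_stage(0.38, 0.50))  # early stage
```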

24 pages, 6894 KiB  
Article
Early Yield Prediction of Oilseed Rape Using UAV-Based Hyperspectral Imaging Combined with Machine Learning Algorithms
by Hongyan Zhu, Chengzhi Lin, Zhihao Dong, Jun-Li Xu and Yong He
Agriculture 2025, 15(10), 1100; https://doi.org/10.3390/agriculture15101100 - 19 May 2025
Cited by 2 | Viewed by 579
Abstract
Oilseed rape yield critically reflects varietal superiority. Rapid field-scale estimation enables efficient high-throughput breeding. This study evaluates unmanned aerial vehicle (UAV) hyperspectral imagery’s potential for yield prediction at the pod stage by utilizing wavelength selection and vegetation indices. Meanwhile, optimized feature selection algorithms identified effective wavelengths (EWs) and vegetation indices (VIs) for yield estimation. The optimal yield estimation models based on EWs and VIs were established, respectively, by using multiple linear regression (MLR), partial least squares regression (PLSR), extreme learning machine (ELM), and a least squares support vector machine (LS-SVM). The main results were as follows: (i) The yield prediction of oilseed rape using EWs showed better prediction and robustness compared to the full-spectral model. In particular, the competitive adaptive reweighted sampling–extreme learning machine (CARS-ELM) model (Rpre = 0.8122, RMSEP = 170.4 kg/hm2) achieved the best prediction performance. (ii) The ELM model (Rpre = 0.7674 and RMSEP = 187.6 kg/hm2), using 14 combined VIs, showed excellent performance. These results indicate that the remote sensing image data obtained from the UAV hyperspectral remote sensing system can be used to enable the high-throughput acquisition of oilseed rape yield information in the field. This study provides technical guidance for the crop yield estimation and high-throughput detection of breeding information. Full article
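
An extreme learning machine fixes a random hidden layer and solves only the output weights in closed form, which is why it trains quickly on the CARS-selected effective wavelengths. A generic ELM sketch (not the paper's CARS-ELM pipeline), with hypothetical wavelength features and yield values:

```python
import numpy as np

class ELMRegressor:
    """Basic extreme learning machine: random hidden layer, ridge-solved
    output weights."""
    def __init__(self, n_hidden=50, alpha=1e-3, seed=0):
        self.n_hidden, self.alpha = n_hidden, alpha
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        A = H.T @ H + self.alpha * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ y)
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Hypothetical: 90 plots, 12 selected effective wavelengths, yield in kg/hm^2
X = np.random.rand(90, 12)
y = 2500 + 800 * X[:, 0] + np.random.normal(0, 100, 90)
elm = ELMRegressor().fit(X[:70], y[:70])
print(elm.predict(X[70:])[:3])
```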

28 pages, 2816 KiB  
Article
Enhancing Urban Understanding Through Fine-Grained Segmentation of Very-High-Resolution Aerial Imagery
by Umamaheswaran Raman Kumar, Toon Goedemé and Patrick Vandewalle
Remote Sens. 2025, 17(10), 1771; https://doi.org/10.3390/rs17101771 - 19 May 2025
Viewed by 706
Abstract
Despite the growing availability of very-high-resolution (VHR) remote sensing imagery, extracting fine-grained urban features and materials remains a complex task. Land use/land cover (LULC) maps generated from satellite imagery often fall short in providing the resolution needed for detailed urban studies. While hyperspectral imagery offers rich spectral information ideal for material classification, its complex acquisition process limits its use on aerial platforms such as manned aircraft and unmanned aerial vehicles (UAVs), reducing its feasibility for large-scale urban mapping. This study explores the potential of using only RGB and LiDAR data from VHR aerial imagery as an alternative for urban material classification. We introduce an end-to-end workflow that leverages a multi-head segmentation network to jointly classify roof and ground materials while also segmenting individual roof components. The workflow includes a multi-offset self-ensemble inference strategy optimized for aerial data and a post-processing step based on digital elevation models (DEMs). In addition, we present a systematic method for extracting roof parts as polygons enriched with material attributes. The study is conducted on six cities in Flanders, Belgium, covering 18 material classes—including rare categories such as green roofs, wood, and glass. The results show a 9.88% improvement in mean intersection over union (mIOU) for building and ground segmentation, and a 3.66% increase in mIOU for material segmentation compared to a baseline pyramid attention network (PAN). These findings demonstrate the potential of RGB and LiDAR data for high-resolution material segmentation in urban analysis. Full article
(This article belongs to the Special Issue Applications of AI and Remote Sensing in Urban Systems II)

27 pages, 12293 KiB  
Article
Estimation of Leaf Chlorophyll Content of Maize from Hyperspectral Data Using E2D-COS Feature Selection, Deep Neural Network, and Transfer Learning
by Riqiang Chen, Lipeng Ren, Guijun Yang, Zhida Cheng, Dan Zhao, Chengjian Zhang, Haikuan Feng, Haitang Hu and Hao Yang
Agriculture 2025, 15(10), 1072; https://doi.org/10.3390/agriculture15101072 - 16 May 2025
Viewed by 798
Abstract
Leaf chlorophyll content (LCC) serves as a vital biochemical indicator of photosynthetic activity and nitrogen status, critical for precision agriculture to optimize crop management. While UAV-based hyperspectral sensing offers maize LCC estimation potential, current methods struggle with overlapping spectral bands and suboptimal model accuracy. To address these limitations, we proposed an integrated maize LCC estimation framework combining UAV hyperspectral imagery, simulated hyperspectral data, E2D-COS feature selection, deep neural network (DNN), and transfer learning (TL). The E2D-COS algorithm with simulated data was used to identify structure-resistant spectral bands strongly correlated with maize LCC: Big trumpet stage: 418 nm, 453 nm, 506 nm, 587 nm, 640 nm, 688 nm, and 767 nm; Spinning stage: 418 nm, 453 nm, 541 nm, 559 nm, 688 nm, 723 nm, and 767 nm. Combining the E2D-COS feature selection with TL and DNN significantly improves the estimation accuracy: the R2 of the proposed Maize-LCNet model is improved by 0.06–0.11 and the RMSE is reduced by 0.57–1.06 g/cm compared with LCNet-field. Compared to the existing studies, this study not only clarifies the spectral bands that are able to estimate maize chlorophyll, but also presents a high-performance, lightweight (fewer input) approach to achieve the accurate estimation of LCC in maize, which can directly support growth monitoring nutrient management at specific growth stages, thus contributing to smart agricultural practices. Full article
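
The transfer-learning step amounts to pretraining the network on simulated spectra and then fine-tuning only part of it on the limited field data, using just the selected bands as inputs. A hedged PyTorch sketch of that fine-tuning stage follows; the layer sizes and field tensors are placeholders, and only the seven big-trumpet-stage bands listed in the abstract are reused.

```python
import torch
import torch.nn as nn

BANDS_NM = [418, 453, 506, 587, 640, 688, 767]   # selected bands (big trumpet stage)

class LCCNet(nn.Module):
    """Small fully connected LCC regressor over the selected bands
    (a sketch, not the authors' Maize-LCNet)."""
    def __init__(self, n_in=len(BANDS_NM)):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(),
                                      nn.Linear(64, 32), nn.ReLU())
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.backbone(x)).squeeze(-1)

# Assume the model was pretrained on simulated spectra; here we freeze the
# backbone and fine-tune only the head on (placeholder) field measurements.
model = LCCNet()
for p in model.backbone.parameters():
    p.requires_grad = False
opt = torch.optim.Adam(model.head.parameters(), lr=1e-3)
x_field, y_field = torch.rand(200, len(BANDS_NM)), torch.rand(200)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x_field), y_field)
    loss.backward()
    opt.step()
```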

20 pages, 11001 KiB  
Article
Investigation of Peanut Leaf Spot Detection Using Superpixel Unmixing Technology for Hyperspectral UAV Images
by Qiang Guan, Shicheng Qiao, Shuai Feng and Wen Du
Agriculture 2025, 15(6), 597; https://doi.org/10.3390/agriculture15060597 - 11 Mar 2025
Cited by 2 | Viewed by 706
Abstract
Leaf spot disease significantly impacts peanut growth. Timely, effective, and accurate monitoring of leaf spot severity is crucial for high-yield and high-quality peanut production. Hyperspectral technology from unmanned aerial vehicles (UAVs) is widely employed for disease detection in agricultural fields, but the low spatial resolution of imagery affects accuracy. In this study, peanuts with varying levels of leaf spot disease were detected using hyperspectral images from UAVs. Spectral features of crops and backgrounds were extracted using simple linear iterative clustering (SLIC), the homogeneity index, and k-means clustering. Abundance estimation was conducted using fully constrained least squares based on a distance strategy (D-FCLS), and crop regions were extracted through threshold segmentation. Disease severity was determined based on the average spectral reflectance of crop regions, utilizing classifiers such as XGBoost, the MLP, and the GA-SVM. Results indicate that crop spectra extracted using the superpixel-based unmixing method effectively captured spectral variability, leading to more accurate disease detection. By optimizing threshold values, a better balance between completeness and the internal variability of crop regions was achieved, allowing for the precise extraction of crop regions. Compared to other unmixing methods and manual visual interpretation techniques, the proposed method achieved excellent results, with an overall accuracy of 89.08% and a Kappa coefficient of 85.42% for the GA-SVM classifier. This method provides an objective, efficient, and accurate solution for detecting peanut leaf spot disease, offering technical support for field management with promising practical applications. Full article
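
The pipeline groups pixels into superpixels, unmixes each superpixel's mean spectrum against crop and background endmembers, and keeps superpixels whose crop abundance exceeds a threshold. The sketch below approximates this with SLIC and a sum-to-one augmented non-negative least squares in place of the paper's distance-based D-FCLS; the cube, endmembers, and threshold are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import nnls
from skimage.segmentation import slic

def fcls(endmembers, spectrum, w=1e3):
    """Fully constrained unmixing approximated by NNLS with a heavily
    weighted sum-to-one row (standard trick, not the paper's D-FCLS)."""
    E = np.vstack([endmembers.T, w * np.ones(endmembers.shape[0])])
    s = np.append(spectrum, w)
    abundances, _ = nnls(E, s)
    return abundances

# Synthetic cube (rows, cols, bands) and two endmembers: crop, background
cube = np.random.rand(100, 100, 120)
endmembers = np.random.rand(2, 120)

segments = slic(cube, n_segments=500, compactness=0.1, channel_axis=-1)
crop_abundance = np.zeros(segments.max() + 1)
for sp in np.unique(segments):
    mean_spec = cube[segments == sp].mean(axis=0)     # superpixel mean spectrum
    crop_abundance[sp] = fcls(endmembers, mean_spec)[0]
crop_mask = crop_abundance[segments] > 0.5            # threshold segmentation
```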

19 pages, 6455 KiB  
Article
Assessment of Mango Canopy Water Content Through the Fusion of Multispectral Unmanned Aerial Vehicle (UAV) and Sentinel-2 Remote Sensing Data
by Jinlong Liu, Jing Huang, Mengjuan Wu, Tengda Qin, Haoyi Jia, Shaozheng Hao, Jia Jin, Yuqing Huang and Nathsuda Pumijumnong
Forests 2025, 16(1), 167; https://doi.org/10.3390/f16010167 - 17 Jan 2025
Cited by 3 | Viewed by 1059
Abstract
This study proposes an Additive Wavelet Transform (AWT)-based method to fuse Multispectral UAV (MS UAV, 5 cm resolution) and Sentinel-2 satellite imagery (10–20 m resolution), generating 5 cm resolution fused images with a focus on near-infrared and shortwave infrared bands to enhance the accuracy of mango canopy water content monitoring. The fused Sentinel-2 and MS UAV data were validated and calibrated using field-collected hyperspectral data to construct vegetation indices, which were then used with five machine learning (ML) models to estimate Fuel Moisture Content (FMC), Equivalent Water Thickness (EWT), and canopy water content (CWC). The results indicate that the addition of fused Sentinel-2 data significantly improved the estimation accuracy of all parameters compared to using MS UAV data alone, with the Genetic Algorithm Backpropagation Neural Network (GABP) model performing best (R2 = 0.745, 0.859, and 0.702 for FMC, EWT, and CWC, respectively), achieving R2 improvements of 0.066, 0.179, and 0.210. Slope, canopy coverage, and human activities were identified as key factors influencing the spatial variability of FMC, EWT, and CWC, with CWC being the most sensitive to environmental changes, providing a reliable representation of mango canopy water status. Full article

21 pages, 6508 KiB  
Article
NDVI Estimation Throughout the Whole Growth Period of Multi-Crops Using RGB Images and Deep Learning
by Jianliang Wang, Chen Chen, Jiacheng Wang, Zhaosheng Yao, Ying Wang, Yuanyuan Zhao, Yi Sun, Fei Wu, Dongwei Han, Guanshuo Yang, Xinyu Liu, Chengming Sun and Tao Liu
Agronomy 2025, 15(1), 63; https://doi.org/10.3390/agronomy15010063 - 29 Dec 2024
Cited by 4 | Viewed by 2979
Abstract
The Normalized Difference Vegetation Index (NDVI) is an important remote sensing index that is widely used to assess vegetation coverage, monitor crop growth, and predict yields. Traditional NDVI calculation methods often rely on multispectral or hyperspectral imagery, which are costly and complex to operate, thus limiting their applicability in small-scale farms and developing countries. To address these limitations, this study proposes an NDVI estimation method based on low-cost RGB (red, green, and blue) UAV (unmanned aerial vehicle) imagery combined with deep learning techniques. This study utilizes field data from five major crops (cotton, rice, maize, rape, and wheat) throughout their whole growth periods. RGB images were used to extract conventional features, including color indices (CIs), texture features (TFs), and vegetation coverage, while convolutional features (CFs) were extracted using the deep learning network ResNet50 to optimize the model. The results indicate that the model, optimized with CFs, significantly enhanced NDVI estimation accuracy. Specifically, the R2 values for maize, rape, and wheat during their whole growth periods reached 0.99, while those for rice and cotton were 0.96 and 0.93, respectively. Notably, the accuracy improvement in later growth periods was most pronounced for cotton and maize, with average R2 increases of 0.15 and 0.14, respectively, whereas wheat exhibited a more modest improvement of only 0.04. This method leverages deep learning to capture structural changes in crop populations, optimizing conventional image features and improving NDVI estimation accuracy. This study presents an NDVI estimation approach applicable to the whole growth period of common crops, particularly those with significant population variations, and provides a valuable reference for estimating other vegetation indices using low-cost UAV-acquired RGB images. Full article
(This article belongs to the Special Issue Unmanned Farms in Smart Agriculture)
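
The convolutional features (CFs) come from a pretrained ResNet50 used as a fixed feature extractor on the RGB plot images, and those features are then regressed against plot-level NDVI. A minimal sketch, assuming ImageNet-pretrained torchvision weights and a random-forest regressor standing in for the paper's full CI + TF + CF model; the images and NDVI labels are synthetic.

```python
import numpy as np
import torch
import torch.nn as nn
import torchvision.models as models
from sklearn.ensemble import RandomForestRegressor

# ResNet50 with the classification head removed, used as a frozen extractor
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
extractor = nn.Sequential(*list(resnet.children())[:-1]).eval()

# Synthetic RGB plot images (224x224) with plot-mean NDVI labels
rgb = torch.rand(60, 3, 224, 224)
ndvi = np.random.uniform(0.2, 0.9, 60)

with torch.no_grad():
    cf = extractor(rgb).flatten(1).numpy()     # 2048-D convolutional features
reg = RandomForestRegressor(n_estimators=300).fit(cf[:48], ndvi[:48])
print("Predicted NDVI:", reg.predict(cf[48:51]))
```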
