Search Results (120)

Search Parameters:
Keywords = color-based vegetation index

20 pages, 2735 KB  
Article
Leaf Area Estimation in High-Wire Tomato Cultivation Using Plant Body Scanning
by Hiroki Naito, Tokihiro Fukatsu, Kota Shimomoto, Fumiki Hosoi and Tomohiko Ota
AgriEngineering 2025, 7(7), 206; https://doi.org/10.3390/agriengineering7070206 - 1 Jul 2025
Viewed by 687
Abstract
Accurate estimation of the leaf area index (LAI), a key indicator of canopy development and light interception, is essential for improving productivity in greenhouse tomato cultivation. This study presents a non-destructive LAI estimation method using side-view images captured by a vertical scanning system. The system recorded the full vertical profile of tomato plants grown under two deleafing strategies: modifying leaf height (LH) and altering leaf density (LD). Vegetative and leaf areas were extracted using color-based masking and semantic segmentation with the Segment Anything Model (SAM), a general-purpose deep learning tool. Regression models based on leaf or all vegetative pixel counts showed strong correlations with destructively measured LAI, particularly under LH conditions (R2 > 0.85; mean absolute percentage error ≈ 16%). Under LD conditions, accuracy was slightly lower due to occlusion and leaf orientation. Compared with prior 3D-based methods, the proposed 2D approach achieved comparable accuracy while maintaining low cost and a labor-efficient design. However, the system has not been tested in real production, and its generalizability across cultivars, environments, and growth stages remains unverified. This proof-of-concept study highlights the potential of side-view imaging for LAI monitoring and calls for further validation and integration of leaf count estimation.
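A minimal sketch of the two-step idea this abstract describes, color-based vegetation masking followed by pixel-count-to-LAI regression, is shown below. The excess-green threshold, toy numbers, and function names are our illustrative assumptions; the paper itself additionally uses SAM for semantic segmentation.

```python
# Hedged sketch: color-based masking + pixel-count regression (not the authors' code).
import numpy as np
from sklearn.linear_model import LinearRegression

def vegetation_pixel_count(rgb: np.ndarray) -> int:
    """Count green-dominant pixels via a simple excess-green threshold."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    exg = 2 * g - r - b            # excess green, on 0-255 channels
    return int((exg > 20).sum())   # threshold is a placeholder; tune per setup

# One pixel count per side-view scan; LAI from destructive sampling (toy values).
pixel_counts = np.array([[120_000], [95_500], [143_200]])
lai_measured = np.array([2.1, 1.7, 2.6])
model = LinearRegression().fit(pixel_counts, lai_measured)
print(model.predict([[110_000]]))  # estimated LAI for a new scan
```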

22 pages, 3331 KB  
Article
Maize Leaf Area Index Estimation Based on Machine Learning Algorithm and Computer Vision
by Wanna Fu, Zhen Chen, Qian Cheng, Yafeng Li, Weiguang Zhai, Fan Ding, Xiaohui Kuang, Deshan Chen and Fuyi Duan
Agriculture 2025, 15(12), 1272; https://doi.org/10.3390/agriculture15121272 - 12 Jun 2025
Cited by 1 | Viewed by 879
Abstract
Precise estimation of the leaf area index (LAI) is vital for efficient maize growth monitoring and precision farming. Traditional LAI measurement methods are often destructive and labor-intensive, while techniques relying solely on spectral data suffer from limitations such as spectral saturation. To overcome these difficulties, this study integrated computer vision techniques with UAV-based remote sensing data to establish a rapid and non-invasive method for estimating the LAI of maize. Multispectral imagery of maize was acquired via UAV platforms across various phenological stages, and vegetation features were derived based on the Excess Green (ExG) Index and the Hue–Saturation–Value (HSV) color space. LAI standardization was performed through edge detection and the cumulative distribution function. The proposed LAI estimation model, named VisLAI, based solely on visible light imagery, demonstrated high accuracy, with R2 values of 0.84, 0.75, and 0.50, and RMSE values of 0.24, 0.35, and 0.44 across the big trumpet, tasseling–silking, and grain filling stages, respectively. When HSV-based optimization was applied, VisLAI achieved even better performance, with R2 values of 0.92, 0.90, and 0.85, and RMSE values of 0.19, 0.23, and 0.22 at the respective stages. The estimation results were validated against ground-truth data collected using the LAI-2200C plant canopy analyzer and compared with six machine learning algorithms, including Gradient Boosting (GB), Random Forest (RF), Ridge Regression (RR), Support Vector Regression (SVR), and Linear Regression (LR). Among these, GB achieved the best performance, with R2 values of 0.88, 0.88, and 0.65, and RMSE values of 0.22, 0.25, and 0.34. However, VisLAI consistently outperformed all machine learning models, especially during the grain filling stage, demonstrating superior robustness and accuracy. The VisLAI model proposed in this study effectively utilizes UAV-captured visible light imagery and computer vision techniques to achieve accurate, efficient, and non-destructive estimation of maize LAI. It outperforms traditional and machine learning-based approaches and provides a reliable solution for real-world maize growth monitoring and agricultural decision-making.
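The ExG and HSV features at the core of VisLAI can be sketched as follows; the stand-in image, threshold, and helper names are our assumptions, and the paper's edge-detection and CDF standardization steps are omitted.

```python
# Hedged sketch of ExG / HSV feature extraction from a visible-light image.
import numpy as np
from skimage.color import rgb2hsv

def excess_green(rgb: np.ndarray) -> np.ndarray:
    """ExG = 2g - r - b on chromatic (channel-normalized) coordinates."""
    rgb = rgb.astype(float)
    r, g, b = np.moveaxis(rgb / (rgb.sum(axis=-1, keepdims=True) + 1e-9), -1, 0)
    return 2 * g - r - b

img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in UAV tile
exg = excess_green(img)
hsv = rgb2hsv(img)                       # hue/saturation/value for HSV-based tuning
canopy_fraction = (exg > 0.05).mean()    # crude vegetation fraction from ExG
```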

21 pages, 5887 KB  
Article
Meta-Features Extracted from Use of kNN Regressor to Improve Sugarcane Crop Yield Prediction
by Luiz Antonio Falaguasta Barbosa, Ivan Rizzo Guilherme, Daniel Carlos Guimarães Pedronette and Bruno Tisseyre
Remote Sens. 2025, 17(11), 1846; https://doi.org/10.3390/rs17111846 - 25 May 2025
Viewed by 631
Abstract
Accurate crop yield prediction is essential for sugarcane growers, as it enables them to predict harvested biomass, guiding critical decisions regarding acquiring agricultural inputs such as fertilizers and pesticides, the timing and execution of harvest operations, and cane field renewal strategies. This study is based on an experiment conducted by researchers from the Commonwealth Scientific and Industrial Research Organisation (CSIRO), who employed a UAV-mounted LiDAR and multispectral imaging sensors to monitor two sugarcane field trials subjected to varying nitrogen (N) fertilization regimes in the Wet Tropics region of Australia. The predictive performance of models utilizing multispectral features, LiDAR-derived features, and a fusion of both modalities was evaluated against a benchmark model based on the Normalized Difference Vegetation Index (NDVI). This work utilizes the dataset produced by that experiment, incorporating additional regressors and features derived from those collected in the field. Typically, crop yield prediction relies on features derived from direct field observations, gathered either through sensor measurements or manual data collection. However, prediction models can potentially be improved by incorporating new features extracted through regressions executed on the original dataset features. These extracted features, termed meta-features (MFs) in this work, are obtained by applying different regressors to the original features and are incorporated into the dataset as new predictors for further regression analyses to optimize crop yield prediction. This study investigates the potential of generating MFs as an innovation to enhance sugarcane crop yield predictions. MFs were generated from the values produced by different regressors applied to the features collected in the field, allowing an evaluation of which approaches offered superior predictive performance on the dataset. The kNN meta-regressor outperformed the other regressors because it takes advantage of the proximity of MFs, which was verified through a projection in which the dispersion of points can be measured. A comparative analysis is presented using a projection based on the Uniform Manifold Approximation and Projection (UMAP) algorithm, showing that the MFs were more closely grouped than the original features when projected: the MFs formed well-defined clusters, with most points within each group sharing the same color, suggesting greater uniformity in the predicted values. Incorporating these MFs into subsequent regression models improved performance, with adjusted R2 (R̄2) values higher than 0.9 for MF Grad Boost M3, MF GradientBoost M5, and all kNN MFs, and reduced error margins compared to field-measured yield values. The R̄2 values obtained in this work reached above 0.98 for the AdaBoost meta-regressor applied to MFs obtained from kNN regression on five models created by the CSIRO researchers, and around 0.99 for the kNN meta-regressor applied to MFs obtained from kNN regression on the same five models.
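The MF idea resembles stacked generalization: out-of-fold predictions from base regressors are appended to the feature matrix before a final regression. A minimal sketch under that reading, with stand-in data and our own names, is given below.

```python
# Hedged sketch: meta-features (MFs) as out-of-fold base-regressor predictions.
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import GradientBoostingRegressor

X = np.random.rand(100, 8)   # stand-in field/LiDAR/multispectral features
y = np.random.rand(100)      # stand-in measured sugarcane yield

base = {"mf_knn": KNeighborsRegressor(5), "mf_gb": GradientBoostingRegressor()}
mfs = np.column_stack([cross_val_predict(m, X, y, cv=5) for m in base.values()])

X_aug = np.hstack([X, mfs])                  # original features + MFs
meta = KNeighborsRegressor(5).fit(X_aug, y)  # kNN meta-regressor, as in the study
```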

39 pages, 14246 KB  
Article
Comparison of PlanetScope and Sentinel-2 Spectral Channels and Their Alignment via Linear Regression for Enhanced Index Derivation
by Christian Massimiliano Baldin and Vittorio Marco Casella
Geosciences 2025, 15(5), 184; https://doi.org/10.3390/geosciences15050184 - 20 May 2025
Viewed by 2230
Abstract
Prior research has shown that for specific periods, vegetation indices from PlanetScope and Sentinel-2 (used as a reference) must be aligned to benefit from the experience of Sentinel-2 and utilize techniques such as data fusion. Even in the worst-case scenario, histogram matching makes it possible to calibrate PlanetScope indices to achieve the same values as Sentinel-2 (also useful as a proxy). Based on these findings, the authors examined the effectiveness of linear regression in aligning individual bands prior to computing indices, to determine whether the bands are shifted differently. The research was conducted on five important bands: Red, Green, Blue, NIR, and RedEdge. These bands allow for the computation of well-known vegetation indices like NDVI and NDRE, and soil indices like the Iron Oxide Ratio and Coloration Index. Previous research showed that linear regression is not sufficient by itself to align indices in the worst-case scenario. However, this paper demonstrates its efficiency in achieving accurate band alignment. This finding highlights the importance of considering specific scaling requirements for bands obtained from different satellite sensors, such as PlanetScope and Sentinel-2. Contemporary images acquired by the two sensors during May and July demonstrated different behaviors in their bands; however, linear regression can align the datasets even during the problematic month of May.
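A per-band linear alignment like the one studied here reduces to fitting a gain and offset on coincident pixels; the sketch below assumes co-registered, same-resolution reflectance arrays and uses synthetic data.

```python
# Hedged sketch: least-squares gain/offset mapping PlanetScope onto Sentinel-2.
import numpy as np

def fit_band_alignment(ps_band: np.ndarray, s2_band: np.ndarray):
    """Return (gain, offset) so that gain * ps_band + offset ≈ s2_band."""
    A = np.column_stack([ps_band.ravel(), np.ones(ps_band.size)])
    gain, offset = np.linalg.lstsq(A, s2_band.ravel(), rcond=None)[0]
    return gain, offset

ps_red = np.random.rand(100, 100)        # stand-in PlanetScope Red reflectance
s2_red = 0.9 * ps_red + 0.02             # stand-in coincident Sentinel-2 Red
gain, offset = fit_band_alignment(ps_red, s2_red)
ps_red_aligned = gain * ps_red + offset  # align bands before computing NDVI, NDRE, ...
```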

28 pages, 5599 KB  
Article
Multi-Source Feature Fusion Network for LAI Estimation from UAV Multispectral Imagery
by Lulu Zhang, Bo Zhang, Huanhuan Zhang, Wanting Yang, Xinkang Hu, Jianrong Cai, Chundu Wu and Xiaowen Wang
Agronomy 2025, 15(4), 988; https://doi.org/10.3390/agronomy15040988 - 20 Apr 2025
Cited by 2 | Viewed by 935
Abstract
The leaf area index (LAI) is a critical biophysical parameter that reflects crop growth conditions and canopy photosynthetic potential, serving as a cornerstone of precision agriculture and dynamic crop monitoring. However, traditional LAI estimation methods rely on single-source remote sensing data and often suffer from insufficient accuracy in high-density vegetation scenarios, limiting their capacity to reflect crop growth variability comprehensively. To overcome these limitations, this study introduces an innovative multi-source feature fusion framework utilizing unmanned aerial vehicle (UAV) multispectral imagery for precise LAI estimation in winter wheat. RGB and multispectral datasets were collected across seven growth stages (from regreening to grain filling) in 2024. Through the extraction of color attributes, spatial structural information, and eight representative vegetation indices (VIs), a robust multi-source dataset was developed to integrate diverse data types. A convolutional neural network (CNN)-based feature extraction backbone, paired with a multi-source feature fusion network (MSF-FusionNet), was designed to effectively combine spectral and spatial information from both RGB and multispectral imagery. The experimental results revealed that the proposed method achieved superior estimation performance compared to single-source models, with an R2 of 0.8745 and RMSE of 0.5461, improving the R2 by 36.67% and 5.54% over the RGB and VI models, respectively. Notably, the fusion method enhanced accuracy during critical growth phases, such as the regreening and jointing stages. Compared to traditional machine learning techniques, the proposed framework exceeded the performance of the XGBoost model, with the R2 rising by 4.51% and the RMSE dropping by 12.24%. Furthermore, our method facilitated the creation of LAI spatial distribution maps across key growth stages, accurately depicting the spatial heterogeneity and temporal dynamics in the field. These results highlight the efficacy and potential of integrating UAV multi-source data fusion with deep learning for precise LAI estimation in winter wheat, offering significant insights for crop growth evaluation and precision agricultural management.
(This article belongs to the Section Precision and Digital Agriculture)
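As a toy illustration of the fusion design, the sketch below builds a two-branch CNN that pools RGB and multispectral features before a regression head; the layer sizes and structure are our assumptions, not the paper's MSF-FusionNet specification.

```python
# Hedged sketch: two-branch CNN feature fusion for LAI regression (PyTorch).
import torch
import torch.nn as nn

class ToyFusionNet(nn.Module):
    def __init__(self, rgb_ch=3, ms_ch=5):
        super().__init__()
        def branch(c):  # tiny conv backbone per modality
            return nn.Sequential(nn.Conv2d(c, 16, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.rgb, self.ms = branch(rgb_ch), branch(ms_ch)
        self.head = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, rgb, ms):
        fused = torch.cat([self.rgb(rgb), self.ms(ms)], dim=1)  # concatenate features
        return self.head(fused).squeeze(1)                      # scalar LAI per sample

net = ToyFusionNet()
lai = net(torch.rand(2, 3, 64, 64), torch.rand(2, 5, 64, 64))   # batch of 2
```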

28 pages, 3329 KB  
Article
PhenoCam Guidelines for Phenological Measurement and Analysis in an Agricultural Cropping Environment: A Case Study of Soybean
by S. Sunoj, C. Igathinathane, Nicanor Saliendra, John Hendrickson, David Archer and Mark Liebig
Remote Sens. 2025, 17(4), 724; https://doi.org/10.3390/rs17040724 - 19 Feb 2025
Viewed by 1147
Abstract
A PhenoCam is a near-surface remote sensing system traditionally used for monitoring phenological changes in diverse landscapes. Although initially developed for forest landscapes, these near-surface remote sensing systems are increasingly being adopted in agricultural settings, with deployment expanding from 106 sites in 2020 to 839 sites by February 2025. However, agricultural applications present unique challenges because of rapid crop development and the need for precise phenological monitoring. Despite the increasing number of PhenoCam sites, clear guidelines are missing on (i) the phenological analysis of images, (ii) the selection of a suitable color vegetation index (CVI), and (iii) the extraction of growth stages. This knowledge gap limits the full potential of PhenoCams in agricultural applications. Therefore, a study was conducted in two soybean (Glycine max L.) fields to formulate image analysis guidelines for PhenoCam images. Weekly visual assessments of soybean phenological stages were compared with PhenoCam images. A total of 15 CVIs were tested for their ability to reproduce the seasonal variation from RGB, HSB, and Lab color spaces. The effects of image acquisition time groups (10:00 h–14:00 h) and object position (ROI locations: far, middle, and near) on selected CVIs were statistically analyzed. Excess green minus excess red (EXGR), color index of vegetation (CIVE), green leaf index (GLI), and normalized green red difference index (NGRDI) were selected based on the least deviation from their loess-smoothed phenological curve at each image acquisition time. For the selected four CVIs, the time groups did not have a significant effect on CVI values, while the object position had significant effects at the reproductive phase. Among the selected CVIs, GLI and EXGR exhibited the least deviation within the image acquisition time and object position groups. Overall, we recommend employing a consistent image acquisition time to ensure sufficient light, capture the largest possible image ROI in the middle region of the field, and apply any of the selected CVIs in order of GLI, EXGR, NGRDI, and CIVE. These results provide a standardized methodology and serve as guidelines for PhenoCam image analysis in agricultural cropping environments. These guidelines can be incorporated into the standard protocol of the PhenoCam network.
(This article belongs to the Special Issue Crops and Vegetation Monitoring with Remote/Proximal Sensing II)
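The four recommended CVIs are standard RGB indices; a sketch using the formulas common in the literature (the paper's exact definitions may differ slightly) is shown below.

```python
# Hedged sketch: the four recommended CVIs from chromatic coordinates.
import numpy as np

def cvi_suite(rgb: np.ndarray) -> dict:
    rgb = rgb.astype(float)
    r, g, b = np.moveaxis(rgb / (rgb.sum(-1, keepdims=True) + 1e-9), -1, 0)
    exg, exr = 2 * g - r - b, 1.4 * r - g
    return {
        "GLI":   (2 * g - r - b) / (2 * g + r + b + 1e-9),
        "EXGR":  exg - exr,                                # excess green minus excess red
        "NGRDI": (g - r) / (g + r + 1e-9),
        "CIVE":  0.441 * r - 0.811 * g + 0.385 * b + 18.78745,
    }

roi = np.random.randint(0, 256, (50, 50, 3), dtype=np.uint8)          # stand-in ROI crop
daily_point = {name: v.mean() for name, v in cvi_suite(roi).items()}  # one value per image
```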

17 pages, 3052 KB  
Article
Estimation of Daylily Leaf Area Index by Synergy Multispectral and Radar Remote-Sensing Data Based on Machine-Learning Algorithm
by Minhuan Hu, Jingshu Wang, Peng Yang, Ping Li, Peng He and Rutian Bi
Agronomy 2025, 15(2), 456; https://doi.org/10.3390/agronomy15020456 - 13 Feb 2025
Cited by 1 | Viewed by 994
Abstract
Rapid and accurate leaf area index (LAI) determination is important for monitoring daylily growth, yield estimation, and field management. Because of the low estimation accuracy of empirical models based on single-source data, we proposed a machine-learning approach combining optical and microwave remote-sensing data, using the random forest regression (RFR) importance score to select features. A high-precision LAI estimation model for daylilies was constructed by optimizing feature combinations. The RFR importance score screened the top five features: the vegetation indices land surface water index (LSWI), generalized difference vegetation index (GDVI), and normalized difference yellowness index (NDYI), and the backscatter coefficients VV and VH. The vegetation index features characterized the canopy moisture and color of the daylilies, and the backscatter coefficients reflected dielectric properties and geometric structure. The selected features were sensitive to daylily LAI. The RFR algorithm had good anti-noise performance and strong fitting ability; thus, its accuracy was better than that of the partial least squares regression and artificial neural network models. Synergistic optical and microwave data more comprehensively reflected the physical and chemical properties of daylilies, making the RFR-VI-BC05 model after feature selection better than the others (r = 0.711, RMSE = 0.498, and NRMSE = 9.10%). This study expanded methods for estimating daylily LAI by combining optical and radar data, providing technical support for daylily management.
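The importance-score screening step can be sketched with scikit-learn as below; the feature names come from the abstract, while the data arrays and model settings are stand-ins.

```python
# Hedged sketch: RFR importance-based feature screening for LAI estimation.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

features = ["LSWI", "GDVI", "NDYI", "VV", "VH", "NDVI", "EVI", "SAVI"]
X = pd.DataFrame(np.random.rand(120, len(features)), columns=features)
y = np.random.rand(120) * 3          # stand-in daylily LAI

rfr = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
ranking = pd.Series(rfr.feature_importances_, index=features).sort_values(ascending=False)
top5 = ranking.head(5).index.tolist()   # retained feature combination
rfr_top = RandomForestRegressor(random_state=0).fit(X[top5], y)
```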

27 pages, 24351 KB  
Article
UAV-Based Multiple Sensors for Enhanced Data Fusion and Nitrogen Monitoring in Winter Wheat Across Growth Seasons
by Jingjing Wang, Wentao Wang, Suyi Liu, Xin Hui, Haohui Zhang, Haijun Yan and Wouter H. Maes
Remote Sens. 2025, 17(3), 498; https://doi.org/10.3390/rs17030498 - 31 Jan 2025
Viewed by 1295
Abstract
Unmanned aerial vehicles (UAVs) equipped with multi-sensor remote sensing technologies provide an efficient approach for mapping spatial and temporal variations in vegetation traits, enabling advancements in precision monitoring and modeling. This study's objective was to analyze the performance of multiple UAV sensors in monitoring winter wheat chlorophyll content (SPAD), plant nitrogen accumulation (PNA), and the N nutrition index (NNI). A two-year field experiment with five N fertilizer treatments was carried out. Color indices (CIs, from RGB sensors), vegetation indices (VIs, from multispectral sensors), and temperature indices (TIs, from thermal sensors) were derived from the collected images. XGBoost (extreme gradient boosting) was applied to develop the models, using 2021 data for training and 2022 data for testing. The excess green minus excess red index, red green ratio index, and hue (from the CIs), and the green normalized difference vegetation index, normalized difference red-edge index, and normalized difference vegetation index (from the VIs) showed high correlations with the three N indicators. At the pre-heading stage, the best-performing CIs correlated better than the VIs; this was reversed at the post-heading stage. CIs outperformed VIs for SPAD (CIs: R2 (coefficient of determination) = 0.66, VIs: R2 = 0.61), PNA (CIs: R2 = 0.68, VIs: R2 = 0.64), and NNI (CIs: R2 = 0.64, VIs: R2 = 0.60) at the pre-heading stage, whereas VI-based models achieved slightly higher accuracies at post-heading and across all stages. Models built with CIs + VIs significantly improved performance compared to single-sensor models. Adding TIs to CIs and to CIs + VIs further improved performance slightly, especially at the post-heading stage, resulting in the best model performance with three sensors. These findings highlight the effectiveness of UAV systems in estimating wheat N and establish a framework for integrating RGB, multispectral, and thermal sensors to enhance model accuracy in precision vegetation monitoring.
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
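A compact sketch of the fusion modeling step, training XGBoost on stacked CI + VI + TI feature blocks with a year-based split, follows; the arrays, sizes, and hyperparameters are our assumptions.

```python
# Hedged sketch: XGBoost on fused CI + VI + TI features, year-split evaluation.
import numpy as np
from xgboost import XGBRegressor

cis, vis, tis = (np.random.rand(200, 6) for _ in range(3))  # stand-in feature blocks
y_spad = np.random.rand(200) * 60                           # stand-in SPAD readings

X = np.hstack([cis, vis, tis])           # three-sensor fusion scored best overall
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X[:150], y_spad[:150])         # e.g., 2021 samples for training
r2 = model.score(X[150:], y_spad[150:])  # e.g., 2022 samples for testing
```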

25 pages, 4935 KB  
Article
From Air to Space: A Comprehensive Approach to Optimizing Aboveground Biomass Estimation on UAV-Based Datasets
by Muhammad Nouman Khan, Yumin Tan, Lingfeng He, Wenquan Dong and Shengxian Dong
Forests 2025, 16(2), 214; https://doi.org/10.3390/f16020214 - 23 Jan 2025
Cited by 1 | Viewed by 1652
Abstract
Estimating aboveground biomass (AGB) is vital for sustainable forest management and helps in understanding the contributions of forests to carbon storage and emission goals. In this study, the effectiveness of plot-level AGB estimation was assessed using height and crown diameter derived from UAV-LiDAR, calibrated GEDI-L4A AGB and GEDI-L2A rh98 heights, and spectral variables derived from UAV-multispectral and RGB data. These calibrated AGB and height values and UAV-derived spectral variables were used to fit AGB estimations using a random forest (RF) regression model in Fuling District, China. Using Pearson correlation analysis, we identified 10 of the most important predictor variables in the AGB prediction model, including calibrated GEDI AGB and height, Visible Atmospherically Resistant Index green (VARIg), Red Blue Ratio Index (RBRI), Difference Vegetation Index (DVI), canopy cover (CC), Atmospherically Resistant Vegetation Index (ARVI), Red-Edge Normalized Difference Vegetation Index (NDVIre), Color Index of Vegetation (CIVI), elevation, and slope. The results showed that, in general, the second model, based on calibrated AGB and height, Sentinel-2 indices, slope and elevation, and spectral variables from the UAV-multispectral and RGB datasets (training: R2 = 0.941, RMSE = 13.514 Mg/ha, MAE = 8.136 Mg/ha), performed better than the first model. Predicted AGB ranged from 23.45 Mg/ha to 301.81 Mg/ha, with a standard error between 0.14 Mg/ha and 10.18 Mg/ha. This hybrid approach significantly improves AGB prediction accuracy and addresses uncertainties in AGB prediction modeling. The findings provide a robust framework for enhancing forest carbon stock assessment and contribute to global-scale AGB monitoring, advancing methodologies for sustainable forest management and ecological research.
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
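The screening-plus-regression pipeline can be sketched as below: rank candidate predictors by absolute Pearson correlation with plot AGB, keep the top 10, and fit a random forest. Column names echo the abstract; the data are stand-ins.

```python
# Hedged sketch: Pearson screening of predictors, then RF regression for AGB.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

cols = ["gedi_agb", "gedi_h98", "VARIg", "RBRI", "DVI", "CC", "ARVI",
        "NDVIre", "CIVI", "elevation", "slope", "other_1", "other_2"]
df = pd.DataFrame(np.random.rand(80, len(cols)), columns=cols)
agb = pd.Series(np.random.rand(80) * 300)     # stand-in plot AGB, Mg/ha

corr = df.corrwith(agb).abs().sort_values(ascending=False)
predictors = corr.head(10).index              # 10 strongest Pearson correlations
rf = RandomForestRegressor(random_state=0).fit(df[predictors], agb)
```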

21 pages, 6508 KB  
Article
NDVI Estimation Throughout the Whole Growth Period of Multi-Crops Using RGB Images and Deep Learning
by Jianliang Wang, Chen Chen, Jiacheng Wang, Zhaosheng Yao, Ying Wang, Yuanyuan Zhao, Yi Sun, Fei Wu, Dongwei Han, Guanshuo Yang, Xinyu Liu, Chengming Sun and Tao Liu
Agronomy 2025, 15(1), 63; https://doi.org/10.3390/agronomy15010063 - 29 Dec 2024
Cited by 4 | Viewed by 3346
Abstract
The Normalized Difference Vegetation Index (NDVI) is an important remote sensing index that is widely used to assess vegetation coverage, monitor crop growth, and predict yields. Traditional NDVI calculation methods often rely on multispectral or hyperspectral imagery, which are costly and complex to operate, thus limiting their applicability in small-scale farms and developing countries. To address these limitations, this study proposes an NDVI estimation method based on low-cost RGB (red, green, and blue) UAV (unmanned aerial vehicle) imagery combined with deep learning techniques. This study utilizes field data from five major crops (cotton, rice, maize, rape, and wheat) throughout their whole growth periods. RGB images were used to extract conventional features, including color indices (CIs), texture features (TFs), and vegetation coverage, while convolutional features (CFs) were extracted using the deep learning network ResNet50 to optimize the model. The results indicate that the model, optimized with CFs, significantly enhanced NDVI estimation accuracy. Specifically, the R2 values for maize, rape, and wheat during their whole growth periods reached 0.99, while those for rice and cotton were 0.96 and 0.93, respectively. Notably, the accuracy improvement in later growth periods was most pronounced for cotton and maize, with average R2 increases of 0.15 and 0.14, respectively, whereas wheat exhibited a more modest improvement of only 0.04. This method leverages deep learning to capture structural changes in crop populations, optimizing conventional image features and improving NDVI estimation accuracy. This study presents an NDVI estimation approach applicable to the whole growth period of common crops, particularly those with significant population variations, and provides a valuable reference for estimating other vegetation indices using low-cost UAV-acquired RGB images.
(This article belongs to the Special Issue Unmanned Farms in Smart Agriculture)
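The convolutional-feature (CF) extraction step can be sketched with a torchvision ResNet50 backbone, as below; pairing these 2048-d vectors with CIs, TFs, and coverage for NDVI regression follows the text, while the preprocessing details are our assumptions.

```python
# Hedged sketch: extracting ResNet50 convolutional features from an RGB patch.
import torch
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
backbone = resnet50(weights=weights)
backbone.fc = torch.nn.Identity()        # drop the classifier; keep 2048-d features
backbone.eval()

preprocess = weights.transforms()        # resize/normalize as the weights expect
img = torch.rand(3, 256, 256)            # stand-in RGB UAV patch
with torch.no_grad():
    cf = backbone(preprocess(img).unsqueeze(0))   # convolutional features, shape (1, 2048)
```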

20 pages, 7839 KB  
Article
Normalized Difference Vegetation Index Prediction for Blueberry Plant Health from RGB Images: A Clustering and Deep Learning Approach
by A. G. M. Zaman, Kallol Roy and Jüri Olt
AgriEngineering 2024, 6(4), 4831-4850; https://doi.org/10.3390/agriengineering6040276 - 16 Dec 2024
Viewed by 1663
Abstract
In precision agriculture (PA), monitoring individual plant health is crucial for optimizing yields and minimizing resources. The normalized difference vegetation index (NDVI), a widely used health indicator, typically relies on expensive multispectral cameras. This study introduces a method for predicting the NDVI of blueberry plants using RGB images and deep learning, offering a cost-effective alternative. To identify individual plant bushes, K-means and Gaussian Mixture Model (GMM) clustering were applied. RGB images were transformed into the HSL (hue, saturation, lightness) color space, and the hue channel was constrained using percentiles to exclude extreme values while preserving relevant plant hues. Further refinement was achieved through adaptive pixel-to-pixel distance filtering combined with the Davies–Bouldin Index (DBI) to eliminate pixels deviating from the compact cluster structure. This enhanced clustering accuracy and enabled precise NDVI calculations. A convolutional neural network (CNN) was trained and tested to predict NDVI-based health indices. The model achieved strong performance with mean squared losses of 0.0074, 0.0044, and 0.0021 for training, validation, and test datasets, respectively. The test dataset also yielded a mean absolute error of 0.0369 and a mean percentage error of 4.5851. These results demonstrate the NDVI prediction method's potential for cost-effective, real-time plant health assessment, particularly in agrobotics.
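The hue-constrained clustering step might look like the sketch below, which uses HSV hue as a stand-in for the paper's HSL, clips extreme hues by percentile, and clusters the surviving pixel coordinates with a GMM; the percentiles and component count are illustrative.

```python
# Hedged sketch: percentile-constrained hue mask, then GMM bush clustering.
import numpy as np
from skimage.color import rgb2hsv
from sklearn.mixture import GaussianMixture

img = np.random.randint(0, 256, (120, 120, 3), dtype=np.uint8)  # stand-in field image
hue = rgb2hsv(img)[..., 0]               # HSV hue here; the paper works in HSL

lo, hi = np.percentile(hue, [5, 95])     # exclude extreme hues, keep plant tones
ys, xs = np.nonzero((hue >= lo) & (hue <= hi))
coords = np.column_stack([xs, ys])

gmm = GaussianMixture(n_components=4, random_state=0).fit(coords)
bush_id = gmm.predict(coords)            # one cluster per bush (4 assumed here)
```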

17 pages, 8026 KB  
Article
Estimation of Non-Photosynthetic Vegetation Cover Using the NDVI–DFI Model in a Typical Dry–Hot Valley, Southwest China
by Caiyi Fan, Guokun Chen, Ronghua Zhong, Yan Huang, Qiyan Duan and Ying Wang
ISPRS Int. J. Geo-Inf. 2024, 13(12), 440; https://doi.org/10.3390/ijgi13120440 - 7 Dec 2024
Cited by 1 | Viewed by 1499
Abstract
Non-photosynthetic vegetation (NPV) significantly impacts ecosystem degradation, drought, and wildfire risk due to its flammable and persistent litter. Yet, the accurate estimation of NPV in heterogeneous landscapes, such as dry–hot valleys, has been limited. This study utilized multi-source time-series remote sensing data from Sentinel-2 and GF-2, along with field surveys, to develop an NDVI-DFI ternary linear mixed model for quantifying NPV coverage (fNPV) in a typical dry–hot valley region in 2023. The results indicated the following: (1) The NDVI-DFI ternary linear mixed model effectively estimates photosynthetic vegetation coverage (fPV) and fNPV, aligning well with the conceptual framework and meeting key assumptions, demonstrating its applicability and reliability. (2) The RGB color composite image derived using the minimum inclusion endmember feature method (MVE) exhibited darker tones, suggesting that MVE tends to overestimate the vegetation fraction when distinguishing vegetation types from bare soil. On the other hand, the pure pixel index (PPI) method showed higher accuracy in estimation due to its higher spectral purity and better recognition of endmembers, making it more suitable for studying dry–hot valley areas. (3) Estimates based on the NDVI-DFI ternary linear mixed model revealed significant seasonal shifts between PV and NPV, especially in valleys and lowlands. From the rainy to the dry season, the proportion of NPV increased from 23.37% to 35.52%, covering an additional 502.96 km². In summary, these findings underscore the substantial seasonal variations in fPV and fNPV, particularly in low-altitude regions along the valley, highlighting the dynamic nature of vegetation in dry–hot environments.
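The ternary model solves, per pixel, a 3×3 linear system: observed NDVI and DFI are modeled as fraction-weighted sums of endmember values, with the fractions summing to one. A worked sketch with placeholder endmembers (the paper derives real ones via PPI/MVE) is below.

```python
# Hedged sketch: NDVI-DFI ternary linear unmixing with placeholder endmembers.
import numpy as np

# Rows: NDVI equation, DFI equation, sum-to-one constraint.
# Columns: PV, NPV, bare soil.
E = np.array([[0.80, 0.25, 0.08],   # NDVI endmember values (placeholders)
              [5.00, 18.0, 6.00],   # DFI endmember values (placeholders)
              [1.00, 1.00, 1.00]])  # fPV + fNPV + fBS = 1

def unmix(ndvi: float, dfi: float) -> np.ndarray:
    fractions = np.linalg.solve(E, np.array([ndvi, dfi, 1.0]))
    return np.clip(fractions, 0.0, 1.0)   # crude constraint to the physical range

fpv, fnpv, fbs = unmix(ndvi=0.42, dfi=12.0)   # per-pixel coverage fractions
```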

17 pages, 13998 KB  
Article
Assessing Huanglongbing Severity and Canopy Parameters of the Huanglongbing-Affected Citrus in Texas Using Unmanned Aerial System-Based Remote Sensing and Machine Learning
by Ittipon Khuimphukhieo, Jose Carlos Chavez, Chuanyu Yang, Lakshmi Akhijith Pasupuleti, Ismail Olaniyi, Veronica Ancona, Kranthi K. Mandadi, Jinha Jung and Juan Enciso
Sensors 2024, 24(23), 7646; https://doi.org/10.3390/s24237646 - 29 Nov 2024
Cited by 1 | Viewed by 1575
Abstract
Huanglongbing (HLB), also known as citrus greening disease, is a devastating disease of citrus with no known cure so far. Recently, under Section 24(c) of the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA), a special local need label was approved that allows the trunk injection of antimicrobials such as oxytetracycline (OTC) for HLB management in Florida. The objectives of this study were to use UAS-based remote sensing to assess the effectiveness of OTC on HLB-affected citrus trees in Texas and to differentiate the levels of HLB severity and canopy health. We also leveraged UAS-based features, along with machine learning, for HLB severity classification. The results show that UAS-based vegetation indices (VIs) could not sufficiently differentiate the effects of OTC treatments on HLB-affected citrus in Texas. Yet, several UAS-based features were able to determine the severity levels of HLB and canopy parameters. Among these features, the red-edge chlorophyll index (CI) was outstanding in distinguishing HLB severity levels and canopy color, while canopy cover (CC) was the best indicator for recognizing the different levels of canopy density. For HLB severity classification, a fusion of VIs and textural features (TFs) showed the highest accuracy for all models. Furthermore, random forest and eXtreme gradient boosting were promising algorithms for classifying the levels of HLB severity. Our results highlight the potential of using UAS-based features in assessing the severity of HLB-affected citrus.
(This article belongs to the Special Issue Feature Papers in Smart Agriculture 2024)
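One of the named features, the red-edge chlorophyll index, and the VI + TF fusion classifier can be sketched as below; the formula is the standard CIred-edge and the arrays are stand-ins, so treat this as an outline rather than the study's pipeline.

```python
# Hedged sketch: red-edge chlorophyll index + RF classifier on fused VI/TF features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def ci_red_edge(nir: np.ndarray, red_edge: np.ndarray) -> np.ndarray:
    return nir / (red_edge + 1e-9) - 1.0   # standard CIred-edge = NIR / RE - 1

vis = np.random.rand(90, 8)                # stand-in vegetation indices
tfs = np.random.rand(90, 12)               # stand-in textural features
ci = ci_red_edge(np.random.rand(90) + 0.5, np.random.rand(90) + 0.4).reshape(-1, 1)
severity = np.random.randint(0, 3, 90)     # stand-in HLB severity classes

X = np.hstack([vis, tfs, ci])              # VI + TF fusion scored best in the study
clf = RandomForestClassifier(random_state=0).fit(X, severity)
```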

19 pages, 53371 KB  
Article
Efficient UAV-Based Automatic Classification of Cassava Fields Using K-Means and Spectral Trend Analysis
by Apinya Boonrang, Pantip Piyatadsananon and Tanakorn Sritarapipat
AgriEngineering 2024, 6(4), 4406-4424; https://doi.org/10.3390/agriengineering6040250 - 22 Nov 2024
Cited by 1 | Viewed by 1063
Abstract
High-resolution images captured by Unmanned Aerial Vehicles (UAVs) play a vital role in precision agriculture, particularly in evaluating crop health and detecting weeds. However, the detailed pixel information in these images makes classification a time-consuming and resource-intensive process. Despite these challenges, UAV imagery is increasingly utilized for various agricultural classification tasks. This study introduces an automatic classification method designed to streamline the process, specifically targeting cassava plants, weeds, and soil classification. The approach combines K-means unsupervised classification with spectral trend-based labeling, significantly reducing the need for manual intervention. The method ensures reliable and accurate classification results by leveraging color indices derived from RGB data and applying mean-shift filtering parameters. Key findings reveal that the combination of the blue (B) channel, Visible Atmospherically Resistant Index (VARI), and color index (CI) with filtering parameters, including a spatial radius (sp) = 5 and a color radius (sr) = 10, effectively differentiates soil from vegetation. Notably, using the green (G) channel, excess red (ExR), and excess green (ExG) with filtering parameters (sp = 10, sr = 20) successfully distinguishes cassava from weeds. The classification maps generated by this method achieved high kappa coefficients of 0.96, with accuracy levels comparable to supervised methods like Random Forest classification. This technique offers significant reductions in processing time compared to traditional methods and does not require training data, making it adaptable to different cassava fields captured by various UAV-mounted optical sensors. Ultimately, the proposed classification process minimizes manual intervention by incorporating efficient pre-processing steps into the classification workflow, making it a valuable tool for precision agriculture.
(This article belongs to the Special Issue Computer Vision for Agriculture and Smart Farming)
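The pre-processing and clustering chain might be sketched as below: mean-shift smoothing with the reported cassava-versus-weed parameters (sp = 10, sr = 20), color-index features, then K-means; labeling clusters by spectral trends is left as a comment, and the image is synthetic.

```python
# Hedged sketch: mean-shift filtering + K-means on color-index features (OpenCV).
import numpy as np
import cv2
from sklearn.cluster import KMeans

bgr = np.random.randint(0, 256, (200, 200, 3), dtype=np.uint8)  # stand-in UAV tile
smoothed = cv2.pyrMeanShiftFiltering(bgr, sp=10, sr=20)         # cassava-vs-weed setting

b, g, r = cv2.split(smoothed.astype(np.float32))
exg, exr = 2 * g - r - b, 1.4 * r - g            # excess green / excess red
feats = np.column_stack([g.ravel(), exg.ravel(), exr.ravel()])

labels = KMeans(n_clusters=3, n_init=10).fit_predict(feats)
label_map = labels.reshape(bgr.shape[:2])  # then name clusters via spectral trends
```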

11 pages, 1855 KB  
Article
Smartphone-Based Leaf Colorimetric Analysis of Grapevine (Vitis vinifera L.) Genotypes
by Péter Bodor-Pesti, Dóra Taranyi, Gábor Vértes, István Fazekas, Diána Ágnes Nyitrainé Sárdy, Tamás Deák, Zsuzsanna Varga and László Baranyai
Horticulturae 2024, 10(11), 1179; https://doi.org/10.3390/horticulturae10111179 - 7 Nov 2024
Viewed by 1491
Abstract
Leaf chlorophyll content is a key indicator of plant physiological status in viticulture; therefore, regular evaluation to obtain data for nutrient supply and canopy management is of vital importance. The measurement of pigmentation is most frequently carried out with hand-held instruments, destructive off-site spectrophotometry, or remote sensing. Smartphone-based applications also offer a promising way to collect colorimetric information that can correlate with pigmentation. In this study, four grapevine genotypes were investigated using smartphone-based RGB (Red, Green, Blue) and CIE-L*a*b* colorimetry and a portable chlorophyll meter. The objective of this study was to evaluate the correlation between leaf chlorophyll concentration and RGB- or CIE-L*a*b*-based color indices. A further aim was to find an appropriate model for discriminating between the genotypes by leaf coloration. For these purposes, fully developed leaves of ‘Chardonnay’, ‘Sauvignon blanc’, and ‘Pinot noir’ clones 666 and 777 were investigated with the Color Grab smartphone application to obtain RGB and CIE-L*a*b* values. From these color values, chroma, hue, and a further 31 color indices were calculated. Chlorophyll concentrations were determined using an Apogee MC100 device, and the values were correlated with the color values and color indices. The results showed that the chlorophyll concentration and color indices differed significantly between the genotypes. According to the results, certain color indices show a different direction in their relationship with leaf pigmentation for different grapevine genotypes: the same index showed a positive correlation with leaf chlorophyll concentration for one variety and a negative correlation for another, which raises the possibility that the relationship is genotype-specific and not uniform within the species. In light of this result, further study of the species specificity of commonly used vegetation indices is warranted. Support Vector Machine (SVM) analysis of the samples based on color properties showed a 71.63% classification accuracy, indicating that coloration is an important ampelographic feature for identification and the assessment of true-to-typeness.
(This article belongs to the Section Viticulture)
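A colorimetric feature set and SVM classifier in the spirit of this study can be sketched as below; the three features and synthetic data are our illustrative choices, not the paper's 31-index set.

```python
# Hedged sketch: color features from mean leaf RGB, then SVM genotype classification.
import colorsys
import numpy as np
from sklearn.svm import SVC

def leaf_color_features(rgb_mean) -> list:
    r, g, b = (v / 255 for v in rgb_mean)
    h, l, s = colorsys.rgb_to_hls(r, g, b)          # hue/lightness/saturation
    chroma = max(r, g, b) - min(r, g, b)
    return [h, chroma, (g - r) / (g + r + 1e-9)]    # hue, chroma, a green-red index

X = np.array([leaf_color_features(np.random.randint(0, 256, 3)) for _ in range(40)])
genotype = np.random.randint(0, 4, 40)              # four genotypes, as in the study
clf = SVC(kernel="rbf").fit(X, genotype)            # cf. the reported 71.63% accuracy
```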
