Article

Integration of UAV Remote Sensing and Machine Learning for Taro Blight Monitoring

1 Cultivation and Construction Site of National Key Laboratory for Crop Genetics and Physiology in Jiangsu Province, Yangzhou University, Yangzhou 225009, China
2 Jiangsu Co-Innovation Center for Modern Production Technology of Grain Crops, Yangzhou University, Yangzhou 225009, China
3 Research Institute of Smart Agriculture, Yangzhou University, Yangzhou 225009, China
* Author to whom correspondence should be addressed.
Agronomy 2025, 15(5), 1189; https://doi.org/10.3390/agronomy15051189
Submission received: 16 April 2025 / Revised: 6 May 2025 / Accepted: 12 May 2025 / Published: 14 May 2025

Abstract

Taro blight is a major disease affecting taro cultivation. Traditional methods for disease prevention rely on manual identification, which is limited by subjectivity and scope. An unmanned aerial vehicle (UAV) was utilized to capture spectral images of natural taro fields, facilitating the efficient monitoring of taro blight. Field survey data were integrated with these images to develop a model for monitoring taro blight severity. The back propagation neural network (BPNN) model showed optimal performance during the early and middle stages of taro formation when hyperspectral parameters were used as input variables. In the early stage, the BPNN model achieved a coefficient of determination (R2) of 0.92 and a root mean square error (RMSE) of 0.054 on the training set, and an R2 of 0.89 with an RMSE of 0.074 on the validation set. The random forest regression (RFR) model performed best during the early stage of taro formation with multispectral vegetation indices as input variables. The models exhibited robust predictive capabilities across various stages, especially during the early stage of taro formation. The results demonstrate that UAV remote sensing, combined with characteristic parameters and disease indices, presents a precise taro blight monitoring method that can substantially improve disease management in taro cultivation.

1. Introduction

Taro is native to swampy areas, requiring stringent soil moisture conditions with high humidity and warmth. The middle and lower reaches of the Yangtze River are characterized by abundant rainfall and suitable temperatures, making them a primary taro-planting region in China [1]. Pathogens are more likely to spread and occur frequently under rainy and humid conditions [2]. Taro diseases mainly affect leaves and petioles, leading to tissue decay [3], thus impairing below-ground growth and reducing yield. Notably, taro blight and taro anthracnose are the most destructive diseases. Globally, the annual decline in taro yield due to blight may exceed 20%, with severe cases resulting in losses of up to 50%; in susceptible varieties, infection can lead to complete crop failure. Taro blight is regarded as one of the most severe diseases during the taro growth cycle.
In recent years, low-altitude and aerospace remote sensing technologies, along with the rapid growth of the unmanned aerial vehicle (UAV) industry, have promoted the widespread adoption of UAV remote sensing for monitoring crop pests and diseases in agriculture. UAVs are used as versatile detection platforms that accommodate various sensors to capture detailed field images across multiple wave bands and resolutions, meeting diverse monitoring and operational needs. Studies have shown that crops in various conditions exhibit consistent spectral responses. These responses are analyzed to develop crop disease susceptibility models using vegetation spectral characteristics and multiple vegetation indices [4,5,6,7,8,9,10]. Most UAV sensors are lightweight and durable, allowing temporal image datasets and databases to be created for continuous monitoring and intervention during crop growth stages [11]. Despite these advancements, taro disease monitoring is still primarily dependent on manual sampling and field control. Limited research has been conducted on disease inspection and prediction using remote sensing, image processing, and other smart agricultural technologies.
Remote sensing techniques have been extensively used for crop disease monitoring, spanning from individual leaves and canopies to entire fields and regions. Qiao et al. [12] compared near-surface hyperspectral images with low-altitude imagery for monitoring wheat powdery mildew. A significant correlation was observed between spectral reflectance at both altitudes and disease incidence during the grain-filling stage. Backoulou et al. [13] used multispectral remote sensing to extract variables and classify wheat field areas affected by aphids, drought, and field management. An overall classification accuracy of 97% was achieved, including 98% for aphid identification. Liu et al. [14] analyzed wheat stripe rust images obtained from near-ground and low-altitude perspectives. Effective rust estimation models were developed, confirming the potential of low-altitude aerial photography for large-scale disease monitoring. Wang et al. [15] developed an effective classifier for identifying rice bakanae disease using UAV imagery. A recognition accuracy above 93% was achieved, emphasizing the advantages of UAVs for rapid and accurate disease detection. Xavier et al. [16] used UAVs to capture multispectral images of cotton fields infected with ramularia leaf blight at different altitudes. A maximum accuracy of 79% was achieved, confirming the potential of multispectral UAVs for cotton disease monitoring. Similarly, recent studies have compared the trade-offs between multispectral and hyperspectral imaging. For instance, Portela et al. [17] demonstrated the effectiveness of UAV-based multispectral data for monitoring downy mildew progression in vineyards, emphasizing its cost-effectiveness and simpler data processing. In contrast, Li et al. [18] used UAV hyperspectral imagery combined with machine learning and feature optimization to assess Verticillium wilt severity in cotton, achieving high accuracy due to richer spectral detail.
These findings highlight that multispectral sensors are more efficient for repeated, large-area monitoring, while hyperspectral approaches offer superior spectral resolution for detecting subtle physiological changes despite increased complexity and cost.
UAV remote sensing platforms are flexible and maneuverable, allowing data to be collected during single, multiple, or entire crop growth stages. Data collection intervals have been reduced to days or weeks, ensuring the continuity of remote sensing data over time. Liu et al. [19] collected hyperspectral imagery of winter wheat during three growth stages using an airborne platform. Multitemporal imagery was analyzed, showing that wheat infected with stripe rust had higher reflectance in the yellow edge and red valley bands but lower reflectance in the near-infrared band than healthy wheat. Li et al. [20] conducted a two-year study of citrus trees infected with huanglongbing (HLB) using multispectral and hyperspectral imagery. Significant differences were observed in the red-edge position between healthy and infected canopies. These differences enabled stable and accurate classification using simple algorithms, such as MinDist and MahaDist. Severtson et al. [21] captured multispectral images of canola at 15 m and 120 m on the 69th, 96th, and 113th days after seeding. Correlation analysis was conducted between the images, leaf nitrogen levels, and aphid counts. Images captured at 120 m demonstrated higher accuracy, confirming the potential of UAV imagery for monitoring nutrient levels and aphid populations in canola. Sugiura et al. [22] used UAV-captured RGB images to predict late blight in potato fields. The R2 reached 0.77 in the first year and 0.73 in the second, exceeding the accuracy of conventional methods. Su et al. [23] used UAVs to acquire multispectral images of wheat infected with yellow rust at different growth stages. The optimal classification time was found to be 45 days after inoculation. RVI, NDVI, and OSAVI were identified as the most effective indices. Spatiotemporal analysis showed that yellow rust susceptibility varied over time, peaking at the heading stage.
However, although UAV images efficiently capture vegetation status, they are indirect observations relying on spectral reflectance to infer plant physiology. Spectral differences can be caused by variations in regions, crop species, and disease symptoms. Disease classification and physiological indicators (e.g., chlorophyll content, lesion ratio) obtained from field surveys are essential labeled data for training and validating remote sensing models. Without field data, models constructed from UAV images lack objective evaluation and are prone to overfitting and poor generalization. Therefore, fusing remote sensing images with field data improves model detail and generalization.
Despite substantial progress in the application of remote sensing for crop disease and pest identification, several critical challenges remain that underscore the necessity of further research. First, the scope of existing studies is largely limited to major staple crops such as wheat, rice, and maize with insufficient attention paid to economically important but less-studied crops such as oil crops and most fruits and vegetables. For instance, while there is abundant research on diseases in major crops, studies on economically valuable crops like taro remain scarce—particularly in real field environments using remote sensing techniques. Drawing on advances made in other crops can therefore guide the development and refinement of disease monitoring methods for taro. Addressing this gap, the present study focuses on the establishment of a remote sensing inversion model tailored to taro blight, aiming to fill a critical technical void in this field. Second, most studies focus on single-disease scenarios, neglecting the fact that multiple pathogens often coexist and interact in real field environments. These interactions can produce overlapping or confounding spectral responses, which complicates disease diagnosis and reduces model accuracy. Therefore, differentiating and isolating spectral signatures of mixed infections is essential for early and accurate detection. Third, current research is mainly conducted in controlled environments—laboratories, experimental plots, or potted plants—whereas large-scale, real-world applications remain limited. Monitoring crop diseases at regional or national scales requires scalable systems that integrate multiple data sources, including near-ground, low-altitude UAV, and satellite-based imagery. The integration of these platforms is critical to advancing wide-area disease monitoring and developing robust, generalizable models. 
In this study, UAV remote sensing technology was used to monitor and invert the severity of taro blight, aiming to develop a model for predicting taro blight using a combination of modeling methods, and to validate model predictions with field measurements to evaluate its feasibility.

2. Materials and Methods

2.1. Experimental Design

2.1.1. Overview of the Test Area

The experiment was conducted in 2022 and 2023 at Guangpu Farm (32°19′ N, 119°52′ E) in Gaogang District, Taizhou City, China, as shown in Figure 1. The region enjoys four distinct seasons, with hot and rainy summers, abundant heat and precipitation, sandy loamy soils of medium fertility, and flat terrain. The test area was situated at an elevation of approximately 6 m above sea level with a subtropical monsoon humid climate. The region experienced an average temperature of 15 °C, annual precipitation of 1055 mm, and a frost-free period of 135 days. During the middle and late taro growth stages, ample sunlight, high temperatures, and suitable rainfall created favorable conditions for taro blight development and spread. The soil in the test area consisted of yellow loam with a bulk density of approximately 1.3 g/cm3.

2.1.2. Field Design

Jingjiang Xiangsha taro, widely cultivated in Taizhou, China, was the subject of this study. The field trial was conducted in an open field with ridges 40 cm wide, furrows 40 cm wide, and plant spacing of 30 cm. The experimental field measured 92 m in length and 15 m in width and contained 13 ridges. Taro was sown in late March using burrow sowing methods and managed with conventional cultivation practices.
The natural occurrence of taro blight was facilitated by favorable climatic conditions at the end of July. The blight spread throughout the field in early August and peaked in early September, coinciding with a critical phase for the germination, growth, and bulking of son-taro and grandson-taro. August was identified as the key period for the prevention and control of taro blight. Pesticides were applied from mid to late August with the schedule guided by disease incidence observed in the fields.
Consequently, field surveys were initiated in early August during the initial stages of disease incidence, coinciding with the early stage of taro formation. Surveys were conducted at 7–10-day intervals and terminated in early September, coinciding with the middle stage of taro formation, due to the rapid spread and prolonged duration of the disease. The experimental sites were arranged in a mesh configuration, and investigations were conducted at designated fixed points.

2.2. Field Blight Survey

Leaves, petioles, and corms were primarily infected by taro blight. The leaves were the primary site for identification, displaying distinct differences in appearance depending on the severity of the disease (Figure 2 and Figure 3). As shown in Figure 2a, healthy leaves appeared dark green with a smooth surface, clearly visible veins, and no signs of lesions or damage. In Figure 2b, brown or black irregularly shaped spots began to appear on the leaves, although the overall green coloration was still relatively prominent. Figure 2c shows leaves turning yellow with large areas of brown necrotic tissue. In the most severe cases, as seen in Figure 2d, the leaves became severely withered and deformed, turning dark brown or black, with leaf structures starting to rupture or decay. On a population level, Figure 3a shows a well-growing taro population with mostly green leaves and only minor yellowing, indicating good overall health. In contrast, Figure 3b depicts a population with moderate disease symptoms, where some leaves exhibited necrosis from the center to the tips, though most plants maintained some level of vitality. In the most severely affected population, shown in Figure 3c, most leaves were yellowed and withered with some plants even collapsing due to the severity of the disease.
Due to the widespread occurrence of the disease, 23 sampling points were evenly distributed across the fields. At each sampling point, 10 taro plants were randomly selected for the disease incidence survey. The severity of the disease was assessed based on the area affected by blight on the top four leaves of each plant. The grading standard for evaluating taro blight severity is shown in Table 1.
The disease index (DI) was calculated based on the results of the field disease incidence survey, following the established grading standards:
$\mathrm{DI} = \dfrac{\sum_i S_i n_i}{5N} \times 100\%$
In the above equation, DI is the disease index, $S_i$ is the disease level, $n_i$ is the number of plants at level $i$ (where $i$ indexes the grading levels), and $N$ is the total number of plants surveyed.
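As an illustration of this calculation, the DI can be computed directly from the per-level plant counts recorded in the field survey; the function below is a minimal sketch (the name `disease_index` and the dictionary input format are illustrative, not from the study):

```python
def disease_index(level_counts, max_level=5):
    """Disease index DI = sum(S_i * n_i) / (max_level * N) * 100%.

    level_counts: mapping from disease level S_i to the number of
    plants n_i observed at that level; N is the total plant count.
    max_level: the highest grade of the severity scale (5 here).
    """
    n_total = sum(level_counts.values())
    weighted = sum(level * n for level, n in level_counts.items())
    return weighted / (max_level * n_total) * 100.0

# Example: 10 plants at one sampling point, graded 0-3.
di = disease_index({0: 4, 1: 3, 2: 2, 3: 1})  # -> 20.0
```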

2.3. Image Data Acquisition

Spectral imagery data acquisition using UAVs was typically performed between 11:30 a.m. and 1:30 p.m. under clear and cloudless weather conditions. During acquisitions, the lens was kept perpendicular to the ground and the aircraft remained stable. Hyperspectral images were captured at an altitude of 80 m, and multispectral images were acquired at 12 m. Spectral canopy images of the entire taro field were acquired on the first, tenth, twentieth, and thirtieth days following the onset of disease incidence.

2.3.1. UAV Hyperspectral Image Acquisition Equipment

The flight platform used in this study was the DJI Matrice 600 Pro (Da-Jiang Innovations Science and Technology Co., Ltd., Shenzhen, China), a six-rotor UAV with a lightweight airframe and high payload capacity. It included the D-RTK GNSS positioning system for precise localization and used DJI GS Pro (SZ DJI Technology Co., Ltd., Shenzhen, China) software on the ground control station for flight configuration and route planning. The UAV was equipped with a triplex-redundancy A3 Pro flight control system, ensuring high reliability and safety. It also featured Lightbridge 2 HD digital image transmission and an intelligent flight battery with a battery management system. It was compatible with various gimbals and third-party software and hardware, offering extensive interfacing capabilities and strong performance. The main parameters of the UAV are shown in Table 2.
The hyperspectral image acquisition equipment used in this study was the GaiaSky-mini (Dualix Spectral Imaging Technology Co., Ltd., Wuxi, China) airborne hyperspectrometer. This equipment features rapid image acquisition speed and high imaging quality, supporting two modes: hovering shooting and UAV scanning. The system can collect spectral data ranging from 400 to 1000 nm with a spectral resolution of 3.5 nm, meeting the requirements for large-area and long-endurance imaging. The main technical parameters of the equipment are shown in Table 3.

2.3.2. UAV Multispectral Image Acquisition Equipment

The UAV multispectral image acquisition system used the DJI Phantom 4 (SZ DJI Technology, Co., Shenzhen, China) as its flight platform. The UAV was equipped with an integrated multispectral imaging system and an OcuSync transmission system, supporting a control range of 7 km. This setup allowed real-time and efficient multispectral data acquisition over large areas. The UAV employed an RTK positioning system and supported PPK post-processing, removing constraints related to communication links and network coverage. The ground control platform allowed the configuration of flight parameters and paths through DJI GS Pro software, ensuring precise control and image capture. The main parameters of the UAV are shown in Table 4.
The airborne camera system consisted of six 1/2.9-inch CMOS sensors: one color sensor for visible light imaging and five monochromatic sensors for multispectral imaging. Each sensor featured 2.12 million total pixels and 2.08 million effective pixels. The camera was mounted on a three-axis gimbal to ensure clear and stable imaging. The main parameters of the camera system are shown in Table 5.

2.3.3. UAV Image Pre-Processing

The UAV hyperspectral images were calibrated using Spec View 3.1.2 (JiangSu Dualix Spectral Image Technology Co. Ltd., Wuxi, China) software. Raw data were imported into the software, and lens calibration, reflectance calibration, and atmospheric correction were performed in sequence. Georeferencing was conducted by integrating GPS/IMU data recorded during flight and supplemented with Ground Control Points (GCPs) collected in the field using high-precision RTK GPS. This ensured accurate spatial alignment between the hyperspectral imagery and field sampling points. After calibration, HiSpectralStitcher (Dualix Spectral Imaging Technology) software was used for image stitching. Image data were batch-loaded into the preview settings, and stitching parameters were adjusted. “Ray” was selected for the bundle function, and “transverseMercator” was used for the warp type. After previewing, “multiband stitching” under “process” was selected for image stitching. The ENVI format was chosen in the stitching settings for output. The appropriate output path was specified, and the stitching process was initiated.
The UAV multispectral images were calibrated and stitched using DJI Terra software. Bulk image data were imported into the software, and radiometric calibration was performed by navigating to “advanced settings” and selecting “radiation correction.” This calibration step used images of a standard reflectance panel taken before the flight along with recorded reflectance values to ensure radiometric accuracy. Georeferencing was accomplished through embedded GPS data and further refined using GCPs to improve spatial consistency with hyperspectral imagery and ground measurements. After calibration, multispectral data reconstruction was initiated by selecting “start reconstruction” and confirming the reconstruction parameters. The output included orthophotos and vegetation index maps of the experimental field, which provided high spatial fidelity for subsequent disease analysis.
After image pre-processing, spectral reflectance data at each sampling point in the experimental field were extracted using ENVI 5.3 (Exelis Visual Information Solutions, Boulder, CO, USA) software based on GPS coordinates. Band calculations were then conducted to derive vegetation indices used for disease monitoring.
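The extraction and band-calculation steps can be sketched with NumPy as follows; NDVI is used as one representative index, and the window-averaging around each GPS-located sampling point is an illustrative assumption (the actual extraction in this study was performed in ENVI 5.3):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index from NIR and red reflectance."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + eps)

def sample_reflectance(band, rows, cols, window=1):
    """Mean reflectance in a (2*window+1)^2 pixel neighborhood around
    each sampling point, given its pixel coordinates (row, col)."""
    vals = []
    for r, c in zip(rows, cols):
        patch = band[r - window:r + window + 1, c - window:c + window + 1]
        vals.append(patch.mean())
    return np.array(vals)
```

In practice the pixel coordinates would come from projecting the RTK GPS positions of the sampling points into the orthomosaic's coordinate system.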

2.4. Spectral Characteristic Parameters and Vegetation Indices Screening

2.4.1. Hyperspectral Characteristic Parameters Screening

Thirty hyperspectral characteristic parameters were selected based on existing studies and the spectral image characteristics of the taro canopy, as shown in Table 6.

2.4.2. Multispectral Vegetation Indices Screening

Twenty-four multispectral vegetation indices were selected based on existing studies and the spectral image characteristics of the taro canopy, as shown in Table 7.

2.5. Modeling Methods and Evaluation Indices

Three modeling algorithms were selected for this study: partial least squares regression (PLSR), random forest regression (RFR), and back propagation neural network (BPNN). These methods were chosen to leverage the complementary strengths of linear and nonlinear modeling techniques, thereby enhancing the robustness and generalizability of disease severity predictions. Spectral parameters with high correlation coefficients to disease indices were selected as inputs for each algorithm. Various modeling approaches were then applied to construct taro blight monitoring models.

2.5.1. Partial Least Squares Regression (PLSR) Algorithm

Partial least squares regression (PLSR), a widely used modeling method, is highly effective in predicting outcomes when the number of variables exceeds the number of samples. Autocorrelation and multicollinearity among variables are addressed in PLSR through principal component extraction. The principles of PLSR are shown as follows:
Assuming $p$ independent variables $X_1, \dots, X_p$ and $q$ dependent variables $Y_1, \dots, Y_q$, the standardized data matrices for $n$ samples are $X_0$ ($n \times p$) and $Y_0$ ($n \times q$).
Weight vectors $w_1$ and $v_1$ are sought to maximize the correlation between the score vectors $X_0 w_1$ and $Y_0 v_1$. This corresponds to maximizing their covariance, which for standardized data is the inner product of $X_0 w_1$ and $Y_0 v_1$. The objective function is as follows:
$\max_{\|w_1\| = \|v_1\| = 1} \; w_1^{\mathrm{T}} X_0^{\mathrm{T}} Y_0 v_1$
The objective function is solved using the Lagrange multiplier method to determine the optimal $w_1$ and $v_1$, where $w_1$ contains the weights of the independent variables and $v_1$ the weights of the dependent variables. The first component is then obtained as $t_1 = X_0 w_1$.
Based on t1 obtained above, the regression model is obtained:
$X_0 = t_1 \alpha_1^{\mathrm{T}} + E_1, \qquad Y_0 = t_1 \beta_1^{\mathrm{T}} + F_1$
Further components are extracted in the same way and integrated by least squares. When the rank of $X_0$ is $r \le \min(n - 1, p)$, there exist $r$ components $t_1, \dots, t_r$ such that
$X_0 = t_1 \alpha_1^{\mathrm{T}} + \dots + t_r \alpha_r^{\mathrm{T}} + E_r, \qquad Y_0 = t_1 \beta_1^{\mathrm{T}} + \dots + t_r \beta_r^{\mathrm{T}} + F_r$
To obtain the final model, the components are expressed in terms of the original variables and the coefficients are collected. The resulting model is expressed as follows:
$\hat{Y} = X \hat{\beta}$

2.5.2. Random Forest Regression (RFR) Algorithm

Random forest regression (RFR) is an ensemble learning algorithm that falls under the bagging methodology. The final result is derived through voting or averaging the outputs of multiple weak classifiers, producing an integrated model with high accuracy and strong generalization capabilities.
During the training phase, random forest regression employs bootstrap sampling to partition the training set into multiple sub-training datasets, which are used to train distinct decision trees. In the prediction phase, random forest regression calculates the average of predictions from its internal decision trees to produce the final result. The algorithm’s logic is shown in Figure 4.
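The bootstrap-and-average logic described above can be sketched with scikit-learn (the synthetic data and hyperparameters are illustrative, not those used in the study):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.uniform(size=(80, 10))                    # e.g. 10 vegetation indices
y = np.sin(3 * X[:, 0]) + 0.2 * X[:, 1] + rng.normal(scale=0.05, size=80)

# Each decision tree is trained on a bootstrap sample of the training
# set; the forest's prediction is the average of the trees' outputs.
rfr = RandomForestRegressor(n_estimators=200, bootstrap=True, random_state=0)
rfr.fit(X, y)
r2_train = rfr.score(X, y)
```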

2.5.3. Back Propagation Neural Network (BPNN) Algorithm

The back propagation neural network (BPNN) is a fundamental neural network architecture widely applied across various domains. Its core characteristic involves the forward propagation of output results and backward propagation of errors. Neurons in each layer are not directly interconnected. Instead, they receive signals from adjacent neurons, which are processed through activation functions.
The commonly used activation functions in this algorithm are the Log-Sigmoid Function and the Tan-Sigmoid Function.
The Log-Sigmoid Function expression is
$f(x) = \dfrac{1}{1 + e^{-x}}$
The output value of this function is in the range of 0 to 1.
The Tan-Sigmoid Function expression is
$f(x) = \dfrac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$
The output value of this function is in the range of −1 to 1.
Activation functions are required in the hidden layer to introduce nonlinearity and prevent linear mapping. The Purelin function is used in the output layer to allow an unlimited range of output values.
Connections between neurons in adjacent layers are weighted, representing the signal strength along these paths. The BPNN minimizes errors by adjusting weights (W) and thresholds (b) to optimize the loss function. The adjustment of weights and thresholds constitutes the training process of the BPNN. The error calculation equation of the model is as follows:
$E(W, b) = \dfrac{1}{2} \sum_{k} \left( y_k - T_k \right)^2$
where $y_k$ is the predicted output of the $k$-th output neuron and $T_k$ is the corresponding target value.
The number of neurons in the hidden layer is determined by the input and output neurons and a conditioning constant. The calculation formula is as follows:
$j = \sqrt{m + i} + \alpha$
In the calculation equation, j is the number of neurons in the hidden layer, m is the number of neurons in the input layer, i is the number of neurons in the output layer, and α is a conditioning constant in the range of 1 to 10.
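A compact NumPy sketch of the training loop described above: a Log-Sigmoid hidden layer, a Purelin (linear) output layer, and gradient updates of the weights $W$ and thresholds $b$; all names and hyperparameters are illustrative, not those of the study's model:

```python
import numpy as np

def logsig(x):
    """Log-Sigmoid activation; output in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def bpnn_train(X, y, n_hidden, lr=0.3, epochs=3000, seed=0):
    """One-hidden-layer BPNN trained by back propagation of the
    squared error E(W, b) = 1/2 * sum_k (y_k - T_k)^2."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
    b2 = np.zeros(1)
    T = y.reshape(-1, 1)
    for _ in range(epochs):
        h = logsig(X @ W1 + b1)             # forward pass, hidden layer
        out = h @ W2 + b2                   # Purelin (linear) output layer
        err = out - T                       # dE/d(out)
        gW2 = h.T @ err / len(X)
        gb2 = err.mean(axis=0)
        dh = (err @ W2.T) * h * (1.0 - h)   # back-propagated hidden error
        gW1 = X.T @ dh / len(X)
        gb1 = dh.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2      # adjust weights and thresholds
        W1 -= lr * gW1; b1 -= lr * gb1
    return lambda Xn: (logsig(Xn @ W1 + b1) @ W2 + b2).ravel()

# Hidden-layer size rule j = sqrt(m + i) + alpha, here with m = 2
# inputs, i = 1 output, and conditioning constant alpha = 2.
n_hidden = int(round((2 + 1) ** 0.5)) + 2
```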

2.5.4. Model Validation

The model’s reliability and accuracy were assessed using R2 and RMSE as evaluation metrics. Higher R2 values and lower RMSE values indicated greater precision and reliability in the monitoring model.
$R^2 = \dfrac{\sum_{i=1}^{n} \left( \hat{y}_i - \bar{y} \right)^2}{\sum_{i=1}^{n} \left( y_i - \bar{y} \right)^2}$
$\mathrm{RMSE} = \sqrt{\dfrac{\sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2}{n}}$
In the calculation equations, $n$ is the number of samples, $y_i$ is the measured value, $\hat{y}_i$ is the estimated value, and $\bar{y}$ is the mean of the measured values.

3. Results

3.1. DI Correlation Analysis of Taro Blight Based on Spectral Information

3.1.1. DI Correlation Analysis of Taro Blight Based on Hyperspectral Characteristic Parameters

The correlation between taro canopy hyperspectral parameters and disease indices was analyzed for the early and middle stages of taro formation. The results are shown in Table 8 and Table 9. In the early stage of taro formation, SDb, SDr/SDy, W3, and SR1 showed no significant correlation with disease indices, while Db and W1 demonstrated significant correlations. The remaining 24 parameters exhibited strong correlations with the blight disease index (DI). Notably, A3 showed the strongest correlation (r = 0.856) among all parameters. In the middle stage of taro formation, eight parameters, including SDb, SDr/SDb, (SDr − SDb)/(SDr + SDb), and A3, showed no significant correlation with disease indices. The other parameters displayed significant correlations with DI with ρg/ρr showing the strongest correlation (r = −0.695).
Based on hyperspectral parameters with significant correlations, the 10 parameters with the largest absolute correlation coefficients were selected for disease index modeling and analysis. For the early stage of taro formation, A3, W2, DP3, SL3, SDr/SDb, SDy, SDr, SL1, A2, and SR3 were selected. For the middle stage of taro formation, ρg/ρr, (ρg − ρr)/(ρg + ρr), A1, DP1, W1, SDr, Dr, DP2, A2, and SR2 were chosen for modeling.
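The screening step used here and in the following subsection (rank parameters by absolute Pearson correlation with DI and keep the top 10) can be sketched as follows (function and variable names are illustrative):

```python
import numpy as np

def top_k_by_correlation(params, di, k=10):
    """Rank spectral parameters by |Pearson r| against the disease index.

    params: mapping of parameter name -> 1-D array over sampling points.
    di: disease indices observed at the same sampling points.
    Returns the k top-ranked names and the full correlation table.
    """
    corrs = {name: float(np.corrcoef(vals, di)[0, 1])
             for name, vals in params.items()}
    ranked = sorted(corrs, key=lambda name: abs(corrs[name]), reverse=True)
    return ranked[:k], corrs
```

The absolute value matters because strongly negative correlations (such as ρg/ρr in the middle stage, r = −0.695) are as informative for modeling as strongly positive ones.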

3.1.2. Correlation Analysis of Taro Blight DI Based on Multispectral Vegetation Indices

Similarly, correlation analysis was conducted to evaluate the relationship between multispectral vegetation indices of the taro canopy and taro blight disease indices during the early and middle stages of taro formation. The results of this analysis are shown in Table 10 and Table 11. The results indicated a highly significant correlation between multispectral vegetation indices of the taro canopy and blight disease indices during the early and middle stages of taro formation. Among these indices, CI showed the highest correlation during the early stage of taro formation with a correlation coefficient of 0.861. In contrast, H exhibited the highest correlation during the middle stage with a correlation coefficient of 0.706.
For model building and analysis, the 10 parameters with the highest absolute correlation coefficients were selected for both the early and middle stages of taro formation. In the early stage of taro formation, CI, H, IF, MGRVI, EXR, RGR, NGRDI, IGR, GI, and MSR were selected. For the middle stage of taro formation, H, EXR, RGR, MGRVI, NGRDI, IF, MSR, GI, IGR, and GLI were chosen.

3.2. Taro Blight Estimation Model Based on Hyperspectral Characteristic Parameters

3.2.1. Building and Validation of Taro Blight Estimation Model Based on Partial Least Squares Regression (PLSR)

The partial least squares regression (PLSR) method was employed to compare and analyze 10 hyperspectral characteristic parameters. Estimation models were then constructed for the early and middle stages of taro formation using the screened characteristic parameters. The effectiveness of these estimation models (Table 12) showed significant differences across stages.
In the early stage of taro formation, W2 and A2 were chosen as variables for modeling. The resulting model for estimating canopy blight severity was represented as y = 2.3829 − 0.0223W2 − 0.0128A2 with an R2 of 0.79 and an RMSE of 0.086. These results demonstrated that the PLSR model, based on hyperspectral characteristic parameters, could be a viable method for disease estimation in the early stage of taro formation. In the middle stage of taro formation, A1, W1, and A2 were selected as the final variables, yielding an R2 of 0.59 and an RMSE of 0.081. The estimation model for taro canopy blight was represented as y = 1.1759 + 0.0067A1 − 0.0069W1 − 0.0059A2. The model effect in the early stage of taro formation was considered usable but required further verification.
The model was validated using samples other than contemporaneous modeling, as shown in Figure 5. In the early stage of taro formation (Figure 5a), the canopy blight severity estimation model achieved an R2 of 0.81 and an RMSE of 0.081. In the middle stage of taro formation (Figure 5b), the validation R2 was only 0.61, and the RMSE was 0.1, indicating low fit and stability. Consequently, the PLSR model was unsuitable for the inversion and estimation of canopy blight during the middle stage of taro formation.

3.2.2. Building and Validation of Taro Blight Estimation Models Based on Random Forest Regression (RFR)

The screened characteristic parameters were utilized to construct disease estimation models for both the early and middle stages of taro formation using the random forest regression (RFR) method. Table 13 presents the modeling results for hyperspectral parameters and multispectral indices across different stages. The RFR model demonstrated superior performance compared to the PLSR model in estimating taro blight during the early stage of taro formation when hyperspectral parameters were used as input variables. Specifically, the RFR model achieved an R2 value of 0.92 and an RMSE of 0.075. During the middle stage of taro formation, the RFR model achieved an R2 value of 0.58 and an RMSE of 0.088.
Validation was performed on samples withheld from modeling, and the results are shown in Figure 6. The measured and estimated disease indices in the early stage of taro formation (Figure 6a) showed an R2 of 0.84 and an RMSE of 0.075. This indicates that the RFR prediction model, using hyperspectral characteristic parameters, provides stable prediction performance and is well suited for taro blight monitoring. During the middle stage of taro formation (Figure 6b), the validation samples showed an R2 of 0.62 and an RMSE of 0.096. The results indicate that the RFR model, compared to the PLSR model, exhibits higher accuracy and reliability in the early stage of taro formation. This improvement is particularly beneficial for monitoring taro blight in the early stage. However, the improvement achieved by the RFR model in the middle stage of taro formation was less pronounced.
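A random forest regression model of the kind used here can be sketched with scikit-learn's RandomForestRegressor. The example below again uses synthetic data rather than the study's measurements; the deliberately nonlinear target illustrates why an ensemble of trees can outperform a linear PLSR fit on such relationships.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(1)

# Synthetic stand-ins for screened hyperspectral characteristic
# parameters; column 2 is an irrelevant distractor feature.
n = 120
X = rng.uniform(0.0, 1.0, size=(n, 3))
# Nonlinear synthetic disease index: an interaction plus a square root,
# which a purely linear model cannot capture exactly.
y = 0.6 * X[:, 0] * X[:, 1] + 0.3 * np.sqrt(X[:, 2]) + rng.normal(scale=0.03, size=n)

rf = RandomForestRegressor(n_estimators=300, random_state=1)
rf.fit(X[:90], y[:90])        # training subset
y_hat = rf.predict(X[90:])    # withheld validation subset

r2 = r2_score(y[90:], y_hat)
rmse = np.sqrt(mean_squared_error(y[90:], y_hat))
print(f"validation R2={r2:.2f}, RMSE={rmse:.3f}")
```

Each tree in the forest is fit on a bootstrap sample; averaging their predictions reduces variance, which is the property the text credits for RFR's robustness to moderate data variability.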

3.2.3. Building and Validation of Taro Blight Estimation Model Based on Back Propagation Neural Network (BPNN)

The BPNN method was used for extensive training and tuning on the sample data, resulting in an optimal estimation model for assessing taro blight severity at each stage based on hyperspectral parameters. Corresponding validation was also carried out. Table 14 shows that the BPNN model outperformed the PLSR and RFR models in prediction at both stages. The estimation model achieved an R2 value of 0.92 and an RMSE of 0.054 in the early stage of taro formation. In the middle stage of taro formation, the estimation model achieved an R2 value of 0.90 and an RMSE of 0.042.
Figure 7 illustrates the fitting results of the measured and estimated values for the validation sample. The validated model achieved an R2 of 0.89 and an RMSE of 0.073 in the early stage of taro formation (Figure 7a) and an R2 of 0.79 and an RMSE of 0.063 in the middle stage (Figure 7b). These results suggest that the BPNN model exhibited strong predictive ability and stability in both stages, delivering superior accuracy in predicting taro blight disease indices and reliable estimates across stages.
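A BPNN is a feed-forward neural network trained by back propagation; scikit-learn's MLPRegressor is one common stand-in for such a model. The sketch below uses synthetic data and an assumed single hidden layer of 16 neurons — the architecture is illustrative, since the network configuration used in the study is not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(2)

# Synthetic features standing in for hyperspectral parameters.
n = 150
X = rng.uniform(-1.0, 1.0, size=(n, 4))
# Smooth nonlinear target standing in for the spectrum -> disease-index
# mapping the text describes as complex and nonlinear.
y = np.tanh(X[:, 0] + 0.5 * X[:, 1]) + 0.3 * X[:, 2] ** 2 + rng.normal(scale=0.05, size=n)

# Standardizing inputs is important for gradient-based training.
scaler = StandardScaler().fit(X[:100])
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=2)
mlp.fit(scaler.transform(X[:100]), y[:100])
y_hat = mlp.predict(scaler.transform(X[100:]))

r2 = r2_score(y[100:], y_hat)
rmse = np.sqrt(mean_squared_error(y[100:], y_hat))
print(f"validation R2={r2:.2f}, RMSE={rmse:.3f}")
```

The hidden layer gives the network the nonlinear mapping capacity that the Discussion later credits for BPNN's advantage over the linear PLSR model.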

3.3. Estimation Model of Taro Blight Based on Multispectral Vegetation Indices

3.3.1. Building and Validation of Taro Blight Estimation Model Based on PLSR

Ten multispectral vegetation indices were analyzed using the PLSR modeling method, and estimation models for the disease index (DI) were developed separately for the early and middle stages of taro formation. The analysis results are presented in Table 15. In the early stage of taro formation, CI, IF, and GI were selected as the final variables, giving the regression equation y = −1.3483 + 2.5339CI + 0.2059IF + 0.6873GI, with an R2 of 0.76 and an RMSE of 0.098. The small differences in R2 and RMSE relative to the hyperspectral-parameter model suggest that this model is also suitable for estimating canopy taro blight. In the middle stage of taro formation, IF and GI were selected as modeling variables, and the final model, y = 2.093 − 0.1721IF − 1.0126GI, performed poorly, with an R2 of 0.49 and an RMSE of 0.11.
To further validate model accuracy, predictions were compared against samples withheld from modeling, and the results are shown in Figure 8. Validation gave an R2 of 0.85 and an RMSE of 0.074 in the early stage of taro formation (Figure 8a). In the middle stage of taro formation (Figure 8b), the validation R2 was only 0.45 and the RMSE reached 0.11, indicating poor model performance. The results indicated that a PLSR-based blight estimation model could be constructed from canopy spectra in the early stage of taro formation, but this method was not applicable in the middle stage.

3.3.2. Building and Validation of Taro Blight Estimation Model Based on RFR

The RFR modeling method was used to construct disease estimation models for the early and middle stages of taro formation; the modeling results for multispectral indices at these stages are shown in Table 16. In the early stage of taro formation, the R2 of the modeling sample was 0.89 and the RMSE was 0.069, a clear improvement over the PLSR model. In the middle stage of taro formation, the RFR model reduced the RMSE to 0.036 compared with the PLSR model, although the R2 of 0.34 remained low. Overall, the RFR model based on multispectral indices performed well in the early stage of taro formation and accurately estimated taro blight disease indices in that period.
Samples not used in modeling were compared with model predictions, and the validation results are shown in Figure 9. In the early stage of taro formation (Figure 9a), the R2 and RMSE were 0.83 and 0.082, respectively, indicating stable model performance; this method could therefore be used for taro blight monitoring. For the middle stage of taro formation (Figure 9b), the validation R2 and RMSE were 0.74 and 0.099, respectively. In summary, the RFR model could invert disease indices from taro canopy spectra in the early stage of taro formation, and in the middle stage the RFR model built with multispectral indices achieved higher prediction accuracy than the PLSR model.

3.3.3. Building and Validation of Taro Blight Estimation Model Based on BPNN

Using the BPNN method, extensive training and tuning were performed on the sample data, yielding optimal estimation models for taro blight severity based on multispectral vegetation indices at each stage. The modeling results for the early and middle stages of taro formation are shown in Table 17. The training R2 was 0.87 for both the early and middle stages of taro formation, with RMSE values of 0.074 and 0.057, respectively. No substantial differences were observed between the prediction models for the two stages; both achieved comparable, satisfactory accuracy in estimating taro blight severity.
Figure 10 presents the fitting results of the measured and estimated values for the validation sample. The validation results showed that the R2 values of the prediction models for the two stages were 0.82 and 0.81, and the RMSE values were 0.084 and 0.076, respectively. The results indicated that the BPNN model was highly stable and adaptable in both the early stage (Figure 10a) and the middle stage (Figure 10b) of taro formation. The model demonstrated consistent prediction accuracy regardless of the spectral parameter type, including hyperspectral characteristic parameters and multispectral vegetation indices.

3.4. Comparison of DI Modeling Based on Different Spectral Features

The modeling results for various models are shown in Table 18. A comprehensive comparison showed that the BPNN estimation model based on hyperspectral parameters had the highest accuracy in the early stage of taro formation, with an R2 value of 0.92 and the lowest RMSE of 0.054. The RFR model based on hyperspectral parameters ranked second, with an R2 value of 0.92 and an RMSE of 0.056. The validation results showed that the BPNN estimation model based on hyperspectral parameters had the best fit between estimated and true values, with an R2 of 0.89 and an RMSE of 0.074, confirming its accuracy and stability. The PLSR model based on multispectral indices ranked second, with an R2 value of 0.85 and an RMSE of 0.074, and the RFR model based on hyperspectral parameters ranked third, with an R2 value of 0.84 and an RMSE of 0.075. The RFR model performed well overall in terms of accuracy and stability, though it was slightly inferior to the BPNN estimation model.
In the middle stage of taro formation, the BPNN estimation model based on hyperspectral parameters showed the best performance, with the highest R2 (0.90) and the lowest RMSE (0.042) among all models. The BPNN estimation model based on multispectral indices ranked second, achieving an R2 value of 0.87 and an RMSE of 0.057. Validation results showed that the BPNN estimation model based on multispectral indices provided the best fit between estimated and true values, with an R2 of 0.81 and an RMSE of 0.076, followed by the BPNN estimation model based on hyperspectral parameters, with an R2 of 0.79 and an RMSE of 0.063. The R2 values of the two models were similar, but the RMSE of the hyperspectral parameter-based model was slightly lower, indicating better stability. These results suggest that the BPNN estimation model based on hyperspectral parameters had greater predictive ability in the middle stage of taro formation, providing more accurate and reliable estimates.
A comparison between the two stages showed that, in terms of modeling effects, the R2 ranged from 0.76 to 0.92 and the RMSE from 0.054 to 0.098 in the early stage of taro formation, while in the middle stage the R2 ranged from 0.49 to 0.90 and the RMSE from 0.042 to 0.11. The modeling effects in the early stage varied little, indicating usability, whereas those in the middle stage spanned a broader range with reduced stability. For validation effects, the early-stage R2 ranged from 0.81 to 0.89 and the RMSE from 0.074 to 0.084; in the middle stage of taro formation, the R2 ranged from 0.45 to 0.81 and the RMSE from 0.063 to 0.11. The validation fits in the early stage were consistently good, and the models were stable with minimal variation. Overall, all models constructed with the three algorithms and two primary parameter types exhibited high prediction capability in the early stage of taro formation. This stage is therefore suitable for applying the estimation models and can serve as a sensitive period for monitoring canopy taro diseases with UAV remote sensing data.

4. Discussion

4.1. Comparison of Taro Canopy Blight Monitoring Based on Spectral Features

Two types of feature data were used to construct the taro canopy blight monitoring model. Comparison of model performance showed that the DI prediction models based on both data sources performed best with the BPNN model. This result is consistent with the wheat stripe rust monitoring model of Guo et al. [49], which used UAV images to extract vegetation indices and texture features. Overall, UAV-based taro blight monitoring is feasible and enables field-scale disease monitoring. Considering that hyperspectral remote sensing equipment is expensive and difficult to operate, multispectral-based monitoring offers a more flexible, rapid, and cost-effective alternative for taro blight monitoring.
The hyperspectral-based BPNN model achieved higher R2 and lower RMSE values in both the early and middle stages of taro formation, highlighting its effectiveness in modeling disease severity. This superior performance can be attributed to BPNN’s strong nonlinear mapping capability and robust generalization ability, which enable it to effectively capture the complex and nonlinear relationships between spectral features and disease progression. Although multispectral data showed slightly lower accuracy, the RFR model performed best in the early stage of taro formation, potentially due to its ensemble structure’s ability to handle moderate data variability, while BPNN was more suitable for the middle stage of taro formation where the disease characteristics became more complex. Given the high cost and complexity of hyperspectral sensors, multispectral approaches offer a more accessible alternative. UAV-based monitoring proved feasible and scalable for field-level disease detection, providing a balance between resolution, flexibility, and cost.
In this study, we focused on the early and middle stages of taro formation, which served as the primary basis for the constructed taro blight characterization model. However, taro blight typically poses a hazard over a considerably broader range of growth periods [50], and the indices reflecting disease severity vary across growth stages. Thus, the application of our model to other growth stages is inherently limited, highlighting the need for detailed investigations of taro blight at those stages.
Furthermore, the occurrence and spread of taro blight in agricultural fields are influenced by multiple factors [51]. In this study, we primarily considered variations in spectral characteristics while excluding a comprehensive analysis of other relevant environmental factors. Consequently, there is a need to enrich and organize data sources to reduce prediction errors in the disease model and develop a more comprehensive monitoring system for taro blight. Potential issues, such as spectral saturation in later growth stages, may impact the accuracy of the spectral data. Additionally, the dependence on proper radiometric calibration of UAV sensors is crucial for ensuring reliable measurements. These factors highlight the importance of integrating a broader range of data and improving sensor calibration to enhance the effectiveness of disease monitoring and prediction systems.

4.2. Taro Blight Surveillance Based on UAV Imagery

In this study, the estimation of taro canopy blight was primarily based on UAV image data. Previous research has demonstrated the feasibility of using UAV images for crop disease monitoring [52]. Specifically, spectral features [53,54] were employed as key parameters in this study. Algorithms such as PLSR [49,53], RFR [55], and BPNN [56,57] were used because of their promising results in previous experiments. However, PLSR and RFR showed variable accuracy depending on the feature inputs, possibly reflecting the limitations of relying on a single feature or data source, whereas the BPNN algorithm demonstrated higher accuracy and stability. Therefore, integrating different features and collaborative modeling approaches should be explored to address these limitations. Additionally, the model results indicated the need for larger sample sizes and multi-year experiments in future studies to mitigate potential biases associated with small samples.
Owing to the expansive nature of taro leaves and their substantial height, shading occurs frequently among taro plants. The taro canopy images and remote sensing data collected by the UAV platform provide information about taro populations in a specific area, correlating with the distribution characteristics of blighted taro plants in the field [58]. The field survey of taro blight targeted the top four leaves of the plant as the primary sites of infection during the early and middle stages of the blight attack. As blight severity increased, spots extended to the petiole area, causing leaf hollowing or rotting, which hindered further monitoring. Consequently, this study did not investigate the disease in individual taro plants; further trials and comparisons are necessary to assess disease incidence at the individual-plant level.
The UAV flight settings used in this study were developed from actual conditions and prior experience and were primarily tailored to the study area and adjacent regions; nevertheless, they may serve as a reference for other geographical areas or climatic zones. The quality of UAV images is significantly influenced by factors such as the angle and level of image capture as well as the methods employed for data fusion. Image accuracy is typically associated with UAV flight altitude, as previously reported by Liu et al. [14]. This study was conducted at a single flight altitude and did not design or evaluate surveillance outcomes across various shooting altitudes; further research is required to address this limitation.
Although this study focused on UAV-based hyperspectral and multispectral monitoring techniques for taro blight, integrating satellite remote sensing can enhance the scalability and sustainability of disease management. Satellite remote sensing effectively monitors vegetation health trends over large areas due to its wide coverage and long-term observation capability. However, its low spatial resolution restricts early infection detection at the plant level. To overcome this limitation, future research should explore a collaborative satellite–UAV remote sensing model. Nie et al. [59] integrated Sentinel-1 microwave and Sentinel-2 optical time-series images to generate preliminary disease risk maps on a regional scale, significantly reducing UAV resource consumption. This hierarchical strategy aligns with precision agriculture goals. After identifying potential disease areas via satellite, UAVs can conduct high-resolution validation from multiple perspectives, providing a technological reference for early taro blight monitoring.
This study demonstrates significant potential for practical application in agricultural workflows. A key method for system integration is weekly drone-based monitoring, enabling the continuous tracking of taro crops across different growth stages. By capturing remote sensing images and spectral data, the system can monitor the progression of taro blight, facilitating timely intervention. Disease intervention alerts are activated based on disease severity, prompting immediate action by the farmer or operator. This enhances the efficiency of disease control in agriculture.

5. Conclusions

Correlation analysis was conducted between hyperspectral and multispectral reflectance values of the taro canopy and the disease index. Comparison of correlation coefficients showed that in the early stage of taro formation, significant and highly significant positive correlations were found between canopy hyperspectral reflectance in the green (470–510 nm) and red (590–690 nm) bands and the disease index, respectively. Highly significant negative correlations were observed in the near-infrared band (713–1008 nm). Similarly, in the middle stage of taro formation, positive correlations were observed in the red band (620–682 nm), and negative correlations were observed in the near-infrared band (703–1008 nm), which were both highly significant. These results indicate that both periods are suitable for monitoring taro blight, and that the red and near-infrared bands are sensitive wavelengths for estimating the disease index.
Correlation analysis between multispectral reflectance values and the disease index showed that among the five bands, only the red band exhibited a highly significant positive correlation during both early and middle stages of taro formation. This indicates that the red band is a sensitive indicator of taro disease.
Correlation analyses were conducted between hyperspectral and multispectral reflectance values of the taro canopy and blight disease indices. Based on the comparison, ten parameters with the strongest correlations were selected for modeling purposes.
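The selection step described above — ranking candidate spectral parameters by the strength of their Pearson correlation with the disease index and keeping the ten strongest — can be sketched as follows. The data are synthetic and the band indices illustrative; real inputs would be per-plot canopy reflectance values and surveyed disease indices.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic reflectance matrix: rows = plots, columns = candidate
# spectral parameters/bands; di = disease index per plot.
n_plots, n_bands = 40, 25
R = rng.normal(size=(n_plots, n_bands))
# Bands 3 and 10 are made informative on purpose.
di = 0.9 * R[:, 3] - 0.7 * R[:, 10] + rng.normal(scale=0.3, size=n_plots)

# Pearson correlation of each band with the disease index,
# keeping the ten strongest |r| values for modeling.
corrs = np.array([np.corrcoef(R[:, j], di)[0, 1] for j in range(n_bands)])
top10 = np.argsort(-np.abs(corrs))[:10]
print("selected band indices:", sorted(top10.tolist()))
```

Ranking by absolute correlation keeps both positively correlated (red) and negatively correlated (near-infrared) bands, matching the sign pattern reported for the two stages.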
For hyperspectral characteristic parameters, the BPNN model achieved optimal performance. During the early and middle stages of taro formation, the training sets reached R2 values of 0.92 and 0.90, with RMSE values of 0.054 and 0.042, respectively. The validation sets showed R2 values of 0.89 and 0.79, and RMSE values of 0.074 and 0.063, respectively.
For multispectral vegetation indices, the RFR model performed best in the early stage with training and validation R2 values of 0.89 and 0.83, and RMSE values of 0.069 and 0.075, respectively. The BPNN model achieved the highest accuracy in the middle stage, with training and validation R2 values of 0.87 and 0.81, and RMSE values of 0.057 and 0.076, respectively.
These results demonstrate that both hyperspectral and multispectral data sources, when combined with appropriate machine learning algorithms, can effectively support taro blight monitoring during critical growth stages.

Author Contributions

Conceptualization, Y.W. and C.S.; methodology, Y.W., Z.S., Y.C. and C.S.; software, W.Z. and S.Z.; validation, Y.W.; formal analysis, Y.W.; investigation, Y.W. and Y.C.; resources, W.Z. and S.Z.; data curation, Y.W., Y.C. and Z.S.; writing—original draft preparation, Y.W. and Y.C.; writing—review and editing, Y.W., Y.C., Z.S. and C.S.; supervision, T.L. and C.S.; funding acquisition, C.S. and Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Yangzhou University Student Science and Technology Innovation Program (XCX20240679); Key Research and Development Program (Modern Agriculture) of Jiangsu Province (BE2022335).

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wang, X.; Chang, L. Development Status of Taro Industry in Taizhou City and Discussion on Its New Business Mode. J. Anhui Agric. Sci. 2018, 46, 196–199.
  2. Sahoo, M.R.; DasGupta, M.; Kole, P.C.; Bhat, J.S.; Mukherjee, A. Antioxidative Enzymes and Isozymes Analysis of Taro Genotypes and Their Implications in Phytophthora Blight Disease Resistance. Mycopathologia 2007, 163, 241–248.
  3. Brooks, F.E. Detached-Leaf Bioassay for Evaluating Taro Resistance to Phytophthora Colocasiae. Plant Dis. 2008, 92, 126–131.
  4. Huang, W.; Shi, Y.; Dong, Y.; Ye, H.; Wu, M.; Cui, B.; Liu, L. Progress and Prospects of Crop Diseases and Pests Monitoring by Remote Sensing. Smart Agric. 2019, 1, 1–11.
  5. Zhu, Y.; Tang, L.; Liu, L.; Liu, B.; Zhang, X.; Qiu, X.; Tian, Y.; Cao, W. Research Progress on the Crop Growth Model CropGrow. Sci. Agric. Sin. 2020, 53, 3235–3256.
  6. Cheng, T.; Rivard, B.; Sánchez-Azofeifa, G.A.; Feng, J.; Calvo-Polanco, M. Continuous Wavelet Analysis for the Detection of Green Attack Damage due to Mountain Pine Beetle Infestation. Remote Sens. Environ. 2010, 114, 899–910.
  7. Zhang, J.; Luo, J.; Huang, W.; Wang, J. Continuous Wavelet Analysis Based Spectral Features Selection for Winter Wheat Yellow Rust Detection. Intell. Autom. Soft Comput. 2011, 17, 531–540.
  8. Zhang, J.; Pu, R.; Huang, W.; Yuan, L.; Luo, J.; Wang, J. Using In-Situ Hyperspectral Data for Detecting and Discriminating Yellow Rust Disease from Nutrient Stresses. Field Crops Res. 2012, 134, 165–174.
  9. Zhang, J.; Pu, R.; Wang, J.; Huang, W.; Yuan, L.; Luo, J. Detecting Powdery Mildew of Winter Wheat Using Leaf Level Hyperspectral Measurements. Comput. Electron. Agric. 2012, 85, 13–23.
  10. Zhang, J.; Yuan, L.; Wang, J.; Luo, J.; Du, S.; Huang, W. Research Progress of Crop Diseases and Pests Monitoring Based on Remote Sensing. Trans. Chin. Soc. Agric. Eng. 2012, 28, 1–11.
  11. Liu, H.; Kang, R.; Ustin, S.; Zhang, X.; Fu, Q.; Sheng, L.; Sun, T. Study on the Prediction of Cotton Yield within Field Scale with Time Series Hyperspectral Imagery. Spectrosc. Spectral Anal. 2016, 36, 2585–2589.
  12. Qiao, H.; Zhou, Y.; Bai, Y.; Cheng, D.; Duan, X. The Primary Research of Detecting Wheat Powdery Mildew Using In-Field and Low Altitude Remote Sensing. Acta Phytophylacica Sin. 2006, 33, 341–344.
  13. Backoulou, G.F.; Elliott, N.C.; Giles, K.; Phoofolo, M.; Catana, V.; Mirik, M.; Michels, J. Spatially Discriminating Russian Wheat Aphid Induced Plant Stress from Other Wheat Stressing Factors. Comput. Electron. Agric. 2011, 78, 123–129.
  14. Liu, W.; Yang, G.; Xu, F.; Qiao, H.; Fan, J.; Song, Y.; Zhou, Y. Comparisons of Detection of Wheat Stripe Rust Using Hyper-Spectrometer and UAV Aerial Photography. Acta Phytopathol. Sin. 2018, 48, 223–227.
  15. Wang, Z.; Chu, G.; Zhang, H.; Liu, S.; Huang, X.; Gao, F.; Zhang, C.; Wang, J. Identification of Diseased Empty Rice Panicles Based on Haar-like Feature of UAV Optical Image. Trans. Chin. Soc. Agric. Eng. 2018, 34, 73–82.
  16. Xavier, T.W.F.; Souto, R.N.V.; Statella, T.; Galbieri, R.; Santos, E.S.; Suli, G.S.; Zeilhofer, P. Identification of Ramularia Leaf Blight Cotton Disease Infection Levels by Multispectral, Multiscale UAV Imagery. Drones 2019, 3, 33.
  17. Portela, F.; Sousa, J.J.; Araújo-Paredes, C.; Peres, E.; Morais, R.; Pádua, L. Monitoring the Progression of Downy Mildew on Vineyards Using Multi-Temporal Unmanned Aerial Vehicle Multispectral Data. Agronomy 2025, 15, 934.
  18. Li, W.; Guo, Y.; Yang, W.; Huang, L.; Zhang, J.; Peng, J.; Lan, Y. Severity Assessment of Cotton Canopy Verticillium Wilt by Machine Learning Based on Feature Selection and Optimization Algorithm Using UAV Hyperspectral Data. Remote Sens. 2024, 16, 4637.
  19. Liu, L.; Huang, M.; Huang, W.; Wang, J.; Zhao, C.; Zheng, L.; Tong, Q. Monitoring Stripe Rust Disease of Winter Wheat Using Multi-Temporal Hyperspectral Airborne Data. J. Remote Sens. 2004, 8, 275–281.
  20. Li, X.; Lee, W.S.; Li, M.; Ehsani, R.; Mishra, A.K.; Yang, C.; Mangan, R.L. Spectral Difference Analysis and Airborne Imaging Classification for Citrus Greening Infected Trees. Comput. Electron. Agric. 2012, 83, 32–46.
  21. Severtson, D.; Callow, N.; Flower, K.; Neuhaus, A.; Olejnik, M.; Nansen, C. Unmanned Aerial Vehicle Canopy Reflectance Data Detects Potassium Deficiency and Green Peach Aphid Susceptibility in Canola. Precis. Agric. 2016, 17, 659–677.
  22. Sugiura, R.; Tsuda, S.; Tamiya, S.; Itoh, A.; Nishiwaki, K.; Murakami, N.; Shibuya, Y.; Hirafuji, M.; Nuske, S. Field Phenotyping System for the Assessment of Potato Late Blight Resistance Using RGB Imagery from an Unmanned Aerial Vehicle. Biosyst. Eng. 2016, 148, 1–10.
  23. Su, J.; Liu, C.; Coombes, M.; Hu, X.; Wang, C.; Xu, X.; Li, Q.; Guo, L.; Chen, W.-H. Wheat Yellow Rust Monitoring by Learning from Multispectral UAV Aerial Imagery. Comput. Electron. Agric. 2018, 155, 157–166.
  24. Edna Chebet, T.; Li, Y.; Sam, N.; Liu, Y. A Comparative Study of Fine-Tuning Deep Learning Models for Plant Disease Identification. Comput. Electron. Agric. 2019, 161, 272–279.
  25. Li, Y.; Chang, Q.; Liu, X.; Yan, L.; Luo, D.; Wang, S. Estimation of Maize Leaf SPAD Value Based on Hyperspectrum and BP Neural Network. Trans. Chin. Soc. Agric. Eng. 2016, 32, 135–142.
  26. Han, Y.; Liu, H.; Zhang, X.; Yu, Z.; Meng, X.; Kong, F.; Song, S.; Han, J. Prediction Model of Rice Panicles Blast Disease Degree Based on Canopy Hyperspectral Reflectance. Spectrosc. Spectral Anal. 2021, 41, 1220–1226.
  27. Gitelson, A.A.; Merzlyak, M.N.; Chivkunova, O.B. Optical Properties and Nondestructive Estimation of Anthocyanin Content in Plant Leaves. Photochem. Photobiol. 2001, 74, 38–45.
  28. Zhou, J.; Yungbluth, D.; Vong, C.N.; Scaboo, A.; Zhou, J. Estimation of the Maturity Date of Soybean Breeding Lines Using UAV-Based Multispectral Imagery. Remote Sens. 2019, 11, 2075.
  29. Liu, D.; Wen, D.; Zhu, J. Object-Oriented Land Use Information Extraction Based on UAV Images. Geospatial Inf. 2020, 18, 75–80.
  30. Zhang, L.; Chen, Y.; Li, Y.; Ma, J.; Du, K.; Zheng, F.; Sun, Z. Estimating above Ground Biomass of Winter Wheat at Early Growth Stages Based on Visual Spectral. Spectrosc. Spectral Anal. 2019, 39, 2501–2506.
  31. Steele, M.R.; Gitelson, A.A.; Rundquist, D.C.; Merzlyak, M.N. Nondestructive Estimation of Anthocyanin Content in Grapevine Leaves. Am. J. Enol. Vitic. 2009, 60, 87–92.
  32. Haboudane, D.; Miller, J.R.; Tremblay, N.; Zarco-Tejada, P.J.; Dextraze, L. Integrated Narrow-Band Vegetation Indices for Prediction of Crop Chlorophyll Content for Application to Precision Agriculture. Remote Sens. Environ. 2002, 81, 416–426.
  33. Zarco-Tejada, P.J.; Berjón, A.; López-Lozano, R.; Miller, J.R.; Martín, P.; Cachorro, V.; González, M.R.; de Frutos, A. Assessing Vineyard Condition with Hyperspectral Indices: Leaf and Canopy Reflectance Simulation in a Row-Structured Discontinuous Canopy. Remote Sens. Environ. 2005, 99, 271–287.
  34. Lv, Y.; Lv, W.; Han, K.; Tao, W.; Zheng, L.; Weng, S.; Huang, L. Determination of Wheat Kernels Damaged by Fusarium Head Blight Using Monochromatic Images of Effective Wavelengths from Hyperspectral Imaging Coupled with an Architecture Self-Search Deep Network. Food Control 2022, 135, 108819.
  35. Maimouni, S.; Bannari, A.; El-Harti, A.; El-Ghmari, A. Potentiels et Limites Des Indices Spectraux Pour Caractériser La Dégradation Des Sols En Milieu Semi-Aride. Can. J. Remote Sens. 2011, 37, 285–301.
  36. Juliane, B.; Kang, Y.; Helge, A.; Andreas, B.; Simon, B.; Janis, B.; Martin, L.G.; Georg, B. Combining UAV-Based Plant Height from Crop Surface Models, Visible, and near Infrared Vegetation Indices for Biomass Monitoring in Barley. Int. J. Appl. Earth Obs. Geoinf. 2015, 39, 79–87.
  37. Zhang, J.; Qiu, X.; Wu, Y.; Zhu, Y.; Cao, Q.; Liu, X.; Cao, W. Combining Texture, Color, and Vegetation Indices from Fixed-Wing UAS Imagery to Estimate Wheat Growth Parameters Using Multivariate Regression Methods. Comput. Electron. Agric. 2021, 185, 106138.
  38. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, 3, 610–621.
  39. Sims, D.A.; Gamon, J.A. Relationships between Leaf Pigment Content and Spectral Reflectance across a Wide Range of Species, Leaf Structures and Developmental Stages. Remote Sens. Environ. 2002, 81, 337–354.
  40. Gitelson, A.A.; Dall’Olmo, G.; Moses, W.; Rundquist, D.C.; Barrow, T.; Fisher, T.R.; Gurlin, D.; Holz, J. A Simple Semi-Analytical Model for Remote Estimation of Chlorophyll-a in Turbid Waters: Validation. Remote Sens. Environ. 2008, 112, 3582–3593.
  41. Naidu, R.A.; Perry, E.M.; Pierce, F.J.; Mekuria, T. The Potential of Spectral Reflectance Technique for the Detection of Grapevine Leafroll-Associated Virus-3 in Two Red-Berried Wine Grape Cultivars. Comput. Electron. Agric. 2009, 66, 38–45.
  42. Mahlein, A.-K.; Rumpf, T.; Welke, P.; Dehne, H.-W.; Plümer, L.; Steiner, U.; Oerke, E.-C. Development of Spectral Indices for Detecting and Identifying Plant Diseases. Remote Sens. Environ. 2013, 128, 21–30.
  43. Huang, L.; Zhang, H.; Ruan, C.; Huang, W.; Zhao, J. Detection of Scab in Wheat Ears Using in Situ Hyperspectral Data and Support Vector Machine Optimized by Genetic Algorithm. Int. J. Agric. Biol. Eng. 2020, 13, 182–188.
  44. Wessman, C.A.; Aber, J.D.; Peterson, D.L. An Evaluation of Imaging Spectrometry for Estimating Forest Canopy Chemistry. Int. J. Remote Sens. 1989, 10, 1293–1316.
  45. Gitelson, A.A.; Kaufman, Y.J.; Merzlyak, M.N. Use of a Green Channel in Remote Sensing of Global Vegetation from EOS-MODIS. Remote Sens. Environ. 1996, 58, 289–298.
  46. Merzlyak, M.N.; Gitelson, A.A.; Chivkunova, O.B.; Rakitin, V.Y. Non-Destructive Optical Detection of Pigment Changes during Leaf Senescence and Fruit Ripening. Physiol. Plant. 1999, 106, 135–141.
  47. Imhoff, M.L.; Bounoua, L.; DeFries, R.; Lawrence, W.T.; Stutzer, D.; Tucker, C.J.; Ricketts, T. The Consequences of Urban Land Transformation on Net Primary Productivity in the United States. Remote Sens. Environ. 2004, 89, 434–443.
  48. Patrick, A.; Pelham, S.; Culbreath, A.; Holbrook, C.C.; De Godoy, I.J.; Li, C. High Throughput Phenotyping of Tomato Spot Wilt Disease in Peanuts Using Unmanned Aerial Systems and Multispectral Imaging. IEEE Instrum. Meas. Mag. 2017, 20, 4–12.
  49. Guo, A.; Huang, W.; Dong, Y.; Ye, H.; Ma, H.; Liu, B.; Wu, W.; Ren, Y.; Ruan, C.; Geng, Y. Wheat Yellow Rust Detection Using UAV-Based Hyperspectral Technology. Remote Sens. 2021, 13, 123.
  50. Cui, Y. The Progress of Leaf Blight Research in Taro. China Plant Prot. 2020, 40, 22–26+38.
  51. Yuan, H.; Zhang, F.; Wang, X.; Wu, X.; Yang, Y. Influencing Factors and Control Measures of Rugao Xiangtang Taro Blight. China Plant Prot. 2018, 38, 52–56.
  52. Feng, H.; Hong, Q.; Hu, C.; Huang, W.; Hu, X.; Liu, J.; Zhang, Y.; Zhang, Z.; Qiao, H.; Liu, W. Recent Advances in Intelligent Techniques for Monitoring and Prediction of Crop Diseases and Insect Pests in China. Plant Prot. 2023, 49, 229–242.
  53. Guo, W.; Zhu, Y.; Wang, H.; Zhang, J.; Dong, P.; Qiao, H. Monitoring Model of Winter Wheat Take-All Based on UAV Hyperspectral Imaging. Trans. Chin. Soc. Agric. Mach. 2019, 50, 162–169.
  54. Su, B.; Liu, Y.; Huang, Y.; Wei, R.; Cao, X.; Han, D. Analysis for Stripe Rust Dynamics in Wheat Population Using UAV Remote Sensing. Trans. Chin. Soc. Agric. Eng. 2021, 37, 127–135.
  55. Feng, Z.; Song, L.; Zhang, S.; Jing, Y.; Duan, J.; He, L.; Yin, F.; Feng, W. Wheat Powdery Mildew Monitoring Based on Information Fusion of Multi-Spectral and Thermal Infrared Images Acquired with an Unmanned Aerial Vehicle. Sci. Agric. Sin. 2022, 55, 890–906. [Google Scholar] [CrossRef]
  56. Shen, W.; Li, Y.; Feng, W.; Zhang, H.; Zhang, Y.; Xie, Y.; Guo, T. Inversion Model for Severity of Powdery Mildew in Wheat Leaves Based on Factor Analysis-BP Neural Network. Trans. Chin. Soc. Agric. Eng. 2015, 31, 183–190. [Google Scholar] [CrossRef]
  57. Liu, L.; Dong, Y.; Huang, W.; Du, X.; Ma, H. Monitoring Wheat Fusarium Head Blight Using Unmanned Aerial Vehicle Hyperspectral Imagery. Remote Sens. 2020, 12, 3811. [Google Scholar] [CrossRef]
  58. Wang, H.; Yu, S.; Zhang, H.; Zhao, Y. Taro Blight: Spatial Distribution Pattern and Sampling Technique. Chin. Agric. Sci. Bull. 2020, 36, 118–122. [Google Scholar] [CrossRef]
  59. Nie, J.; Jiang, J.; Li, Y.; Li, J.; Chao, X.; Ercisli, S. Efficient Detection of Cotton Verticillium Wilt by Combining Satellite Time-Series Data and Multiview UAV Images. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2024, 17, 13547–13557. [Google Scholar] [CrossRef]
Figure 1. Geographical location and layout of the test area used in the study.
Figure 2. Taro leaves at different levels of disease severity: (a) healthy; (b) mild symptoms; (c) moderate symptoms; (d) severe symptoms.
Figure 3. Distribution of taro plants by degree of disease incidence: (a) healthy; (b) moderate symptoms; (c) severe symptoms.
Figure 4. Step-by-step workflow of the random forest algorithm.
Figure 5. Accuracy verification of the taro blight DI estimation model based on PLSR using hyperspectral characteristic parameters: (a) early stage of taro formation; (b) middle stage of taro formation.
Figure 6. Accuracy verification of the taro blight DI estimation model based on RFR using hyperspectral characteristic parameters: (a) early stage of taro formation; (b) middle stage of taro formation.
Figure 7. Accuracy verification of the taro blight DI estimation model based on BPNN using hyperspectral characteristic parameters: (a) early stage of taro formation; (b) middle stage of taro formation.
Figure 8. Accuracy verification of the taro blight DI estimation model based on PLSR using multispectral vegetation indices: (a) early stage of taro formation; (b) middle stage of taro formation.
Figure 9. Accuracy verification of the taro blight DI estimation model based on RFR using multispectral vegetation indices: (a) early stage of taro formation; (b) middle stage of taro formation.
Figure 10. Accuracy verification of the taro blight DI estimation model based on BPNN using multispectral vegetation indices: (a) early stage of taro formation; (b) middle stage of taro formation.
Table 1. Standard used for grading taro blight disease severity.

Disease Level | Grading Standard (in plants)
0 | Disease-free
1 | Sporadic necrotic spots
2 | Necrotic area not exceeding 1/4 of leaf area
3 | Necrotic area covering 1/4 to 1/3 of leaf area
4 | Necrotic area covering 1/3 to 2/3 of leaf area
5 | Necrotic area covering more than 2/3 of leaf area
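The per-plant grades in Table 1 are typically aggregated into a plot-level disease index (DI). The paper's exact DI formula is defined in its Methods section; the sketch below uses the common weighted-sum convention, DI = Σ(level × count) / (max level × total plants), purely as an assumed illustration.

```python
# Hypothetical sketch: aggregating the 0-5 severity grades of Table 1 into a
# disease index (DI). The weighted-sum formula here is a common convention and
# is assumed; the paper's own DI definition may differ in detail.

def disease_index(level_counts, max_level=5):
    """level_counts: dict mapping severity level (0-5) to number of plants."""
    total = sum(level_counts.values())
    if total == 0:
        raise ValueError("no plants surveyed")
    weighted = sum(level * n for level, n in level_counts.items())
    return weighted / (max_level * total)

# Example plot: 12 healthy plants, 5 at level 2, 3 at level 4
print(disease_index({0: 12, 2: 5, 4: 3}))  # -> 0.22
```

With this convention a fully healthy plot scores 0 and a plot with every plant at level 5 scores 1, which matches the 0–1 scale of the DI values modeled later.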
Table 2. Technical parameters of the Matrice 600 Pro UAV.

Technology Name | Specific Parameters
Take-Off Weight | 9.5 kg
Diagonal Wheelbase | 1133 mm
Max Pitch Angle | 25°
Operating Temperature | −10 °C to 40 °C
Hover Time | 32 min (no load); 16 min (6 kg load)
Hover Accuracy | Vertical: ±0.5 m; Horizontal: ±1.5 m
Table 3. Main parameters of the GaiaSky-mini airborne hyperspectrometer.

Technology Name | Specific Parameter
Spectral Range | 400~1000 nm (1 nm)
Spectral Resolution | 3.5 nm @ 30 µm slit
Numerical Aperture | F/2.8
Spectral Sampling Rate | 0.7 nm
Full-Width Pixel | 1392 × 1040
Pixel Pitch | 6.45 µm
Field of View (FOV) | 31.34° @ 16 mm
Horizontal Field of View (flight altitude 300 m) | 168 m @ 16 mm
Lens | 16 mm/23 mm/25 mm
Number of Spectral Channels | 1040 (1X)/520 (2X)/256 (4X)/128 (8X)
Spatial Resolution | 0.12
Camera Output | 14 bit
Table 4. Technical parameters of the Phantom 4 multispectral UAV.

Technology Name | Specific Parameter
Take-Off Weight | 1487 g
Diagonal Size (Propellers Excluded) | 350 mm
Flight Time | 27 min
Operating Frequency | 5.725 GHz to 5.850 GHz
Hover Accuracy | Vertical: ±0.1 m; Horizontal: ±0.1 m
Table 5. Optical parameters of the Phantom 4 multispectral UAV lenses.

Technology Name | Specific Parameter
Max Photo Resolution | 1600 × 1300 (4:3.25)
ISO Range | 200~800
Lens | FOV 62.7°; focal length 5.74 mm; aperture f/2.2
Electronic Global Shutter | 1/100~1/20,000 s (visible light); 1/100~1/10,000 s (multispectral)
Photo Format | JPEG + TIFF
Monochromatic Sensor Gain | 1 to 8 times
Table 6. The thirty selected hyperspectral characteristic parameters.

Parameter Name | Abbreviation | Expression and Extraction Method | References

Three-Edge Parameters
Blue edge amplitude | Db | Maximum of the first-derivative spectrum at 490~530 nm | [24]
Yellow edge amplitude | Dy | Maximum of the first-derivative spectrum at 560~640 nm | [24]
Red edge amplitude | Dr | Maximum of the first-derivative spectrum at 680~760 nm | [24]
Blue edge area | SDb | Integral of the first-derivative spectrum at 490~530 nm | [24]
Yellow edge area | SDy | Integral of the first-derivative spectrum at 560~640 nm | [24]
Red edge area | SDr | Integral of the first-derivative spectrum at 680~760 nm | [24]
Red valley value | ρr | Minimum of the original spectrum at 640~680 nm | [24]
Green peak value | ρg | Maximum of the original spectrum at 510~560 nm | [24]
Green peak area | SDg | Integral of the original spectrum at 510~560 nm | [24]
— | ρg/ρr | Ratio of the green peak value to the red valley value | [25]
— | (ρg − ρr)/(ρg + ρr) | Normalized value of the green peak and red valley values | [25]
— | SDr/SDb | Ratio of the red edge area to the blue edge area | [25]
— | SDr/SDy | Ratio of the red edge area to the yellow edge area | [25]
— | (SDr − SDb)/(SDr + SDb) | Normalized ratio of the red edge area to the blue edge area | [25]
— | (SDr − SDy)/(SDr + SDy) | Normalized ratio of the red edge area to the yellow edge area | [25]

Absorption Valley Parameters
Absorption valley area | A | Integral of absorption valleys in continuum-removal spectra | [26]
Absorption valley width | W | Width of the absorption valley at half-depth | [26]
Absorption valley depth | DP | Distance from the lowest point of the absorption valley to the baseline | [26]
Absorption valley left slope | SL | Slope of the line from the left starting point of the absorption valley to its bottom point | [26]
Absorption valley right slope | SR | Slope of the line from the right starting point of the absorption valley to its bottom point | [26]
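The three-edge extractions in Table 6 reduce to two operations on the first-derivative spectrum: a maximum (amplitude) and an integral (area) over a fixed wavelength window. A minimal sketch, using a synthetic sigmoid-shaped red edge rather than real UAV data:

```python
import math

# Sketch of the "three-edge" extraction in Table 6: the red edge amplitude (Dr)
# is the maximum of the first-derivative spectrum within 680~760 nm, and the
# red edge area (SDr) is the integral of the derivative over the same window.
# The reflectance curve is synthetic; window bounds follow Table 6.

wavelengths = list(range(400, 1001))  # 1 nm sampling, as in Table 3
reflectance = [0.5 / (1 + math.exp(-(w - 720) / 15)) for w in wavelengths]

# First-derivative spectrum via forward differences (1 nm spacing)
deriv = [reflectance[i + 1] - reflectance[i] for i in range(len(wavelengths) - 1)]

def edge_params(lo, hi):
    window = [deriv[i] for i, w in enumerate(wavelengths[:-1]) if lo <= w <= hi]
    amplitude = max(window)   # e.g. Dr for (680, 760)
    area = sum(window)        # integral approximated at 1 nm steps
    return amplitude, area

Dr, SDr = edge_params(680, 760)  # red edge amplitude and area
Db, SDb = edge_params(490, 530)  # blue edge amplitude and area
```

For this synthetic curve the red edge is much steeper than the blue edge, so Dr dominates Db, mirroring how the red-edge parameters carry most of the vegetation signal.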
Table 7. The twenty-four selected multispectral vegetation indices.

Parameter Name | Abbreviation | Expression | References
Anthocyanin Reflectance Index | ARI | 1/g − 1/r | [27]
Coloration Index | CI | (r − b)/r | [28]
Combination Indices | COM | 0.25EXG + 0.3EXGR + 0.33CIVE + 0.12VEG | [29]
Extra Green–Red Difference Index | EXGR | EXG − EXR | [30]
Extra Red Vegetation Index | EXR | 1.4r − g | [30]
Greenness Index | GI | g/r | [31,32,33]
Green Leaf Index | GLI | (2g − b − r)/(2g + b + r) | [34]
Hue | H | Arctan((2r − g − b)/3.5*(g − b)) | [27]
Indice de Forme | IF | (2r − g − b)/(g − b) | [35]
Red Green Ratio Index | IGR | r − b | [27]
Modified Green–Red Vegetation Index | MGRVI | (g² − r²)/(g² + r²) | [36]
Normalized Green–Red Difference Index | NGRDI | (g − r)/(g + r) | [37]
Red, Green, and Blue Vegetation Index | RGBVI | (g² − br)/(g² + br) | [38]
Red–Green Ratio | RGR | r/g | [39]
Visible Atmospherically Resistant Index in Green Band | VARIgreen | (g − r)/(g + r − b) | [40,41]
Chlorotic Leaf Spot Index | CLSI | (re − g)/(tr − g) − re | [42]
Modified Simple Ratio | MSR | r/(nir/r + 1)^0.5 | [43]
Normalized Difference Vegetation Index | NDVI | (nir − r)/(nir + r) | [44]
Normalized Difference Vegetation Index of Red Edge | NDVIrededge | (re − r)/(re + r) | [45]
Plant Senescence Reflectance Index | PSRI | (re − g)/nir | [46]
Red and Blue Normalized Difference Vegetation Index | RBNDVI | (nir − (r + b))/(nir + (r + b)) | [27]
Red Red Edge Ratio Index 2 | RRI2 | re/r | [27]
Ratio Vegetation Index | RVI | nir/r | [47]
Woebbecke Index | WI | (g − b)/(re − b) | [48]
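Each index in Table 7 is a simple algebraic combination of per-band reflectances (blue b, green g, red r, red edge re, near-infrared nir). A sketch computing a handful of them from illustrative plot-mean reflectances:

```python
# A few vegetation indices from Table 7, computed from mean band reflectances
# of a plot. The reflectance values below are illustrative, not measured data.

def vegetation_indices(b, g, r, re, nir):
    return {
        "NDVI":  (nir - r) / (nir + r),      # Normalized Difference Vegetation Index
        "NGRDI": (g - r) / (g + r),          # Normalized Green-Red Difference Index
        "RGR":   r / g,                      # Red-Green Ratio
        "RVI":   nir / r,                    # Ratio Vegetation Index
        "PSRI":  (re - g) / nir,             # Plant Senescence Reflectance Index
    }

vi = vegetation_indices(b=0.05, g=0.12, r=0.08, re=0.30, nir=0.45)
print(vi["NDVI"])  # about 0.698 for this healthy-looking spectrum
```

In practice these would be computed per pixel (or per plot mean) from the radiometrically calibrated multispectral orthomosaic before being fed to the regression models.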
Table 8. Correlation between hyperspectral parameters and DI in the early stage of taro formation.

Characteristic Parameter | Correlation Coefficient | Ranking | Characteristic Parameter | Correlation Coefficient | Ranking
Db | −0.318 * | 26 | A1 | 0.621 ** | 18
Dy | −0.441 ** | 23 | A2 | 0.81 ** | 9
Dr | 0.627 ** | 17 | A3 | 0.856 ** | 1
SDb | 0.021 | 30 | W1 | −0.384 * | 25
SDy | 0.819 ** | 6 | W2 | −0.855 ** | 2
SDr | −0.814 ** | 7 | W3 | −0.224 | 27
ρr | 0.603 ** | 19 | DP1 | 0.683 ** | 14
ρg | 0.434 ** | 24 | DP2 | 0.784 ** | 11
SDg | 0.441 ** | 22 | DP3 | 0.838 ** | 3
ρg/ρr | −0.628 ** | 16 | SL1 | 0.812 ** | 8
(ρg − ρr)/(ρg + ρr) | −0.666 ** | 15 | SL2 | −0.486 ** | 21
SDr/SDb | −0.83 ** | 5 | SL3 | 0.836 ** | 4
SDr/SDy | −0.122 | 28 | SR1 | 0.095 | 29
(SDr − SDb)/(SDr + SDb) | −0.762 ** | 12 | SR2 | −0.752 ** | 13
(SDr − SDy)/(SDr + SDy) | −0.571 ** | 20 | SR3 | −0.805 ** | 10
Note: * indicates a significant correlation (p < 0.05) and ** indicates a highly significant correlation (p < 0.01).
Table 9. Correlation between hyperspectral parameters and DI in the middle stage of taro formation.

Characteristic Parameter | Correlation Coefficient | Ranking | Characteristic Parameter | Correlation Coefficient | Ranking
Db | −0.438 ** | 17 | A1 | 0.682 ** | 3
Dy | 0.361 * | 20 | A2 | 0.641 ** | 9
Dr | −0.662 ** | 7 | A3 | −0.115 | 26
SDb | −0.274 | 23 | W1 | −0.668 ** | 5
SDy | 0.604 ** | 13 | W2 | −0.555 ** | 16
SDr | −0.664 ** | 6 | W3 | 0.220 | 24
ρr | 0.608 ** | 12 | DP1 | 0.677 ** | 4
ρg | 0.315 * | 22 | DP2 | 0.648 ** | 8
SDg | 0.393 * | 19 | DP3 | −0.079 | 29
ρg/ρr | −0.695 ** | 1 | SL1 | 0.623 ** | 11
(ρg − ρr)/(ρg + ρr) | −0.684 ** | 2 | SL2 | −0.428 ** | 18
SDr/SDb | −0.103 | 28 | SL3 | 0.072 | 30
SDr/SDy | 0.599 ** | 15 | SR1 | 0.327 * | 21
(SDr − SDb)/(SDr + SDb) | −0.107 | 27 | SR2 | −0.639 ** | 10
(SDr − SDy)/(SDr + SDy) | 0.601 ** | 14 | SR3 | 0.191 | 25
Note: * indicates a significant correlation (p < 0.05) and ** indicates a highly significant correlation (p < 0.01).
Table 10. Correlation between DI and multispectral indices in the early stage of taro formation.

Vegetation Index | Correlation Coefficient | Ranking | Vegetation Index | Correlation Coefficient | Ranking
ARI | 0.743 ** | 13 | RGBVI | −0.715 ** | 20
CI | 0.861 ** | 1 | RGR | 0.795 ** | 6
COM | −0.731 ** | 17 | VARIgreen | −0.705 ** | 22
EXGR | −0.758 ** | 11 | CLSI | −0.709 ** | 21
EXR | 0.795 ** | 5 | MSR | 0.764 ** | 10
GI | −0.768 ** | 9 | NDVI | −0.721 ** | 18
GLI | −0.740 ** | 14 | NDVIrededge | −0.738 ** | 15
H | 0.826 ** | 2 | PSRI | 0.749 ** | 12
IF | 0.800 ** | 3 | RBNDVI | −0.735 ** | 16
IGR | 0.781 ** | 8 | RRI2 | −0.682 ** | 23
MGRVI | −0.799 ** | 4 | RVI | −0.671 ** | 24
NGRDI | −0.793 ** | 7 | WI | 0.718 ** | 19
Note: ** indicates a highly significant correlation (p < 0.01).
Table 11. Correlation between DI and multispectral indices in the middle stage of taro formation.

Vegetation Index | Correlation Coefficient | Ranking | Vegetation Index | Correlation Coefficient | Ranking
ARI | 0.679 ** | 11 | RGBVI | −0.665 ** | 18
CI | 0.670 ** | 15 | RGR | 0.703 ** | 3
COM | −0.636 ** | 23 | VARIgreen | −0.629 ** | 24
EXGR | −0.668 ** | 16 | CLSI | −0.668 ** | 17
EXR | 0.705 ** | 2 | MSR | 0.696 ** | 7
GI | −0.694 ** | 8 | NDVI | −0.655 ** | 20
GLI | −0.681 ** | 10 | NDVIrededge | −0.677 ** | 13
H | 0.706 ** | 1 | PSRI | 0.678 ** | 12
IF | 0.700 ** | 6 | RBNDVI | −0.664 ** | 19
IGR | 0.682 ** | 9 | RRI2 | −0.672 ** | 14
MGRVI | −0.702 ** | 4 | RVI | −0.646 ** | 21
NGRDI | −0.701 ** | 5 | WI | 0.642 ** | 22
Note: ** indicates a highly significant correlation (p < 0.01).
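The coefficients in Tables 8–11 are Pearson correlations between each spectral parameter and the disease index. A minimal, self-contained Pearson r (the significance stars would come from a t-test on r with n − 2 degrees of freedom, omitted here); the data below are illustrative:

```python
import math

# Pearson correlation coefficient, as used to rank the spectral parameters
# against DI in Tables 8-11. Illustrative data, not the study's measurements.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A parameter that rises with DI correlates strongly and positively
di = [0.1, 0.2, 0.3, 0.5, 0.6]
param = [1.0, 1.9, 3.1, 5.2, 5.9]
print(round(pearson_r(param, di), 3))
```

Parameters with |r| near 1 (e.g. A3 at 0.856 in Table 8) carry the most information about blight severity and are the natural candidates for model inputs.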
Table 12. The effectiveness of the estimation model of taro blight disease index based on PLSR.

Growth Stage | Selected Variables | PLSR Regression Equation | R2 | RMSE
The early stage of taro formation | X1 (W2), X2 (A2) | y = 2.3829 − 0.0223X1 − 0.0128X2 | 0.79 | 0.086
The middle stage of taro formation | X1 (A1), X2 (W1), X3 (A2) | y = 1.1759 + 0.0067X1 − 0.0069X2 − 0.0059X3 | 0.59 | 0.081
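Once fitted, the PLSR models in Table 12 are plain linear equations, so applying one to a new plot is a single evaluation. A sketch using the early-stage equation; the W2 and A2 input values below are made up for illustration:

```python
# Applying the fitted early-stage PLSR equation from Table 12 to estimate DI
# from the two selected hyperspectral parameters W2 (X1) and A2 (X2).
# Coefficients are from Table 12; the input values are hypothetical.

def di_early_plsr(w2, a2):
    return 2.3829 - 0.0223 * w2 - 0.0128 * a2

print(di_early_plsr(w2=80.0, a2=40.0))  # -> about 0.0869
```

The intercept and coefficients encode the negative correlations of W2 and A2 with DI seen in Table 8, so larger absorption-valley widths and areas map to lower predicted disease indices here.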
Table 13. The effectiveness of the estimation model of taro blight disease index based on RFR.

Growth Stage | R2 | RMSE
The early stage of taro formation | 0.92 | 0.056
The middle stage of taro formation | 0.58 | 0.088
Table 14. The effectiveness of the estimation model of taro blight disease index based on BPNN.

Growth Stage | R2 | RMSE
The early stage of taro formation | 0.92 | 0.054
The middle stage of taro formation | 0.90 | 0.042
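The two accuracy measures reported throughout Tables 12–18, R² (coefficient of determination) and RMSE (root mean square error), are computed from observed versus predicted DI values. A self-contained sketch with illustrative numbers:

```python
import math

# R^2 and RMSE, the accuracy measures reported in Tables 12-18, computed from
# observed and predicted DI values. The four data points are illustrative.

def r2_rmse(observed, predicted):
    n = len(observed)
    mean_obs = sum(observed) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    r2 = 1 - ss_res / ss_tot      # 1 means a perfect fit
    rmse = math.sqrt(ss_res / n)  # same units as DI
    return r2, rmse

obs = [0.10, 0.25, 0.40, 0.55]
pred = [0.12, 0.22, 0.43, 0.52]
r2, rmse = r2_rmse(obs, pred)
print(round(r2, 3), round(rmse, 3))  # -> 0.972 0.028
```

Comparing these two numbers between the training and validation sets, as Table 18 does, is what reveals whether a model (e.g. the mid-stage BPNN) is overfitting.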
Table 15. Taro blight disease index estimation model built based on PLSR.

Growth Stage | Selected Variables | PLSR Regression Equation | R2 | RMSE
The early stage of taro formation | X1 (CI), X2 (IF), X3 (GI) | y = −1.3483 + 2.5339X1 + 0.2059X2 + 0.6873X3 | 0.76 | 0.098
The middle stage of taro formation | X1 (IF), X2 (GI) | y = 2.093 − 0.1721X1 − 1.0126X2 | 0.49 | 0.11
Table 16. Taro blight disease index estimation model built based on RFR.

Growth Stage | R2 | RMSE
The early stage of taro formation | 0.89 | 0.069
The middle stage of taro formation | 0.83 | 0.074
Table 17. Taro blight disease index estimation model built based on BPNN.

Growth Stage | R2 | RMSE
The early stage of taro formation | 0.87 | 0.074
The middle stage of taro formation | 0.87 | 0.057
Table 18. Comparison of estimation models of taro blight disease index based on spectral characteristics.

Growth Stage | Parameter Type | Model | R2 (Training) | RMSE (Training) | R2 (Validation) | RMSE (Validation)
The early stage of taro formation | Hyperspectral characteristic parameters | PLSR | 0.79 | 0.086 | 0.81 | 0.081
 | | RFR | 0.92 | 0.056 | 0.84 | 0.075
 | | BPNN | 0.92 | 0.054 | 0.89 | 0.074
 | Multispectral vegetation indices | PLSR | 0.76 | 0.098 | 0.85 | 0.074
 | | RFR | 0.89 | 0.069 | 0.83 | 0.082
 | | BPNN | 0.87 | 0.074 | 0.82 | 0.084
The middle stage of taro formation | Hyperspectral characteristic parameters | PLSR | 0.59 | 0.081 | 0.61 | 0.10
 | | RFR | 0.58 | 0.088 | 0.62 | 0.096
 | | BPNN | 0.90 | 0.042 | 0.79 | 0.063
 | Multispectral vegetation indices | PLSR | 0.49 | 0.11 | 0.45 | 0.11
 | | RFR | 0.83 | 0.074 | 0.74 | 0.099
 | | BPNN | 0.87 | 0.057 | 0.81 | 0.076
Share and Cite

Wang, Y.; Chen, Y.; Shu, Z.; Zhu, S.; Zhang, W.; Liu, T.; Sun, C. Integration of UAV Remote Sensing and Machine Learning for Taro Blight Monitoring. Agronomy 2025, 15, 1189. https://doi.org/10.3390/agronomy15051189

