Article

Supporting Screening of New Plant Protection Products through a Multispectral Photogrammetric Approach Integrated with AI

by Samuele Bumbaca * and Enrico Borgogno-Mondino
Department of Agricultural, Forest and Food Sciences (DISAFA), University of Torino, Largo Paolo Braccini 2, 10095 Grugliasco, Italy
* Author to whom correspondence should be addressed.
Agronomy 2024, 14(2), 306; https://doi.org/10.3390/agronomy14020306
Submission received: 27 December 2023 / Revised: 26 January 2024 / Accepted: 27 January 2024 / Published: 30 January 2024
(This article belongs to the Section Precision and Digital Agriculture)

Abstract
This work was aimed at developing a prototype system based on multispectral digital photogrammetry to support the tests required by international regulations for new Plant Protection Products (PPPs). In particular, the goal was to provide a system that addresses the challenges of new PPP evaluation with a higher degree of objectivity than the current approach, which relies on expert visual evaluations. The system applies Digital Photogrammetry to multispectral acquisitions and couples it with Artificial Intelligence (AI). A further goal of this paper is to simplify the present screening process, moving it towards more objective and quantitative phytotoxicity scores. The implementation of a suitably trained AI model for phytotoxicity prediction aims to convert ordinary human visual observations, which are presently recorded on a discrete scale (precluding variance analysis), into a continuous variable. The technical design addresses the need for a reduced dataset for training the AI model, relating discrete observations, as usually performed, to proxy variables derived from the photogrammetric multispectral 3D model. To achieve this task, an appropriate photogrammetric multispectral system was designed. The system operates in multi-nadiral-view mode over a bench within a greenhouse, exploiting an active lighting system that provides uniform and diffuse illumination. The whole system is intended to reduce the environmental variability of acquisitions, tending towards a standard situation. The methodology combines advanced image processing, image radiometric calibration, and machine learning techniques to predict the General Phytotoxicity percentage index (PHYGEN), a crucial measure of phytotoxicity. Results show that the system can generate reliable estimates of PHYGEN, compliant with existing accuracy standards (including those of previous PPP symptom severity models), using limited training datasets. The proposed solution is the adoption of a Logistic Function with LASSO model regularization, which was shown to overcome the limitations of a small sample size (typical of new PPP trials). Additionally, it provides the estimate as a numerical continuous index (a percentage), which makes it possible to tackle the objectivity problem of human visual evaluation, presently based on an ordinal discrete scale. In our opinion, the proposed prototype system has significant potential to improve the screening process for new PPPs: it works specifically for new PPP screening and, despite this, its accuracy is consistent with the one ordinarily accepted for human visual approaches, while providing a higher degree of objectivity and repeatability.

1. Introduction

Researchers in the field of Plant Protection Products (PPPs) need to bridge the gap between evaluations from traditional human-based approaches and those enabled by Artificial Intelligence (AI) [1]. Specifically, new PPPs undergo a rigorous safety screening before market entry. PPP developers must meticulously formulate and dose these PPPs to avoid harmful phytotoxic effects on crops, thus maintaining selectivity [2]. Traditionally, experimenters assess the severity of phytotoxicity through visual observations. The reliability of these assessments depends on low variability among experimenters' observations and proper rating scales [3]. In Europe, technicians are required to operate according to Good Experimental Practice (GEP), which is based on international laws [4]. GEP is a set of standards that ensures objectivity and precision in scientific experiments. The World Trade Organization Agreement on Sanitary and Phytosanitary Measures [5] designates the International Plant Protection Convention (IPPC) as the authority for plant health standards [6]. Within the IPPC, the European Union falls under the European and Mediterranean Plant Protection Organization (EPPO), which is responsible for setting phytosanitary and PPP standards. EPPO standards address crop selectivity [2] by providing evaluation methods involving both discrete and continuous values. However, experimenters often prefer quantitative ordinal discrete scales due to their practicality [7]. As observed by Chiang et al. [3], percentage scales with intervals of 10% can reduce rater uncertainty, since 10% is commonly accepted as the inter-rater error. However, such discrete scales can conflict with the theoretical assumptions of variance analysis [8,9]. Nevertheless, the selectivity of PPPs is inherently a continuous variable, assumed to be inversely proportional to the percentage of phytotoxicity symptoms and their intensity. According to EPPO, phytotoxicity symptoms include (i) modifications in the development cycle, (ii) thinning, (iii) modifications in color, (iv) necrosis, (v) deformation, and (vi) effects on quantity and quality of the yield [2]. General Phytotoxicity (PHYGEN) is an aggregate indicator that summarizes the above symptoms by expressing the percentage of damage to a plant compared to a perfectly healthy reference plant [10].
Imaging sensors have already been demonstrated to improve precision and objectivity in the detection of pathological symptoms [7,11]. Some spectral properties of plants, as recorded through multispectral sensors [12], are recognized as indicators of photosynthetic efficiency [13,14]. Various methods, including multi-view approaches [15,16,17], can be used to create 3D models of plants [11]. Spectral and geometric features of plants can be used to virtually reproduce the plant appearance, as observed by an experimenter during assessment. When working with three-dimensional and multispectral data, a synthesis step is necessary to obtain an accurate estimate of PHYGEN, as in a direct human-based evaluation approach. Machine learning (ML) models from artificial intelligence (AI) can synthesize vast amounts of digital information in a robust and reasonable manner when guided by expert (low-variation) experimenter annotations [12]. Open platforms offer large labeled training datasets, allowing users to customize ML algorithms to their requirements [18,19]. Convolutional Neural Networks (CNNs) were found to be the most accurate method for symptom classification [20,21] when working with image-based data. CNNs were shown to be capable of rating EPPO symptoms, specifically "modifications in color", at both leaf and canopy levels [22]. Gómez-Zamanillo et al. [23] proposed a method for assessing PHYGEN by classifying the most common symptoms. Their study demonstrated the effectiveness of CNNs as feature extractors for predicting PHYGEN rates or similar measures. The study utilized a CNN to identify and classify color-related phytotoxicity symptoms from RGB images. Severity estimates were determined by assigning arbitrary weights to the detected symptoms; rather than optimizing the weights, they relied on expert experimenters to quantify them. Currently, no CNN-based model has been proposed to generate a reasonable estimate of PHYGEN based on a comprehensive analysis of all symptoms. Weight optimization is highly desirable, as it is expected to enhance the accuracy of estimates and provide insights into the significance of each symptom in the toxicological mechanism of PPPs. Further challenges associated with the deployment of CNNs for plant disease detection and scoring are reported by Barbedo [24,25]. In particular, these include (i) sensitivity of deductions to environmental and sensor-related issues, (ii) the generalization capability of the model, and (iii) training dataset quality. The quality of the training dataset is particularly significant, as it must be properly calibrated for the specific type of PPP being tested. Therefore, pre-trained networks relying on training datasets generated for different symptoms from different PPPs should not be used to test new PPPs. It is also worth noting that, for CNN training to be robust and accurate enough, huge training datasets consisting of thousands of images are required. Table 1 summarizes some of the methods proposed in the literature for the estimation of PHYGEN and assesses their suitability for new-PPP PHYGEN prediction.
Typical trials for new PPPs usually involve only a few hundred plants. This may not provide a sufficient dataset for robust training, testing, and deployment of a new CNN. It is noteworthy that CNNs maintain their efficacy when symptoms of phytotoxicity are well-documented and recognized within the training dataset. This specificity is a true challenge in ML optimization for the newer PPP-related trials since the explored symptomatology may not be cataloged.
This work emphasizes that symptoms of phytotoxicity resulting from new PPPs can be unique due to their novelty, making them unpredictable; therefore, screening trials are necessary. The proposed method involves PHYGEN evaluation via a Computer Vision (CV) ML system for new PPPs operating in a greenhouse environment that overcomes such limitations.
The system is specifically designed to address three key challenges in adopting AI, and specifically CV ML for new PPPs screening: small amount of training data, stability, and accuracy. Moreover, the model prediction suitability for ANOVA testing is also discussed.
The presented method requires only a small training sample with respect to CNN algorithms because it relies on a single linear regression and a logistic function. It takes a small training sample from the available study population, effectively addressing issues of under-representation of training datasets [24], which is typical when testing new PPP phytotoxicity.
The system was found to reduce the impact of environmental and sensor-related factors on plant symptom detection, increasing the stability of plant pictures and measures. This is achieved through proper platform calibration techniques and a multi-view image capture approach that allows for monitoring the errors of the geometric and radiometric measures used to train and test the model. Model stability was tested using cross-validation, and the results confirmed the robustness of the method regardless of the sample adopted. The accuracy of the model's prediction was compared to the precision of human raters as described in the literature (10%) [3] and to the state-of-the-art (SOTA) model for PHYGEN of non-new PPPs (6.74%) [23]. No direct comparison with a CV ML model predicting PHYGEN for new PPPs could be found in the literature; therefore, the accuracy is considered satisfactory if it is comparable to the precision of human raters, while it is expected to remain below that of CNN models trained on larger datasets. The methodology also addresses the challenge of adopting discrete quantitative scales in the ML training step: it has been shown to improve the prediction of PHYGEN as a continuous-scale variable, starting from quantitative ordinal discrete values such as those obtained from ordinary approaches. Furthermore, as the PHYGEN estimates are on a continuous scale, the ANOVA test can be more appropriately utilized, resolving the long-standing mismatch with statistical theory that is often observed in the field of PPP screening.

2. Materials and Methods

2.1. Hardware Platform

A platform was developed and integrated into a greenhouse structure for multispectral photogrammetric data acquisition. The integration was achieved using a framework consisting of two 10-m-long aluminum extruded profiles affixed to the roof and walls of the greenhouse. To enable the sensing system to move along the Y-axis, two parallel linear rail guides were mounted on these profiles. In addition, a 6-m-long aluminum support was installed perpendicular to the initial rails. This support incorporates a linear guide rail, which enables camera movement along the X-axis. Adjustments along the Z-axis were made possible by altering the brackets on the Y-axis rails. The proximity of the sensing system to the bench, where the pots were situated, was adjustable within a range of 1.1 to 1.5 m. The camera’s position along the 6-m rail could be adjusted using fixing brackets, as shown in Figure 1.
Camera movement along the Y-axis in the greenhouse was controlled by a DC motor operating through a pulley system. The system works similarly to a bridge crane, moving the imaging assembly automatically at a speed of about 0.08 m/s along the Y-axis. The motion was manually started and stopped.
The whole moving platform was made of (i) one MAPIR Survey3W (PeauProductions, San Diego, CA, USA) multispectral camera (S3), (ii) two Light-Emitting Diode (LED) panels (GODOX FL150R; Godox, Shenzhen, Guangdong, China), each measuring 1.2 × 0.3 m, and (iii) a 6-m LED strip with an emission peak at 850 nm that encircles the GODOX FL150R panels to ensure that adequate Near-Infrared (NIR) radiation reaches the plants. The panels (i.e., the entire imaging system) run parallel to the bench hosting the pots to be imaged, ensuring uniform illumination.
Furthermore, shading curtains were installed on the walls and ceiling of the greenhouse to reduce exterior light contribution during data collection.
A preliminary test was conducted to ensure the consistency of the spectrum provided by LEDs through its comparison with the reflectance spectrum acquired by an RS-5400 Spectroradiometer (Spectral Evolution, Haverhill, MA, USA). The acquisition was performed using calibrated panels of the RS-5400 instrument (Figure 2a) in lighting conditions replicating the operational environment.
The S3 camera was used for image capture, as detailed in Table 2. A white balance setting was employed during acquisition to increase the intensity of the Red and NIR bands, resulting in a reduction of green band sensitivity.

2.2. Experimental Design

An experiment was conducted to assess the reliability of the system and the processing workflow with respect to EPPO standards. The selectivity of a herbicide with an unknown mode of action was tested in a controlled environment greenhouse following EPPO standards [2,28,29,30]. This allowed uniform growing conditions to be maintained throughout the greenhouse. Forty-four pots, each 40 × 30 cm, were sown with oilseed rape (OSR) and treated with the experimental product before emergence. The treatments were applied using an automatic spray chamber. To ensure a balanced set of PHYGEN, different concentrations of the herbicide, including a control group, were used to cover a range of phytotoxicity intensities. Visual and digital evaluations were carried out simultaneously. The PHYGEN assessment values, recorded by Days After Application (DAA), are reported in Table 3.
Only five discrete PHYGEN values were retained for scoring: 0%, 13%, 38%, 63%, and 88%. This highlights the discrete nature of the data generated by visual assessment and the coarse use of the quantitative scale. All five values were assigned during the three assessments, except on the last day, when the highest value (88%) was not observed. This resulted in an imperfectly balanced distribution of PHYGEN over time. The interval between consecutive discrete values was 25%, except for the interval between 0% and 13%. The 0% value may be unreliable for treated pots due to the inevitable effect of herbicides, even on resistant crops. The true value in the range between 0% and 13% is difficult to detect, even for expert experimenters upon visual inspection, and is usually interpreted as having no effect on the harvest. Despite this, 0% values were always recorded as assessed by the experimenters.

2.3. Data Processing

The workflow starts with planning the image acquisition of the experimental plants. Then, the images are used to retrieve the multispectral 3D reconstruction of the plants. The parameters of the observed plants are extracted by the 3D model. Finally, the ML model is trained on the extracted parameters and validated. The workflow is summarized in Figure 3.

2.3.1. Planning the Acquisition

The camera movements were planned to capture stereoscopic images using a local Euclidean coordinate system, hereinafter called Coordinate Reference System (CRS), having the origin located at the lower left corner of the bench hosting plants.
Image block bundle adjustment was intended to refine both position and attitude image Exterior Orientation (EO) parameters, using nominal coordinates of the focal point position and a nadiral orientation as an initial solution during the adjustment.
Nominal values for the image focal point position (X0, Y0, Z0) were determined by assuming (i) X0 as the horizontal distance between adjacent strips, (ii) Y0 as computed from the speed of the camera shifts along the bars, and (iii) Z0 as a fixed value, discussed in the next paragraph. The camera was positioned with its longer side (4000 pixels) aligned across the track.
The nominal Z0 value was determined based on two conditions. First, the resulting image footprint must be consistent with the expected target size of plants. Second, targets should be visible at the smallest distance longer than the hyper-focal distance (0.815 m for S3). This condition ensures the maximum obtainable resolution, known as Ground Sampling Distance (GSD), which maximizes the efficiency and quality of tie point recognition. It is important to note that GSD is proportional to the physical pixel size according to Equation (1),
$$GSD = \frac{\delta \, H}{f} \tag{1}$$
where H is the camera-to-target distance, f is the camera focal length, and δ is the physical pixel size. As the height of the assessed plants can vary greatly within the same acquisition, H can range from 0.815 to 1.500 m, resulting in a GSD that varies between 0.37 and 0.69 mm·pixel⁻¹. When planning an acquisition, it is important to ensure that the coarser GSD (which depends on H) is smaller than the smallest feature that needs to be recognized.
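For illustration, Equation (1) can be evaluated with the S3 parameters from Table 2; the following minimal Python sketch (variable names are ours) reproduces the GSD range quoted above.

```python
# Sketch of Equation (1) with the S3 parameters from Table 2.
PIXEL_SIZE = 1.55e-6    # delta: physical pixel size [m]
FOCAL_LENGTH = 3.37e-3  # f: focal length [m]

def gsd(h_m: float) -> float:
    """Ground Sampling Distance [m/pixel] at camera-to-target distance h_m [m]."""
    return PIXEL_SIZE * h_m / FOCAL_LENGTH

for h in (0.815, 1.5):
    print(f"H = {h:.3f} m -> GSD = {gsd(h) * 1000:.2f} mm/pixel")
# H = 0.815 m -> GSD = 0.37 mm/pixel
# H = 1.500 m -> GSD = 0.69 mm/pixel
```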
Tie point recognition depends on both the forward and side overlap among images. The forward overlap is determined by the baseline (B), which is the distance between consecutive focal points along the same strip. On the other hand, the side overlap is determined by the distance between two adjacent strips.
The platform is designed to operate with a strip distance equal to the baseline (0.2 m), resulting in 95% forward overlap.
In digital photogrammetry, it is widely acknowledged that the Z coordinate of target points is the most critical to estimate accurately. Its precision can be evaluated using Equation (2) [31,32,33],
$$\sigma_z = \frac{H^2}{B \, f} \, \sigma_x \tag{2}$$
where H is the camera-target distance, σ x is the precision of parallax measures in the image domain (assumed to be half the physical pixel size, i.e., 1.685 μm for S3), B is the baseline, f is the sensor focal length, and σ z is the estimated precision of the Z coordinate of the target point.
The graphs in Figure 4 relate the theoretical (expected) σ z with the baseline B while varying the camera-to-target distance at three reference values. The B interval was considered to be within the minimum and maximum overlap.
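As an illustrative sketch of Equation (2), the kind of curves shown in Figure 4 can be reproduced as follows; the baselines and the three reference camera-to-target distances below are our assumptions, chosen within the platform's 1.1–1.5 m operating range.

```python
# Sketch of Equation (2): theoretical Z precision vs. baseline B for three
# reference camera-to-target distances H. sigma_x = 1.685 um as in the text.
SIGMA_X = 1.685e-6      # parallax precision in the image domain [m]
FOCAL_LENGTH = 3.37e-3  # f: focal length [m]

def sigma_z(h_m: float, baseline_m: float) -> float:
    """Expected precision of the Z coordinate [m], Equation (2)."""
    return (h_m ** 2) / (baseline_m * FOCAL_LENGTH) * SIGMA_X

for h in (1.1, 1.3, 1.5):      # assumed reference distances [m]
    for b in (0.1, 0.2, 0.4):  # assumed candidate baselines [m]
        print(f"H = {h} m, B = {b} m -> sigma_z = {sigma_z(h, b) * 1000:.1f} mm")
```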
Equation (2) was used to estimate the actual Z precision from the bundle adjustment solution and compare it to the expected (theoretical) precision ( σ z ). To enhance the robustness of validation and test for geometrical errors, four metered tapes were placed over the bench (Figure 5), and at least nine GCPs were manually positioned throughout the scene for each acquisition date. The GCPs were positioned in a pattern to ensure a uniform distribution across the image block in both longitude and latitude. The GCPs were at three different heights: 0 m, 0.35 m, and 0.7 m.
In summary, the only adjustable parameters for planning the acquisition were (i) the Z position of the camera and (ii) the distance between strips (side overlap). The Z position was set at 1.5 m from the bench, and the distance between strips (on the X axis) was 20 cm in all three assessments.

2.3.2. Bundle Adjustment and Point Cloud Generation

Digital photogrammetry software utilizes computer vision algorithms, such as the Scale-Invariant Feature Transform (SIFT), to automatically identify potential tie points in images [34,35,36]. Photogrammetric software may use various algorithms to match these points across images, including Random Sample Consensus (RANSAC) or other methods, depending on computational efficiency and accuracy requirements [37]. After matching the points, the software uses bundle adjustment to estimate the spatial locations of the points and the camera positions. This process takes into account the matched points and the camera's Exterior and Interior Orientation (EO/IO) parameters [33,38,39]. This study employed tie point identification, matching, and bundle adjustment using Agisoft Metashape version 2.1.0 (Agisoft LLC, St. Petersburg, Russia). To support image bundle adjustment, a portion of the GCPs and initial camera EO/IO parameters were provided [40,41,42].
As far as IO parameters are concerned, the initial values used to bootstrap the adjustment were the following: (i) the focal length as supplied by S3, (ii) lens distortion parameters set to 0, and (iii) the coordinates of the Principal Point of Autocollimation (PPA) set equal to the physical center of the image (fiducial point). Sensor array and physical pixel size were set to their nominal values.
The solution was spatially referenced using GCP coordinates, which are referred to as a local reference system (CRS). The resulting point cloud associates spectral values from bands to each point. These values were obtained as the mean value of the image pixels corresponding to the target points. Bundle adjustment provides estimated camera EO and IO parameters and their uncertainties, as well as all GCP coordinates estimated by the model and their corresponding errors. The GCPs involved in the bundle adjustment allow for the detection of outliers and refinement of the solution by running the bundle adjustment again after removing the outliers.
To ensure accuracy, the solution was checked by three GCPs, which were not involved in bundle adjustment. The adjustment solution was considered satisfactory if the difference between these three GCP values from the model and the reference values was less than or equal to the expected error (as described in Section 3.1.1).

2.3.3. Products

A digital surface model (DSM) with a GSD inherited from the previous steps was generated from the point cloud data. The DSM was then utilized to create the final multi-spectral orthomosaic (MSO) [39]. Both the DSM and MSO are projected in the CRS.

2.3.4. Radiometric Calibration of the Multi-Spectral Orthomosaic

MSO radiometric calibration was performed using an empirical line approach with reference reflectance values obtained from the S3 calibrated panel provided by the MAPIR company [43]. The average pixel value from each squared area of the panel having the same grey level was computed for all the bands of the non-calibrated orthomosaic.
Reference reflectance values from the MAPIR calibration panel were compared with the averaged ones from the orthomosaic by scatterplot. An Ordinary Least Squares approach was used to calibrate a linear function modeling the relationship between MSO Digital Numbers and the correspondent “expected” reflectance values [44]. Calibration function definition was carried out separately for each band.
The resulting functions were then applied to all the pixels of MSO bands, resulting in a calibrated (reflectance) version of MSO (Figure 6).
The radiometric calibration accuracy was computed as the Mean Absolute Error (MAE) between the panel ground truth values and the forecasted values [45] according to Equation (3),
$$MAE = \frac{\sum_{i=1}^{n} \left| y_i - x_i \right|}{n} \tag{3}$$
where $y_i$ is the expected reflectance value of the i-th calibration panel square, $x_i$ is the corresponding estimated value, and $n$ is the number of observations.
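A minimal sketch of the per-band empirical line calibration and of the MAE check in Equation (3) is given below; the panel Digital Numbers and reference reflectances are hypothetical placeholders, not the factory values.

```python
import numpy as np

# Hypothetical panel values for one band (placeholders, not factory data).
dn = np.array([12.0, 55.0, 120.0, 210.0])      # mean panel-square Digital Numbers
rho_ref = np.array([0.02, 0.21, 0.45, 0.87])   # reference reflectances

# Empirical line: OLS fit of a linear gain/offset model, one fit per band.
gain, offset = np.polyfit(dn, rho_ref, deg=1)

def calibrate(band_dn: np.ndarray) -> np.ndarray:
    """Convert MSO Digital Numbers of one band to reflectance."""
    return gain * band_dn + offset

# Equation (3): MAE between reference and predicted panel reflectances.
mae = np.mean(np.abs(rho_ref - calibrate(dn)))
print(f"MAE = {mae:.4f}")
```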

2.3.5. MSO Classification

A vector format file was generated to map the area of each potted plant, adopting the local coordinate system (CRS). The file contains two essential pieces of information: a unique identifier for each plant and the date of assessment. The plants were then isolated from the soil in each pot by thresholding, focusing only on the plant pixels: the soil was identified and masked by applying a bimodal threshold [46] to the green band, and the mask was then refined using a semi-automatic technique [47]. This step produced the final vegetation mask (VM) (Figure 6), effectively isolating the plants for analysis.
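A possible implementation of this masking step is sketched below, assuming OpenCV, an Otsu threshold on the green band, and a GrabCut refinement seeded from that threshold; the file names and the seeding choice are our assumptions, consistent with [46,47] but not necessarily the exact procedure used.

```python
import cv2
import numpy as np

# Hypothetical inputs: the MSO green band and a 3-band 8-bit preview for GrabCut.
green = cv2.imread("mso_green_band.png", cv2.IMREAD_GRAYSCALE)
rgb = cv2.imread("mso_rgb_preview.png")

# Otsu threshold: separates vegetation from soil assuming a bimodal histogram [46].
_, otsu_mask = cv2.threshold(green, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Seed GrabCut labels from the Otsu result: probable foreground where
# vegetation, probable background elsewhere [47].
gc_mask = np.where(otsu_mask > 0, cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)
cv2.grabCut(rgb, gc_mask, None, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_MASK)

# Final vegetation mask (VM): foreground and probable-foreground pixels.
vm = np.where((gc_mask == cv2.GC_FGD) | (gc_mask == cv2.GC_PR_FGD), 255, 0)
cv2.imwrite("vegetation_mask.png", vm.astype(np.uint8))
```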

2.3.6. Predictors

It is important to note that a PHYGEN estimate, in terms of a continuous variable, is the main expected outcome of this work. To achieve this task, the VM-derived area was assumed as a proxy for the Leaf Area Index (LAI). In addition, the mean (µ) and standard deviation (σ) of the following bands/indices from the calibrated S3 orthomosaic were computed: (i) the Red, Green, and NIR bands, (ii) the Normalized Difference Vegetation Index (NDVI), and (iii) the Soil Adjusted Vegetation Index (SAVI).
Additionally, the mean (µ) and standard deviation (σ) of heights of pixels belonging to VM were obtained by differencing DSM values of pixels within VM and the average of DSM values of soil pixels.
Finally, the date of acquisition (defined as DAA) was also considered to calibrate the prediction model.
The predictors derived from the MSO and the DSM are defined as follows:

$$NDVI = \frac{\rho_{NIR} - \rho_{RED}}{\rho_{NIR} + \rho_{RED}} \tag{4}$$

$$SAVI = 1.5 \cdot \frac{\rho_{NIR} - \rho_{RED}}{\rho_{NIR} + \rho_{RED} + 0.5} \tag{5}$$

$$H_P = H_V - \bar{H}_S \tag{6}$$

where $\rho_{NIR}$ and $\rho_{RED}$ are the calibrated reflectance values from the MSO, $H_P$ is the computed per-pixel relative height of the vegetation contained in a pot, $H_V$ is the absolute height of a vegetation pixel in a pot, and $\bar{H}_S$ is the average absolute height of the soil level in a pot.
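The per-pot predictor extraction can be sketched as follows; `red`, `green`, `nir`, and `dsm` are assumed to be calibrated MSO band/DSM arrays cropped to one pot, and `vm` and `soil` boolean masks for vegetation and soil pixels (all names are ours).

```python
import numpy as np

def pot_predictors(red, green, nir, dsm, vm, soil):
    """Spectral and geometric predictors for one pot (variable names are ours)."""
    ndvi = (nir - red) / (nir + red)              # Equation (4)
    savi = 1.5 * (nir - red) / (nir + red + 0.5)  # Equation (5)
    hp = dsm - dsm[soil].mean()                   # Equation (6), per pixel

    feats = {"area": int(vm.sum())}               # VM area, proxy for LAI
    for name, layer in (("red", red), ("green", green), ("nir", nir),
                        ("ndvi", ndvi), ("savi", savi), ("height", hp)):
        feats[f"{name}_mu"] = float(layer[vm].mean())      # mean over VM pixels
        feats[f"{name}_sigma"] = float(layer[vm].std())    # std over VM pixels
    return feats
```

Together with the acquisition date (DAA), these features form the predictor vector fed to the ML model described below.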

2.3.7. ML Model

The available dataset consists of 132 multivariate observations (n), each providing 14 different predictors (p). To simplify the model and reduce the number of parameters, the least absolute shrinkage and selection operator (LASSO) model (7) was used [48]. The PHYGEN variable (y), originally expressed as a percentage, was transformed into a probability by dividing it by one hundred. As PHYGEN values range between 0 and 1, a linear regression model is unsuitable. A logistic function was therefore used to adjust the linear predictions from the LASSO model to the PHYGEN scale, which is relevant to human vision.
Twelve variables from the MSO and DAA were normalized and used as independent variables. The dataset was split into an 80% training set and a 20% testing set. A K-fold (K = 10) strategy was applied to train and cross-validate the model [49]. To ensure balanced splitting of observations, a stratified method was used based on PHYGEN values and acquisition dates. The human visual PHYGEN was fitted using a multivariate regression model with an L1 regularization term [50] and a least squares adjusting method. The hyperparameter λ of the L1 term was determined through cross-validation involving 5 subsets of the training data, each representing a different part of a logarithmic range spanning approximately 0.003 to 0.67. The trained model outputs were then used as inputs for a logistic function (LF) (8), which was fitted to the PHYGEN data. The function parameters were estimated using non-linear least-squares optimization [51,52], with initial values inferred from the PHYGEN distribution. The optimization aimed to minimize the two error functions of the model, thereby enhancing the accuracy of the PHYGEN prediction:
LASSO (7), with model outputs $\hat{\beta}_j$, $\hat{\beta}_0$:

$$\sum_{i=1}^{n} \left( y_i - \beta_0 - \sum_{j=1}^{p} \beta_j x_{ij} \right)^2 + L_1 = \min; \qquad L_1 = \lambda \sum_{j=1}^{p} \left| \beta_j \right| \tag{7}$$

Logistic Function (LF) (8), with model outputs $\hat{L}$, $\hat{k}$, $\hat{y}_0$:

$$\sum_{i=1}^{n} \left( y_i - \frac{L}{1 + e^{-k \left( \hat{y}_i - y_0 \right)}} \right)^2 = \min \tag{8}$$

where $y_i$ is the i-th observed PHYGEN rate, $x_{ij}$ (7) is the observed value of the j-th explanatory variable, $\beta_0$ (7) is the intercept of the function, $\beta_j$ (7) is the weight corresponding to the j-th variable, and $\hat{y}_i$ (8) is the predicted PHYGEN rate computed using the weights estimated by LASSO ($\hat{\beta}_j$, $\hat{\beta}_0$).
The logistic function (8) has three parameters: L, $y_0$, and k, corresponding to the upper limit of the function, the inflection point of the sigmoid, and the growth rate, respectively; their estimates are denoted $\hat{L}$, $\hat{k}$, and $\hat{y}_0$. The initial values of $\hat{L}$, $\hat{k}$, and $\hat{y}_0$ needed to run the non-linear least squares were set to 100, 50, and a random value extracted in the range [0, 1], respectively.
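A minimal sketch of this two-stage fit is shown below, assuming scikit-learn and SciPy; the placeholder data, the λ grid, and the initial guesses (rescaled to the [0, 1] range rather than the percentage-scale values quoted above) are our assumptions, not the authors' exact implementation.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from scipy.optimize import curve_fit

# Placeholder data standing in for the normalized predictors (n x p) and the
# PHYGEN rates rescaled to [0, 1].
rng = np.random.default_rng(0)
X = rng.normal(size=(132, 14))
y = rng.uniform(0.0, 1.0, size=132)

# LASSO (7): L1-regularized least squares; lambda chosen by 5-fold CV over a
# logarithmic grid (~0.003 to ~0.67, as in the text).
lasso = LassoCV(alphas=np.logspace(np.log10(0.003), np.log10(0.67), 20), cv=5)
lasso.fit(X, y)
y_lin = lasso.predict(X)

# Logistic function (8), fitted on the LASSO outputs by non-linear least squares.
def logistic(y_hat, L, k, y0):
    return L / (1.0 + np.exp(-k * (y_hat - y0)))

(L_hat, k_hat, y0_hat), _ = curve_fit(
    logistic, y_lin, y,
    p0=(1.0, 1.0, float(np.median(y_lin))),  # guesses on the [0, 1] scale
    maxfev=10000,
)
phygen_pred = logistic(y_lin, L_hat, k_hat, y0_hat)  # continuous PHYGEN estimate
```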

3. Results and Discussion

3.1. Measurement Errors

The surveyed 3D coordinates of GCPs were compared to those obtained from the photogrammetric restitution of the adjusted image block to assess errors associated with geometric features. To ensure a reasonable level of robustness for the accuracy assessment despite the low number of surveyed points, a Leave One Out method was used. MAE was used as an error measure.
Similarly, the accuracy of radiometric calibration was assessed using a Leave One Out (LOO) approach. An assessment was performed separately for the different dates, and the corresponding Mean Absolute Percentage Error (MAPE) values were computed. Finally, MAPE values from the different dates were averaged to define the final reference value for radiometric calibration accuracy.
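The LOO procedure for the radiometric case can be sketched as follows, reusing the hypothetical `dn` and `rho_ref` arrays of the calibration sketch above: each panel square is left out in turn, the empirical line is refitted on the remaining squares, and the absolute percentage error on the held-out square is recorded.

```python
import numpy as np

def loo_mape(dn: np.ndarray, rho_ref: np.ndarray) -> float:
    """Leave One Out MAPE of the per-band empirical line calibration."""
    errors = []
    for i in range(len(dn)):
        keep = np.arange(len(dn)) != i
        gain, offset = np.polyfit(dn[keep], rho_ref[keep], deg=1)  # refit without square i
        pred = gain * dn[i] + offset
        errors.append(abs(pred - rho_ref[i]) / rho_ref[i])
    return float(np.mean(errors) * 100.0)
```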

3.1.1. Geometric Assessment Errors

Accuracy assessment concerning image block bundle adjustment was achieved at a single date level. MAE values (for each coordinate) are reported in Table 4.
The retained solution was deemed suitable, assuming that the differences between the main geometric features of diseased and healthy plants are greater than the reported errors. A comparison of $MAE_z$ with the theoretical accuracy expected for the Z coordinate measured through photogrammetry (Equation (2)) showed that they were consistent.

3.1.2. Radiometric Validation

The Mean Absolute Percentage Error (MAPE) of the calibration function training sample (Table 5) was used to estimate the goodness of fit.
The highest Rad-MAPE value was found for the green band, which is expected given the white balancing strategy adopted during image pre-processing (Section 2.1). MAPE for the red and NIR bands was found to be high as well, suggesting future refinements to improve the radiometric calibration.

3.2. ML Model Validation

3.2.1. Stability

The stability of the LASSO and Logistic model coefficients was analyzed. A 10-fold strategy was performed to generate an estimate for the mean and the standard deviation of coefficient estimates. Figure 7 and Table 6 show related statistics.
Insights into the stability of the model can be gained by observing the coefficient of variation (Coef.Var.) of the most influential parameters as estimated through the 10-fold strategy. Low Coef.Var. values across all parameters proved that model stability is ensured. Bands and spectral indices showed the highest Coef.Var. values, which can be related to the significant uncertainty of the calibrated reflectance, confirming the strict correspondence between measurement errors and model stability noted by Barbedo [24].

3.2.2. Model Performances

Descriptive statistics of the accuracy metrics were calculated across the K fitted models used for predicting PHYGEN. MAE and the adjusted coefficient of determination (Adj R²) were calculated for the LASSO model, whereas the coefficient of determination (R²) was calculated for the logistic function trained on the LASSO predictions. The residual degrees of freedom for the adjusted R² were set equal to the number of nonzero LASSO coefficients [53]. Table 7 shows the results.
The stacked model yields predictions with a mean absolute error of about 11% and a minimum coefficient of determination R² of about 0.9.
Regarding the main goal of this work, it is worth noting that, whatever the approach used to obtain an estimate of PHYGEN, its accuracy should be consistent with that of human evaluation. According to the values reported above, the proposed method is able to provide PHYGEN scores similar to those from experts: our estimated accuracy (about 11%) is close to the reference threshold ordinarily accepted for PPP tests, which is 10%. Moreover, it presents an R² value similar to that of the SOTA model, which, due to its CNN architecture, is trained with a huge amount of data from already tested PPPs [23]. In contrast, MAE values for PHYGEN from our model were notably higher than those obtained from SOTA (10.66% vs. 6.74%), which can exploit a huge training set more effectively.
Despite this, we believe that our method is promising and affordable when considering the actual operational conditions for the estimation of PHYGEN for new and untested PPPs, which fall outside the field of application of SOTA since deductions must be based on a small training set.

3.2.3. Compliance with ANOVA Assumptions

As previously stated, ANOVA, t-tests, and Z-tests cannot be used with ordinal discrete-scale dependent variables [8]. Figure 8 shows both the ordinal discrete data used to test the model and the continuous data from the model. This is a great improvement over ordinary screening procedures, since it enables the testing of group differences through an ANOVA-based approach, which a discrete variable precludes.
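For illustration, once continuous PHYGEN estimates are available, group differences can be tested with a standard one-way ANOVA; the group arrays below are placeholders, not experimental data.

```python
from scipy import stats

# Hypothetical continuous PHYGEN estimates (as probabilities) per treatment group.
control = [0.02, 0.05, 0.04, 0.03]
dose_low = [0.21, 0.25, 0.19, 0.28]
dose_high = [0.55, 0.61, 0.58, 0.64]

# One-way ANOVA across treatment groups, enabled by the continuous scale.
f_stat, p_value = stats.f_oneway(control, dose_low, dose_high)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```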

4. Conclusions

The goal of this study was to test the operability and effectiveness of a controllable simple system based on multispectral digital photogrammetry and AI to support (and improve) current procedures for new PPP screening. This means that the system must be able to generate estimates of ordinarily recognized standard parameters (i.e., PHYGEN) and define the level of phytotoxicity of new PPPs before they enter the market. Basic requirements concern both compliance with accuracy standards and the robustness of the model output.
The proposed method can be made operational if proper Geomatics and AI skills are properly integrated. Geomatics skills are related to the proper management of the acquisition system, involving both geometric (image block bundle adjustment) and radiometric operations needed to prepare the data from which the PHYGEN predictors have to be extracted. The hardware solutions proposed for the system exploit the abovementioned skills with the aim of reducing environmental and sensor-related issues. This makes the acquired images more uniform, partially overcoming one of the biggest recognized obstacles to the proper adoption of ML in phytopathometry: image feature variability.
A strong constraint introduced by this specific field of study is the lack of a huge training dataset, which cannot reasonably be supplied for new PPPs to be screened; it is precisely in such situations that this type of screening is required.
The system operates in an effectively prepared greenhouse and requires significant infrastructure for the proper movement of the camera and lighting platform.
In this work, we present a simple solution to these requirements. In particular, after suggesting how to pre-process the data from a photogrammetric and radiometric point of view, we found some predictors for the model to be trained that are able to exploit both the geometric and spectral content of acquired data.
The predictors were analyzed and selected. They were used to train an ML algorithm integrating a LASSO and a logistic function to generate continuous estimates of PHYGEN. The robustness of the model was tested by conducting the training with a k-fold strategy and analyzing the corresponding statistics.
The proposed method/system showed stability (robustness), proving to be independent of the training sample. The accuracy of PHYGEN prediction from our model is consistent with the ones from traditional methods. Compared to other AI-based approaches (i.e., SOTA), it showed slightly higher performances in terms of correlation with expert scores applied for new PPPs (our model: R2 = 0.9, SOTA: R2 = 0.89).
In contrast, our model was not able to reach SOTA accuracy in PHYGEN score prediction (our model: MAE = 10.66%, SOTA: MAE = 6.74%). However, it must be noted that SOTA is not intended for predictions concerning new PPPs, and the reference values we reported refer to previously tested PPPs (i.e., with a huge amount of training data available). A notable capability of the model is that it overcomes the discrete nature of expert-based PHYGEN scores: it generates continuous PHYGEN scores even if trained on discrete ones. This continuous nature provides high added value, since it makes it possible to test differences among groups using ordinary ANOVA-based methods.
However, some improvements are desirable, mostly in relation to a refinement of the hardware of the acquisition platform. A better-performing multispectral camera showing a higher spectral resolution and more rigorous calibration metadata is certainly a first step for future work. The active system providing controlled lighting can also be improved by using light sources that are able to generate a wider spectrum. Camera motion can be improved by using a stepper motor, allowing the possibility to stop the camera during image acquisition, thus avoiding blurring and reducing geometric deformations. Image processing could be also enhanced by strengthening automation in vegetation mask calculation from orthomosaic.
The most significant improvement of the model would be the ability to train a CNN with such a small amount of data. The final activation layer of such a CNN should be set to the logistic function proposed in this work. Further studies should test data augmentation techniques and such activation layers with an MAE loss to predict PHYGEN in similar setups. Regardless of the solution, we maintain that the explainability of the model, where the physical meaning of predictors and their relationships can be recognized, is an added value for applications involving precise decision making.

Author Contributions

Conceptualization, S.B. and E.B.-M.; Methodology, S.B. and E.B.-M.; Software, S.B.; Validation, S.B.; Formal analysis, S.B. and E.B.-M.; Investigation, S.B.; Resources, E.B.-M.; Data curation, S.B.; Writing—original draft, S.B.; Writing—review & editing, E.B.-M.; Visualization, S.B.; Supervision, E.B.-M.; Project administration, S.B.; Funding acquisition, E.B.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was conducted as part of a PhD program supported by SAGEA centro di saggio s.r.l.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. EPPO. Digital Technology and Efficacy Evaluation of Plant Protection Products. 27/29 June 2022. Available online: https://www.eppo.int/MEETINGS/2022_meetings/wk_digital_technology_ppp (accessed on 7 November 2022).
  2. European and Mediterranean Plant Protection Organization. PP1/135 (4) Phytotoxicity Assessment. EPPO Bull. 2014, 44, 265–273. [Google Scholar] [CrossRef]
  3. Chiang, K.S.; Bock, C.H.; El Jarroudi, M.; Delfosse, P.; Lee, I.H.; Liu, H.I. Effects of Rater Bias and Assessment Method on Disease Severity Estimation with Regard to Hypothesis Testing. Plant Pathol. 2016, 65, 523–535. [Google Scholar] [CrossRef]
  4. European Union. Regulation (EC) No 1107/2009 of the European Parliament and of the Council of 21 October 2009 Concerning the Placing of Plant Protection Products on the Market and Repealing Council Directives 79/117/EEC and 91/414/EEC. Off. J. Eur. Union 2009, 309, 1–50. [Google Scholar]
  5. Alcala, R.; Vitikkala, H.; Ferlet, G. The World Trade Organization Agreement on the Application of Sanitary and Phytosanitary Measures and Veterinary Control Procedures. OIE Rev. Sci. Tech. 2020, 39, 253–261. [Google Scholar] [CrossRef] [PubMed]
  6. Petter, F.; Roy, A.S.; Smith, I. International Standards for the Diagnosis of Regulated Pests. Eur. J. Plant Pathol. 2008, 121, 331–337. [Google Scholar] [CrossRef]
  7. Chiang, K.-S.; Bock, C.H. Understanding the Ramifications of Quantitative Ordinal Scales on Accuracy of Estimates of Disease Severity and Data Analysis in Plant Pathology. Trop. Plant Pathol. 2022, 47, 58–73. [Google Scholar] [CrossRef]
  8. Stevens, S.S. On the Theory of Scales of Measurement. Science 1946, 103, 677–680. [Google Scholar] [CrossRef]
  9. Agresti, A. Analysis of Ordinal Categorical Data; John Wiley & Sons: Hoboken, NJ, USA, 2010; Volume 656. [Google Scholar]
  10. Owen, M.D.; Franzenburg, D.D.; Grossnickle, D.M.; Lux, J.F. Evaluation of Application Timings of Warrant Herbicide for Soybean Phytotoxicity. Iowa State Univ. Res. Demonstr. Farms Prog. Rep. 2013, 2012, 33–37. [Google Scholar]
  11. Mahlein, A.-K. Plant Disease Detection by Imaging Sensors—Parallels and Specific Demands for Precision Agriculture and Plant Phenotyping. Plant Dis. 2016, 100, 241–251. [Google Scholar] [CrossRef]
  12. Mahlein, A.-K.; Kuska, M.T.; Behmann, J.; Polder, G.; Walter, A. Hyperspectral Sensors and Imaging Technologies in Phytopathology: State of the Art. Annu. Rev. Phytopathol. 2018, 56, 535–558. [Google Scholar] [CrossRef]
  13. Gates, D.M.; Keegan, H.J.; Schleter, J.C.; Weidner, V.R. Spectral Properties of Plants. Appl. Opt. AO 1965, 4, 11–20. [Google Scholar] [CrossRef]
  14. Carter, G.A.; Knapp, A.K. Leaf Optical Properties in Higher Plants: Linking Spectral Characteristics to Stress and Chlorophyll Concentration. Am. J. Bot. 2001, 88, 677–684. [Google Scholar] [CrossRef] [PubMed]
  15. Rossi, R.; Leolini, C.; Costafreda-Aumedes, S.; Leolini, L.; Bindi, M.; Zaldei, A.; Moriondo, M. Performances Evaluation of a Low-Cost Platform for High-Resolution Plant Phenotyping. Sensors 2020, 20, 3150. [Google Scholar] [CrossRef] [PubMed]
  16. Li, D.; Xu, L.; Tang, X.; Sun, S.; Cai, X.; Zhang, P. 3D Imaging of Greenhouse Plants with an Inexpensive Binocular Stereo Vision System. Remote Sens. 2017, 9, 508. [Google Scholar] [CrossRef]
  17. Zhou, J.; Fu, X.; Schumacher, L.; Zhou, J. Evaluating Geometric Measurement Accuracy Based on 3D Reconstruction of Automated Imagery in a Greenhouse. Sensors 2018, 18, 2270. [Google Scholar] [CrossRef] [PubMed]
  18. Hughes, D.P.; Salathé, M. An Open Access Repository of Images on Plant Health to Enable the Development of Mobile Disease Diagnostics. arXiv 2015, arXiv:1511.08060. [Google Scholar]
  19. Hajam, M.A.; Arif, T.; Khanday, A.M.U.D.; Neshat, M. An Effective Ensemble Convolutional Learning Model with Fine-Tuning for Medicinal Plant Leaf Identification. Information 2023, 14, 618. [Google Scholar] [CrossRef]
  20. Tan, L.; Lu, J.; Jiang, H. Tomato Leaf Diseases Classification Based on Leaf Images: A Comparison between Classical Machine Learning and Deep Learning Methods. AgriEngineering 2021, 3, 542–558. [Google Scholar] [CrossRef]
  21. Nikith, B.V.; Keerthan, N.K.S.; Praneeth, M.S.; Amrita, T. Leaf Disease Detection and Classification. Procedia Comput. Sci. 2023, 218, 291–300. [Google Scholar] [CrossRef]
  22. Ghosal, S.; Blystone, D.; Singh, A.K.; Ganapathysubramanian, B.; Singh, A.; Sarkar, S. An Explainable Deep Machine Vision Framework for Plant Stress Phenotyping. Proc. Natl. Acad. Sci. USA 2018, 115, 4613–4618. [Google Scholar] [CrossRef]
  23. Gómez-Zamanillo, L.; Bereciartua-Pérez, A.; Picón, A.; Parra, L.; Oldenbuerger, M.; Navarra-Mestre, R.; Klukas, C.; Eggers, T.; Echazarra, J. Damage Assessment of Soybean and Redroot Amaranth Plants in Greenhouse through Biomass Estimation and Deep Learning-Based Symptom Classification. Smart Agric. Technol. 2023, 5, 100243. [Google Scholar] [CrossRef]
  24. Barbedo, J.G.A. Deep Learning Applied to Plant Pathology: The Problem of Data Representativeness. Trop. Plant Pathol. 2022, 47, 85–94. [Google Scholar] [CrossRef]
  25. Barbedo, J.G.A. Factors Influencing the Use of Deep Learning for Plant Disease Recognition. Biosyst. Eng. 2018, 172, 84–91. [Google Scholar] [CrossRef]
  26. Ali, A.; Streibig, J.C.; Duus, J.; Andreasen, C. Use of Image Analysis to Assess Color Response on Plants Caused by Herbicide Application. Weed Technol. 2013, 27, 604–611. [Google Scholar] [CrossRef]
  27. Chu, H.; Zhang, C.; Wang, M.; Gouda, M.; Wei, X.; He, Y.; Liu, Y. Hyperspectral Imaging with Shallow Convolutional Neural Networks (SCNN) Predicts the Early Herbicide Stress in Wheat Cultivars. J. Hazard. Mater. 2022, 421, 126706. [Google Scholar] [CrossRef] [PubMed]
  28. European and Mediterranean Plant Protection Organization. Design and Analysis of Efficacy Evaluation Trials. EPPO Bull. 2012, 42, 367–381. [Google Scholar] [CrossRef]
  29. European and Mediterranean Plant Protection Organization. PP 1/319 (1) General Principles for Efficacy Evaluation of Plant Protection Products with a Mode of Action as Plant Defence Inducers. EPPO Bull. 2021, 51, 5–9. [Google Scholar] [CrossRef]
  30. European and Mediterranean Plant Protection Organization. PP 1/181 (5) Conduct and Reporting of Efficacy Evaluation Trials, Including Good Experimental Practice. EPPO Bull. 2022, 52, 4–16. [Google Scholar] [CrossRef]
  31. De Petris, S.; Sarvia, F.; Borgogno-Mondino, E. RPAS-Based Photogrammetry to Support Tree Stability Assessment: Longing for Precision Arboriculture. Urban For. Urban Green. 2020, 55, 126862. [Google Scholar] [CrossRef]
  32. Borgogno Mondino, E. Multi-Temporal Image Co-Registration Improvement for a Better Representation and Quantification of Risky Situations: The Belvedere Glacier Case Study. Geomat. Nat. Hazards Risk 2015, 6, 362–378. [Google Scholar] [CrossRef]
  33. Kraus, K. Photogrammetry: Geometry from Images and Laser Scans; De Gruyter: Berlin, Germany, 2011; ISBN 978-3-11-089287-1. [Google Scholar]
  34. Otero, I.R.; Delbracio, M. Anatomy of the SIFT Method. Image Process. Line 2014, 4, 370–396. [Google Scholar] [CrossRef]
  35. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  36. Lowe, D.G. Object Recognition from Local Scale-Invariant Features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 2, pp. 1150–1157. [Google Scholar]
  37. Muja, M.; Lowe, D.G. Fast approximate nearest neighbors with automatic algorithm configuration. In Proceedings of the Fourth International Conference on Computer Vision Theory and Applications, Lisboa, Portugal, 5–8 February 2009; pp. 331–340. [Google Scholar]
  38. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: Cambridge, UK, 2004; ISBN 978-0-521-54051-3. [Google Scholar]
  39. Gomarasca, M.A. Elements of Photogrammetry. In Basics of Geomatics; Gomarasca, M.A., Ed.; Springer Netherlands: Dordrecht, The Netherlands, 2009; pp. 79–121. ISBN 978-1-4020-9014-1. [Google Scholar]
  40. Atkinson, K.B. (Ed.) Close Range Photogrammetry and Machine Vision; Reprinted; Whittles: Caithness, UK, 1996; ISBN 978-1-870325-73-8. [Google Scholar]
  41. Triggs, B.; McLauchlan, P.F.; Hartley, R.I.; Fitzgibbon, A.W. Bundle Adjustment—A Modern Synthesis. In Vision Algorithms: Theory and Practice; Triggs, B., Zisserman, A., Szeliski, R., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2000; Volume 1883, pp. 298–372. ISBN 978-3-540-67973-8. [Google Scholar]
  42. Moulon, P. Positionnement Robuste et Précis de Réseaux d’images. Ph.D. Thesis, Université Paris-Est, Créteil, France, 2014. [Google Scholar]
  43. MAPIR_Survey3_Camera_Datasheet_English.Pdf. Available online: https://drive.google.com/file/d/10gIzOjWVNoG9dvZwmAUG9fVqkEZHXEur/view?usp=drive_open&usp=embed_facebook (accessed on 20 September 2022).
  44. MAPIR. MAPIR Camera Reflectance Calibration Ground Target Package (V2). Available online: https://www.mapir.camera/products/mapir-camera-reflectance-calibration-ground-target-package-v2 (accessed on 15 September 2022).
  45. Wyatt, C. Radiometric Calibration: Theory and Methods; Elsevier: Amsterdam, The Netherlands, 2012; ISBN 978-0-323-16009-4. [Google Scholar]
  46. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst., Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  47. Rother, C.; Kolmogorov, V.; Blake, A. “GrabCut”: Interactive Foreground Extraction Using Iterated Graph Cuts. In ACM SIGGRAPH 2004 Papers; Association for Computing Machinery: New York, NY, USA, 2004; pp. 309–314. [Google Scholar]
  48. Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning; Springer Series in Statistics; Springer: New York, NY, USA, 2009; ISBN 978-0-387-84857-0. [Google Scholar]
  49. James, G.; Witten, D.; Hastie, T.; Tibshirani, R. An Introduction to Statistical Learning; Springer: New York, NY, USA, 2013; Volume 112. [Google Scholar]
  50. Friedman, J.; Hastie, T.; Tibshirani, R. Regularization Paths for Generalized Linear Models via Coordinate Descent. J. Stat. Softw. 2010, 33, 1–22. [Google Scholar] [CrossRef] [PubMed]
  51. Garbow, B.S. MINPACK-1, Subroutine Library for Nonlinear Equation System; Nuclear Energy Agency: Paris, France, 1984. [Google Scholar]
  52. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  53. Zou, H.; Hastie, T.; Tibshirani, R. On the “Degrees of Freedom” of the Lasso. Ann. Statist. 2007, 35, 2173–2192. [Google Scholar] [CrossRef]
Figure 1. Platform and sensing system (top) and only the sensing system (bottom).
Figure 2. (a) Spectral signature of the reference panel, lighted with the tested LEDs and measured using the RS-5400 Spectroradiometer. (b) Reflectivity of the MAPIR calibration panels corresponding to the different grayscale levels (yellow, light-green, blue, and violet colors in the graph) provided by the factory. The dark green line shows the filter sensitivity of MAPIR for the different bands. (c) Transmissivity of the S3 camera filter.
Figure 3. General workflow of the suggested method.
Figure 4. σ_z estimates computed by Equation (2). Colored curves refer to different D values.
Figure 5. Metered tapes and GCPs on the bench.
Figure 6. Non-calibrated (dark) and calibrated MSO are shown together with the last vegetation mask (white pixels) on an oil seed rape pot.
Figure 7. Mean values of LASSO β coefficients from the 10-fold approach, given for all the predictors. Whisker bars show 1-sigma LASSO β estimates.
Figure 8. Discrete PHYGEN scores from the ordinary human vision-based approach (left). Continuous PHYGEN scores from the model proposed in this work (right).
Table 1. Related works.

| Paper | Method | Accuracy ¹ | Suitability ² |
|---|---|---|---|
| — | Human raters | Depending on the rater; the recommended maximum error is 10% [3] | Traditional method |
| Ali et al. [26] | Image processing | Not reported | Involves only biomass estimation, no AI, and no monitorable stability |
| Chu et al. [27] | Shallow CNN | 80% | Destructive and only spectral signature involved |
| Ghosal et al. [22] | CNN | From 50% to 90%, depending on rater | Not phytotoxicity-specific, destructive |
| Gómez-Zamanillo et al. [23] | CNN | 93.26% | Not suitable for new PPPs because of the amount of training data required |

¹ Accuracy of phytotoxicity severity with respect to human raters. ² Suitability for new PPPs PHYGEN screening.
Table 2. S3 and system integration specifics.

| Specification | Details |
|---|---|
| Focal Length | 3.37 mm (fixed) |
| Aperture | f/2.8 (fixed) |
| Lens Distortion | <1% |
| Hyper-focal Distance | 81.5 cm |
| Sensor Size | 3000 × 4000 pixels |
| Pixel Physical Size | 1.55 μm |
| Bands | Green, Red, and NIR (Figure 2c) |
| Camera Shift (Y-axis) | 20.3 cm per shot |
| Frames per second | ~1/3 |
| Horizontal Footprint ¹ | 202–276 cm |
| Vertical Footprint ¹ | 152–207 cm |

¹ At a 1.1–1.5 m distance.
Table 3. PHYGEN observations.

| DAA ¹ | 0% | 13% | 38% | 63% | 88% |
|---|---|---|---|---|---|
| 3 | 11 | 9 | 8 | 7 | 9 |
| 7 | 5 | 4 | 15 | 10 | 10 |
| 14 | 15 | 14 | 9 | 6 | 0 |
| TOT | 31 | 27 | 32 | 23 | 19 |

¹ Days After Application.
Table 4. XYZ errors from photogrammetric restitution in mm.

| DAA | MAEx (mm) | MAEy (mm) | MAEz (mm) |
|---|---|---|---|
| 3 | 0.57 | 0.61 | 0.62 |
| 7 | 0.65 | 0.70 | 0.91 |
| 14 | 0.67 | 0.68 | 0.89 |
Table 5. Radiometric Mean Absolute Percentage Error (Rad-MAPE), given as the ratio with the expected values obtained for the different bands and grey levels, averaged along the three dates.

| Band | Black (%) | Dark Gray (%) | Light Gray (%) | White (%) |
|---|---|---|---|---|
| Red | 76.7 | 14.3 | 19.2 | 4.1 |
| Green | 82.8 | 47.5 | 53.2 | 18.1 |
| NIR | 119.6 | 29.2 | 20.7 | 6.2 |
Table 6. Mean, standard deviation, and coefficient of variation ¹ values for the coefficients of the LASSO and logistic functions estimated using the 10-fold strategy.

| Model | Parameter | Mean | Std. dev. | Coef.Var. ¹ |
|---|---|---|---|---|
| LASSO | β DAA | −0.099 | 0.017 | 0.17 |
| | β Rµ | 0.2 | 0.05 | 0.25 |
| | β SAVIµ | −0.14 | 0.07 | 0.5 |
| | β NDVIσ | −0.28 | 0.04 | 0.14 |
| | β Area | −0.08 | 0.01 | 0.13 |
| | λ | 0.0028 | 0.0013 | 0.46 |
| LF | L | 94.53 | 1.22 | <0.1 |
| | k | 0.06 | 0.001 | <0.1 |
| | y₀ | 47.15 | 0.69 | <0.1 |

¹ Coef.Var. is calculated with the absolute value of the mean.
Table 7. Fit evaluation metric statistics.

| Model | | MAE (PHYGEN %) | R² | Adj R² |
|---|---|---|---|---|
| LASSO | Mean | 11.77% | – | 0.89 |
| | Std | 0.67% | – | 0.03 |
| LASSO + LF | Mean | 10.66% | 0.9 | – |
| | Std | 0.83% | 0.03 | – |