Article

Deep Learning-Based System for Early Symptoms Recognition of Grapevine Red Blotch and Leafroll Diseases and Its Implementation on Edge Computing Devices

by Carolina Lazcano-García 1, Karen Guadalupe García-Resendiz 2, Jimena Carrillo-Tripp 2, Everardo Inzunza-Gonzalez 1, Enrique Efrén García-Guerrero 1, David Cervantes-Vasquez 1, Jorge Galarza-Falfan 1, Cesar Alberto Lopez-Mercado 1 and Oscar Adrian Aguirre-Castro 1,*

1 Facultad de Ingeniería, Arquitectura y Diseño, Universidad Autónoma de Baja California, Carrt. Tijuana-Ensenada No. 3917, Ensenada 22860, Baja California, Mexico
2 Departamento de Microbiología, Centro de Investigación Científica y de Educación Superior de Ensenada, Baja California (CICESE), Ensenada 22860, Baja California, Mexico
* Author to whom correspondence should be addressed.
AgriEngineering 2025, 7(3), 63; https://doi.org/10.3390/agriengineering7030063
Submission received: 6 December 2024 / Revised: 30 January 2025 / Accepted: 21 February 2025 / Published: 3 March 2025

Abstract: In recent years, the agriculture sector has undergone a significant digital transformation, integrating artificial intelligence (AI) technologies to harness and analyze the growing volume of data from diverse sources. Machine learning (ML), a powerful branch of AI, has emerged as an essential tool for developing knowledge-based agricultural systems. Grapevine red blotch disease (GRBD) and grapevine leafroll disease (GLD) are viral infections that severely impact grapevine productivity and longevity, leading to considerable economic losses worldwide. Conventional diagnostic methods for these diseases are costly and time-consuming. To address this, ML-based technologies have been increasingly adopted by researchers for early detection by analyzing the foliar symptoms linked to viral infections. This study focused on detecting GRBD and GLD symptoms using Convolutional Neural Networks (CNNs) in computer vision. YOLOv5 outperformed the other deep learning (DL) models tested, namely YOLOv3, YOLOv8, and ResNet-50, achieving 95.36% Precision, 95.77% Recall, and an F1-score of 95.56%. These metrics underscore the model’s effectiveness at accurately classifying grapevine leaves with and without GRBD and/or GLD symptoms. Furthermore, benchmarking was performed with two edge computing devices, where the Jetson Nano obtained the best cost–benefit performance. The findings support YOLOv5 as a reliable tool for early diagnosis, offering potential economic benefits for large-scale agricultural monitoring.

1. Introduction

In recent years, the agricultural sector has undergone a significant digital transformation, incorporating artificial intelligence (AI) technologies to harness and effectively analyze the increasing volume of data from diverse sources and extract valuable insights. Within AI, machine learning (ML) is recognized as a powerful tool for addressing the many challenges associated with developing knowledge-based agricultural systems [1]. The global vineyard area in 2022 was estimated to span 7.3 million hectares, with approximately 258 million hectoliters of wine produced worldwide during that same year [2]. In Mexico, 36,586.5 hectares are dedicated to grape cultivation [3], including approximately 80 grape varieties, with 50% of this area designated for industrial purposes, primarily wine production [4]. Baja California is an important viticulture region that has seen exponential growth in recent years, covering an estimated 4365 hectares [5]. Between 2016 and 2017, grapevine plant samples that exhibited grapevine red blotch disease (GRBD) symptomatology were collected in Ensenada, Baja California, as shown in Figure 1. These samples were analyzed using polymerase chain reaction (PCR) and were found to be positive for grapevine red blotch virus (GRBV), the causative agent of GRBD [6,7]. GRBV belongs to the species Grablovirus vitis in the family Geminiviridae. Similar to other geminiviruses, GRBV has a single-stranded DNA genome encapsidated in a geminate particle [8,9]. GRBV is spread by the three-cornered alfalfa hopper Spissistilus festinus and through the propagation of infected material [10]. Moreover, other viruses can infect grapevines and even co-infect them. Specifically, grapevine leafroll-associated viruses (GLRaVs), such as grapevine leafroll-associated virus 1 (GLRaV-1), grapevine leafroll-associated virus 2 (GLRaV-2), and grapevine leafroll-associated virus 3 (GLRaV-3), can cause symptoms of grapevine leafroll disease (GLD), as depicted in Figure 1a,c,d,h. GLRaVs belong to the Closteroviridae family [11] and are transmitted by infected plant material and vectors, such as several species of mealybugs and scales [12]. The grapevine mealybug Planococcus ficus, which particularly affects Baja California, is one of the primary vectors of GLRaV-3 [13,14], along with coccoids such as Pulvinaria vitis, Parthenolecanium corni, Ceroplastes rusci, and Coccus longulus [15]. In the past two decades, GLD has emerged as a significant threat to grapevine production, leading to an estimated 60% reduction in yield and a decline in grape quality [16].
GLD severely affects vine vigor and physiology, leading to uneven ripening and reduced yield and berry quality due to lower sugar content [17,18]. GRBV-infected grapevine plants show significant physiological disorders, such as changes in metabolism, accumulation of starch and soluble sugars, and a decrease in photosynthesis [19]; consequently, there is an inhibition of the ripening pathways, impacting the concentrations of sugar, phenolic, and volatile compounds in whole grapes and wines [20,21]. The viruses associated with GRBD and GLD disrupt plant physiology and metabolism, negatively impacting vineyard profits by reducing fruit quality and ripening, resulting in estimated economic losses ranging from USD 2213 to 68,528 [22].
When GRBV infects a grapevine plant, the virus causes symptomatology comprising irregular red spots on the leaves, especially on the edges, reddish colored veins, and irregular edges [23], as seen in Figure 1b,c,e–h. It should be noted that GRBV infects a wide range of white-berry cultivars (Chardonnay, Riesling, Sauvignon blanc, etc.) and red-berry cultivars (Cabernet Franc, Cabernet Sauvignon, Malbec, Merlot, Mourvèdre, Petit Verdot, Petite Syrah, Pinot noir, Zinfandel, etc.) [8]. For their part, GLRaVs can potentially affect a large number of wine grape varieties; however, red-berry cultivars usually present the most characteristic foliar symptoms of the disease, comprising red and reddish-purple discolorations that expand over time, with a downward rolling of the leaves at the end of the annual physiological cycle [24]. GRBD and GLD share the symptom of red blotchiness, either in specific spots or across the entire leaf blade. Initially, GRBD symptoms were attributed to GLRaV infection [25].
GRBD was first identified in 2008 as a disease affecting grape production. To enhance the detection and monitoring of the virus, ref. [26] developed a method for sample processing and a multiplex polymerase chain reaction assay. This development was prompted by observing symptoms that resembled GLD in an 8-year-old Cabernet Sauvignon vineyard at the experimental research station of the Department of Viticulture and Enology at the University of California [25]. Although molecular diagnosis remains the most reliable method for detecting viruses, leaf symptomatology can approximate the presence of grapevine diseases. Therefore, it is feasible to use AI to efficiently classify grapevine leaves according to the presence or absence of GRBD and/or GLD symptoms, enabling a preliminary diagnosis of these diseases through digital images. According to refs. [8,27], GRBD and GLD symptoms are recurrent in various regions, suggesting that the presence or absence of symptoms is not strictly determined by geographic location. Instead, symptom expression is primarily differentiated between red and white cultivars [28]. While climate and geographic factors may influence the stage of vegetative development in which symptoms manifest and their severity during the annual cycle [29], they do not dictate whether symptoms will appear. Therefore, deep learning (DL) models can be adapted using techniques such as transfer learning to enhance their classification performance across different datasets and conditions. This work compared the results of Convolutional Neural Network (CNN) models, namely YOLOv3, YOLOv5, YOLOv8, and ResNet-50. Among these, YOLOv5 achieved the best detection performance, with a 95.36% Accuracy, using a new dataset of 3198 images supported by 800 images with molecular diagnoses. Furthermore, benchmarking was performed with two edge computing devices, where the Jetson Nano (NVIDIA Corporation, Santa Clara, CA, USA) obtained the best cost–benefit performance. The best-performing model was integrated into a graphical user interface (GUI) for the inspection of images, video, and real-time video in the field and laboratory.
The structure of the following sections is as follows: Section 2 contains information on recent related research; Section 3 presents the proposed method and materials; Section 4 presents the obtained results; Section 5 discusses the results compared with related work; and lastly, Section 6 presents the conclusions.

2. Related Work

New methods based on AI technologies have garnered significant interest in scientific research on grapevine disease management. The medicinal leaf detection model based on P-Net, S-Net, and R-Net architectures, developed by [30], achieved accuracies above 97% when evaluated across three distinct datasets, thereby illustrating its generalizability. Nevertheless, the authors emphasized the importance of handling environmental factors, particularly the potentially poor quality of leaf images and changing lighting conditions. Furthermore, they intended to explore alternative DL architectures to enhance the performance of the computer vision system for its use in the field. The authors of [1] highlighted the high Accuracy and efficiency of CNNs in grapevine diagnosis, although they noted the limited availability of accessible image datasets for grapevine diseases. Complementarily, refs. [17,31] employed hyperspectral imaging and achieved Accuracies between 66.67% and 89.93%. They also utilized reverse transcription polymerase chain reaction (RT-PCR) for molecular diagnosis and identified characteristic wavelengths of 690, 715, 731, 1409, 1425, and 1582 nm as critical for the early detection of GLD. These studies demonstrated that hyperspectral imaging is effective for the non-destructive detection of grapevines infected with GLD. The authors of [32] used the AlexNet architecture to train a dataset of 40,000 images of healthy and diseased leaves. The developed detection system successfully identified nine plant species and 24 diseases with an Accuracy of 98.90%. In [33], the authors performed a diagnostic detection of grape diseases using various CNN and Transformer vision models. Four models achieved 100% Accuracy using a PlantVillage dataset of 4062 images. In [34], a dataset of 295 images was utilized across seven classifier models, which reached an Accuracy of 96%. In [31], diagnostic efforts with ML models, such as Random Forest (RF), and a CNN model using 500 hyperspectral images achieved an Accuracy of 87%. For grape cluster detection and physical grape injury assessment, ref. [35] employed CNN models with a dataset of 910 images, where YOLOv7 achieved the highest Accuracy, approximately 98%. In [36], 15 grape diseases were identified with an improved CNN model, which obtained a 99.1% Accuracy. Ref. [37] conducted a comparison of deep learning models for vine growth stage recognition using three classifier models and reported that ResNet provided the best results, with an 88.1% Accuracy. Ref. [17] detected three viruses associated with GLD using an ML classifier based on a least squares support vector machine, with Accuracies that ranged from 66% to 89%. Table 1 summarizes and compares the articles reviewed with the results obtained in this work.

3. Materials and Methods

This section provides details on the equipment used for the leaf molecular diagnosis; the hardware and software tools required to perform the advanced image classification; and, ultimately, a GUI-based implementation of the model validated with images taken directly from the vineyard to evaluate the reliability of the proposed methodology.
Consequently, the trained DL model in this study was tested on three different hardware devices to select the best option considering their performance and the requirements of the use scenario. Table 2 lists the characteristics of the three tested devices: an NVIDIA Jetson Nano (NVIDIA Corporation, Santa Clara, CA, USA), a Raspberry Pi 4 (Raspberry Pi Foundation, Cambridge, UK), and an HP Victus laptop (Hewlett-Packard Company, Palo Alto, CA, USA).
The procedure shown in Figure 2 began with data collection, which included gathering grapevine leaf samples and capturing digital images of each sample to create an annotated dataset for DL image classification. The second phase involved preparing the data for DL model training, which included balancing the classes; selecting, training, and validating the DL model based on the literature review; and choosing the most appropriate performance metrics to evaluate the model according to the project’s priorities. The third phase consisted of running the trained model in inference mode on three computing devices. Finally, a real-world use scenario for the DL-based grapevine leaf disease detector is proposed.

3.1. Data Collection

The sample collection of Vitis vinifera L. was conducted following the UC-Davis Foundation Plant Services guidelines [38], with modifications requiring the collection of at least 10 leaves and transportation in coolers with refrigerant gels. All vineyard owners or their representatives signed an informed consent form to participate in the project. To ensure data confidentiality, the identity and exact location of the sampled vineyards are not disclosed. The data collection methodology is detailed in Figure 3. For the leaf sampling, between 5 and 10 leaves were photographed from each plant, totaling 360 plants sampled throughout Baja California. The collections were carried out during October (the peak period for symptom manifestation) from 2018 to 2019 and from 2021 to 2023 [39]. The leaves were selected from various sections of each plant (top, bottom, right, left, and center) to include young and mature leaves that displayed diverse symptomatology. The field-collected samples were transported to the Agricultural Virology Laboratory at CICESE in airtight bags with cooling gels. Molecular diagnostics were conducted on several leaves to confirm the presence or absence of viruses associated with GRBD and GLD, and to correlate leaf symptoms with the amplification of genomic regions of GRBV and GLRaVs (GLRaV-1, GLRaV-2, GLRaV-3, and GLRaV-4). The total nucleic acids were extracted following the protocol of [40] with modifications, and real-time RT-PCR was performed according to the protocols [26,41], with modifications using EvaGreen dye.
Individual leaf images were captured using conventional smartphone cameras, with specifications provided in Table 4. Photographs were taken under natural and artificial lighting, using lamps to enhance the leaf features and color details. After obtaining laboratory diagnostic results, grapevine plants with confirmed viral infections were photographed in the field. The dataset composition (Table 3) included 0.3% of images collected in 2018 (four cultivars), 0.4% in 2019 (eight cultivars), 35.7% in 2021 (four cultivars), 32.6% in 2022 (thirteen cultivars), and 30.9% in 2023 (thirteen cultivars). Notably, 43.3% and 35% of the leaves photographed in 2022 and 2023, respectively, underwent molecular diagnostics.

3.2. DL Models Training

Before starting the training process for the DL models, it was essential to confirm that the dataset contained sufficient images to accurately detect the grapevine leaf disease symptoms. To address this, the diagram in Figure 4 illustrates an iterative process of image selection, training, and testing using the YOLOv5 model.
Figure 5 shows in an illustrative manner how YOLOv5 focuses its attention and detects grapevine leaf diseases by processing images step by step. First, the Backbone extracts important details, like texture changes and patterns that might indicate disease. Then, the Neck combines information from different layers, helping the model recognize symptoms, even when lighting changes, leaves overlap, or parts of the image are blocked. Finally, the Head makes the final predictions, identifying and locating diseased areas [42]. This architecture offers more stable detection for objects of different sizes and scales, lighting conditions, and rotation compared with previous versions [43].
Table 5 provides a clear and concise summary of the key hyperparameters used to train the YOLOv5 model. It includes parameters such as the image size (416 × 416), batch size (5), number of epochs (30), data configuration file, and pre-trained weights (yolov5s.pt). The learning rate (0.01) and the default optimizer (SGD) were also specified, highlighting the model’s setup for efficient training. Additionally, Table 5 outlines the use of caching for faster data processing and deployment on a GPU-enabled device. Data augmentation is crucial for enhancing the generalization capability of deep learning models, particularly in object detection tasks. In YOLOv5, several augmentation hyperparameters are employed to improve the model robustness. The HSV Hue (0.015), Saturation (0.7), and Value (0.4) modified the color properties to simulate various lighting conditions. The Translation (0.1) shifted objects within an image, while the Scale (0.5) resized them to enhance the scale invariance. The Flip left–right (0.5) introduced horizontal mirroring, which reduced bias in the object orientation. Finally, the Mosaic (1.0) combined four images to increase the training diversity, which allowed the model to learn from varied object placements and occlusions [44]. This detailed configuration ensured reproducibility and transparency in the experimental setup.
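For reproducibility, the following is a minimal sketch of how a training run with the Table 5 settings could be launched, assuming the Ultralytics YOLOv5 repository is cloned locally and that a data.yaml describing the two classes exists; the file names and the device index are illustrative placeholders, not the authors’ exact scripts.

```python
# Minimal sketch: launching YOLOv5 training with the settings reported in Table 5.
# Assumes this script is run from the root of a cloned Ultralytics YOLOv5 repository
# with its requirements installed; data.yaml is an illustrative placeholder.
import subprocess

# The augmentation values in Table 5 (HSV 0.015/0.7/0.4, translate 0.1, scale 0.5,
# flip left-right 0.5, mosaic 1.0) and lr0 = 0.01 correspond to YOLOv5's default
# low-augmentation hyperparameters, so no custom --hyp file is passed here.
subprocess.run(
    [
        "python", "train.py",
        "--img", "416",            # input image size
        "--batch", "5",            # batch size
        "--epochs", "30",          # number of epochs
        "--data", "data.yaml",     # dataset configuration (2 classes)
        "--weights", "yolov5s.pt", # pre-trained weights
        "--cache",                 # cache images for faster data processing
        "--device", "0",           # GPU-enabled device
    ],
    check=True,
)
```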
Once the model detected the classes correctly across images of varying quality, three additional DL models were trained. The second object detector implemented in this study was YOLOv3. Since its release in 2018, this architecture has been extensively studied in computer vision research [45]. The CNN backbone of YOLOv3 is Darknet-53, which contains multiple layers that extract features at various scales from the input image. The model’s final layer generates bounding boxes and predicts the object class. For more details, see [45]. YOLOv8 was the third model selected in this study due to its improved features compared with YOLOv5. These improvements include better generalization, the addition of advanced algorithms for calculating bounding boxes, and enhanced performance metrics. Although it outperforms YOLOv5 and earlier versions, YOLOv8 is also slower in real-time object detection tasks; its architecture additionally enhances object detection in low-resolution images [46]. For more details, see [47]. Finally, the last CNN model considered for this research was ResNet-50 due to its potential advantages in employing an architecture different from Darknet. ResNet-50 is a residual network with a simpler structure that connects layers through skip connections, enhancing the training efficiency [48]. For more details, consult [49].
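As an illustration of how the ResNet-50 comparison could be set up for the two-class leaf task, the sketch below adapts a pre-trained torchvision ResNet-50 by replacing its classification head; it assumes a recent torchvision release (newer than the PyTorch 1.8-era stack listed in Section 4.1) and uses a dummy batch in place of the real data loader, so it is not the authors’ training code.

```python
# Minimal sketch (assumption, not the authors' code): adapting a pre-trained
# torchvision ResNet-50 to the two classes (asymptomatic vs. symptomatic).
import torch
import torch.nn as nn
from torchvision import models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # replace the 1000-class head
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# One illustrative training step on a dummy batch; real training would iterate
# over a DataLoader built from the annotated leaf images.
images = torch.randn(5, 3, 416, 416, device=device)
labels = torch.randint(0, 2, (5,), device=device)
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"dummy-batch loss: {loss.item():.4f}")
```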

3.3. Hardware Selection

The third phase of the methodology involved evaluating the selected DL model on three devices, including a personal computer (laptop) and the NVIDIA Jetson Nano and Raspberry Pi 4 edge computing devices, as illustrated in Figure 6. The objective was to identify the most suitable hardware under the operational conditions of the grapevine leaf symptom detector, whether in a field or laboratory setting, considering the performance of the device and its cost, as seen in Table 2.

3.4. Real-World Usage

The last stage involved utilizing the predictions of the DL models in a graphical user interface (GUI), enabling users to upload and analyze new images or video recordings from the field or leaves brought to the laboratory, as depicted in Figure 7. Finally, the model classifies leaves as symptomatic or asymptomatic for GRBD and/or GLD, displaying the result and its confidence in the GUI to provide a preventive diagnosis.
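A minimal sketch of such a front end is shown below; it is illustrative only (not the authors’ GUI code) and assumes a trained checkpoint named best.pt that can be loaded through the Ultralytics YOLOv5 torch.hub entry, with pandas and Pillow available.

```python
# Minimal sketch (illustrative, not the authors' GUI): a Tkinter front end that
# loads a trained YOLOv5 checkpoint and analyzes a user-selected leaf image.
import tkinter as tk
from tkinter import filedialog
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")  # best.pt is a placeholder

def detect_symptoms():
    path = filedialog.askopenfilename(filetypes=[("Images", "*.jpg *.jpeg *.png")])
    if not path:
        return
    results = model(path, size=480)   # run inference on the selected image
    results.show()                    # open the image with the drawn detections
    detections = results.pandas().xyxy[0]
    if detections.empty:
        status.config(text="No leaf detected")
    else:
        status.config(text=detections[["name", "confidence"]].to_string(index=False))

root = tk.Tk()
root.title("Grapevine symptom detector")
tk.Button(root, text="Symptom detector", command=detect_symptoms).pack(padx=20, pady=10)
status = tk.Label(root, text="Select an image to analyze")
status.pack(padx=20, pady=10)
root.mainloop()
```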

4. Results

This section presents the results obtained in this work, which involved collecting leaf samples from vineyards in Baja California and analyzing them using DL-based models to identify the most effective one and propose an implementation strategy.

4.1. Data Collection and Preparation

A leaf sample that tested positive by real-time RT-PCR, even if asymptomatic, confirmed a positive diagnosis for the whole plant. A total of 800 leaf samples were analyzed using real-time RT-PCR. The outcome of this process was 3198 images of individually photographed leaves. The diagnostic categories were asymptomatic, with a total of 1535 images of leaves with no symptoms of GLD or GRBD, supported by 200 images with real-time RT-PCR results, and symptomatic, with 1663 images of leaves that presented GLD and/or GRBD symptoms, supported by 600 images with real-time RT-PCR results. Table 6 provides a summary of these results. The main symptoms observed on leaves classified as symptomatic were irregular, pixelated red blotches (especially along the edges), reddish-colored veins, rolling of leaf edges, and intense green veins. In contrast, asymptomatic leaves sometimes showed symptoms unrelated to GRBD and GLD, such as edge wilting or necrosis, which were excluded from consideration in this work.
The preparation of the dataset constituted the second phase of the methodology (Figure 2) and involved the image annotation tool Roboflow [50]. This tool was used to annotate images point by point, highlighting only the leaf to enhance its visual features and generate the dataset. After categorizing all the images, the classes were checked to ensure a balanced number of images in each class, thereby minimizing the risk of class imbalance, which could negatively impact the Accuracy of each class. Once the dataset was finalized, each DL model was coded and trained using Python 3.11.0. Furthermore, TensorFlow 2.6.0, Keras 3.5.0, and PyTorch 1.8.0 were the libraries used to facilitate the implementation of the CNNs. The models were trained for 15 epochs using the dataset of 3198 images from the symptomatic and asymptomatic classes, as shown in Table 6.
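As a small illustration of the balance check described above, the sketch below counts the exported images per class; the directory layout and file extensions are assumptions rather than the actual dataset structure.

```python
# Minimal sketch (illustrative paths): verifying class balance before training,
# assuming the exported dataset stores images in one folder per class.
from pathlib import Path

DATASET_DIR = Path("dataset/train")   # placeholder path
IMAGE_EXTS = {".jpg", ".jpeg", ".png"}

counts = {}
for class_dir in sorted(p for p in DATASET_DIR.iterdir() if p.is_dir()):
    n = sum(1 for f in class_dir.rglob("*") if f.suffix.lower() in IMAGE_EXTS)
    counts[class_dir.name] = n

total = sum(counts.values())
for name, n in counts.items():
    print(f"{name:>14}: {n:5d} images ({100 * n / total:.1f}% of the dataset)")
```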
Field images and new leaves brought to the laboratory were used to validate the trained model, ensuring that the Accuracy percentages were high enough to present reliable results. The dataset included images under occlusion conditions, lighting fluctuations, obstacles, and overlapping, as observed in Figure 8. Several performance metrics, such as Precision, Accuracy, F1-score, and the confusion matrix, were used to evaluate the models. These metrics identified the DL model with the best performance for detecting GRBD and/or GLD symptoms.
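For reference, the sketch below shows how the reported metrics follow from a binary confusion matrix; the counts in the example call are made up for illustration and are not the reported results.

```python
# Minimal sketch: deriving Precision, Recall, Accuracy, and F1-score from a
# binary confusion matrix (counts below are illustrative, not the paper's results).
def metrics_from_confusion(tp: int, fp: int, fn: int, tn: int) -> dict:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"Precision": precision, "Recall": recall,
            "Accuracy": accuracy, "F1-score": f1}

# Example: 380 true positives, 20 false positives, 25 false negatives,
# and 175 true negatives for the symptomatic class.
print(metrics_from_confusion(tp=380, fp=20, fn=25, tn=175))
```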

4.2. Training and Validation of DL Models

For YOLOv5, five different versions were trained to determine the optimal dataset size and balance between the number of images in the two classes to correctly detect and classify them, as illustrated in Figure 4. The first version included approximately 1000 images of asymptomatic leaves and over 1500 symptomatic leaves, organized as shown in Figure 9a.
A CNN model with the YOLOv5 architecture was trained using the dataset presented in Figure 9a and achieved an Accuracy of 91.41%. However, this model was limited to only detecting symptomatic and asymptomatic leaves from individual leaf images. Figure 10a illustrates the confusion matrix of the first model trained on an unbalanced dataset. In the second version of the model’s dataset, the number of images in each class was balanced, with 1200 in the asymptomatic class and 1500 in the symptomatic class. By increasing the number of asymptomatic images, the model achieved better results in leaf detection, as shown in Figure 9b. However, this version could not detect leaf symptoms in field images. The results obtained from this second model version are presented in Figure 10b. This model achieved an Accuracy of 95% and could detect individual leaves exposed to artificial light, regardless of whether they exhibited symptoms. For the third version of the model, 1200 images were used in the asymptomatic class and 1400 in the symptomatic class, as depicted in Figure 9c. Unlike the previous models, the Accuracies presented in Figure 10c were validated by incorporating field images. The fourth version introduced field images, including plants with GRBD and/or GLD symptoms, to further diversify the dataset with images from the laboratory and the field. This adjustment increased the symptomatic class to 1600 images, while the asymptomatic class remained at 1200, which resulted in a critical class imbalance, as shown in Figure 9d. The model achieved an Accuracy of 96.32%, as depicted in Figure 10d. The fifth and final version of the model incorporated all of the necessary modifications. Since the two classes were more balanced, as shown in Figure 9e, the model achieved higher and closer Precision and Recall metrics, as illustrated in the confusion matrix in Figure 10e.
Table 7 shows the classification Precision and error results per class for each of the five YOLOv5 model versions and their Accuracies. Additionally, it examines each model’s capability to identify the visual features of GRBD and GLD while evaluating their performance in classifying low-resolution images.
Figure 11 illustrates the confusion matrices of the four DL models selected, as discussed in Section 3. These models were trained using the same dataset of 3198 images that achieved the highest detection results for the YOLOv5 models, as presented in Table 7. The models used to compare the performance and identify the most effective one for detecting symptoms related to GRBD and GLD were YOLOv3, YOLOv5, YOLOv8, and ResNet-50.
The results obtained for the YOLOv3 model, summarized in Table 8, include an Accuracy of 87.5%, a Precision of 83.78% for the asymptomatic class, and 92.13% for the symptomatic class, as depicted in Figure 11a. Furthermore, the YOLOv5 model, presented in Figure 11, proved to be the best-performing model according to the evaluation procedure, as shown in Figure 10 and Table 7. It achieved an Accuracy of 95.36%, a Precision of 94.85% for the asymptomatic class, and 95.87% for the symptomatic class. For YOLOv8, the Accuracy was 86.5%, with a Precision of 87.63% for the asymptomatic class and 85.44% for the symptomatic class, as illustrated in Figure 11c. The confusion matrix for ResNet-50, shown in Figure 11d, reveals a lower detection rate for the asymptomatic class. The Accuracy of this DL model was 85.16%, with a Precision of 87.99% for the asymptomatic class and 82.72% for the symptomatic class, as detailed in Table 8.
Finally, the model with the best classification performance for the asymptomatic class was YOLOv5, followed by YOLOv3, YOLOv8, and ResNet-50. For the symptomatic class, YOLOv5 also outperformed the other models; in this case, the second-best model was ResNet-50, followed by YOLOv8 and YOLOv3. These results suggest that the YOLOv3, YOLOv8, and ResNet-50 models faced challenges in accurately detecting both classes.
The training process accounted for different field conditions, including occlusions caused by obstacles, fluctuations in light intensity across images, and overlapping leaves. Figure 12a illustrates a successful detection despite an object partially obstructing the leaves, while Figure 12b demonstrates the model’s capability to identify leaves under different lighting conditions. Similarly, Figure 12c showcases another instance of occlusion, and Figure 12d highlights a case of significant leaf overlap. The model’s class confidence score ranged from 0.28 to 0.85, reflecting its ability to detect leaves under these conditions.
Figure 13 presents the confusion matrix derived from evaluating the YOLOv5 model on a test dataset that consisted of 600 images with molecular diagnoses, with 400 symptomatic and 200 asymptomatic images, which served as the test values. The model demonstrated high Accuracy in classifying the images, where it achieved a 95% success rate for the asymptomatic class and 88% for the symptomatic class. These results highlight the model’s robust capability to distinguish between the two conditions, which is critical for applications in plant health and early symptom detection on grapevine leaves. This balance between error rates and predictive performance underscores the effectiveness of YOLOv5. These results indicate that the YOLOv5 model is well-suited for evaluating symptom detection in images, video, and real-time video in field and laboratory scenarios.
Table 9 summarizes the performance metrics of the YOLOv5 model when tested with a dataset of leaf images, comparing its predictions against molecular diagnoses as reference values. The metrics Accuracy, Precision, Recall, and F1-score provided a comprehensive evaluation of the model’s predictive capabilities for both the asymptomatic and symptomatic classes. For the asymptomatic class, the model achieved an Accuracy of 91.25%, a Precision of 88.37%, and a Recall of 95.00%, indicating a strong performance in correctly identifying asymptomatic samples. Furthermore, the table presents the model’s performance across different image resolutions (240 × 240, 480 × 480, and 640 × 640), demonstrating the effect of resolution on the classification Accuracy. At 240 × 240 resolution, the model achieved an Accuracy of 87.63% for asymptomatic leaves and 93.62% for symptomatic ones, with a lower Recall of 80.77% for symptomatic cases, suggesting a higher rate of false negatives. At the 480 × 480 resolution, the model’s overall performance improved, where the Accuracy reached 91.25% for asymptomatic leaves and 94.59% for symptomatic ones. The F1-score for symptomatic classification also increased to 90.91%, reflecting a more balanced detection of diseased samples. Finally, at the 640 × 640 resolution, the model achieved the highest Recall (98.00% for asymptomatic and 88.00% for symptomatic samples), but with a slight trade-off in Precision, particularly for asymptomatic leaves (89.09%). These results suggest that increasing the image resolution enhanced the model’s ability to detect symptomatic leaves, and thus, reduced the false negative rate. However, beyond a certain resolution, gains in performance became marginal, as observed in the small difference between the 480 × 480 and 640 × 640 results. The optimal balance between the computational efficiency and Accuracy was found at the 480 × 480 resolution, where the model maintained a high Precision and Recall while minimizing the computational overhead.

4.3. Hardware Selection

This section evaluates the hardware using the model with the best metrics for detecting symptoms in grapevine leaves and provides the necessary information for the optimal selection of hardware to deploy the model in a real-world scenario. As mentioned above, the selected hardware included two edge computing devices, namely, a Jetson Nano with 2 GB and a Raspberry Pi 4, and a high-performance computer. The model was evaluated in inference mode on a short video and in real-time vision scenarios for field deployment. The results obtained are shown in Table 10. These results demonstrate the capability of these devices at handling recordings and real-time camera input, highlighting the performance of edge computing devices compared with high-performance personal computers. In this benchmarking process, which aimed to select the appropriate hardware, the edge computing devices showed longer processing times. However, the Raspberry Pi 4 is an excellent, cost-effective option for testing the YOLOv5 model. With an image resolution of 240 × 240, this device can process low-resolution videos in real time at 1.8 FPS. The Jetson Nano is a higher-performance option for processing due to its ability to handle various resolutions effectively. It can process video at approximately 4 FPS for a resolution of 240 × 240, 2 FPS for 480 × 480, and 1 FPS for 640 × 640. These performance metrics are detailed in Table 10 and demonstrate the Jetson Nano’s capability to handle images, video recordings, and real-time video across a range of resolutions in lab and field applications.
Table 10 presents the benchmarking results of three computing devices—Raspberry Pi, Jetson Nano, and a personal computer (laptop)—in terms of inference time and frames per second (FPS) for different image resolutions (240 × 240, 480 × 480, and 640 × 640). This evaluation provided insights into the performance trade-offs between edge computing devices and a high-performance computing system when deploying AI models. The benchmarking results demonstrate that while edge devices, like the Raspberry Pi and Jetson Nano, are viable options for deploying AI models, their performances varied significantly based on the computational demands of the application. The Jetson Nano offered a good balance between the cost, performance, and resolution capabilities, making it suitable for real-time tasks with moderate resolution requirements. Conversely, the personal computer’s superior performance was ideal for high-throughput applications, albeit at a higher cost and reduced portability. These findings underscore the importance of selecting the appropriate hardware based on the specific requirements and constraints of the deployment scenario.
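A minimal sketch of the kind of timing measurement behind Table 10 is given below; it assumes a trained best.pt checkpoint, a sample image named leaf_sample.jpg, and the Ultralytics YOLOv5 torch.hub interface, and the number of timed runs is arbitrary.

```python
# Minimal timing sketch (illustrative, not the exact benchmarking script):
# measuring per-frame inference time and FPS at the three evaluated resolutions.
import time
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")  # placeholder checkpoint
model.conf = 0.25  # default confidence threshold

for size in (240, 480, 640):
    model("leaf_sample.jpg", size=size)        # warm-up run, not timed
    n_runs = 20
    t0 = time.perf_counter()
    for _ in range(n_runs):
        model("leaf_sample.jpg", size=size)
    dt = (time.perf_counter() - t0) / n_runs   # mean seconds per frame
    print(f"{size} x {size}: {dt:.4f} s per frame, {1 / dt:.2f} FPS")
```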

4.4. Real-World Usage Scenario

Although the system’s ability to diagnose GRBD and GLD depended directly on the quality of the dataset and the inclusion of diverse images (as discussed in Section 4.1), as well as on the performance of the model (Section 4.2), the implementation and proposed usage necessitate direct interaction between the end user and the DL model.
The process began by capturing images or videos of grapevine leaves in the field or laboratory settings. These images served as the input data for the analysis, and a camera was used to record or capture high-quality images of the grapevine leaves. These images were then transferred to the edge computing device for further processing. The captured images were processed on an edge computing device, such as the Jetson Nano. This device executed the pre-trained YOLOv5-based DL model optimized for detecting GRBD and GLD symptoms. The processed data were visualized using custom software designed for symptom detection. The software provides the following functionalities:
Image or video selection: users can select an image or video from their device for analysis.
Real-time video analysis: the software supports real-time symptom detection by interfacing with live video streams.
Symptom detection results: the software applies the detection model to identify symptomatic and asymptomatic areas on the leaves.
The software displays the results of the analysis, including the classification of the detected areas as symptomatic or asymptomatic. A confidence score indicates the likelihood of the detected symptom; the results are shown as visual overlays on the images or video frames, highlighting the regions of interest. This end-to-end pipeline demonstrates the practical application of AI-driven symptom detection in viticulture, enabling rapid and accurate disease diagnosis in the field or laboratory. This procedure is illustrated in Figure 14.
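The real-time branch of this pipeline can be sketched as follows; this is an illustration under the assumption of a torch.hub-loadable best.pt checkpoint, OpenCV (opencv-python), and a camera at index 0, not the authors’ deployed software.

```python
# Minimal sketch (illustrative, not the deployed software): frames from a camera
# are passed to the trained YOLOv5 detector and the predictions are drawn back
# onto each frame for display.
import cv2
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")  # placeholder checkpoint
model.conf = 0.28  # illustrative threshold near the lower confidence bound seen in Figure 12

cap = cv2.VideoCapture(0)  # 0 = default camera (e.g., a USB camera on the Jetson Nano)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # OpenCV delivers BGR frames
    results = model(rgb, size=480)                 # 480 x 480 inference
    annotated = cv2.cvtColor(results.render()[0], cv2.COLOR_RGB2BGR)
    cv2.imshow("Grapevine symptom detector", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):          # press 'q' to stop
        break
cap.release()
cv2.destroyAllWindows()
```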
Figure 15 illustrates the use of a Python-based GUI to analyze laboratory and field inputs (images, video, and real-time video); after selecting the “Symptom detector” button in the GUI, the input is processed and displayed with a diagnosis result based on the DL model.

5. Discussion

The diseases addressed in this work were GRBD and GLD, which often share symptoms, such as irregular red spots on grapevine leaves (particularly along the margins), and manifest during the same period. Other symptoms, including reddish or intense green veins, irregular edges, and leaf rolling, may also appear. Due to these viral infections, plants experience physiological and metabolic disruptions, significantly reducing the fruit quality and ripeness, resulting in economic losses for growers [20,24]. Molecular diagnostics are typically employed when GRBD and/or GLD are suspected in a vineyard. These highly sensitive techniques [41] are often expensive and require specialized personnel. Consequently, seeking alternatives, such as AI, is necessary to classify leaves showing GRBD and/or GLD symptoms, facilitating rapid and preventive diagnoses through images or serving as a preliminary measure before applying molecular diagnostic techniques.
The data reported by [39] were collected in Baja California and included symptomatic and asymptomatic grapevine leaf images, with real-time RT-PCR molecular diagnostics performed on 800 samples. Using this dataset, five YOLOv5 versions were developed, and four DL models with various characteristics and results in terms of the evaluation metrics were compared: three YOLO versions (v3, v5, and v8) and ResNet-50. Comparing the YOLO versions and ResNet-50, a higher Accuracy rate was observed for the YOLO-based DL models. The fifth YOLOv5 version (YOLOv5-v5) included the 800 images with molecular diagnoses, categorized both individual leaf and whole-plant images, and demonstrated the ability to use low-quality images effectively for categorization.
The main differences from previously published works were the number of processed images, their quality, and the processing method. In this study, we used conventional (cellphone) cameras and 3198 images to build the dataset, and achieved an Accuracy rate of 95.36% with YOLOv5. Ref. [33] detected grape leaf diseases using various CNN and Transformer vision models, where four models achieved a 100% Accuracy, which suggests potential overfitting in the training data, leading to the memorization of examples rather than learning visual characteristics. Other studies, like [31], employed hyperspectral imaging and molecular diagnostics, such as PCR, to detect GLD and GRBV, where they achieved an 87% Accuracy with CNN models compared with 82.8% with RF models. Unlike the work by [17], which detected leafroll disease (GLRaV-3) in grapevine plants using a least squares support vector machine classifier (LS-SVM) and obtained accuracies from 66.67% to 89.93%, this study achieved higher Accuracy rates in both categories with a more straightforward approach for detecting GRBD and/or GLD. Other studies used different models or study units. Ref. [35] detected grape clusters and evaluated biophysical lesions using YOLO models (YOLOv5x6, YOLOv7-E6E, and YOLOR-CSP-X), where YOLOv7 achieved the highest Accuracy at 98%. Similarly, ref. [36] used improved YOLOXS (GFCD-YOLOXS) and CBAM models to identify 15 grape diseases, with an Accuracy of 99.10%. In comparison, this study identified YOLOv5 as the most effective model for leaf detection. Refs. [34,37] used the ResNet model; the former performed a quick vineyard health diagnosis based on digital images, where it obtained an F-measure of 96.6% and an intersection over union of 93.4%, while the latter compared deep learning methods to recognize grapevine growth stages and achieved an Accuracy of 88.1%.
The presence of GRBD and GLD can vary depending on the climate, grape variety, and environmental factors. Additionally, differences in lighting, soil nutrients, and water availability can affect how symptoms appear, which may impact the model’s Accuracy. To enhance the model’s adaptability, the training dataset included images with varying lighting conditions and quality in terms of clarity and sharpness, as well as images from both the laboratory and the field, as seen in Figure 9. Consequently, the DL model test described in Section 4.3 was conducted to identify the optimal computing device for the use case, considering image resolutions of 240 × 240, 480 × 480, and 640 × 640. This resulted in the selection of the Jetson Nano edge computing device, which reached an inference time of 1.2774 s and a frame rate of 1.0204 FPS at a resolution of 640 × 640. Adapting this approach to different regions or crops comes with several challenges. Therefore, transfer learning can be used to fine-tune the model with region-specific datasets, ensuring reliable detection across various grape-growing areas. Moreover, the implementation under field conditions must consider the variability in image quality attributable to diverse devices, angles, and illumination conditions.
Based on the evaluation of three different hardware devices, the Jetson Nano was identified as the most suitable option for deploying the trained DL model in real-world agricultural scenarios. Its optimal balance between computational efficiency and portability makes it an excellent choice for edge computing applications, particularly in precision agriculture. As highlighted by [51], the Jetson Nano’s combination of processing power, low energy consumption, and compact design enables efficient real-time disease detection in field conditions. These findings reinforce the feasibility of integrating deep learning-based disease detection into practical vineyard management strategies, paving the way for scalable and cost-effective monitoring solutions.
Finally, it is essential to consider that diagnosis based on symptoms may not reliably identify asymptomatic or virus-free plants, as some grape cultivars do not exhibit symptoms despite infection with GLD- and/or GRBD-associated viruses. Additionally, early-stage infections may also be asymptomatic [24]. Therefore, molecular diagnostics are recommended to confirm results. Notably, the presence of viruses requires molecular confirmation, as the leaf symptoms of these diseases appear only during specific and brief time windows and may also be influenced by external factors, such as nutrient deficiencies or water stress [52].

6. Conclusions

A robust database was created based on molecular diagnostics associated with symptomatic samples and images, enabling the development of an AI model informed by molecular diagnostic outcomes. DL models, such as YOLOv3, YOLOv5, YOLOv8, and ResNet-50, were assessed for their efficacy in detecting signs of grapevine red blotch disease (GRBD) and grapevine leafroll disease (GLD). The test results indicated that YOLOv5 outperformed the other models, with a Precision of 95.36%, a Recall of 95.77%, and an F1-score of 95.56%. These data highlight the model’s efficacy in precisely categorizing grapevine leaves exhibiting signs of GRBD and/or GLD and those without symptoms. A benchmarking analysis of two edge computing devices revealed that the Jetson Nano had the most favorable cost–benefit ratio, validating the feasibility of using the proposed method in real agricultural settings. The results demonstrate that the YOLOv5-based approach is suitable for on-site vineyard monitoring, enabling rapid disease identification and response actions.
The main aim of this study was to create an enhanced DL model for the early detection of GRBD and/or GLD symptoms, allowing for timely intervention to support vineyard owners, technicians, and researchers. An intuitive interface was created to enhance the accessibility and use, enabling the rapid deployment of the proposed YOLOv5 model. The model showed proficiency in identifying symptoms in both individual leaves and whole grapevine plants, yet it is advisable to obtain close-up images of leaves to enhance the diagnostic Accuracy. This study’s results underscore YOLOv5’s potential as a strong and reliable instrument for extensive agricultural monitoring, providing substantial economic advantages via early disease diagnosis and proactive management measures.

Future Work

The main future interest of this study is to improve the dataset diversity to consider a broader range of environmental conditions, such as lighting, occlusions, and the presence of nearby objects or obstacles, as well as leaf conditions that could interfere with the visual diagnosis of disease symptoms, such as nutrient deficiencies or hydric stress. Moreover, integrating additional data sources, such as multispectral or hyperspectral imaging, could enhance the diagnostic Accuracy beyond symptom recognition. Developing a mobile application or a cloud-based system could enable real-time analysis for grapevine growers, thereby facilitating the efficient monitoring of their vineyards. Another future contribution will be to update the dataset considering additional regions in Mexico and worldwide to confirm its usability and versatility under other environmental and climatological conditions.

Author Contributions

Conceptualization, K.G.G.-R. and O.A.A.-C.; Data curation, E.I.-G. and E.E.G.-G.; Formal analysis, D.C.-V. and C.A.L.-M.; Funding acquisition, J.C.-T.; Investigation, C.L.-G. and K.G.G.-R.; Methodology, C.L.-G., O.A.A.-C. and K.G.G.-R.; Project administration, E.I.-G. and K.G.G.-R.; Resources, E.I.-G. and J.C.-T.; Software, C.L.-G., O.A.A.-C. and J.G.-F.; Supervision, E.I.-G. and E.E.G.-G.; Validation, D.C.-V., C.A.L.-M. and E.E.G.-G.; Visualization, D.C.-V. and C.A.L.-M.; Writing—original draft, C.L.-G., K.G.G.-R., J.G.-F. and O.A.A.-C.; Writing—review and editing, J.C.-T., O.A.A.-C. and E.I.-G. All authors read and agreed to the published version of this manuscript.

Funding

This research was funded by the Universidad Autónoma de Baja California (UABC) through the 25th internal call for research projects with grant number 402/6/C/53/25 and the Centro de Investigación Científica y de Educación Superior de Ensenada (CICESE, institutional project 683210). Furthermore, the authors thank the Comité Estatal de Sanidad Vegetal de Baja California (CESVBC) and SADERBC (Vine Phytosanitary Management Project). The authors also thank CONAHCyT for the FOP02-2021-4 Grant 316602 and the scholarships awarded to K. G. García-Resendiz and J. Galarza-Falfan.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The dataset is available at: Garcia, Karen; Carrillo Tripp, Jimena. 2024. Grapevine Virus and Symptom Database (GVS-DB), Mendeley Data, V1 at https://doi.org/10.17632/wkbd3wsjpj.1.

Acknowledgments

The authors would like to thank the Universidad Autónoma de Baja California and the Centro de Investigación Científica y de Educación Superior de Ensenada for the support provided in the use of the laboratories and computing equipment.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial intelligence
ANN: Artificial Neural Network
CBAM: Convolutional Block Attention Module
CNN: Convolutional Neural Network
DL: Deep learning
GLD: Grapevine leafroll disease
GLRaVs: Grapevine leafroll-associated viruses
GRBD: Grapevine red blotch disease
GRBV: Grapevine red blotch virus
LS-SVM: Least squares support vector machine
ML: Machine learning
RT-PCR: Reverse transcription polymerase chain reaction
ResNet: Residual network
RF: Random Forest
YOLO: You Only Look Once

References

  1. Gatou, P.; Tsiara, X.; Spitalas, A.; Sioutas, S.; Vonitsanos, G. Artificial Intelligence Techniques in Grapevine Research: A Comparative Study with an Extensive Review of Datasets, Diseases, and Techniques Evaluation. Sensors 2024, 24, 6211. [Google Scholar] [CrossRef] [PubMed]
  2. International Organisation of Vine and Wine. 2022. Available online: https://www.oiv.int/sites/default/files/documents/OIV_Actualidad_de_la_coyuntura_del_sector_vitivinicola_mundial_en_2022_0.pdf (accessed on 2 February 2024).
  3. Instituto Nacional de Estadística y Geografía (INEGI). Economy and Productive Sectors: Agriculture. 2025. Available online: https://www.inegi.org.mx/temas/agricultura/ (accessed on 20 February 2025).
  4. Grape Production in Mexico 2022. Available online: https://www.gob.mx/siap/documentos/produccion-de-uva-en-mexico-2022 (accessed on 19 December 2023).
  5. Sembradas 4,365 hectáreas con vid en la Zona Costa de baja California: Agricultura. 2023. Available online: https://www.gob.mx/agricultura/bajacalifornia/articulos/sembradas-4-365-hectareas-con-vid-en-la-zona-costa-de-baja-california-agricultura (accessed on 29 January 2024).
  6. Yepes, L.M.; Cieniewicz, E.; Krenz, B.; McLane, H.; Thompson, J.R.; Perry, K.L.; Fuchs, M. Causative Role of Grapevine Red Blotch Virus in Red Blotch Disease. Phytopathology® 2018, 108, 902–909. [Google Scholar] [CrossRef] [PubMed]
  7. Gasperin-Bulbarela, J.; Licea-Navarro, A.F.; Pino-Villar, C.; Hernández-Martínez, R.; Carrillo-Tripp, J. First Report of Grapevine Red Blotch Virus in Mexico. Plant Dis. 2019, 103, 381. [Google Scholar] [CrossRef]
  8. Krenz, B.; Fuchs, M.; Thompson, J.R. Grapevine red blotch disease: A comprehensive Q&A guide. PLoS Pathog. 2023, 19, e1011671. [Google Scholar] [CrossRef]
  9. ICTV. Genus: Grablovirus. 2010. Available online: https://ictv.global/report/chapter/geminiviridae/geminiviridae/grablovirus (accessed on 19 December 2023).
  10. Fiallo-Olivé, E.; Lett, J.M.; Martin, D.P.; Roumagnac, P.; Varsani, A.; Zerbini, F.M.; Navas-Castillo, J. ICTV Virus Taxonomy Profile: Geminiviridae 2021. J. Gen. Virol. 2021, 102, 001696. [Google Scholar] [CrossRef] [PubMed]
  11. Fuchs, M.; Bar-Joseph, M.; Candresse, T.; Maree, H.J.; Martelli, G.P.; Melzer, M.J.; Menzel, W.; Minafra, A.; Sabanadzovic, S.; ICTV Report Consortium. ICTV Virus Taxonomy Profile: Closteroviridae. J. Gen. Virol. 2020, 101, 364–365. [Google Scholar] [CrossRef]
  12. Hommay, G.; Beuve, M.; Herrbach, E. Transmission of Grapevine Leafroll-Associated Viruses and Grapevine Virus A by Vineyard-Sampled Soft Scales (Parthenolecanium corni, Hemiptera: Coccidae). Viruses 2022, 14, 2679. [Google Scholar] [CrossRef]
  13. Cabaleiro, C.; Pesqueira, A.M.; Segura, A. Planococcus ficus and the spread of grapevine leafroll disease in vineyards: A 30-year-long case study in north-West Spain. Eur. J. Plant Pathol. 2022, 163, 733–747. [Google Scholar] [CrossRef]
  14. Comité Estatal de Sanidad Vegetal de Baja California. Plagas de vid. 2023. Available online: https://www.cesvbc.org/copia-de-manejo-fitosanitario-de-fr (accessed on 29 January 2024).
  15. Herrbach, E.; Alliaume, A.; Prator, C.A.; Daane, K.M.; Cooper, M.L.; Almeida, R.P.P. Vector Transmission of Grapevine Leafroll-Associated Viruses. In Grapevine Viruses: Molecular Biology, Diagnostics and Management; Springer International Publishing: Cham, Switzerland, 2017; pp. 483–503. [Google Scholar] [CrossRef]
  16. Atallah, S.S.; Gómez, M.I.; Fuchs, M.F.; Martinson, T.E. Economic Impact of Grapevine Leafroll Disease on Vitis vinifera cv. Cabernet franc in Finger Lakes Vineyards of New York. Am. J. Enol. Vitic. 2012, 63, 73–79. [Google Scholar] [CrossRef]
  17. Gao, Z.; Khot, L.R.; Naidu, R.A.; Zhang, Q. Early detection of grapevine leafroll disease in a red-berried wine grape cultivar using hyperspectral imaging. Comput. Electron. Agric. 2020, 179, 105807. [Google Scholar] [CrossRef]
  18. Burger, J.T.; Maree, H.J.; Gouveia, P.; Naidu, R.A. Grapevine leafroll-associated virus3. In Grapevine Viruses: Molecular Biology, Diagnostics and Management; Springer International Publishing: Cham, Switzerland, 2017; pp. 167–195. [Google Scholar] [CrossRef]
  19. Bahder, B.W.; Zalom, F.G.; Jayanth, M.; Sudarshana, M.R. Phylogeny of Geminivirus Coat Protein Sequences and Digital PCR Aid in Identifying Spissistilus festinus as a Vector of Grapevine red blotch-associated virus. Phytopathology® 2016, 106, 1223–1230. [Google Scholar] [CrossRef] [PubMed]
  20. Blanco-Ulate, B.; Hopfer, H.; Figueroa-Balderas, R.; Ye, Z.; Rivero, R.M.; Albacete, A.; Pérez-Alfocea, F.; Koyama, R.; Anderson, M.M.; Smith, R.J.; et al. Red blotch disease alters grape berry development and metabolism by interfering with the transcriptional and hormonal regulation of ripening. J. Exp. Bot. 2017, 68, 1225–1238. [Google Scholar] [CrossRef] [PubMed]
  21. Rumbaugh, A.C.; Sudarshana, M.R.; Oberholster, A. Grapevine Red Blotch Disease Etiology and Its Impact on Grapevine Physiology and Berry and Wine Composition. Horticulturae 2021, 7, 552. [Google Scholar] [CrossRef]
  22. Ricketts, K.D.; Gómez, M.I.; Fuchs, M.F.; Martinson, T.E.; Smith, R.J.; Cooper, M.L.; Moyer, M.M.; Wise, A. Mitigating the Economic Impact of Grapevine Red Blotch: Optimizing Disease Management Strategies in U.S. Vineyards. Am. J. Enol. Vitic. 2017, 68, 127–135. [Google Scholar] [CrossRef]
  23. Sudarshana, M.R.; Perry, K.L.; Fuchs, M.F. Grapevine Red Blotch-Associated Virus, an Emerging Threat to the Grapevine Industry. Phytopathology® 2015, 105, 1026–1032. [Google Scholar] [CrossRef] [PubMed]
  24. Lee, L.; Reynolds, A.; Lan, Y.; Meng, B. Identification of unique electromagnetic signatures from GLRaV-3 infected grapevine leaves in different stages of virus development. Smart Agric. Technol. 2024, 8, 100464. [Google Scholar] [CrossRef]
  25. Calvi, B.L. Effects of Red-leaf Disease on Cabernet Sauvignon at the Oakville Experimental Vineyard and Mitigation by Harvest Delay and Crop Adjustment. Master’s Thesis, University of California, Davis, CA, USA, 2011. [Google Scholar]
  26. Krenz, B.; Thompson, J.R.; McLane, H.L.; Fuchs, M.; Perry, K.L. Grapevine red blotch-associated virus Is Widespread in the United States. Phytopathology® 2014, 104, 1232–1240. [Google Scholar] [CrossRef] [PubMed]
  27. Naidu, R.A.; Maree, H.J.; Burger, J.T. Grapevine Leafroll Disease and Associated Viruses: A Unique Pathosystem. Annu. Rev. Phytopathol. 2015, 53, 613–634. [Google Scholar] [CrossRef] [PubMed]
  28. Naidu, R.; Rowhani, A.; Fuchs, M.; Golino, D.; Martelli, G.P. Grapevine Leafroll: A Complex Viral Disease Affecting a High-Value Fruit Crop. Plant Dis. 2014, 98, 1172–1180. [Google Scholar] [CrossRef]
  29. Martelli, G. Directory of virus and virus-like diseases of the grapevine and their agents. J. Plant Pathol. 2014, 96, 1–136. [Google Scholar]
  30. Sekharamantry, P.K.; Rao, M.S.; Srinivas, Y.; Uriti, A. PSR-LeafNet: A Deep Learning Framework for Identifying Medicinal Plant Leaves Using Support Vector Machines. Big Data Cogn. Comput. 2024, 8, 176. [Google Scholar] [CrossRef]
  31. Sawyer, E.; Laroche-Pinel, E.; Flasco, M.; Cooper, M.L.; Corrales, B.; Fuchs, M.; Brillante, L. Phenotyping grapevine red blotch virus and grapevine leafroll-associated viruses before and after symptom expression through machine-learning analysis of hyperspectral images. Front. Plant Sci. 2023, 14, 1117869. [Google Scholar] [CrossRef] [PubMed]
  32. Maeda Gutiérrez, V.; Guerrero Méndez, C.; Olvera Olvera, C.A.; Araiza Esquivel, M.A.; Espinoza García, G.; Bordón López, R. Convolutional neural networks for detection and classification of plant diseases based on digital imagenes. Rev. Biológico Agropecu. Tuxpan 2018. [Google Scholar] [CrossRef]
  33. Kunduracioglu, I.; Pacal, I. Advancements in deep learning for accurate classification of grape leaves and diagnosis of grape diseases. J. Plant Dis. Prot. 2024, 131, 1061–1080. [Google Scholar] [CrossRef]
  34. Elsherbiny, O.; Elaraby, A.; Alahmadi, M.; Hamdan, M.; Gao, J. Rapid Grapevine Health Diagnosis Based on Digital Imaging and Deep Learning. Plants 2024, 13, 135. [Google Scholar] [CrossRef] [PubMed]
  35. Pinheiro, I.; Moreira, G.; Queirós da Silva, D.; Magalhães, S.; Valente, A.; Moura Oliveira, P.; Cunha, M.; Santos, F. Deep Learning YOLO-Based Solution for Grape Bunch Detection and Assessment of Biophysical Lesions. Agronomy 2023, 13, 1120. [Google Scholar] [CrossRef]
  36. Wang, C.; Wang, Y.; Ma, G.; Bian, G.; Ma, C. Identification of Grape Diseases Based on Improved YOLOXS. Appl. Sci. 2023, 13, 5978. [Google Scholar] [CrossRef]
  37. Schieck, M.; Krajsic, P.; Loos, F.; Hussein, A.; Franczyk, B.; Kozierkiewicz, A.; Pietranik, M. Comparison of deep learning methods for grapevine growth stage recognition. Comput. Electron. Agric. 2023, 211, 107944. [Google Scholar] [CrossRef]
  38. Foundation Plant Services, UC Davis. FPS Grape Program—Sample Collection. 2025. Available online: http://fps.ucdavis.edu/samplecollection.cfm (accessed on 24 January 2025).
  39. García, K.; Carrillo Tripp, J. Grapevine Virus and Symptom Database (GVS-DB); Mendeley Data. 2024. Available online: https://data.mendeley.com/datasets/wkbd3wsjpj/1 (accessed on 5 December 2024).
  40. Gambino, G.; Perrone, I.; Gribaudo, I. A Rapid and effective method for RNA extraction from different tissues of grapevine and other woody plants. Phytochem. Anal. 2008, 19, 520–525. [Google Scholar] [CrossRef]
  41. Osman, F.; Leutenegger, C.; Golino, D.; Rowhani, A. Real-time RT-PCR (TaqMan) assays for the detection of Grapevine Leafroll associated viruses 1-5 and 9. J. Virol. Methods 2007, 141, 22–29. [Google Scholar] [CrossRef]
  42. Jocher, G. YOLOv5 by Ultralytics. 2020. [Google Scholar] [CrossRef]
  43. Liu, H.; Sun, F.; Gu, J.; Deng, L. SF-YOLOv5: A Lightweight Small Object Detection Algorithm Based on Improved Feature Fusion Mode. Sensors 2022, 22, 5817. [Google Scholar] [CrossRef]
  44. Ultralytics. Data Augmentation—Tools and Libraries. 2024. Available online: https://www.ultralytics.com/glossary/data-augmentation#tools-and-libraries (accessed on 18 January 2024).
  45. Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018. [Google Scholar] [CrossRef]
  46. Reis, D.; Kupec, J.; Hong, J.; Daoudi, A. Real-Time Flying Object Detection with YOLOv8. arXiv 2023. [Google Scholar] [CrossRef]
  47. Jocher, G.; Chaurasia, A.; Qiu, J. Ultralytics YOLOv8. 2023. [Google Scholar] [CrossRef]
  48. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. arXiv 2015. [Google Scholar] [CrossRef]
  49. Bendjillali, R.I.; Beladgham, M.; Merit, K.; Taleb-Ahmed, A. Illumination-robust face recognition based on deep convolutional neural networks architectures. Indones. J. Electr. Eng. Comput. Sci. 2020, 18, 1015–1027. [Google Scholar] [CrossRef]
  50. Dwyer, B.; Nelson, J.; Hansen, T. Roboflow, Version 1.0. 2024. Available online: https://roboflow.com (accessed on 18 January 2024).
  51. Galarza-Falfan, J.; García-Guerrero, E.E.; Aguirre-Castro, O.A.; López-Bonilla, O.R.; Tamayo-Pérez, U.J.; Cárdenas-Valdez, J.R.; Hernández-Mejía, C.; Borrego-Dominguez, S.; Inzunza-Gonzalez, E. Path Planning for Autonomous Mobile Robot Using Intelligent Algorithms. Technologies 2024, 12, 82. [Google Scholar] [CrossRef]
  52. Ju, Y.L.; Yue, X.F.; Zhao, X.F.; Zhao, H.; Fang, Y.L. Physiological, micro-morphological and metabolomic analysis of grapevine (Vitis vinifera L.) leaf of plants under water stress. Plant Physiol. Biochem. 2018, 130, 501–510. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Grapevine plants with GRBD and/or GLD symptoms: (a) plant with GLRaV-3 and GRBV symptoms exhibiting irregular red spots on leaves, light green veins, and margin rolling; (b) grapevine plant with GLRaV-1 symptoms exhibiting red pixelated blotches; (c) grapevine plant with GLRaV-2 and GRBV symptoms comprising red pixels on the leaf and margin rolling; (d) grapevine plant with GRBV symptoms, with red pixelated blotches on the margin; (e–g) grapevine plant with GLRaV-3 symptoms, exhibiting red blotches and light green veins; (h) grapevine plant with GLRaV-3 and GRBV symptoms, presenting irregular red spots on the leaves, light green veins, and margin rolling.
Figure 2. Methodology for sample collection and classification.
Figure 3. Methodology for data collection.
Figure 4. Data selection for training considering environmental conditions.
Figure 5. YOLOv5 architecture used to detect the grapevine diseases’ key features.
Figure 6. Selection of hardware for field and laboratory GLD and GRBD symptom detection.
Figure 7. Methodology for using the model in the field or laboratory.
Figure 8. The dataset included the conditions (a) occlusion, (b) lighting fluctuations, (c) obstacles, and (d) leaf overlapping.
Figure 9. Total number of images per class for the five versions of the YOLOv5 model (left) and visual features of the images in each class (right): (a) classification of individual leaves for model YOLOv5-v1; (b) classification of individual leaves for model YOLOv5-v2; (c) classification of asymptomatic grapevine leaves for model YOLOv5-v3; (d) classification of symptomatic and asymptomatic grapevine leaves, using laboratory and field images, for model YOLOv5-v4; (e) classification of laboratory and field images for model YOLOv5-v5.
Figure 10. Confusion matrices obtained for the five trained versions of YOLOv5 with different datasets: (a) YOLOv5-v1; (b) YOLOv5-v2; (c) YOLOv5-v3; (d) YOLOv5-v4; (e) YOLOv5-v5.
Figure 11. Confusion matrices for the trained models: (a) YOLOv3, (b) YOLOv5, (c) YOLOv8, and (d) ResNet-50.
Figure 12. YOLOv5 symptom detection results under field conditions: (a) occlusion, (b) lighting fluctuations, (c) obstacles, and (d) leaf overlapping.
Figure 13. Confusion matrix for the test of YOLOv5 versus molecular diagnosis.
Figure 14. Procedure for using the GRBD and GLD symptom detection system running on the Jetson Nano edge computing device.
Figure 15. Graphical user interface (GUI) with classification results for (a) symptomatic laboratory image, (b) asymptomatic laboratory image, (c) symptomatic field image, and (d) asymptomatic field image.
Table 1. Comparative analysis of DL classification methods for grapevine diseases.
Reference | Contributions | Algorithm/Model | Dataset | Results | Year
This work | Identification of symptoms related to GLD and GRBD in grapevines (Vitis vinifera) | DL, CNN, and YOLOv5 | 3198 grapevine leaf images | YOLOv5 achieved an Accuracy of 95.36%, overall Recall of 95.77%, and F1-score of 95.56% | 2025
Kunduracioglu et al. [33] | Accurate classification of grapevine leaves and diagnosis of grape diseases | Performance comparison of 14 CNN and 17 vision Transformer models | 4062 images from the PlantVillage dataset and 500 images from the Grapevine dataset | 4 models reached an Accuracy of 100% for both datasets | 2024
Elsherbiny et al. [34] | Rapid grapevine diagnosis using DL | CNN, LSTM, DNN, transfer learning with VGG16, VGG19, ResNet50, and ResNet101V2 | 295 images from the PlantVillage dataset | Validation Accuracy, Precision, Recall, and F1-score of 96.6% and an intersection over union of 93.4% | 2024
Sawyer et al. [31] | Detection of GLD and GRBD in grapevine leaves | RF and 3D CNN | 500 hyperspectral images | The CNN model performed better, with an average Precision of 87% against 82.8% from the RF model | 2023
Pinheiro et al. [35] | Grape bunch detection and identification of biophysical lesions | YOLOv5x6, YOLOv7-E6E, and YOLOR-CSP-X | 910 images | YOLOv7 achieved the best results with a Precision of 98%, a Recall of 90%, an F1-score of 94%, and a mAP of 77% | 2023
Wang et al. [36] | Identification of 15 grape diseases | Improved YOLOXS and Convolutional Block Attention Module (CBAM) | China State Key Laboratory of Plant Pest Biology dataset | Average Precision of 99.1% | 2023
Schieck et al. [37] | Grapevine growth stage recognition using DL models | ResNet, DenseNet, and InceptionV3 | Grapevine growth stage dataset (BBCH 71–79) | ResNet achieved the best classification results with an average Accuracy of 88.1% | 2023
Gao et al. [17] | Identification of GLRaV-3 virus during asymptomatic and symptomatic stages of GLD | Least squares support vector machine (LS-SVM) | 500 hyperspectral images | Classifier Precision between 66.67% and 89.93% | 2020
Table 2. Summary of the main characteristics of the three edge computing devices tested: a personal computer (laptop), NVIDIA Jetson Nano, and Raspberry Pi 4.
Edge Computing Device | CPU | GPU | RAM | Cost [USD]
Personal computer (laptop) | Ryzen 7 5800H | RTX 3050 | 40 GB | 1000.00
Jetson Nano | Quad-core ARM Cortex-A57 | 128-core Maxwell | 2 GB LPDDR4 | 149.00
Raspberry Pi 4 | Quad-core ARM Cortex-A72 | Broadcom VideoCore VI | 4 GB LPDDR4 | 72.80
Table 3. Composition of the dataset by year of acquisition, grapevine cultivar, and molecular diagnosis.
Year | Number of Images | Grapevine Cultivar | Leaves with Molecular Diagnosis Photographed
2023 | 989 | Tempranillo, Syrah, Cabernet Sauvignon, Malbec, Nebbiolo, Barbera, Chenin blanc, Thompson, Crimson, Grenache, Red globe, Sauvignon blanc, and Mision | 347
2022 | 1044 | Tempranillo, Syrah, Cabernet Sauvignon, Chenin blanc, Colombard, Malbec, Nebbiolo, Merlot, Chardonnay, Grenache, Red globe, Carignan, and Petite Syrah | 453
2021 | 1142 | Cabernet Sauvignon, Nebbiolo italiana, Merlot, and Nebbiolo | 0
2019 | 13 | Gamay, Nebbiolo, Mounedre, Petit verdot, Merlot, Cabernet Sauvignon, Mision, and Crimson | 0
2018 | 10 | Nebbiolo, Temporal, Chardonnay, and Tempranillo | 0
Total | 3198 | 23 different cultivars | 800
Table 4. Characteristics of the cellphone cameras used for image acquisition.
Model | Resolution | Wide Angle Aperture | Ultra-Wide Angle Aperture | Telephoto Lens | Image Format
iPhone 8 | 12 MP | ƒ/1.8 | NA | NA | HEIF and JPEG
iPhone 10 | 12 MP | ƒ/1.8 | NA | ƒ/2.4 lens aperture | HEIF and JPEG
iPhone 13 | 12 MP | ƒ/1.6 | ƒ/2.4 lens aperture, 120° field of view | NA | HEIF and JPEG
iPhone 14 | 12 MP | ƒ/1.5 | ƒ/2.4 lens aperture, 120° field of view | NA | HEIF and JPEG
Table 5. YOLOv5 hyperparameter configuration.
Hyperparameter | Value
Image size (--img) | 416
Batch size (--batch) | 5
Number of epochs (--epochs) | 30
Data configuration file (--data) | data.yaml
Pre-trained weights (--weights) | yolov5s.pt
Experiment name (--name) | yolov5s_results_EN
Device (--device) | 1
Cache images (--cache) | Enabled
Learning rate | 0.01 (default initial value)
Optimizer | SGD (Stochastic Gradient Descent)
Data Augmentation Hyperparameter | Value
HSV Hue | 0.015
HSV Saturation | 0.7
HSV Value | 0.4
Translate | 0.1
Scale | 0.5
Flip left–right | 0.5
Mosaic | 1
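For reproducibility, the sketch below shows how the Table 5 settings map onto the standard train.py command line of the public Ultralytics YOLOv5 repository [42]. It is a minimal illustration rather than the authors' exact training script, and it assumes the repository is cloned locally and that data.yaml (listed in the table) describes the grapevine dataset.

```python
# Minimal sketch: launching YOLOv5 training with the Table 5 configuration.
# Assumes the Ultralytics YOLOv5 repository is cloned and data.yaml exists.
import subprocess

subprocess.run(
    [
        "python", "train.py",
        "--img", "416",              # input image size
        "--batch", "5",              # batch size
        "--epochs", "30",            # number of epochs
        "--data", "data.yaml",       # dataset configuration file
        "--weights", "yolov5s.pt",   # pre-trained weights
        "--name", "yolov5s_results_EN",
        "--device", "1",             # GPU index
        "--cache",                   # cache images for faster training
    ],
    check=True,
)
```

The augmentation values in Table 5 (HSV 0.015/0.7/0.4, translate 0.1, scale 0.5, horizontal flip 0.5, mosaic 1) and the initial learning rate of 0.01 coincide with the repository's default hyperparameter file, so no custom --hyp file needs to be passed.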
Table 6. Total number of images in the dataset.
Diagnosis | Asymptomatic | Symptomatic | Total
RT-PCR diagnosis | 200 | 600 | 800
Visual symptoms diagnosis | 1335 | 1063 | 2398
Total | 1535 | 1663 | 3198
Table 7. Metrics for the YOLOv5 model when varying the dataset characteristics.
Metrics | YOLOv5-v1 | YOLOv5-v2 | YOLOv5-v3 | YOLOv5-v4 | YOLOv5-v5
Asymptomatic class Precision | 95.52% | 95.92% | 93.93% | 97.76% | 94.85%
Asymptomatic class error | 4.48% | 4.08% | 6.07% | 2.24% | 5.15%
Symptomatic class Precision | 88.06% | 94.12% | 92.94% | 95.05% | 95.87%
Symptomatic class error | 11.94% | 5.88% | 7.06% | 4.95% | 4.13%
Accuracy | 91.41% | 95.00% | 93.43% | 96.37% | 95.36%
Classification of individual leaves | Yes | Yes | Yes | Yes | Yes
Classification of asymptomatic grapevine leaves | No | No | Yes | Yes | Yes
Classification of symptomatic grapevine leaves | No | No | No | Yes | Yes
Classification of low-resolution images | No | No | No | No | Yes
Table 8. Comparison of the performance metrics of the four DL models.
Model | Classes | Accuracy | Precision | 1-Precision | Recall | 1-Recall | F1-Score
YOLOv3 | Asymptomatic | 0.8750 | 0.8378 | 0.1622 | 0.9300 | 0.0700 | 0.8815
YOLOv3 | Symptomatic | 0.8750 | 0.9213 | 0.0787 | 0.8200 | 0.1800 | 0.8677
YOLOv5 | Asymptomatic | 0.9536 | 0.9485 | 0.0515 | 0.9592 | 0.0408 | 0.9538
YOLOv5 | Symptomatic | 0.9536 | 0.9587 | 0.0413 | 0.9479 | 0.0521 | 0.9533
YOLOv8 | Asymptomatic | 0.8650 | 0.8763 | 0.1237 | 0.8500 | 0.1500 | 0.8629
YOLOv8 | Symptomatic | 0.8650 | 0.8544 | 0.1456 | 0.8800 | 0.1200 | 0.8670
ResNet-50 | Asymptomatic | 0.8516 | 0.8799 | 0.1201 | 0.8143 | 0.1857 | 0.8459
ResNet-50 | Symptomatic | 0.8516 | 0.8272 | 0.1728 | 0.8889 | 0.1111 | 0.8569
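The per-class metrics in Tables 7–9 follow the usual confusion-matrix definitions. The short sketch below is illustrative only (it is not the evaluation code used in this work); it shows these relations and checks, for example, that the YOLOv5 asymptomatic F1-score equals the harmonic mean of the reported Precision and Recall.

```python
# Illustrative relations between the metrics reported in Table 8.
# The tp/fp/fn counts are placeholders; the precision/recall pair below is
# taken from the YOLOv5 asymptomatic row of the table.

def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Per-class Precision, Recall, and F1-score from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r = 0.9485, 0.9592                      # YOLOv5, asymptomatic class
print(f"F1 = {2 * p * r / (p + r):.4f}")   # ≈ 0.9538, matching the table
```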
Table 9. Test of the YOLOv5 model on dataset leaf images versus molecular diagnosis.
Image Resolution | Classes | Accuracy | Precision | 1-Precision | Recall | 1-Recall | F1-Score
240 × 240 | Asymptomatic | 0.8763 | 0.8309 | 0.1691 | 0.9450 | 0.0550 | 0.8843
240 × 240 | Symptomatic | 0.8763 | 0.9362 | 0.0638 | 0.8077 | 0.1923 | 0.8672
480 × 480 | Asymptomatic | 0.9125 | 0.8837 | 0.1163 | 0.9500 | 0.0500 | 0.9157
480 × 480 | Symptomatic | 0.9125 | 0.9459 | 0.0541 | 0.8750 | 0.1250 | 0.9091
640 × 640 | Asymptomatic | 0.9300 | 0.8909 | 0.1091 | 0.9800 | 0.0200 | 0.9333
640 × 640 | Symptomatic | 0.9300 | 0.9778 | 0.0222 | 0.8800 | 0.1200 | 0.9263
Table 10. Benchmarking of edge computing devices for AI systems and personal computer (laptop).
Edge Computing Device | Inference Time 240 × 240 [ms] | Inference Time 480 × 480 [ms] | Inference Time 640 × 640 [ms] | FPS 240 × 240 | FPS 480 × 480 | FPS 640 × 640
Raspberry Pi 4 | 521.4 | 1309.8 | 2160.7 | 1.8181 | 0.9012 | 0.5554
Jetson Nano | 315.2 | 757.3 | 1277.4 | 3.9682 | 1.8181 | 1.0204
Personal computer (laptop) | 10.4 | 10.4 | 10.5 | 114.9425 | 96.15384 | 78.74015
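As a reading aid for Table 10, the sketch below converts a per-image inference time into the throughput it implies. It assumes frames per second (FPS) is simply the reciprocal of latency and ignores capture and pre/post-processing overhead, which is why the measured FPS values in the table can be somewhat lower than this bound; the example latencies are the 240 × 240 values from the table.

```python
# Rough conversion from per-image latency to an upper bound on throughput.
# Latencies below are the 240 x 240 values reported in Table 10.

def fps_upper_bound(latency_ms: float) -> float:
    """Frames per second implied by a per-image latency in milliseconds."""
    return 1000.0 / latency_ms

for device, latency_ms in [("Raspberry Pi 4", 521.4), ("Jetson Nano", 315.2)]:
    print(f"{device}: {latency_ms} ms per image -> "
          f"at most {fps_upper_bound(latency_ms):.2f} FPS")
```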
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
