Article

YOLO-Based Phenotyping of Apple Blotch Disease (Diplocarpon coronariae) in Genetic Resources after Artificial Inoculation

1 Julius Kühn Institute (JKI)—Federal Research Centre for Cultivated Plants, Institute for Breeding Research on Fruit Crops, Dresden-Pillnitz, Pillnitzer Platz 3a, 01326 Dresden, Germany
2 Leibniz Institute for Agricultural Engineering and Bioeconomy, Department Horticultural Engineering, Max-Eyth-Allee 100, 14469 Potsdam, Germany
* Author to whom correspondence should be addressed.
Agronomy 2024, 14(5), 1042; https://doi.org/10.3390/agronomy14051042
Submission received: 16 April 2024 / Revised: 8 May 2024 / Accepted: 9 May 2024 / Published: 14 May 2024
(This article belongs to the Section Crop Breeding and Genetics)

Abstract

Phenotyping of genetic resources is an important prerequisite for the selection of resistant varieties in breeding programs and research. Computer vision techniques have proven to be a useful tool for the digital phenotyping of diseases of interest. One pathogen that is increasingly observed in Europe is Diplocarpon coronariae, which causes apple blotch disease. In this study, a high-throughput phenotyping method was established to evaluate genetic apple resources for susceptibility to D. coronariae. For this purpose, inoculation trials with D. coronariae were performed in a laboratory and images of infected leaves were taken 7, 9 and 13 days post inoculation. A pre-trained YOLOv5s model was chosen as the basis for the detection model, which was trained on a dataset of 927 RGB images. The images had a size of 768 × 768 pixels and comprised 738 annotated training images, 78 validation images and 111 background images without symptoms. The accuracy of symptom prediction with the trained model was 95%. These results indicate that our model can accurately and efficiently detect spots with acervuli on detached apple leaves. Object detection can therefore be used for the digital phenotyping of detached leaf assays to assess susceptibility to D. coronariae in the laboratory.

1. Introduction

Although phenotyping is one of the most valuable tools in breeding research, it is still common to perform phenotyping manually. However, manual phenotyping is labour-intensive and time-consuming, as collecting and analysing the data requires considerable human input. This can delay breeding progress and limit its effectiveness in identifying desirable traits for breeding programs. Additionally, manual phenotyping is prone to subjectivity and human error, leading to inconsistencies in data collection and interpretation. Observer bias can further compromise the reliability and accuracy of manual phenotyping results [1].
In contrast, high-throughput phenotyping allows objective and standardised assessment across different genetic backgrounds, reducing the potential for human error and increasing the reliability of phenotypic data. The development of high-throughput phenotyping methods is necessary for a rapid and effective evaluation of a wide range of genetic resources. This facilitates breeding research and enables breeders to identify and select plants with desirable traits more efficiently.
In recent years, several high-throughput phenotyping methods based on non-invasive imaging systems have been developed for the detection of foliar diseases. Most research in this field is limited to laboratory studies and relies on images of plant diseases taken in laboratory facilities [2]. Most of these methods are based on deep learning (DL) techniques such as convolutional neural networks (CNNs) [3,4] and have been successfully used to detect foliar diseases such as leaf spot, powdery mildew and rust [5,6,7]. For example, Mohanty et al. [6] developed a detection system for the identification of 26 plant diseases on 14 different plants with a detection accuracy of up to 99.35%. In general, it is important to note that models trained solely on images from a laboratory exhibit significantly lower detection accuracy (approximately 33%) when applied to images captured in field conditions [8]. Object detection approaches developed for disease diagnosis under laboratory conditions should therefore be used primarily for that purpose.
Recently, object detection has become an important tool for plant disease detection. This technique enables the identification and localisation of specific regions of interest in images and is closely related to traditional plant pest detection [2,9,10]. YOLO (You Only Look Once) is one of the most commonly used object detection techniques. Successive revisions of the YOLO algorithm (YOLOv1 to YOLOv9) have improved its accuracy [11]. In particular, the latest improvements, such as the architecture of YOLOv8, have increased the efficiency of the network and enable the detection of even small objects [12] at a fast detection speed [13]. In addition, approaches such as transfer learning offer a way to overcome the typical challenges of training a deep learning algorithm (e.g., high variance, low accuracy or bias) and allow an easy adaptation of pretrained models to a specific dataset [14]. This makes it quick and easy to establish models for a customised dataset that achieve a high level of precision, as demonstrated in studies on rice leaf diseases [15,16], grape leaf lesions [17] and apple leaf diseases [18,19,20,21,22,23,24].
In apple production, apple blotch disease is an increasing problem, especially in organic orchards and meadow orchards [25,26]. Humid and warm conditions (between 20 °C and 25 °C) favour the spread of the fungus Diplocarpon coronariae on apple leaves [27]. The pathogen is particularly widespread in southern Germany [28] due to favourable climatic conditions. Possible measures to counteract an infestation include the cultivation of existing robust varieties or the development of resistant varieties via breeding. As a prerequisite for breeding resistant varieties, highly precise phenotyping methods for disease symptoms are required to identify apple genetic resources with resistance to Diplocarpon coronariae, in addition to studies on resistance inheritance (QTL mapping or GWAS).
Phenotyping is usually performed manually and requires trained personnel with many years of experience with the respective pathogen. Therefore, this step remains the limiting factor in breeding research. The development of high-throughput phenotyping techniques can significantly increase phenotyping throughput and ensure a rapid and effective evaluation of a wide range of genetic resources.
The aim of this study was therefore to establish a high-throughput phenotyping method for the evaluation of apple genetic resources regarding their susceptibility to apple blotch disease. Our approach is based on a pre-trained YOLOv5s model, which was trained with images of infected leaves captured after artificial D. coronariae inoculation in the laboratory. The image dataset contains annotated spots with acervuli as well as images without symptoms to reduce the detection of false-positive symptoms. Having achieved a symptom prediction accuracy of 95%, we provide an easy-to-use YOLOv5s model that can be run on any given computer system. The workflow is available as an open-source GitHub repository with instructions (https://github.com/digijkizo/Apple_blotch_detection/tree/master, accessed on 29 April 2024).

2. Materials and Methods

2.1. Plant Material and Diplocarpon coronariae Resistance Leaf Test

The first step in creating an automatic detection model was to compile an image dataset of D. coronariae-infected leaves. This dataset was required for the training and validation of the detection model.
For the production of infected leaves, 551 apple varieties in 2022 and 80 apple varieties in 2023 were selected and grafted onto the rootstock ‘M.9’ with three replicates each. The plants were then cultivated for six to eight weeks in a greenhouse at 25 °C by day and 20 °C at night under natural light conditions.
The resistance test was performed on detached leaves after artificial inoculation with D. coronariae. For this, four leaves from three plants of each cultivar were collected from the middle of the plant shoot and washed with tap water to remove external contaminants. The leaves were immediately transferred to plastic Petri dishes lined with a layer of filter paper and covered with a paper towel and a metal mesh. Each Petri dish was moistened with 10 mL of tap water. The extraction of the inoculum and the inoculation were performed according to the method of Wöhner et al. [29,30] with modifications reported in Richter et al. [25].

2.2. Imaging of Infected Leaves and Manual Phenotyping

Seven, nine and thirteen days after inoculation (dai), RGB images of up to four infected leaves of each apple variety were taken from the artificial inoculations in 2022 and 2023. For image acquisition, the leaves were positioned on a light table (Magic Studio, Novoflex, Memmingen, Germany) with a light panel (Prolite Scan, Kaiser, Buchen, Germany) underneath. The camera was mounted on a tripod at a distance of 30 cm from the light table. Images of the infected leaves for the training, validation and test dataset were captured using a handheld RGB camera (Canon EOS 70D in 2022 and Canon EOS 90D in 2023). The Canon EOS 70D (Ota City, Tokyo, Japan) has a 20.2-megapixel sensor (3414 × 3648 pixels) and the Canon EOS 90D (Ota City, Tokyo, Japan) has a 32.5-megapixel sensor (5472 × 3648 pixels). For both cameras, a zoom lens with a focal length of 18 mm to 55 mm was used. Each image was taken with the following standard settings: aperture f/5.5, exposure time 1/250 s, ISO 125 and focal length 45 mm. The captured RGB images were used either for the training, validation and testing of the automatic detection model (image_dataset_2023) or to evaluate the accuracy of the trained model (image_dataset_2022).

2.3. Symptom Annotation and Detection Model

An object-based image analysis approach was used to establish an automatic counting of the acervuli spots of D. coronariae. From image_dataset_2023, four selected leaf areas of 1 cm² were cropped from each single image (Figure 1). All images showing large-scale browned leaf areas were excluded. Moreover, it was ensured that no midrib was visible in the selection, as midribs show brown glandular spots in some genotypes, which can easily be confused with acervuli spots. Furthermore, the selected image areas were reduced to a size of 768 × 768 pixels. Smaller images require less memory and are faster to process, making them more suitable for the network size and GPU capacity. Cropping was performed using the batch tool of the image editing program IrfanView (https://www.irfanview.com/, accessed on 1 January 2023). All cropped images were then visually inspected, and images that were not completely filled with leaf area were deleted.
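For illustration, a minimal Python sketch of this cropping step, assuming PIL; since the four leaf areas were selected manually with IrfanView in the study, the folder names and corner-based tile positions below are purely hypothetical:

```python
from pathlib import Path
from PIL import Image

SRC = Path("image_dataset_2023/raw")      # hypothetical input folder
DST = Path("image_dataset_2023/cropped")  # hypothetical output folder
DST.mkdir(parents=True, exist_ok=True)
SIZE = 768  # edge length of the cropped tiles in pixels

for img_path in sorted(SRC.glob("*.jpg")):
    img = Image.open(img_path)
    w, h = img.size
    # Illustrative only: take four tiles from the image corners; in the study,
    # the four 1 cm^2 leaf areas were chosen manually with IrfanView's batch tool.
    for i, (x, y) in enumerate([(0, 0), (w - SIZE, 0), (0, h - SIZE), (w - SIZE, h - SIZE)]):
        img.crop((x, y, x + SIZE, y + SIZE)).save(DST / f"{img_path.stem}_crop{i}.jpg")
```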
Symptom labelling was performed manually by fruit experts using the Computer Vision Annotation Tool (CVAT) [31]. For this purpose, rectangular bounding boxes were drawn around the entire D. coronariae spots and labelled as ‘DC_red’. To increase the efficiency of symptom labelling, a multi-step annotation approach was applied. The aim of this approach was to use a small dataset to train an initial model that already recognised symptoms, even if its accuracy was not yet very high. This meant that not all symptoms had to be annotated by hand; only individual missing or incorrect labels had to be corrected in CVAT. In a first step, an image dataset with 200 images was annotated and used for the initial model training. Subsequently, a further 243 images were pre-labelled with this first trained model. In the second step, these 243 images were imported into CVAT with their corresponding labels in order to add missing labels and correct inaccurate ones. These 243 images from the previous step and the 200 images from the first step were then used for a second model training to further improve the accuracy of the model. In the third step, 373 images of a previously unlabelled dataset were used as test images and labelled with the second trained model. The complete image dataset from the first, second and third steps, with 816 annotated images as well as 111 background images, was then used for the final model training.
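A sketch of how such model-assisted pre-labelling could look, assuming the intermediate model is loaded via YOLOv5's documented torch.hub interface and its detections are written as YOLO-format .txt files for import into CVAT; all paths are hypothetical:

```python
from pathlib import Path
import torch

# Intermediate model trained on the first 200 annotated images
# (weight path hypothetical), loaded via YOLOv5's torch.hub interface.
model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="runs/train/exp/weights/best.pt")

out_dir = Path("prelabels")
out_dir.mkdir(exist_ok=True)

for img_path in sorted(Path("unlabelled").glob("*.jpg")):
    results = model(str(img_path))
    # results.xywhn holds normalised [x_center, y_center, w, h, conf, class]
    # rows per detection, matching the YOLO 1.1 label format that CVAT imports.
    lines = [f"{int(c)} {x:.6f} {y:.6f} {w:.6f} {h:.6f}"
             for x, y, w, h, conf, c in results.xywhn[0].tolist()]
    (out_dir / f"{img_path.stem}.txt").write_text("\n".join(lines))
```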

2.4. Model Training and Testing

In this study, the single-stage model You Only Look Once (YOLO) was chosen, since it detects multiple objects in a single pass and identifies objects very fast [11,32]. The two model versions YOLOv5s and YOLOv8s were tested to compare which model achieves the best accuracy. The open-source platform PyTorch [33] was used as the deep learning framework.
The training image set was composed of 816 cropped images. Additionally, 111 background images with no acervuli spots but with midribs containing brown glandular spots, which can easily be confused with acervuli spots, were included in order to reduce false positives. These 927 images were divided randomly into 837 training images (738 annotated training images and 99 background images) and 90 validation images (78 annotated images and 12 background images). Approximately 90% of the entire image set was used for training and about 10% for validation.
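A minimal sketch of how such a random 90/10 split could be scripted; the directory layout and seed are illustrative assumptions, not taken from the paper:

```python
import random
import shutil
from pathlib import Path

random.seed(0)  # illustrative seed for a reproducible split

images = sorted(Path("dataset/images").glob("*.jpg"))
random.shuffle(images)
n_val = round(0.1 * len(images))  # ~10% validation, as in the paper

for split, files in {"val": images[:n_val], "train": images[n_val:]}.items():
    img_dir = Path("dataset") / split / "images"
    lbl_dir = Path("dataset") / split / "labels"
    img_dir.mkdir(parents=True, exist_ok=True)
    lbl_dir.mkdir(parents=True, exist_ok=True)
    for img in files:
        shutil.copy(img, img_dir / img.name)
        lbl = Path("dataset/labels") / (img.stem + ".txt")
        if lbl.exists():  # background images carry no label file
            shutil.copy(lbl, lbl_dir / lbl.name)
```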
Model training was performed for a maximum of 100 epochs. The image resolution was set to 768 × 768 pixels, and the batch size was set to 8. The hyperparameters were as follows: lr0—0.01, lrf—0.01, momentum—0.937, weight_decay—0.0005, warmup_epochs—3.0 and warmup_momentum—0.8. As the training of deep learning algorithms requires high computing power, the integrated cloud computing environment Google Colaboratory [34] with the open-source framework PyTorch [33] was used. The GPU type was an NVIDIA Tesla T4 with 15,102 MB of total memory.
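Assuming the Ultralytics YOLOv5 repository is cloned locally, a training call with these settings could look roughly as follows; train.run() mirrors the train.py command line, and the dataset and hyperparameter YAML file names are hypothetical:

```python
# Run from within a clone of the Ultralytics YOLOv5 repository;
# train.run() mirrors `python train.py --img 768 --batch 8 --epochs 100 ...`.
import train  # yolov5/train.py

train.run(
    data="dc_spots.yaml",   # hypothetical dataset config (image paths, class 'DC_red')
    weights="yolov5s.pt",   # COCO-pretrained checkpoint used for transfer learning
    imgsz=768,              # image resolution used in the study
    batch_size=8,
    epochs=100,
    hyp="hyp.dc.yaml",      # hypothetical hyperparameter file with lr0=0.01, lrf=0.01,
                            # momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0,
                            # warmup_momentum=0.8
)
```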
The model training was visually evaluated based on the training and validation loss curves throughout all the trained epochs. Additionally, the model was validated using the following parameters: precision (P), the proportion of predicted spots that are correct, and recall (R), the rate of true positives or sensitivity, which measures how well the model finds all positives. As recall increases, precision may decrease; the F1 score therefore captures the most favourable tradeoff between precision and recall. In addition, the mean average precision (mAP) was calculated over several intersection over union (IoU) thresholds, where the IoU indicates the overlap between the ground truth and the predicted bounding box.
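In standard notation, with TP, FP and FN denoting true-positive, false-positive and false-negative detections, these metrics are defined as:

```latex
P = \frac{TP}{TP + FP}, \qquad
R = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2 \cdot P \cdot R}{P + R}, \qquad
\mathrm{IoU} = \frac{\mathrm{area}(B_{\mathrm{pred}} \cap B_{\mathrm{gt}})}{\mathrm{area}(B_{\mathrm{pred}} \cup B_{\mathrm{gt}})}
```

mAP_0.5 averages the per-class average precision at an IoU threshold of 0.5 (with a single class, DC_red, it equals the average precision of that class), and mAP_0.5:0.95 additionally averages over IoU thresholds from 0.5 to 0.95 in steps of 0.05.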

2.5. Comparing Manual and Automated Counting of Acervuli Spots

To confirm the accuracy of the YOLOv5s model for apple blotch detection, the results of the human and automated counting of acervuli spots were compared using image_dataset_2022. In this dataset, the number of acervuli on the leaves was manually counted by fruit experts in all 4794 images. Additionally, the number of acervuli on the leaves was automatically detected using the trained YOLOv5s model. For the YOLOv5s-based detection, the images remained at their original size, with the image resolution set to 3414 × 3648 pixels and both the confidence and the IoU thresholds set to 0.45. The number of detections for each image was read from the label .txt files and summarised in a .csv file.
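A minimal sketch of this counting step, assuming detection was run with YOLOv5's --save-txt option so that each image with detections yields one label file with one line per detected spot (the output directory follows YOLOv5's default layout; file names are illustrative):

```python
import csv
from pathlib import Path

# YOLOv5 detection with --save-txt writes one label file per image,
# one line per detection; images with zero detections get no file,
# so missing files must be counted as 0 when merging with the manual counts.
label_dir = Path("runs/detect/exp/labels")  # YOLOv5's default output location

with open("spot_counts.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["image", "spots_model"])
    for txt in sorted(label_dir.glob("*.txt")):
        writer.writerow([txt.stem, len(txt.read_text().splitlines())])
```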
The Spearman correlation between the number of acervuli spots counted by human experts and automatically counted by the model was analysed and visualized using the ggpubr R package (R ver. 4.3.2).
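The authors ran this analysis with ggpubr in R; purely for illustration, an equivalent check in Python using scipy.stats, with hypothetical file and column names chosen to match the spots_hand/spots_KI naming used in the figures:

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical merged table with one row per image:
# spots_hand = manual count, spots_KI = model count (names as in Figure 3).
counts = pd.read_csv("spot_counts_merged.csv")

rho, p = spearmanr(counts["spots_hand"], counts["spots_KI"])
print(f"Spearman R = {rho:.2f}, p = {p:.2e}")
```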

3. Results

3.1. RGB Image Acquisition of Disease Symptoms and Image Annotation

After artificial inoculation in 2023, 960 images of infected leaves were acquired. The images were captured on 80 different apple cultivars, including traditional and new varieties. After cropping and visual quality inspection, 927 cropped images derived from the 960 original images were available in image_dataset_2023 for subsequent symptom labelling. Of these, 816 images with a size of 768 × 768 pixels were used for symptom labelling with the bounding box method in the Computer Vision Annotation Tool. To improve the training of the model, the images contained acervuli spots at different stages of development (7, 9 and 13 dai). Labels in the YOLO 1.1 format (.txt files) were used to train and validate the model. A total of 4167 symptoms were annotated in the images. A total of 111 images without annotation were used as background images. The distribution of the image dataset is shown in Table 1. The image dataset for the model training is available in the open-source GitHub repository (https://github.com/digijkizo/Apple_blotch_detection/tree/master, accessed on 29 April 2024).

3.2. YOLO Training Results

Image_dataset_2023 was used for the training of the D. coronariae acervuli spot detector (DC_red). The training of the YOLOv5s and YOLOv8s models showed similar results. For the YOLOv8s model, the best model was found after 61 epochs, and 95% of the acervuli spots on infected leaves were correctly detected. For the YOLOv5s model, the best model was found after 31 epochs, and likewise 95% of the acervuli spots on infected leaves were correctly detected. Based on validation using 78 annotated images and 12 images without annotation, the precision (P), recall (R), F1 score and mean average precision (mAP) were calculated for both models (Table 2).
The recall of the YOLOv8s model was 0.91, slightly higher than that of the YOLOv5s model. However, the YOLOv5s model achieved a slightly higher precision of 0.95 compared to 0.93 for the YOLOv8s model. With an IoU threshold of 0.45, the tradeoff between precision and recall resulted in a mean average precision of mAP_0.5 = 0.95 for the detection of acervuli spots with both models. As the YOLOv5s and YOLOv8s models showed similar results, but the YOLOv5s model had slightly higher precision and was generally faster and easier to use, it was used for the subsequent workflow. The YOLOv5s model showed neither overfitting nor underfitting, as indicated by the overlay of the training and validation curves (Figure 2).

3.3. Model Validation

The acervuli spots in a total of 4794 images from image_dataset_2022 were counted manually, and the trained DC model was applied to the same images in order to evaluate the accuracy of the automated detection. Images were taken 7 and 9 days after inoculation, with 355 images of infected leaves at each time point; a further 4084 images of infected leaves were taken 13 days after inoculation. The leaves came from 552 different apple varieties (Table S1). The manually counted spots in the image dataset ranged from 0 to 1398 spots per leaf. Acervuli spots counted by the DC model ranged from 0 to 874 per leaf and were significantly lower than the manual count. However, the Spearman correlation between the manually and automatically counted acervuli spots after 7, 9 and 13 dai showed a strong and significant correlation (R = 0.91, p < 2.2 × 10−16) (Figure 3). Considering the Spearman correlation separately for the different time points after inoculation (dai), differences in the correlation between manual and model-based counting were observed (Figure S1). The highest correlation between manual and model-based counting of acervuli spots was observed at 9 dai with R = 0.96 (p < 2.2 × 10−16). A high and significant correlation was also calculated at 13 dai, with a Spearman rank value of R = 0.91 (p < 2.2 × 10−16). As some leaves showed strong browning at 13 dai, which covered the individual acervuli spots, the Spearman correlation for 13 dai was calculated again without these browned leaves. The correlation value without the browned leaves at 13 dai was then considerably higher, with R = 0.95 (p < 2.2 × 10−16). The lowest Spearman rank correlation between the manual and model-based counting of DC spots was observed at 7 dai with R = 0.78 (p < 2.2 × 10−16).

4. Discussion

In this study, we present a user-friendly tool for counting acervuli spots after the D. coronariae resistance leaf test using deep computer vision. Leaf disc assays or leaf tests using detached leaves have long been an important part of resistance scoring under controlled laboratory conditions. For leaf diseases that are visually easy to recognize under laboratory conditions, several deep learning approaches have been published in recent years [35,36,37]. Some of these approaches are based on image segmentation, such as for the detection of powdery mildew [35,36,38] or apple scab and apple blight [39], which allows an estimation of the infected areas on the leaf. Other approaches are based on object detection, for example, using YOLO. YOLO (You Only Look Once) is a state-of-the-art real-time object detection system that enables the recognition of individual disease spots on leaves [16].
Our approach for the high-throughput phenotyping of the acervuli spots of D. coronariae is based on the latter. The reason for this is the ability of the object detection approach to recognize individual spots, which enables counting disease spots on each leaf. In addition, YOLO can also be used to detect tiny spots, as shown in a study on the control of bacterial spot disease in pepper plants [37] or the detection of small lesions on grapevine leaves [17].
Recent advances in computer vision enable a user-friendly application of previously complex and computationally intensive algorithms. The well-documented open-source project Ultralytics provides an efficient tool to create customized object detection models even for people with limited knowledge of Python programming. Google Colab offers a hosted Jupyter Notebook service that requires no setup to use and provides free access to computing resources, including GPUs and TPUs.

4.1. Image Dataset and Model Training

For the training of our DC detection model, images of acervuli spots from 80 different apple varieties were used. Apple varieties can have morphologically different leaves with an elliptical-to-ovoid shape [40]. Apple leaves can also exhibit different chlorophyll contents, resulting in a lighter or darker green leaf colour. This depends on leaf development, but also on the genotype [41]. This heterogeneity was represented by compiling images of up to four different leaves per genotype and time point in the dataset for model training. At the same time, phenotyping at three different time points after inoculation (dai) was used to test after how many days the symptoms are most clearly visible and best suited for model-based detection.
Another important factor for the establishment of the image dataset was the existence of midribs with brown glandular spots in some genotypes, which can easily be confused with acervuli spots. To avoid the detection of these spots as false positives, selected areas of the leaves without midribs were cropped and used for model training. Additionally, cropped images containing midribs but no acervuli spots were included as background images in the model training.
All images were taken in the laboratory under uniform lighting conditions and against a homogeneous background. This can lead to a less robust detection of acervuli spots in images taken under different light conditions, as is the case in the field [42,43,44]. In a study by Ferentinos [8], a model trained exclusively on images from a laboratory showed a significantly lower accuracy (about 33%) in the identification of images from the field. This shows that image identification under real cultivation conditions is much more difficult and complex than with images from a laboratory [8]. However, it is worth noting that our approach was only developed for the detection of apple blotch symptoms in images of detached leaf tests after artificial inoculation in the laboratory. For this purpose, our DC detection model represents a simple and fast method for apple blotch symptom detection.

4.2. Model Training and Comparing Manual and Automated Counting of Acervuli Spots

The training of the YOLOv5s model based on 927 images showed a high specificity with a validation accuracy of 95% and a validation loss of less than 2%. The false-positive rate was low and brown glandular spots on the midribs could be successfully distinguished from acervuli spots. This is most likely due to the high number of images with brown glandular spots in the background image dataset.
To evaluate the performance of the DC-model-based recognition, we compared the number of symptoms counted by the model with the number counted by the fruit experts. We found that the DC model generally achieved performance matching that of the human experts. However, the degree of agreement varied depending on the time point after inoculation. The lowest agreement between the model-based and manual spot counts was found at 7 dai (R = 0.78). The reason for this was probably that some of the spots were still less pronounced and barely visible. Such weakly pronounced symptoms at an early stage could only be recognised by the human fruit experts. The same applied to more pronounced symptoms on the leaves at 13 dai. As the number of days after inoculation increased, the leaves not only showed individual symptoms but also started to turn brown. The browning of the leaves can mask the individual symptoms. In this case, the human fruit experts were still partially able to recognise individual spots within the brown leaf areas, whereas the recognition of individual symptoms by the DC model was hardly possible. However, if browned leaves were excluded from the comparison, both methods showed a very high correlation (R = 0.95). The best agreement between model-based detection and human counting was observed at 9 dai (R = 0.96). At this point, symptoms were clearly visible and not obscured by browning. For future D. coronariae resistance leaf tests on apple varieties, phenotyping should therefore be performed at 9 dai.
The DC-model-based counting of acervuli spots has many advantages compared to manual counting. It guarantees fast and reproducible counting, allowing researchers to obtain results within minutes rather than weeks of human effort. A certain deviation in the absolute numbers between human and model-based counting is negligible, as susceptibility is ultimately categorised on a scoring scale. Thus, DC-model-based phenotyping can be applied to all apple varieties, enabling the evaluation of a wide range of genetic resources.

5. Conclusions

Using image analysis, particularly deep learning techniques such as YOLO, we have developed a model that can accurately detect and quantify individual disease spots on apple leaves. By training our model with a diverse dataset that includes different apple varieties and time points after inoculation, we ensured a robust detection of D. coronariae across various leaves. This digital detection method eliminates the need for extensive human labour and reduces analysis time from weeks to minutes, providing fast and consistent results. Additionally, the model demonstrates high accuracy and reliability by effectively recognizing subtle symptoms, such as distinguishing true acervuli spots from similar features like brown glandular spots. However, accuracy varies depending on the time after inoculation. For optimal results, phenotyping of apple varieties should ideally be performed around 9 days after inoculation, when symptoms are clearly defined and not obscured by leaf browning.
Overall, our YOLO-model-based phenotyping approach represents an important tool for rapid and reliable disease assessment, facilitating the evaluation of apple varieties for resistance to D. coronariae. This approach opens up possibilities for a comprehensive analysis of genetic resources and contributes to improved breeding research aimed at developing resistant varieties.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/agronomy14051042/s1. Figure S1: A scatter plot of Spearman’s rank correlation coefficient between the number of D. coronariae acervuli spots counted manually on leaves (spots_hand) and counted using the model (spots_KI), calculated separately at 7 dai (A), 9 dai (B) and 13 dai (C). As some leaves showed strong browning at 13 dai, which covered the individual acervuli spots, the Spearman correlation for 13 dai was calculated again without these browned leaves (D). R: Spearman rank correlation coefficient, p: significance level. Table S1: Number of acervuli spots in each single image of image_dataset_2022, counted by a human and by the YOLOv5s model.

Author Contributions

Conceptualization, S.R. (Stefanie Reim) and V.M.; methodology, S.R. (Stefanie Reim), V.M., S.R. (Sophie Richter) and O.L.; software, O.L. and V.M.; validation, S.R. (Stefanie Reim), O.L. and S.R. (Sophie Richter); formal analysis, V.M.; investigation, S.R. (Sophie Richter); resources, S.R. (Sophie Richter); data curation, S.R. (Sophie Richter); writing—original draft preparation, S.R. (Stefanie Reim); writing—review and editing, S.R. (Stefanie Reim) and V.M.; visualization, S.R. (Stefanie Reim) and S.R. (Sophie Richter); supervision, T.W.W.; project administration, T.W.W.; funding acquisition, S.R. (Sophie Richter) and T.W.W. All authors have read and agreed to the published version of the manuscript.

Funding

We gratefully acknowledge the ‘Deutsche Bundesstiftung Umwelt’ for funding the project (file number 20021/716). Furthermore, we acknowledge the Deutsche Genbank Obst (www.deutsche-genbank-obst.de, accessed on 8 May 2024) for providing plant material.

Data Availability Statement

The image dataset for the model training and the detection workflow with instructions are available in the open-source GitHub repository (https://github.com/digijkizo/Apple_blotch_detection/tree/master, accessed on 29 April 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Aeffner, F.; Wilson, K.; Martin, N.; Black, J.; Luengo Hendriks, C.; Bolon, B.; Rudmann, D.; Gianani, R.; Koegler, S.; Krueger, J.; et al. The Gold Standard Paradox in Digital Image Analysis: Manual Versus Automated Scoring as Ground Truth. Arch. Pathol. Lab. Med. 2017, 141, 1267–1275. [Google Scholar] [CrossRef] [PubMed]
  2. Shoaib, M.; Shah, B.; El-Sappagh, S.; Ali, A.; Ullah, A.; Alenezi, F.; Gechev, T.; Hussain, T.; Ali, F. An advanced deep learning models-based plant disease detection: A review of recent research. Front. Plant Sci. 2023, 14, 1158933. [Google Scholar] [CrossRef]
  3. Karthik, R.; Hariharan, M.; Anand, S.; Mathikshara, P.; Johnson, A.; Menaka, R. Attention embedded residual CNN for disease detection in tomato leaves. Appl. Soft Comput. 2020, 86, 105933. [Google Scholar] [CrossRef]
  4. Liu, B.; Ding, Z.; Tian, L.; He, D.; Li, S.; Wang, H. Grape Leaf Disease Identification Using Improved Deep Convolutional Neural Networks. Front. Plant Sci. 2020, 11, 1082. [Google Scholar] [CrossRef]
  5. Genaev, M.A.; Skolotneva, E.S.; Gultyaeva, E.I.; Orlova, E.A.; Bechtold, N.P.; Afonnikov, D.A. Image-Based Wheat Fungi Diseases Identification by Deep Learning. Plants 2021, 10, 1500. [Google Scholar] [CrossRef]
  6. Mohanty, S.P.; Hughes, D.P.; Salathé, M. Using Deep Learning for Image-Based Plant Disease Detection. Front. Plant Sci. 2016, 7, 1419. [Google Scholar] [CrossRef] [PubMed]
  7. Anjna; Sood, M.; Singh, P.K. Hybrid System for Detection and Classification of Plant Disease Using Qualitative Texture Features Analysis. Procedia Comput. Sci. 2020, 167, 1056–1065. [Google Scholar] [CrossRef]
  8. Ferentinos, K.P. Deep learning models for plant disease detection and diagnosis. Comput. Electron. Agric. 2018, 145, 311–318. [Google Scholar] [CrossRef]
  9. Kurmi, Y.; Gangwar, S. A leaf image localization based algorithm for different crops disease classification. Inf. Process. Agric. 2022, 9, 456–474. [Google Scholar] [CrossRef]
  10. Peng, Y.; Wang, Y. Leaf disease image retrieval with object detection and deep metric learning. Front. Plant Sci. 2022, 13, 963302. [Google Scholar] [CrossRef]
  11. Jocher, G.; Chaurasia, A.; Stoken, A.; Borovec, J.; NanoCode012; Kwon, Y.; Michael, K.; TaoXie; Fang, J.; Imyhxy; et al. ultralytics/yolov5: v7.0—YOLOv5 SOTA Realtime Instance Segmentation (v7.0). Zenodo. 2022. Available online: https://zenodo.org/records/7347926 (accessed on 22 November 2022).
  12. Solimani, F.; Cardellicchio, A.; Dimauro, G.; Petrozza, A.; Summerer, S.; Cellini, F.; Renò, V. Optimizing tomato plant phenotyping detection: Boosting YOLOv8 architecture to tackle data complexity. Comput. Electron. Agric. 2024, 218, 108728. [Google Scholar] [CrossRef]
  13. Ma, B.; Hua, Z.; Wen, Y.; Deng, H.; Zhao, Y.; Pu, L.; Song, H. Using an improved lightweight YOLOv8 model for real-time detection of multi-stage apple fruit in complex orchard environments. Artif. Intell. Agric. 2024, 11, 70–82. [Google Scholar] [CrossRef]
  14. Ali, A.H.; Yaseen, M.G.; Aljanabi, M.; Abed, S.A.; Farman, M.; Abbas, M.; Abbas, A. Transfer Learning: A new promising techniques. Mesopotamian J. Big Data 2023, 2023, 29–30. [Google Scholar] [CrossRef]
  15. Zhou, G.; Zhang, W.; Chen, A.; He, M.; Ma, X. Rapid Detection of Rice Disease Based on FCM-KM and Faster R-CNN Fusion. IEEE Access 2019, 7, 143190–143206. [Google Scholar] [CrossRef]
  16. Deari, S.; Ulukaya, S. A Hybrid Multistage Model Based on YOLO and Modified Inception Network for Rice Leaf Disease Analysis. Arab. J. Sci. Eng. 2023, 49, 6715–6723. [Google Scholar] [CrossRef]
  17. Yang, M.; Tong, X.; Chen, H. Detection of Small Lesions on Grape Leaves Based on Improved YOLOv7. Electronics 2024, 13, 464. [Google Scholar] [CrossRef]
  18. Zhong, Y.; Zhao, M. Research on deep learning in apple leaf disease recognition. Comput. Electron. Agric. 2020, 168, 105146. [Google Scholar] [CrossRef]
  19. Khan, A.I.; Quadri, S.; Banday, S.; Latief Shah, J. Deep diagnosis: A real-time apple leaf disease detection system based on deep learning. Comput. Electron. Agric. 2022, 198, 107093. [Google Scholar] [CrossRef]
  20. Zhu, R.; Zou, H.; Li, Z.; Ni, R. Apple-Net: A Model Based on Improved YOLOv5 to Detect the Apple Leaf Diseases. Plants 2022, 12, 169. [Google Scholar] [CrossRef]
  21. Zhu, S.; Ma, W.; Wang, J.; Yang, M.; Wang, Y.; Wang, C. EADD-YOLO: An efficient and accurate disease detector for apple leaf using improved lightweight YOLOv5. Front. Plant Sci. 2023, 14, 1120724. [Google Scholar] [CrossRef]
  22. Di, J.; Li, Q. A method of detecting apple leaf diseases based on improved convolutional neural network. PLoS ONE 2022, 17, e0262629. [Google Scholar] [CrossRef] [PubMed]
  23. Wang, Y.; Wang, Y.; Zhao, J. MGA-YOLO: A lightweight one-stage network for apple leaf disease detection. Front. Plant Sci. 2022, 13, 927424. [Google Scholar] [CrossRef] [PubMed]
  24. Xu, W.; Wang, R. ALAD-YOLO: An lightweight and accurate detector for apple leaf diseases. Front. Plant Sci. 2023, 14, 1204569. [Google Scholar] [CrossRef]
  25. Richter, S.; Höfer, M.; Bohr, A.; Buchleither, S.; Flachowsky, H.; Wöhner, T. Evaluation of the resistance of apple cultivars to Diplocarpon coronariae for the cultivation in meadow orchards. In Proceedings of the Eco-Fruit: 20th International Conference on Organic Fruit Growing, Online, 21–23 February 2022; pp. 74–77. [Google Scholar]
  26. Wöhner, T. Apple blotch disease (Marssonina coronaria (Ellis & Davis) Davis)—Review and research prospects. Eur. J. Plant Pathol. 2018, 153, 657–669. [Google Scholar] [CrossRef]
  27. Sharma, J.N.; Sharma, A.; Sharma, P. Out-break of marssonina blotch in warmer climates causing premature leaf fall problem of apple and its management. Acta Hortic. 2004, 662, 405–409. [Google Scholar] [CrossRef]
  28. Hinrichs-Berger, J.; Müller, G. (Eds.) Vorzeitiger Blattfall an Apfelbäumen in Baden-Württemberg durch Befall mit Marssonina coronaria. In Proceedings of the 58th Deutsche Pflanzenschutztagung “Pflanzenschutz—Alternativlos”, Braunschweig, Germany, 10–14 September 2012; Julius Kühn-Archiv. Technische Universität Braunschweig: Braunschweig, Germany, 2012. [Google Scholar]
  29. Wöhner, T. Evaluation of Malus gene bank resources with German strains of Marssonina coronaria using a greenhouse-based screening method. Eur. J. Plant Pathol. 2018, 153, 743–757. [Google Scholar] [CrossRef]
  30. Wöhner, T.; Emeriewen, O.F.; Höfer, M. Evidence of apple blotch resistance in wild apple germplasm (Malus spp.) accessions. Eur. J. Plant Pathol. 2021, 159, 441–448. [Google Scholar] [CrossRef]
  31. Sekachev, B.; Manovich, N.; Zhiltsov, M.; Zhavoronkov, A.; Kalinin, D.; Hoff, B.; TOsmanov; Kruchinin, D.; Zankevich, A.; DmitriySidnev; et al. opencv/cvat: v1.1.0. Zenodo. 2020. Version v1.1.0. Available online: https://zenodo.org/records/4009388 (accessed on 31 August 2020).
  32. Redmon, J.; Divvala, S.K.; Girshick, R.B.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar] [CrossRef]
  33. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library; Neural Information Processing Systems Foundation, Inc. (NeurIPS): Vancouver, BC, Canada, 2019; Available online: https://arxiv.org/pdf/1912.01703 (accessed on 8 May 2024).
  34. Google Colaboratory. 2023. Available online: https://colab.research.google.com/ (accessed on 1 November 2023).
  35. Bierman, A.; LaPlumm, T.; Cadle-Davidson, L.; Gadoury, D.; Martinez, D.; Sapkota, S.; Rea, M. A High-Throughput Phenotyping System Using Machine Vision to Quantify Severity of Grapevine Powdery Mildew. Plant Phenomics 2019, 2019, 9209727. [Google Scholar] [CrossRef] [PubMed]
  36. Zendler, D.; Malagol, N.; Schwandner, A.; Töpfer, R.; Hausmann, L.; Zyprian, E. High-Throughput Phenotyping of Leaf Discs Infected with Grapevine Downy Mildew Using Shallow Convolutional Neural Networks. Agronomy 2021, 11, 1768. [Google Scholar] [CrossRef]
  37. Mathew, M.P.; Mahesh, T.Y. Leaf-based disease detection in bell pepper plant using YOLO v5. Signal Image Video Process. 2022, 16, 841–847. [Google Scholar] [CrossRef]
  38. Lin, K.; Gong, L.; Huang, Y.; Liu, C.; Pan, J. Deep Learning-Based Segmentation and Quantification of Cucumber Powdery Mildew Using Convolutional Neural Network. Front. Plant Sci. 2019, 10, 155. [Google Scholar] [CrossRef] [PubMed]
  39. Khan, M.A.; Akram, T.; Sharif, M.; Awais, M.; Javed, K.; Ali, H.; Saba, T. CCDF: Automatic system for segmentation and recognition of fruit crops diseases based on correlation coefficient and deep CNN features. Comput. Electron. Agric. 2018, 155, 220–236. [Google Scholar] [CrossRef]
  40. Migicovsky, Z.; Li, M.; Chitwood, D.H.; Myles, S. Morphometrics Reveals Complex and Heritable Apple Leaf Shapes. Front. Plant Sci. 2018, 8, 2185. [Google Scholar] [CrossRef]
  41. Ta, N.; Chang, Q.; Zhang, Y. Estimation of Apple Tree Leaf Chlorophyll Content Based on Machine Learning Methods. Remote Sens. 2021, 13, 3902. [Google Scholar] [CrossRef]
  42. Liu, J.; Wang, X. Tomato diseases and pests detection based on improved Yolo V3 convolutional neural network. Front. Plant Sci. 2020, 11, 898. [Google Scholar] [CrossRef] [PubMed]
  43. Liu, J.; Wang, X. Plant diseases and pests detection based on deep learning: A review. Plant Methods 2021, 17, 22. [Google Scholar] [CrossRef]
  44. Puliti, S.; Astrup, R. Automatic detection of snow breakage at single tree level using YOLOv5 applied to UAV imagery. Int. J. Appl. Earth Obs. Geoinf. 2022, 112, 102946. [Google Scholar] [CrossRef]
Figure 1. A schematic of the training process for the pre-trained YOLOv5s and YOLOv8s model. Selected areas in the uncropped images from dataset_2023 containing leaves with acervuli spots were cropped to image parts with a size of 768 × 768 pixels. The labelled images and the background images without symptoms were used to train a pre-trained YOLOv5s and YOLOv8s model. The adapted YOLOv5s model was used for the detection of acervuli spots in the uncropped images of dataset_2022.
Figure 2. Evaluation parameters during the YOLOv5s training progress over 100 epochs. The following errors in the training and validation set were assessed: train/box_loss and val/box_loss—bounding box regression loss, train/obj_loss and val/obj_loss—the confidence of object presence, train/cls_loss and val/cls_loss—the classification loss (cross entropy). The following metrics for detection accuracy were calculated: precision—the proportion of bounding box predictions that are correct, recall—the proportion of true bounding boxes that were correctly predicted, mAP_0.5—mean average precision at an IoU threshold of 0.5, mAP_0.5:0.95—average mAP over different IoU thresholds, ranging from 0.5 to 0.95.
Figure 3. A scatter plot of Spearman’s rank correlation coefficient between the number of D. coronariae acervuli spots counted manually on leaves (spots_hand) and counted using the model (spots_KI). R: Spearman rank correlation coefficient, p: significance level.
Table 1. Overview of the division of image_dataset_2023 into annotated training and validation images, background images and number of annotations for model training for the detection of D. coronariae acervuli spots on leaves.
Dataset      Image Total (n)    Annotated Images (n)    Background Images (n)    Annotations (n)
Training     837                738                     99                       3642
Validation   90                 78                      12                       525
Total        927                816                     111                      4167
Table 2. Comparison of the training of the YOLOv5s and YOLOv8s model based on precision (P), recall (R) and mean average precision (mAP_50).
Model      Parameters    Layers    Batch Size    Best Epoch    Precision    Recall    F1 Score    mAP_0.5
YOLOv5s    7,022,326     214       8             31            0.95         0.88      0.91        0.95
YOLOv8s    11,135,987    255       8             62            0.93         0.91      0.91        0.95
