Article

Intelligent Monitoring System to Assess Plant Development State Based on Computer Vision in Viticulture

1 Institute of Physics and Technology, V.I. Vernadsky Crimean Federal University, Simferopol 295007, Russia
2 Humanitarian Pedagogical Academy, V.I. Vernadsky Crimean Federal University, Simferopol 295007, Russia
3 Sevastopol Branch, Plekhanov Russian University of Economics, Sevastopol 299053, Russia
4 Institute of Education and Humanities, Sevastopol State University, Sevastopol 299053, Russia
* Author to whom correspondence should be addressed.
Computation 2023, 11(9), 171; https://doi.org/10.3390/computation11090171
Submission received: 22 July 2023 / Revised: 19 August 2023 / Accepted: 1 September 2023 / Published: 3 September 2023

Abstract

Plant health strongly influences agricultural yields, and poor plant health can lead to significant economic losses. Grapes are an important and widely cultivated crop, especially in the southern regions of Russia, and are subject to a number of diseases that require timely diagnosis and treatment; incorrect identification of a disease can lead to large crop losses. A deep learning dataset of 4845 grape disease images was created. Eight categories of common grape diseases typical of the Black Sea region were studied: Mildew, Oidium, Anthracnose, Esca, Gray rot, Black rot, White rot, and bacterial cancer of grapes. In addition, a set of healthy plants was included. This paper presents a new selective search algorithm, based on YOLOv5, for monitoring the state of plant development using computer vision in viticulture. The most difficult part of object detection is object localization; the proposed system achieves fast and accurate detection of grape health status. The test results showed an accuracy of 97.5%, with a model size of 14.85 MB. An analysis of existing publications and patents found using the search query “Computer vision in viticulture” showed that this technology is original and promising. The developed software package implements the best approaches to a monitoring system in viticulture using computer vision technologies, and a mobile application was developed for practical use by farmers. The developed software and hardware complex can be installed in any vehicle; such a mobile system allows for real-time monitoring of the state of the vineyards and displays it on a map. The novelty of this study lies in the integration of software and hardware. The decision support system software can be adapted to solve other similar problems. The commercialization plan for the software product focuses on the automation and robotization of agriculture and will form the basis for a family of similar software.

1. Introduction

Another area of application of computer vision in agriculture is the automation of the process of harvesting and processing crops. Computer vision systems can automatically determine the maturity and size of fruits and vegetables, identify damage and disease, and classify and sort products according to various parameters. This allows one to increase the productivity and quality of agricultural products, as well as reduce the cost of their collection and processing [1,2,3]. Computer vision can also be used to control the quality of planting material and seeds, control the area under crop, determine the level of fertilizers and pesticides, and automatically control and monitor watering and irrigation systems [4,5,6,7]. In general, the use of computer vision in agriculture can improve the productivity, efficiency, and economic efficiency of agricultural production [8,9,10,11,12].
Diseases of grapes are a serious problem for their cultivation in the southern regions of Russia. Grapes are one of the most important agricultural crops, as they represent a significant source of income for local farmers and breeders, and also play an important role in wine production. However, the achievement of high yields of grapes in these regions can be significantly limited by various diseases [13,14,15,16,17].
Two of the most common grape diseases in the southern regions of Russia are Mildew and Oidium [18,19,20]. The fight against these diseases includes the use of fungicides and maintaining optimal air humidity. Other common grape diseases in the southern regions of Russia are gray, black, and white mold. These fungal infections cause plaque on fruits and leaves and can lead to significant yield losses and poor fruit quality [17,19]. Fungicides are commonly used to control these diseases. Also, grapes are prone to diseases such as bacterial cancer and viral infections. Bacterial cancer causes the yellowing and death of leaves and young shoots. This disease can be especially dangerous because there is no effective cure for it. As for viral infections, they can lead to the deformation of leaves and berries, as well as reduce yields. The fight against viruses usually comes down to breeding resistant varieties and destroying infected plants.
At the moment, in Russia, checking the vine for the presence of early signs of the disease is carried out manually. This task is time-consuming as the plots are large and contain several thousand vines. Moreover, human operators can make many mistakes (different training and skills depending on the operator, errors caused by workload or fatigue, etc.) which negatively affect how well the disease is recognized. Automating the diagnosis of early signs of disease is one of the main tasks of smart agriculture. Several methods have been proposed in recent years [20,21,22,23]. Some of these methods are based on classical approaches to image processing, which consist of developing task-oriented segmentation, shape recognition, and feature extraction algorithms. Another approach is based on deep learning and, in particular, convolutional neural networks. This type of neural network allows one to classify, segment, and detect objects by learning representations from raw images. This approach uses available data instead of subjective criteria and specialized algorithms developed by humans. An analysis of existing publications and patents found using the search “Computer vision in viticulture” showed that this technology is promising.
A schematic representation of the stages of creating an intelligent system for monitoring the state of plant development based on computer vision in viticulture is shown in Figure 1.
The developed software and hardware complex can be installed in any vehicle. Such a mobile system will allow for real-time monitoring of the state of the vineyards and will display it on a map. The novelty of this study lies in the integration of software and hardware.
The main objectives of this project were as follows:
-
Develop architecture, algorithms, and an original software and hardware complex based on the computer vision system for detecting and predicting the development of grape diseases.
-
Create a web service and a mobile application for the provision of an intelligent grape disease recognition system service for agricultural enterprises and winegrowers in southern Russia.

2. Materials and Methods

2.1. Diseases of Grapes Typical of the Black Sea Region

Two of the most common grape diseases in the southern regions of Russia are Mildew, i.e., downy mildew (Plasmopara viticola), and Oidium, i.e., powdery mildew (Uncinula necator).
Oidium. The peak of its development falls in the period from mid-June to mid-August, depending on the ripening period of a particular variety. This disease is also characterized by outbreaks in early autumn, since treatments are stopped in the post-harvest period. This fungal disease causes massive damage, affecting immature berries, chlorophyll-containing tissues (leaves and young shoots), and the stems (rachises) of bunches, which leads to rapid drying of the latter. Oidium actively develops in both wet and dry hot conditions: the fungus does not require free water (dew or precipitation) for spore germination.
Long-term observations of Oidium in the vineyards of the southern coast of Crimea indicate the continuity and relative constancy of the intensity of its development over the years, as well as a weaker dependence on weather conditions than that of Mildew. At the same time, a moderate dependence between the intensity of Oidium development on the leaves and the relative air humidity in May was found. The established patterns indicate the importance of mandatory monitoring and short-term forecasting of the development of Oidium (Figure 2).
Mildew (Plasmopara viticola). Requires a large amount of free water (dew or rain).
In recent years, September has been considered the most favorable period for the development of Mildew, when the heat subsides and humidity increases. During this time, the fungus infects the apical leaves of lateral (secondary) shoots. In the temperature range of +10 to +30 °C and at a humidity of 70%, spores germinate after the first prolonged precipitation, and this period lasts for 2–3 months. At the peak of growth, the fungus infects almost all green parts of the bush, penetrating through the stomata into buds and flowers and then into berries, which manifests as wrinkling of the fruit at the stalk followed by drying.
The seasonal dynamics of the epiphytotic process of Mildew in the Southwestern Crimea were determined by hydrothermal conditions. Accordingly, the time of appearance of the first visual signs of the disease varied greatly over the years of the study (2014–2022).
The highest level of disease intensity was recorded in 2015. On average, during this period, Mildew developed moderately on leaves and to a lesser extent on bunches.
Thus, in the ampelocenoses of the Southwestern zone of Crimean viticulture, the development of Mildew was continuous but uneven from year to year. The long-term dynamics of the disease indicate a strong dependence between the development of the disease on the leaves and the amount of precipitation in the period from May to August, which underlines the significant importance of a short-term forecast for the development of grape Mildew (Figure 3).
An analysis of the seasonal dynamics of the epiphytotic process of Oidium and Mildew in the vine plantations of the southern coast of Crimea allows us to state their relative constancy.
Also, in the Black Sea region, there are the following diseases of grapes:
Anthracnose (Gloeosporium ampelophagum). This grape disease develops in spring during cool, damp weather. All young green organs of the plant suffer, as well as unripe berries. A point lesion turns into ulcers, which can ring the shoots and inflorescences, as a result of which the latter break or dry out.
Alternariosis (Alternaria). The fungus develops over a wide temperature range and at any humidity. It overwinters in buds and on plant debris in the soil. In spring, spores are spread over the leaves by air currents and rain splashes. On the green parts and berries, characteristic bulges appear, quickly taking the form of clearly defined brown, red, and dark brown spots, which tend to merge and lead to a large-scale lesion.
Esca is a fungal disease that can affect all parts of the bush and often leads to its death. The disease is caused by several types of fungi, mainly Fomitiporia mediterranea, Phaeomoniella chlamydospora, and Phaeoacremonium aleophilum.
Black rot (Guignardia bidwellii). Requires a large amount of drip-liquid moisture. It develops most actively in a hot climate rich in precipitation. It affects vegetative and chlorophyll-containing organs: leaves, young shoots, and ridges, as well as unripe berries at the stage of cluster formation.
White rot (Coniella diplodiella). The disease affects the above-ground parts of plants and causes maximum damage when it covers berries and shoots on rootstock mother vines growing without support. It also infects the peduncle pads.
Gray rot (Botrytis cinerea). It affects all green organs and berries in any climate and is resistant to adverse conditions.
Bacterial cancer (Agrobacterium tumefaciens) is an incurable disease of cultivated grapes, leading to the death of the bush. There are ground and root forms of the disease. The disease is caused by the aerobic rod-shaped bacterium Agrobacterium tumefaciens, which is found everywhere in all types of soil.

2.2. Comparison of the Characteristics of Neural Networks of Different Architectures Using Computer Vision

In general, an artificial neural network (ANN) is a mathematical model, together with its software implementation, built on the principles of organization and functioning of biological neural networks, i.e., the networks of nerve cells in a living organism. An ANN consists of neurons grouped into layers. A classification of the types of neural networks is given in Table 1 [24,25].
A convolutional neural network (ConvNet/CNN) is a deep learning architecture that takes an image as input, assigns learnable weights and biases to different areas of the image, and distinguishes between them. A distinctive feature of CNNs is that they require less pre-processing than other classification algorithms. With proper training, a CNN also learns its filters and features on its own, unlike primitive hand-crafted methods (see Figure 4) [26,27,28,29].
The architecture of a CNN is inspired by the organization of the visual cortex, in which individual neurons respond only to stimuli in a narrow region of the visual field known as the receptive field; coverage of the entire field of vision is achieved by the combination of such receptive fields. Unlike feedforward networks, which operate on data in the form of vectors, convolutional networks operate on images in the form of tensors, i.e., 3D arrays of numbers. A color image is stored as such a tensor: for each pixel, each channel holds an integer from 0 to 255. Most often, RGB color images are used, with three channels: red, green, and blue. Figure 5 shows an RGB image divided into its three color channels. Besides RGB, there are other color spaces: grayscale, HSV, CMYK, etc.
Given this, consider an 8K image (7680 × 4320 pixels) and what becomes of computing resources: the raw input is enormous. A convolutional neural network is needed to transform the input image in such a way that it is easier to process later without losing the qualities and characteristics that play a significant role in obtaining a reliable prediction. This is also important in terms of scalability to massive datasets.
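As a rough, illustrative sketch (using NumPy; the numbers below are simple arithmetic, not measurements from the authors' system), a single 8K RGB image already holds nearly a hundred million values:

```python
import numpy as np

# An 8K RGB image stored as a height x width x channels tensor,
# with one 8-bit integer (0-255) per channel per pixel.
image = np.zeros((4320, 7680, 3), dtype=np.uint8)

print(image.size)          # 99532800 values in the raw tensor
print(image.nbytes / 1e6)  # 99.5328 -> about 100 MB of raw pixel data
```

Convolutional layers keep this manageable by reusing a small set of kernel weights across the whole image instead of learning a separate weight per pixel.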
Convolutional neural networks are based on filters that recognize certain image characteristics (straight lines, geometric shapes). A filter is one or a collection of kernels. The kernel is a regular matrix of numbers, which are weight coefficients. These weights are adjusted to search for certain characteristics in the image [30,31,32]. The kernel moves along the image and determines the presence or absence of the desired characteristic in a specific part of it. To obtain an answer, the sum of the products of the filter elements and the matrix of the input signals is calculated. Such a process is called a convolution operation; see Figure 6.
If there is a characteristic in the image fragment, the convolution operation produces a number with a relatively large value as an output. If there is no characteristic, the output number will be small [32,33].
Let us analyze the convolution process on a two-dimensional convolution; see Figure 7 and Figure 8.
The 2D convolution operation is the foundation of convolutional neural networks. The algorithm works as follows: the kernel, which is a matrix of weights, slides over the image and, at each position, performs an element-wise multiplication with the input values it is currently over; all the resulting values are then summed into one output pixel. This multiply-and-sum work is repeated for each position the kernel passes over, transforming the input matrix into a feature matrix. The output feature matrix is a matrix of weighted sums of the input features, where the weight values are determined by the kernel.
In order to determine what kernel size is needed for a convolutional network, it is necessary to find out the number of features combined to obtain a new feature at the output. This will determine the size of the kernel [33,34,35,36,37,38].
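The sliding multiply-and-sum described above can be sketched in a few lines (a plain NumPy illustration of a "valid" convolution with stride 1; CNN libraries implement this far more efficiently, and, as is conventional in deep learning, the kernel is applied without flipping):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution, stride 1: at each kernel position, multiply
    element-wise with the covered patch and sum into one output pixel."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1   # output height
    ow = image.shape[1] - kw + 1   # output width
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)  # 5 x 5 input
kernel = np.ones((3, 3)) / 9.0                    # 3 x 3 averaging kernel
features = conv2d(image, kernel)
print(features.shape)  # (3, 3): a 5 x 5 input yields a 3 x 3 feature map
```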
Figure 8 shows a situation in which the input contains 5 × 5 = 25 features and the output contains 3 × 3 = 9 features. If, instead of convolution, a standard fully connected layer were used, the weight matrix would consist of 25 × 9 = 225 parameters. As the example shows, with convolution the subsequent operations can be carried out with only 9 parameters, since each output feature is the result of analyzing only a local region of the input, located in “about the same place”, rather than every input feature [39,40].
The convolution operation can produce two types of results: in the first, the output feature map has a lower dimension than the input; in the other, the dimension either increases or does not change (Table 2). In Table 2, Top-1 and Top-5 accuracy refer to the performance of each model on the ImageNet validation dataset, and depth refers to the topological depth of the network, including activation layers, batch normalization layers, etc. [28,29].
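For reference, the Top-k metrics used in such comparisons reduce to checking whether the true class is among the k highest-scoring predictions. A toy NumPy sketch (the scores and labels below are invented for illustration):

```python
import numpy as np

def topk_accuracy(scores, labels, k):
    """Fraction of samples whose true label is among the k highest scores."""
    topk = np.argsort(scores, axis=1)[:, -k:]  # indices of the k best classes
    hits = [label in row for row, label in zip(topk, labels)]
    return float(np.mean(hits))

# Toy scores for 4 samples over 5 classes; only the score ranks matter.
scores = np.array([
    [0.1, 0.6, 0.1, 0.1, 0.1],    # highest score: class 1
    [0.3, 0.2, 0.4, 0.05, 0.05],  # highest score: class 2
    [0.2, 0.2, 0.2, 0.3, 0.1],    # highest score: class 3
    [0.5, 0.1, 0.1, 0.2, 0.1],    # highest score: class 0
])
labels = np.array([1, 0, 3, 2])

print(topk_accuracy(scores, labels, 1))  # 0.5 (samples 0 and 2 correct)
print(topk_accuracy(scores, labels, 5))  # 1.0 (every label is in the top 5 of 5)
```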

3. Results

3.1. Development of a Technique for Preparing and Marking an Image for Training Neural Networks

A deep learning dataset of 4845 grape disease images was created by the authors. Ten categories of common grape diseases typical of the Black Sea region were studied: Mildew, Oidium, Anthracnose, Esca, Gray rot, Black rot, White rot, Grape rubella, Grape chlorosis, and bacterial cancer of grapes (Table 3).
LabelIMG was used to mark up images for the dataset. This software is often used in the field of machine learning, especially in computer vision tasks where a dataset needs to be labeled to train a model. Figure 9 shows the markup of one of the images for training the model in the first stage.
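LabelIMG can export annotations in the YOLO text format, where each line stores a class index and a bounding box normalized to the image size. A sketch of reading such a line back into pixel coordinates (the annotation string, class index, and function names here are invented for illustration, not taken from the authors' dataset):

```python
def parse_yolo_label(line):
    """Parse one YOLO-format annotation line:
    '<class> <x_center> <y_center> <width> <height>', with the box
    coordinates normalized to [0, 1] relative to the image size."""
    parts = line.split()
    cls = int(parts[0])
    x, y, w, h = map(float, parts[1:])
    return {"class": cls, "x": x, "y": y, "w": w, "h": h}

def to_pixels(box, img_w, img_h):
    """Convert a normalized box to pixel corner coordinates (x1, y1, x2, y2)."""
    x1 = (box["x"] - box["w"] / 2) * img_w
    y1 = (box["y"] - box["h"] / 2) * img_h
    x2 = (box["x"] + box["w"] / 2) * img_w
    y2 = (box["y"] + box["h"] / 2) * img_h
    return x1, y1, x2, y2

# Hypothetical annotation line: class 3, box centered at (0.5, 0.4).
box = parse_yolo_label("3 0.5 0.4 0.2 0.3")
print(to_pixels(box, img_w=640, img_h=480))  # approx. (256, 120, 384, 264)
```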
The dataset markup data for primary training is shown in Figure 10.
The graph of the algorithm's efficiency for the last training run of the model is shown in Figure 11; it can be seen that the efficiency is at a fairly high level.
To assess the quality of the algorithm separately on each class, the precision and recall metrics are used. Precision can be interpreted as the proportion of objects called positive by the classifier that are really positive, while recall shows what proportion of all truly positive objects the algorithm found. Confidence is the probability with which a detected object corresponds to the class assigned to it. The F1 score provides an overall estimate of the trade-off between the precision and recall of the model; it is used in problems where both the correctness of positive predictions and the coverage of all true positive events matter.
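These per-class metrics reduce to simple count arithmetic over true positives (TP), false positives (FP), and false negatives (FN); a sketch with invented counts:

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from counts of true positives,
    false positives, and false negatives for one class."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy counts for one disease class: 90 lesions detected correctly,
# 10 false alarms, 30 lesions missed.
p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=30)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.9 0.75 0.818
```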
Graphs showing the ratios of these quantities for the last training of the model are presented in Figure 12, Figure 13 and Figure 14.
The graphs show that the model has high accuracy and a small number of unrecognized lesions, which indicates correct training, as well as high values of the key indicators and no overfitting of the model.

3.2. Development of the Interface and Program Module of the System

To automate the primary processing of the results, a desktop version of the system was developed, in which a set of images (a dataset) is loaded and then processed for detection using the chosen neural network architecture. To upload a set of images, one must click the “Upload Images” button in the menu bar or sidebar. After the user clicks the button, the program opens the “Image Folder” dialog box to select the folder where the images are stored. To start the detection and recognition of objects in the images, one must press the “Detection” button, located in the side menu. After pressing it, the search for affected areas of grapes in the images starts and the “Detection” tab opens. The result of the object detection is shown in Figure 15 and Figure 16.
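The “Upload Images” step amounts to collecting the image files from a user-selected folder into a batch for detection; a minimal sketch (the extension list and function name are illustrative, not the authors' implementation):

```python
from pathlib import Path

IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png", ".bmp"}

def collect_images(folder):
    """Return the image files in a folder, sorted, as a batch for detection.
    (Illustrative sketch of the 'Upload Images' step, not the authors' code.)"""
    return sorted(p for p in Path(folder).iterdir()
                  if p.suffix.lower() in IMAGE_EXTENSIONS)

# Example: build a batch from a user-selected folder, then feed it
# to the detector one image at a time.
# batch = collect_images("/path/to/vineyard_photos")
```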
To view information about the disease found in the “Detection” tab, one must click the “Information” button; after which, the “Disease Information” window will open with a description of the disease, instructions for treatment, and information about the nearest point that one can go to for help. The result of displaying information about the disease for one of the pictures is shown in the figure.
The window also contains an “Objects” tab, which displays, as a list, similar images in which the disease was found. The page with the output results is shown in Figure 16.
At the moment, a mobile application for the detection of grape diseases is being developed. In the future, the applications will be integrated into a software and hardware complex that will allow for real-time monitoring of the state of fields with grapes.

4. Discussion

The yield of agricultural crops depends directly on the quality of field work, land cultivation, and the timely receipt of information from sown areas through operational monitoring. Using large aircraft for these purposes is expensive and not always effective. Traditional methods of sampling plants and assessing their physical and chemical state in laboratories are time-consuming. In addition, manual collection carries the risk of damaging the crops and destroying the studied specimens during laboratory work [40,41].
Traditional methods of controlling crop growth and development are based entirely on manual labor, and less often on the use of expensive aircraft. Monitoring leaf blades, stems, and soil profiles in this way for the presence of various problems has a number of disadvantages. It is slow and expensive, which in turn slows down the development of the agricultural sector in specific areas. It is also unable to capture the full picture of the state of the crops as a whole. Finally, there are factors directly determined by human work: in particular, labor productivity and compliance with crop cultivation technologies [42,43,44].
Machine vision systems are becoming increasingly common onboard agricultural vehicles (autonomous and non-autonomous) for different tasks. The use of robotic systems equipped with vision-based sensors for site-specific treatments in Precision Agriculture (PA) is seeing continuous growth [45,46,47,48]. A common practice consists of image processing for disease or crop identification. However, complex systems are rare. The developed application is part of the hardware and software complex shown in Figure 17 and Figure 18.
The software and hardware complex can be installed in any vehicle. Such a mobile system will allow for real-time monitoring of the state of the vineyards and will display it on an interactive map (Figure 19). The novelty of this study lies in the integration of software and hardware.
So far, a dataset of images of grape diseases typical of the Black Sea region has been gathered, a neural network for the recognition of grape diseases has been constructed, and a desktop application and a mobile application have been developed. In the experimental version, a hardware complex for automated data collection was implemented. Detailed economic calculations may be considered in future studies. Accuracy scores and metrics will be compared to the existing ones. The software package implemented the best approaches to a control system in viticulture using computer vision technologies.

5. Conclusions

Grapes are very susceptible to damage from diseases, which can be caused by insects or fungi. The effects of this damage are further exacerbated by now-more-frequent events driven by global warming, such as unseasonable temperatures and extreme weather. Given these factors, combined with global instability and rising prices, the need to protect plants from disease cannot be overstated. To this end, this article evaluated the adaptation of deep learning convolutional neural networks to classify grape diseases from images of infected leaves.
Technology has penetrated every aspect of everyday life with varying degrees of success and influence. In this regard, artificial intelligence is one of the most promising and widespread areas for innovation. Deep learning algorithms can be used to solve many research problems. Convolutional neural networks are best suited for image-based applications. The results in this article demonstrate that the transfer learning approach has the potential to achieve robust performance. These results show that it is possible to classify grape diseases in such a way that the system satisfies performance requirements when deployed in the field. Moreover, the approach in this paper works with little overhead, no preprocessing or image processing, and no explicit feature extraction.
LabelIMG was used to prepare the dataset. LabelIMG is a software tool for preparing data used to train deep learning models. It is designed to simplify and speed up the creation of image annotations, a necessary step in computer vision and object recognition.
The use of intelligent solutions using machine vision and video analytics allows companies to achieve benefits that positively affect the overall economic effect:
-
Saving time. A fully automated system not only works much faster, but can also work 24/7 if required.
-
Accuracy. Computer-vision-based decision making allows manufacturing companies to achieve higher levels of accuracy within acceptable tolerances. The combination of special equipment and advanced machine vision algorithms achieves a near-perfect level of precision in production and quality control.
In this study, the authors considered a new selective search algorithm, based on YOLOv5, for monitoring the state of plant development using computer vision in viticulture. The most difficult part of object detection is object localization. There are many ways to search for objects in an image; one of them is to use a sliding window of different sizes. As a result, fast and accurate detection of grape health status was realized. The test results show that the accuracy was 97.5%, with a model size of 14.85 MB.
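The brute-force alternative mentioned above, sliding windows of different sizes, can be sketched as follows; even a tiny image produces many candidate boxes, which is exactly the cost that selective search and YOLO-style detectors avoid (the window sizes and stride are arbitrary illustrative values):

```python
def sliding_windows(img_w, img_h, sizes, stride):
    """Generate (x, y, w, h) candidate boxes by sliding windows of
    several sizes across the image with a fixed stride."""
    boxes = []
    for w, h in sizes:
        for y in range(0, img_h - h + 1, stride):
            for x in range(0, img_w - w + 1, stride):
                boxes.append((x, y, w, h))
    return boxes

boxes = sliding_windows(64, 64, sizes=[(32, 32), (48, 48)], stride=16)
print(len(boxes))  # 13 candidate windows (9 + 4) even for a tiny 64 x 64 image
```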
The software product commercialization plan is focused on the automation and robotization of agriculture, and will become the basis for adding the next set of similar software.
A dataset of images of grape diseases typical of the Black Sea region was created. The developed software and hardware complex can be installed in any vehicle. Such a mobile system will allow for real-time monitoring of the state of the vineyards and will display it on a map. The novelty of this study lies in the integration of software and hardware.

Author Contributions

Conceptualization, A.K. and M.R.; methodology, A.M.; software, M.R.; validation, O.S. and N.O.; writing—review and editing, N.O. and A.K.; project administration, D.N. and A.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mavridou, E.; Vrochidou, E.; Papakostas, G.A.; Pachidis, T.; Kaburlasos, V.G. Machine Vision Systems in Precision Agriculture for Crop Farming. J. Imaging 2019, 5, 89. [Google Scholar] [CrossRef]
  2. Tian, H.; Wang, T.; Liu, Y.; Qiao, X.; Li, Y. Computer vision technology in agricultural automation—A review. Inf. Process. Agric. 2020, 7, 1–19. [Google Scholar] [CrossRef]
  3. Rodríguez-Pulido, F.J.; Gómez-Robledo, L.; Melgosa, M.; Gordillo, B.; González-Miret, M.L.; Heredia, F.J. Ripeness estimation of grape berries and seeds by image analysis. Comput. Electron. Agric. 2012, 82, 128–133. [Google Scholar] [CrossRef]
  4. Barbole, M.D.; Jadhav, D.P. Comparative Analysis of Deep Learning Architectures for Grape Cluster Instance Segmentation. Inf. Technol. Ind. 2021, 9, 344–352. [Google Scholar]
  5. Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
  6. Zhang, C.; Ding, H.; Shi, Q.; Wang, Y. Grape Cluster Real-Time Detection in Complex Natural Scenes Based on YOLOv5s Deep Learning Network. Agriculture 2022, 12, 1242. [Google Scholar] [CrossRef]
  7. Zeng, M.; Gao, H.; Wan, L. Few-Shot Grape Leaf Diseases Classification Based on Generative Adversarial Network. J. Phys. Conf. Ser. 2021, 1883, 012093. [Google Scholar] [CrossRef]
  8. Arnó, J.; Casasnovas, M.; Ribes-Dasi, M.; Rosell, J. Review. Precision Viticulture. Research topics, challenges and opportunities in site-specific vineyard management. Span. J. Agric. Res. 2009, 7, 779–790. [Google Scholar] [CrossRef]
Figure 1. Schematic representation of the stages of creating an intelligent system for monitoring the state of plant development based on computer vision in viticulture.
Figure 2. Long-term dynamics of Oidium development in the Black Sea region of Russia, 2014–2022.
Figure 3. Long-term dynamics of Mildew development in the Black Sea region of Russia, 2014–2022.
Figure 4. Timeline of state-of-the-art object detection methods. The benchmarked methods are marked in red boldface.
Figure 5. RGB image divided into three color channels.
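The channel split shown in Figure 5 can be sketched in a few lines of NumPy; the tiny 4×4 array below is a hypothetical stand-in for a real photograph, which would normally be loaded with an imaging library.

```python
import numpy as np

# Hypothetical 4x4 RGB image (uint8, values 0-255); a real photo would be
# loaded with e.g. Pillow: np.asarray(Image.open("leaf.jpg"))
img = np.arange(4 * 4 * 3, dtype=np.uint8).reshape(4, 4, 3)

# Split the H x W x 3 array into the three color planes of Figure 5
r, g, b = img[..., 0], img[..., 1], img[..., 2]

print(r.shape, g.shape, b.shape)  # each channel is a 4x4 plane
```

Each plane can then be processed independently or fed to a convolutional layer as one of the three input channels.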
Figure 6. Convolution operation.
Figure 7. Two-dimensional convolution.
Figure 8. Two-dimensional convolution process.
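The two-dimensional convolution process of Figures 6–8 amounts to sliding a kernel over the image and summing elementwise products at each position. A naive valid-mode sketch (illustrative only; the image and kernel values below are made up, and real CNN layers use heavily optimized implementations):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution: slide the kernel over the image and
    take the elementwise-product sum at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.array([[1, 2, 3, 0],
                  [0, 1, 2, 3],
                  [3, 0, 1, 2],
                  [2, 3, 0, 1]], dtype=float)
kernel = np.array([[1, 0],
                   [0, 1]], dtype=float)  # simple 2x2 diagonal filter
print(conv2d(image, kernel))  # 3x3 feature map
```

A 4×4 input convolved with a 2×2 kernel yields a 3×3 feature map, which is why stacked convolutions shrink spatial dimensions unless padding is used.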
Figure 9. Image labeling for neural network training using LabelImg.
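LabelImg in YOLO mode stores one line per bounding box: a class index followed by the box center, width, and height, all normalized to [0, 1]. A sketch of converting such a line back to pixel coordinates (the label values and the 640×480 image size here are hypothetical):

```python
# One line of a YOLO-format label file, as produced by LabelImg in YOLO
# mode: class_id x_center y_center width height (normalized to [0, 1]).
# These particular values are made up for illustration.
line = "2 0.500 0.400 0.250 0.300"

class_id, xc, yc, w, h = line.split()
xc, yc, w, h = map(float, (xc, yc, w, h))

# Convert to pixel corner coordinates for a 640x480 image
img_w, img_h = 640, 480
x1 = (xc - w / 2) * img_w
y1 = (yc - h / 2) * img_h
x2 = (xc + w / 2) * img_w
y2 = (yc + h / 2) * img_h
print(int(class_id), round(x1), round(y1), round(x2), round(y2))
```

Because coordinates are normalized, the same label file remains valid when images are resized during training.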
Figure 10. Dataset markup data.
Figure 11. Algorithm efficiency plot.
Figure 12. Recall–confidence graph.
Figure 13. Precision–recall graph.
Figure 14. F1–confidence graph.
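Curves like those in Figures 12–14 are produced by sweeping a confidence threshold over the detector's outputs and recomputing precision, recall, and F1 at each step. A minimal illustration at a single threshold (hypothetical detections, not the authors' evaluation code):

```python
def f1_at_threshold(scores_labels, threshold, total_positives):
    """Precision/recall/F1 for detections kept above a confidence
    threshold. scores_labels: (confidence, is_true_positive) pairs for
    every detection; total_positives: number of ground-truth objects."""
    kept = [tp for conf, tp in scores_labels if conf >= threshold]
    if not kept:
        return 0.0, 0.0, 0.0
    tp = sum(kept)
    precision = tp / len(kept)
    recall = tp / total_positives
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

# Hypothetical detections: (confidence, matched-a-ground-truth-box)
dets = [(0.9, 1), (0.8, 1), (0.7, 0), (0.6, 1), (0.3, 0)]
p, r, f1 = f1_at_threshold(dets, 0.5, total_positives=4)
print(round(p, 3), round(r, 3), round(f1, 3))  # -> 0.75 0.75 0.75
```

Repeating this for thresholds from 0 to 1 traces out the recall–confidence, precision–recall, and F1–confidence curves; the threshold maximizing F1 is a common operating point.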
Figure 15. Desktop application: tab describing the detected disease.
Figure 16. Desktop application: tab listing objects with a similar disease.
Figure 17. Areas of application of the software and hardware complex.
Figure 18. Components of the software and hardware complex.
Figure 19. Examples of models based on field monitoring. Fields with diseases and possible affected areas are highlighted in color.
Table 1. Classification of types of neural networks.

| Type of Neural Network | Application Principle | Supervised (+), Unsupervised (−), or Mixed (±) | Scope of Application |
|---|---|---|---|
| Rosenblatt perceptron | Pattern recognition, decision making, forecasting, approximation, data analysis | + | Almost any application, except information optimization |
| Hopfield | Data compression and associative memory | − | The structure of computer systems |
| Kohonen | Clustering, data compression, data analysis, optimization | − | Finance, databases |
| Radial basis functions (RBF network) | Decision making and control, approximation, forecasting | ± | Management structures, neurocontrol |
| Convolutional | Pattern recognition | + | Graphic data processing |
| Pulse (spiking) | Decision making, pattern recognition, data analysis | ± | Prosthetics, robotics, telecommunications, computer vision |
Table 2. Comparison of different architectures of convolutional networks.

| Model | Size, MB | Top-1 Accuracy | Top-5 Accuracy |
|---|---|---|---|
| VGG16 | 528 | 0.787 | 0.946 |
| InceptionV3 | 92 | 0.779 | 0.937 |
| ResNet50 | 98 | 0.749 | 0.921 |
| Xception | 88 | 0.790 | 0.945 |
| InceptionResNetV2 | 215 | 0.803 | 0.953 |
Table 3. Structure of the dataset of grape diseases.

| Number | Name | Number of Images |
|---|---|---|
| 1 | Mildew | 1227 |
| 2 | Oidium | 1250 |
| 3 | Anthracnose | 608 |
| 4 | Esca | 250 |
| 5 | Gray rot | 540 |
| 6 | Black rot | 360 |
| 7 | White rot | 360 |
| 8 | Bacterial cancer of grapes | 250 |
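The per-class counts in Table 3 sum to the 4845 images reported for the dataset. A quick check, which also makes the class imbalance visible (Esca and bacterial cancer have roughly a fifth as many images as Mildew or Oidium, which matters when splitting the data for training):

```python
# Class names and image counts transcribed from Table 3
classes = {
    "Mildew": 1227,
    "Oidium": 1250,
    "Anthracnose": 608,
    "Esca": 250,
    "Gray rot": 540,
    "Black rot": 360,
    "White rot": 360,
    "Bacterial cancer of grapes": 250,
}

total = sum(classes.values())
print(total)  # 4845, matching the dataset size stated in the abstract
```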
Rudenko, M.; Kazak, A.; Oleinikov, N.; Mayorova, A.; Dorofeeva, A.; Nekhaychuk, D.; Shutova, O. Intelligent Monitoring System to Assess Plant Development State Based on Computer Vision in Viticulture. Computation 2023, 11, 171. https://doi.org/10.3390/computation11090171