Article

Using Deep Learning for Image-Based Different Degrees of Ginkgo Leaf Disease Classification

School of Technology, Beijing Forestry University, Beijing 100083, China
*
Author to whom correspondence should be addressed.
Information 2020, 11(2), 95; https://doi.org/10.3390/info11020095
Submission received: 27 December 2019 / Revised: 6 February 2020 / Accepted: 6 February 2020 / Published: 10 February 2020
(This article belongs to the Section Artificial Intelligence)

Abstract

Diseases of Ginkgo biloba have caused great losses to medicine and the economy. If the degree of disease in Ginkgo biloba leaves can be identified automatically, appropriate measures can be taken in advance to avoid these losses. Deep learning has made great achievements in plant disease identification and classification, so convolutional neural network models were used in this paper to classify different degrees of ginkgo leaf disease. This study used the VGGNet-16 and Inception V3 models. After preprocessing and training on 1322 original images taken under laboratory conditions and 2408 original images taken under field conditions, the VGG model achieved 98.44% accuracy under laboratory conditions and 92.19% under field conditions. The Inception V3 model achieved 92.3% accuracy under laboratory conditions and 93.2% under field conditions; thus, the Inception V3 model structure was more suitable for field conditions. To our knowledge, there is very little research on the classification of different degrees of the same plant disease. The success of this study will have a significant impact on the prediction and early prevention of ginkgo leaf blight.

1. Introduction

Plant diseases are a major factor endangering the development of agriculture around the world and cause serious losses every year, so their treatment has attracted great attention. This research focused on identifying ginkgo leaf blight from leaf images. Ginkgo biloba has high medicinal value [1,2], its wood can be made into exquisite furniture, and its leaves have ornamental value. Ginkgo leaf blight has brought great losses to the economy, so there is a need to diagnose ginkgo leaf disease in a timely and accurate manner.
Early detection and identification of plant disease is very important so that appropriate preventive measures can be taken as soon as possible [3]. The changes caused by plant diseases are complex and diverse; however, in traditional agricultural and forestry production, most forest producers judge a disease's species and degree based on their experience in observing plant diseases. This requires forest farmers to have the knowledge and skill to identify disease symptoms. A lack of knowledge leads to inconsistent identification and incorrect treatment, ultimately delaying treatment and resulting in unnecessary economic losses. Even if experts are invited to identify a disease, it takes time. Therefore, it is necessary to implement an automatic system for plant disease recognition and classification.
In the study of automatic classification of plant diseases, some new techniques have been applied [4].
With the development of computational systems in recent years, more and more computer vision technologies have been applied to the recognition of plant diseases. Rumpf et al. [5] used support vector machines for the early detection and identification of healthy sugar beet leaves and leaves showing symptoms of three diseases. Even with this multi-class setting, they achieved an accuracy higher than 86%; depending on the type and stage of disease, the classification accuracy fell between 65% and 90%. Similarly, Tian et al. [6] used a support vector machine to recognize wheat leaf diseases. Their classification module was trained with three feature sets: color features, shape features, and texture features. The method was flexible, and its recognition rate was high.
Computer vision technology is also widely used in plant species and disease classification and recognition [7]. Wäldchen and Mäder provided a detailed systematic review of plant identification using computer vision techniques [8]. One application in this field is Leafsnap, which identifies tree species from photographs of leaves. Kumar et al. used Leafsnap to identify 184 tree species in the Northeastern United States by extracting features from leaf contours [9]. Their system obtained state-of-the-art performance on real-world images from the new Leafsnap data set, which is the largest of its kind.
Artificial neural networks (ANNs) are also a common detection method [10,11,12]. ANNs were proposed based on the achievements of modern neuroscience research [13]. They can make simple judgments by simulating the human brain, so they have been widely used in plant disease detection and recognition. Hati et al. [14] trained an ANN with 400 leaves from 20 plants and tested it on 134 leaves, achieving an accuracy of 92%.
Today, deep learning has become the most important detection method. Deep learning is a kind of machine learning based on deep neural networks with multiple hidden layers. It improves classification accuracy by building models with many hidden layers and training them on a large quantity of data to extract features. Its basic tool is the convolutional neural network (CNN) [15]. In 2012, Krizhevsky, Sutskever, and Hinton entered a CNN in the ImageNet image recognition competition for the first time and won [16], after which CNNs attracted the attention of many researchers. Deep learning has also been introduced to plant species identification. For example, a deep CNN was used to classify white beans, red beans, and soybeans, for which a depth of five layers was determined to be the best [17]. Lee et al. [18] classified 44 species of plants, and their CNN's highest accuracy was 99.6%. Carranza-Rojas et al. [19] applied a CNN with transfer learning to plant specimens.
CNNs have been widely used in plant disease identification. For example, Ferentinos [20] trained CNNs on 25 plant species and 58 distinct [plant, disease] classes and achieved a 99.53% success rate. Sladojevic et al. [21] reported a model able to recognize 13 different types of plant disease and distinguish them from healthy leaves, as well as to distinguish plant leaves from their surroundings. Mohanty et al. [22] trained a deep CNN to identify 14 crops and 26 diseases with 99.35% accuracy. Brahimi et al. [23] trained a CNN on nine diseases of tomato leaves and achieved 99.18% accuracy.
However, most researchers have studied plant diseases in two directions, as follows:
  • Classification of diseases in different species of plants [20,21,22].
  • Classification of different diseases in the same plant [23,24].
Few have studied the classification of disease degree within the same plant disease, and even fewer have applied deep learning to the identification of diseased ginkgo leaves. However, predicting the development of plant leaf disease is important for taking appropriate preventive measures. This research therefore classified the different degrees of ginkgo leaf disease, and a CNN was chosen to classify and recognize the disease degree of ginkgo leaf blight.
The goal of this study was as follows:
  • To classify the different degrees of disease in Ginkgo biloba leaves using a deep learning model, under both laboratory conditions and field conditions that involve variation in sunshine, temperature, weather, and other factors.
The rest of the paper is divided into the following parts: Section 2 introduces the methods used, Section 3 discusses the results, and Section 4 presents our conclusions.

2. Materials and Methods

In this research, deep learning was used to recognize and classify the degree of disease in Ginkgo biloba leaves. The entire process is divided into several subsections to describe it more clearly.

2.1. Data Set

Ginkgo leaves for the laboratory data set were collected from Longyuan Huamu Park from 10 to 12 August 2016, and leaves under field conditions were collected from 3 September to 31 October 2018. The data set contained leaves classified as healthy, mildly diseased, and severely diseased. The laboratory images were taken with a Canon EOS 550D (19 million pixels) in a black box with 50 watts of natural illumination. The leaves were placed on flat white paper and photographed from a consistent distance above the black box; the focal length was set to a fixed value, and the camera lens was perpendicular to the blade surface. Four principles were used in selecting the leaves:
  • The ginkgo leaves had a relatively complete shape.
  • The ginkgo leaves were flat and easy to photograph.
  • The ginkgo leaves' surfaces were clean.
  • The ginkgo leaves' disease characteristics at each stage were clearly distinguishable.
The shooting process under laboratory conditions is shown in Figure 1.
The field photos were taken with a Huawei Honor 7C mobile phone, model LND-AL30 (19 million pixels), under real field conditions. When shooting, a piece of white paper was placed on a board to act as the background of the photo. Figure 2 shows representative sample images of each class (healthy, mild, and severe): three under laboratory conditions and three under field conditions.
For this paper, only some of the images were selected from the originals. The number of original images is shown in Table 1.

2.2. Image Preprocessing and Labeling

We preprocessed the original images to obtain a better feature extraction effect, which also reduced the time needed for network training. The digital camera produced images ranging in size from 1887 to 6770 KB, while the images taken with the Huawei mobile phone ranged from 1270 to 2847 KB. The laboratory photos were cropped, reducing them from 1887–2404 KB to 157–540 KB and from 5184 × 3456 px to 1800 × 1200 px. Similarly, the field photos were rotated and cropped, reducing them from 1270–2847 KB to 298–643 KB, though their dimensions (4160 × 2080 px) remained unchanged. The process is summarized in Table 2.
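The paper does not provide code for this preprocessing step. The following is a minimal sketch of how the resizing could be done with Pillow in Python; the folder names, resampling filter, and JPEG quality are illustrative assumptions, not details from the study.

```python
# Minimal preprocessing sketch (assumption: Pillow; the paper does not name a tool).
# Laboratory images are downscaled from 5184 x 3456 px to 1800 x 1200 px;
# field images keep their 4160 x 2080 px dimensions and are only re-encoded.
import os
from PIL import Image

SRC_DIR = "raw/laboratory"            # hypothetical source folder
DST_DIR = "preprocessed/laboratory"   # hypothetical output folder
TARGET_SIZE = (1800, 1200)            # width x height, as reported in Table 2

os.makedirs(DST_DIR, exist_ok=True)
for name in os.listdir(SRC_DIR):
    if not name.lower().endswith((".jpg", ".jpeg", ".png")):
        continue
    img = Image.open(os.path.join(SRC_DIR, name)).convert("RGB")
    # LANCZOS resampling keeps lesion edges reasonably sharp after downscaling.
    img = img.resize(TARGET_SIZE, Image.LANCZOS)
    # Re-encoding as JPEG at moderate quality also reduces the file size (KB),
    # in line with the reduction shown in Table 2.
    img.save(os.path.join(DST_DIR, os.path.splitext(name)[0] + ".jpg"), quality=90)
```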

2.3. Data Augmentation

Although deep neural networks are very powerful, training them without enough data results in overfitting, which prevents the desired results from being achieved [25]. Many researchers have addressed this problem; for instance, data augmentation has been used to expand the set of training images. The augmentation methods used here were image rotation and cropping. After data augmentation, the new data set reached 15,670 images, of which 5569 were healthy, 5964 were mild, and 4137 were severe. The new data set is shown in Table 3.
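The text names rotation and cropping as the augmentation operations but gives no implementation details. The sketch below illustrates one plausible way to generate rotated and randomly cropped variants with Pillow; the rotation angles, crop fraction, number of crops, and folder names are assumptions.

```python
# Augmentation sketch based on the operations named in the text (rotation and
# cropping). Angles, crop fraction, crop count, and folder names are assumed.
import os
import random
from PIL import Image

def augment(img):
    """Yield rotated and randomly cropped variants of one leaf image."""
    w, h = img.size
    for angle in (90, 180, 270):                  # assumed rotation angles
        yield img.rotate(angle, expand=True)
    for _ in range(3):                            # assumed: three random crops
        cw, ch = int(w * 0.8), int(h * 0.8)       # crop to 80% of each side
        left = random.randint(0, w - cw)
        top = random.randint(0, h - ch)
        yield img.crop((left, top, left + cw, top + ch))

src, dst = "preprocessed/mild", "augmented/mild"  # hypothetical per-class folders
os.makedirs(dst, exist_ok=True)
for name in os.listdir(src):
    img = Image.open(os.path.join(src, name)).convert("RGB")
    img.save(os.path.join(dst, name))             # keep the original image too
    for i, aug in enumerate(augment(img)):
        aug.save(os.path.join(dst, f"{os.path.splitext(name)[0]}_aug{i}.jpg"))
```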

2.4. Convolutional Neural Network Models

The advantage of an ANN is that it can conduct supervised learning during training. A CNN is a kind of deep feedforward ANN. CNNs have achieved great success in various types of image recognition, including plant disease diagnosis, where they can improve diagnostic accuracy. During recognition and classification, a CNN can take the original image directly as input and extract features from it, which greatly reduces the preprocessing required.
The VGGNet-16 [26] and Inception V3 [27] models were chosen to test the data in this work. Compared with AlexNet, VGGNet increases the network depth and reduces the convolution kernel size, which can reduce the parameters and computation, and its generalization performance is very good. VGGNet-19 has a larger number of parameters, so we chose VGGNet-16.
The GoogLeNet model was proposed to reduce the number of parameters; it uses the Inception module. Inception V3 introduced factorization, which splits a large two-dimensional convolution into two smaller one-dimensional convolutions (e.g., a 3 × 3 convolution into a 1 × 3 convolution followed by a 3 × 1 convolution). Figure 3 shows the structure of the Inception V3 module.
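As an illustration of this factorization idea (not the authors' code, and not the full Inception V3 module), the following tf.keras snippet replaces a 3 × 3 convolution with a 1 × 3 convolution followed by a 3 × 1 convolution.

```python
# Illustration of the asymmetric factorization described above: a 3 x 3 convolution
# is replaced by a 1 x 3 convolution followed by a 3 x 1 convolution. This is only
# a sketch of the idea, not the full Inception V3 module.
import tensorflow as tf
from tensorflow.keras import layers

def factorized_conv(x, filters):
    # The 1 x 3 + 3 x 1 pair covers the same 3 x 3 receptive field with
    # roughly one third fewer weights than a single 3 x 3 convolution.
    x = layers.Conv2D(filters, (1, 3), padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, (3, 1), padding="same", activation="relu")(x)
    return x

inputs = tf.keras.Input(shape=(299, 299, 3))
outputs = factorized_conv(inputs, 32)
tf.keras.Model(inputs, outputs).summary()
```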
The ResNet model is much deeper, and the data we wanted to train did not require such a deep network, so ResNet was not chosen.
Based on TensorFlow, the VGGNet-16 and Inception V3 models were trained on the Ginkgo biloba leaf data.
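The paper does not include the model-building code. A possible tf.keras sketch of the two classifiers with a three-class output (healthy, mild, severe) is shown below; whether the authors trained from scratch or fine-tuned ImageNet weights is not stated, so the ImageNet initialization and the size of the dense head are assumptions.

```python
# Sketch of building the two classifiers in tf.keras. ImageNet initialization
# and the classification head are assumptions; the paper does not specify them.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3  # healthy, mild, severe

def build_vgg16():
    base = tf.keras.applications.VGG16(
        include_top=False, weights="imagenet", input_shape=(224, 224, 3))
    x = layers.Flatten()(base.output)
    x = layers.Dense(256, activation="relu")(x)          # assumed head size
    out = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return models.Model(base.input, out)

def build_inception_v3():
    base = tf.keras.applications.InceptionV3(
        include_top=False, weights="imagenet", input_shape=(299, 299, 3))
    x = layers.GlobalAveragePooling2D()(base.output)
    out = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return models.Model(base.input, out)
```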

2.5. Training Data Sets

The data set was divided into two parts: a training set and a test set. Eighty percent of the data were used as the training set to train the network and 20% as the test set. Images were cropped to 224 × 224 px for the VGGNet model and to 299 × 299 px for Inception V3. The two CNN models were trained using the parameters shown in Table 4.
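A training sketch consistent with Table 4 (batch size 64, 224 × 224 or 299 × 299 inputs, 2000 or 4000 steps, learning rates between 0.01 and 0.0001) might look as follows in tf.keras. The 80/20 split is taken from the text, while the optimizer, random seed, and directory layout are assumptions.

```python
# Training sketch using the parameters in Table 4. The optimizer and directory
# layout are assumptions; the paper does not name them.
import tensorflow as tf

IMG_SIZE = (224, 224)     # use (299, 299) for Inception V3
BATCH_SIZE = 64
STEPS = 2000              # 4000 for Inception V3
LEARNING_RATE = 0.001

train_ds = tf.keras.utils.image_dataset_from_directory(
    "augmented", validation_split=0.2, subset="training", seed=1,
    image_size=IMG_SIZE, batch_size=BATCH_SIZE)
test_ds = tf.keras.utils.image_dataset_from_directory(
    "augmented", validation_split=0.2, subset="validation", seed=1,
    image_size=IMG_SIZE, batch_size=BATCH_SIZE)

# In practice the inputs would also be normalized, e.g. with
# tf.keras.applications.vgg16.preprocess_input; omitted here for brevity.
model = build_vgg16()     # from the sketch in Section 2.4
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=LEARNING_RATE),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

epochs = max(1, STEPS * BATCH_SIZE // 15670)   # convert the step budget to epochs
model.fit(train_ds, epochs=epochs, validation_data=test_ds)
model.evaluate(test_ds)
```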

3. Results and Discussion

The models were trained with 80% of the data set and tested with the remaining 20%. The results are presented in Table 5 and Table 6.
For the VGGNet-16 model in Table 5, the initial learning rates were set to 0.01, 0.001, and 0.0001. Under laboratory conditions, the accuracy increased gradually, reached 98.44%, and then stabilized. Under field conditions, the accuracy increased first and then decreased, which differed from the expected outcome; the accuracy was highest (92.19%) at a learning rate of 0.001. To verify whether the accuracy consistently increases first and then decreases as the learning rate decreases, experiments with learning rates of 0.005 and 0.0005 were added. The final results showed that under field conditions, the accuracy did increase first and then decrease as the learning rate decreased. The reason is that below 0.001 the learning rate was too small, resulting in a slow convergence rate and small parameter updates; in that case, the number of training steps should be increased so that the accuracy converges to its best value. Figure 4 and Figure 5 show the accuracy curves of the VGG model under laboratory and field conditions.
For the Inception V3 model in Table 6, the initial learning rates were set to 0.01, 0.001, and 0.0001. Under both laboratory and field conditions, the accuracy decreased uniformly as the learning rate decreased; thus, the accuracy was highest (92.3% and 93.2%, respectively) at a learning rate of 0.01. Below 0.001, the number of training steps should again be increased so that the accuracy converges to its best value. Figure 6 and Figure 7 show the accuracy curves of the Inception V3 model under laboratory and field conditions.
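The learning-rate experiments in Tables 5 and 6 amount to retraining each model at several initial learning rates and recording the test accuracy. A sketch of such a sweep, reusing the names from the training sketch in Section 2.5 and treating the exact loop details as assumptions, is shown below.

```python
# Sketch of the learning-rate sweep behind Tables 5 and 6: each model is
# retrained at each initial learning rate and the test accuracy is recorded.
# Reuses build_vgg16, train_ds, test_ds, and epochs from the earlier sketches.
results = {}
for lr in (0.01, 0.005, 0.001, 0.0005, 0.0001):
    model = build_vgg16()
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=lr),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(train_ds, epochs=epochs, validation_data=test_ds, verbose=0)
    loss, acc = model.evaluate(test_ds, verbose=0)
    results[lr] = (acc, loss)

for lr, (acc, loss) in results.items():
    print(f"learning rate {lr}: accuracy {acc:.2%}, loss {loss:.2f}")
```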
Comparing the two models, the accuracy of the VGG model is higher than that of the Inception V3 model under laboratory conditions, but the accuracy of the Inception V3 model is higher under field conditions. For the VGG model, the accuracy under laboratory conditions is higher than under field conditions, which demonstrates that classification under field conditions is much more difficult and complex than under laboratory conditions. Under field conditions, the increase in network parameters, slow training, and vanishing gradients led to a decrease in accuracy. This shows how important it is to capture photos under field conditions when aiming to automatically detect and classify plant diseases. For the Inception V3 model, the accuracies under the two conditions were close, which means that the Inception V3 model was more stable than the VGG model. In Inception V3, losses are added at different depths to avoid vanishing gradients, and the Inception module is used; through these, the depth and width are increased and the number of parameters is reduced, making the model more stable and adaptable. It can be seen from Figure 7 that the convergence of the accuracy curve is better.
Therefore, when classifying and recognizing diseased leaves under field conditions, the Inception V3 model can be chosen, and it can be said that the model achieved good results. Many other studies have also shown the advantages of using deep learning models in plant disease detection and classification [28].

4. Conclusions

In this study, a CNN was applied to ginkgo leaf diseases, which had not previously been studied in this way, and was used to classify the different degrees of ginkgo leaf disease. This direction differs from those taken by other studies of plant diseases. Of the original images in the data set, 1319 were taken under laboratory conditions and 2408 under field conditions. After preprocessing and data augmentation, the VGG16 and Inception V3 model structures were trained on the data set. The VGG model achieved 98.44% accuracy under laboratory conditions and 92.19% under field conditions; however, these two results were quite different from each other. On the other hand, the Inception V3 model achieved consistent accuracies under both laboratory and field conditions. This showed that the Inception V3 model structure is more suitable for classifying the different degrees of ginkgo leaf disease under field conditions.
The goal of our research has largely been achieved. We believe that with further work, our findings will have a practical impact on the prevention of disease in ginkgo leaves.

Author Contributions

Conceptualization, K.L. and J.L. (Jianhui Lin); methodology, K.L.; validation, J.L. (Jianhui Lin); formal analysis, K.L. and J.L. (Jianhui Lin); resources, K.L. and J.L. (Jinrong Liu); writing—original draft preparation, K.L.; writing—review and editing, K.L. and J.L. (Jianhui Lin); supervision, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Fundamental Research Funds for the Central Universities (No. 2015ZCQ-GX-03).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dekosky, S.T.; Williamson, J.D.; Fitzpatrick, A.L.; Kronmal, R.A.; Ives, D.G.; Saxton, J.A. Ginkgo biloba for prevention of dementia: A randomized controlled trial. JAMA 2008, 300, 2253–2262.
  2. Watanabe, C.M.H.; Wolffram, S.; Ader, P.; Rimbach, G.; Packer, L.; Maguire, J.J. The in vivo neuromodulatory effects of the herbal medicine ginkgo biloba. Proc. Natl. Acad. Sci. USA 2001, 98, 6577–6580.
  3. Sally, A.M.; Beed, F.D.; Harmon, C.L. Plant disease diagnostic capabilities and networks. Annu. Rev. Phytopathol. 2009, 47, 15–38.
  4. Sankaran, S.; Mishra, A.; Ehsani, R. A review of advanced techniques for detecting plant diseases. Comput. Electron. Agric. 2010, 72, 1–13.
  5. Rumpf, T.; Mahlein, A.K.; Steiner, U.; Oerke, E.C.; Dehne, H.W.; Plumer, L. Early detection and classification of plant diseases with Support Vector Machines based on hyperspectral reflectance. Comput. Electron. Agric. 2010, 74, 91–99.
  6. Tian, Y.; Zhao, C.J.; Lu, S.L.; Guo, X.Y. Multiple Classifier Combination for Recognition of Wheat Leaf Diseases. Intell. Autom. Soft Comput. 2011, 17, 519–529.
  7. Wilf, P.; Zhang, S.; Chikkerur, S.; Little, S.A.; Wing, S.L.; Serre, T. Computer vision cracks the leaf code. Proc. Natl. Acad. Sci. USA 2016, 113, 3305–3310.
  8. Wäldchen, J.; Mäder, P. Plant species identification using computer vision techniques: A systematic literature review. Arch. Comput. Method E. 2018, 25, 507–543.
  9. Kumar, N.; Belhumeur, P.N.; Biswas, A.; Jacobs, D.W.; Kress, W.J. Leafsnap: A computer vision system for automatic plant species identification. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2012; pp. 502–516.
  10. Wu, Q.F.; Lin, K.H.; Zhou, C.G. Feature extraction and automatic recognition of plant leaf using artificial neural network. Adv. Artif. Intell. 2007, 3, 5–12.
  11. Hong, F. Extraction of Leaf Vein Features Based on Artificial Neural Network-Studies on the Living Plant Identification I. Chin. Bull. Bot. 2004, 21, 429–436. (In Chinese)
  12. Rastogi, A.; Arora, R.; Sharma, S. Leaf disease detection and grading using computer vision technology and fuzzy logic. In Proceedings of the 2nd International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 19–20 February 2015; pp. 500–505.
  13. Mehrotra, K.; Mohan, C.K.; Ranka, S. Elements of Artificial Neural Networks; A Bradford Book; The MIT Press: Cambridge, MA, USA; London, UK, 1997.
  14. Hati, S.; Sajeevan, G. Plant recognition from leaf image through artificial neural network. IJCA 2013, 62, 15–18.
  15. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
  16. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105.
  17. Grinblat, G.L.; Uzal, L.C.; Larese, M.G.; Granitto, P.M. Deep learning for plant identification using vein morphological patterns. Comput. Electron. Agric. 2016, 127, 418–424.
  18. Lee, S.H.; Chan, C.S.; Wilkin, P.; Remagnino, P. Deep-plant: Plant identification with convolutional neural networks. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 452–456.
  19. Carranza-Rojas, J.; Goeau, H.; Bonnet, P.; Mata-Montero, E.; Joly, A. Going deeper in the automated identification of Herbarium specimens. BMC Evol. Biol. 2017, 17, 181.
  20. Ferentinos, K.P. Deep learning models for plant disease detection and diagnosis. Comput. Electron. Agric. 2018, 145, 311–318.
  21. Sladojevic, S.; Arsenovic, M.; Anderla, A.; Culibrk, D.; Stefanovic, D. Deep Neural Networks Based Recognition of Plant Diseases by Leaf Image Classification. Comput. Intell. Neurosci. 2016, 2016, 3289801.
  22. Mohanty, S.P.; Hughes, D.P.; Salathé, M. Using Deep Learning for Image-Based Plant Disease Detection. Front. Plant Sci. 2016, 17, 1419.
  23. Brahimi, M.; Boukhalfa, K.; Moussaoui, A. Deep learning for tomato diseases: Classification and symptoms visualization. Appl. Artif. Intell. 2017, 31, 299–315.
  24. Lu, Y.; Yi, S.J.; Zeng, N.Y.; Liu, Y.R.; Zhang, Y. Identification of rice diseases using deep convolutional neural networks. Neurocomputing 2017, 267, 378–384.
  25. Hawkins, D.M. The problem of overfitting. J. Chem. Inf. Comput. Sci. 2004, 35, 1–4.
  26. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
  27. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27 June–28 July 2016; pp. 2818–2826.
  28. Schmidhuber, J. Deep Learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117.
Figure 1. The shooting process under laboratory conditions.
Figure 2. Samples of images under laboratory (top) and field (bottom) conditions.
Figure 3. The network structure of Inception V3.
Figure 4. Accuracy curve of the VGG model under laboratory conditions.
Figure 5. Accuracy curve of the VGG model under field conditions.
Figure 6. Accuracy curve of the Inception V3 model under laboratory conditions.
Figure 7. Accuracy curve of the Inception V3 model under field conditions.
Table 1. The number of original images.

Disease Degree | Laboratory Conditions | Field Conditions
Healthy        | 326                   | 87
Mild           | 671                   | 1945
Severe         | 322                   | 376
Table 2. The process of image preprocessing.

                                   | Laboratory Conditions | Field Conditions
Original picture size (KB)         | 1887–6770             | 1270–2847
Original picture size (pixels)     | 5184 × 3456           | 4160 × 2080
Experimental picture size (KB)     | 157–540               | 298–643
Experimental picture size (pixels) | 1800 × 1200           | 4160 × 2080
Table 3. The number of images in the new data set.

Disease Degree | Laboratory Conditions | Field Conditions
Healthy        | 5569                  | 5569
Mild           | 5964                  | 5964
Severe         | 4137                  | 4137
Total          | 15,670                | 15,670
Table 4. The training parameters of the different CNN models.

Parameters    | VGG-16      | Inception V3
Batch size    | 64          | 64
Step          | 2000        | 4000
Input width   | 224         | 299
Input height  | 224         | 299
Learning rate | 0.01–0.0001 | 0.01–0.0001
Table 5. The results of the VGG16 model.

Learning Rate | Laboratory Accuracy | Laboratory Loss | Field Accuracy | Field Loss
0.01          | 93.75%              | 0.15            | 81.25%         | 0.51
0.005         | 98.44%              | 0.05            | 87.50%         | 0.26
0.001         | 98.44%              | 0.02            | 92.19%         | 0.17
0.0005        | 98.44%              | 0.03            | 89.06%         | 0.24
0.0001        | 98.44%              | 0.05            | 85.94%         | 0.37
Table 6. The results of the Inception V3 model.

Learning Rate | Laboratory Accuracy | Field Accuracy
0.01          | 92.30%              | 93.20%
0.001         | 88.60%              | 89.00%
0.0001        | 73.40%              | 73.20%
