Analysis of New RGB Vegetation Indices for PHYVV and TMV Identification in Jalapeño Pepper (Capsicum annuum) Leaves Using CNNs-Based Model
Abstract
1. Introduction
2. Results
2.1. Image Processing Results
2.2. Data Description and CNN Architecture Parameters
2.3. Experimental Environment
- The average, maximum, and minimum Top-1 accuracies for each model. Top-1 accuracy counts how often the class with the highest predicted probability matches the expected label, reported as the ratio of correct answers to the total number of answers.
- The precision of each class: the ratio of correctly predicted positive instances to the total number of predicted positive instances, see Equation (1). The associated macro-precision is the average precision over all classes.
- The recall of each class: the ratio of correctly predicted positive instances to the total number of instances in that class, see Equation (2). Macro-recall is the average recall over all classes.
- The F1-score: the harmonic mean of precision and recall, see Equation (3). Macro-F1-score is the average F1-score over all classes.
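The metrics above can be computed directly from predicted and expected labels; the following is an illustrative sketch (not the paper's evaluation code) of Equations (1)–(3) and Top-1 accuracy for the three-class setting:

```python
def classification_metrics(y_true, y_pred, classes):
    """Per-class precision, recall, F1 and their macro averages."""
    per_class = {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0      # Equation (1)
        recall = tp / (tp + fn) if tp + fn else 0.0         # Equation (2)
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)               # Equation (3)
        per_class[c] = (precision, recall, f1)
    n = len(classes)
    macro = tuple(sum(m[i] for m in per_class.values()) / n for i in range(3))
    return per_class, macro

def top1_accuracy(y_true, y_pred):
    """Fraction of samples whose highest-probability class matches the label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```

For example, with labels `["HEALTHY", "HEALTHY", "PHYVV", "TMV"]` and predictions `["HEALTHY", "PHYVV", "PHYVV", "TMV"]`, Top-1 accuracy is 3/4 = 0.75 and HEALTHY precision is 1.0 while its recall is 0.5.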
2.3.1. Models from Scratch
2.3.2. Pre-Trained Models and Data Augmentation
3. Discussion
4. Materials and Methods
4.1. Field Site
4.2. Biological Materials
4.3. Image Dataset
4.4. Vegetation Index Algorithms
4.4.1. Normalized Red–Blue Vegetation Index
4.4.2. Normalized Green–Blue Vegetation Index
4.4.3. Jet Color Scale
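The exact NRBVI and NGBVI definitions are given in the subsections above; as a sketch only, the code below assumes the common normalized-difference form, index = (X − B)/(X + B), applied per pixel to an RGB image, and rescales the result to [0, 1] so a jet colormap (e.g. `matplotlib.cm.jet`) can produce the *-Jet datasets. Verify against the paper's actual equations before reuse.

```python
import numpy as np

def nrbvi(img):
    """Normalized Red-Blue Vegetation Index, assumed form (R - B) / (R + B)."""
    r = img[..., 0].astype(np.float64)
    b = img[..., 2].astype(np.float64)
    return (r - b) / np.maximum(r + b, 1e-9)  # guard against division by zero

def ngbvi(img):
    """Normalized Green-Blue Vegetation Index, assumed form (G - B) / (G + B)."""
    g = img[..., 1].astype(np.float64)
    b = img[..., 2].astype(np.float64)
    return (g - b) / np.maximum(g + b, 1e-9)

def to_colormap_input(index):
    """Rescale an index in [-1, 1] to [0, 1] for a jet colormap lookup."""
    return (index + 1.0) / 2.0
```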
4.5. Deep-Learning Models
- Factorizing convolutions to reduce the number of parameters (connections) without decreasing network efficiency.
- Factorizing into smaller convolutions. For example, a 5 × 5 convolution filter has 25 parameters; replacing it with two stacked 3 × 3 convolution filters requires 3 × 3 + 3 × 3 = 18 parameters, a 28% reduction in the number of parameters.
- Factorizing into asymmetric convolutions, i.e., replacing a standard two-dimensional convolution kernel with two one-dimensional convolution kernels. For example, a 3 × 3 convolution filter (9 parameters) can be replaced by a 1 × 3 convolution filter followed by a 3 × 1 convolution filter (1 × 3 + 3 × 1 = 6 parameters), a 33% reduction in the number of parameters.
- Auxiliary classifiers: small CNNs inserted between layers during training, whose loss is added to the main network loss.
- Efficient Grid Size Reduction.
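The parameter savings quoted for the two factorizations can be verified with a few lines of arithmetic (per input/output channel pair, biases ignored):

```python
def conv_params(kh, kw):
    """Parameters of one kh x kw convolution kernel, per channel pair."""
    return kh * kw

# 5x5 kernel replaced by two stacked 3x3 kernels
full, factored = conv_params(5, 5), 2 * conv_params(3, 3)
reduction_5x5 = 1 - factored / full       # 1 - 18/25 = 0.28, i.e. 28%

# 3x3 kernel replaced by 1x3 followed by 3x1 (asymmetric factorization)
full, asym = conv_params(3, 3), conv_params(1, 3) + conv_params(3, 1)
reduction_3x3 = 1 - asym / full           # 1 - 6/9 = 0.333..., i.e. 33%
```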
4.6. Image Processing Experiment
5. Conclusions
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
Parameters | VGG-16 | Xception | Inception v3 | MobileNet v2 |
---|---|---|---|---|
Optimizer | SGD | SGD | SGD | SGD |
Learning rate | 1e-3 | 1e-4 | 1e-3/1e-4 | 1e-3/1e-4 |
Momentum | 0.9 | 0.9 | 0.9 | 0.9 |
Weight decay | 1e-4 | 1e-5 | 1e-5 | 1e-5 |
Epochs | 100 | 100 | 100 | 100 |
Batch size | 16 | 16 | 16 | 16 |
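The optimizer settings in the table correspond to SGD with momentum and an L2 weight-decay penalty. The function below is an illustrative stand-in for one parameter update (not the paper's training code), folding the decay term into the gradient as frameworks such as Keras and PyTorch do for SGD:

```python
def sgd_step(w, grad, velocity, lr=1e-3, momentum=0.9, weight_decay=1e-4):
    """One SGD update with momentum and L2 weight decay.

    Defaults mirror the VGG-16 row of the hyperparameter table.
    Returns the updated weight and the new velocity.
    """
    g = grad + weight_decay * w       # L2 penalty folded into the gradient
    v = momentum * velocity - lr * g  # momentum accumulates past updates
    return w + v, v
```

A lower learning rate (1e-4) was used for some architectures, which with this update rule simply scales each step by a factor of ten.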
Dataset | VGG-16 (avg/min/max) | Xception (avg/min/max) | Inception v3 (avg/min/max) | MobileNet v2 (avg/min/max) |
---|---|---|---|---|
RGB | 73.3/71.4/75 | 80.3/79.5/81.6 | 77.5/71.4/79.5 | 73.4/69.3/75.0 |
NRBVI | 71.6/70.1/80.2 | 75.0/74.4/81.6 | 76.6/75.5/77.5 | 70.6/65.1/71.6 |
NRBVI-Jet | 73.3/71.3/75.7 | 83.3/81.6/83.6 | 82.3/79.9/85.7 | 77.5/70.0/81.6 |
NGBVI | 70.0/59.1/70.4 | 76.6/63.3/76.9 | 75.5/68.3/79.5 | 70.0/66.6/73.4 |
NGBVI-Jet | 73.1/65.3/75.2 | 82.9/80.2/85.7 | 81.6/79.5/83.6 | 71.4/70.0/73.4 |
Class Name | VGG-16 Precision | VGG-16 Recall | VGG-16 F1-Score | Xception Precision | Xception Recall | Xception F1-Score | Inception v3 Precision | Inception v3 Recall | Inception v3 F1-Score | MobileNet v2 Precision | MobileNet v2 Recall | MobileNet v2 F1-Score |
---|---|---|---|---|---|---|---|---|---|---|---|---|
HEALTHY | 0.74 | 0.85 | 0.79 | 0.87 | 1.0 | 0.93 | 0.8 | 1.0 | 0.89 | 0.7 | 0.95 | 0.81 |
PHYVV | 0.79 | 0.55 | 0.65 | 0.78 | 0.7 | 0.74 | 0.72 | 0.65 | 0.68 | 0.55 | 0.55 | 0.55 |
TMV | 0.7 | 0.8 | 0.74 | 0.84 | 0.8 | 0.82 | 0.82 | 0.7 | 0.76 | 0.92 | 0.6 | 0.73 |
Macro avg | 0.74 | 0.73 | 0.73 | 0.83 | 0.83 | 0.83 | 0.78 | 0.78 | 0.78 | 0.73 | 0.7 | 0.7 |
Dataset | VGG-16 (avg/min/max) | Xception (avg/min/max) | Inception v3 (avg/min/max) | MobileNet v2 (avg/min/max) |
---|---|---|---|---|
RGB | 93.3/91.6/93.6 | 93.3/91.8/93.4 | 91.8/91.6/93.8 | 93.3/91.8/93.8 |
NRBVI | 93.3/91.8/95.9 | 96.6/95.2/97.9 | 93.8/91.6/95.9 | 91.8/83.3/93.8 |
NRBVI-Jet | 91.6/89.7/91.9 | 95.0/91.8/96.6 | 89.7/87.7/90.0 | 87.76/83.3/89.8 |
NGBVI | 96.6/89.7/96.9 | 98.3/95.9/98.9 | 95.0/93.8/97.9 | 92.3/89.7/95.0 |
NGBVI-Jet | 95.0/81.6/95.2 | 93.3/81.2/94.7 | 89.7/85.7/91.8 | 91.8/89.7/93.4 |
Class Name | VGG-16 Precision | VGG-16 Recall | VGG-16 F1-Score | Xception Precision | Xception Recall | Xception F1-Score | Inception v3 Precision | Inception v3 Recall | Inception v3 F1-Score | MobileNet v2 Precision | MobileNet v2 Recall | MobileNet v2 F1-Score |
---|---|---|---|---|---|---|---|---|---|---|---|---|
HEALTHY | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
PHYVV | 0.91 | 1.0 | 0.95 | 0.95 | 1.0 | 0.98 | 0.87 | 1.0 | 0.93 | 0.9 | 0.95 | 0.93 |
TMV | 1.0 | 0.9 | 0.95 | 1.0 | 0.95 | 0.97 | 1.0 | 0.85 | 0.92 | 0.95 | 0.9 | 0.92 |
Macro avg | 0.97 | 0.97 | 0.97 | 0.98 | 0.98 | 0.98 | 0.96 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 |
Input | Operator | Expansion Factor | Output Channels | Number of Repeated Layers | Stride |
---|---|---|---|---|---|
224² × 3 | Conv2d | - | 32 | 1 | 2
112² × 32 | bottleneck | 1 | 16 | 1 | 1
112² × 16 | bottleneck | 6 | 24 | 2 | 2
56² × 24 | bottleneck | 6 | 32 | 3 | 2
28² × 32 | bottleneck | 6 | 64 | 4 | 2
14² × 64 | bottleneck | 6 | 96 | 3 | 1
14² × 96 | bottleneck | 6 | 160 | 3 | 2
7² × 160 | bottleneck | 6 | 320 | 1 | 1
7² × 320 | Conv2d 1 × 1 | - | 1280 | 1 | 1
7² × 1280 | Avgpool 7 × 7 | - | - | 1 | -
1 × 1 × 1280 | Conv2d 1 × 1 | - | - | k | -
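The input column of the table follows directly from the stride column: assuming "same" padding, each stride-2 stage halves the spatial resolution while stride-1 stages preserve it. A quick check of this, starting from the 224 × 224 input:

```python
# Stride column for the Conv2d stage and the eight bottleneck stages above.
strides = [2, 1, 2, 2, 2, 1, 2, 1, 1]

size, sizes = 224, []
for s in strides:
    size //= s            # stride 2 halves the resolution, stride 1 keeps it
    sizes.append(size)

# sizes matches the side lengths in the input column of the following rows:
# [112, 112, 56, 28, 14, 14, 7, 7, 7]
```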
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Yee-Rendon, A.; Torres-Pacheco, I.; Trujillo-Lopez, A.S.; Romero-Bringas, K.P.; Millan-Almaraz, J.R. Analysis of New RGB Vegetation Indices for PHYVV and TMV Identification in Jalapeño Pepper (Capsicum annuum) Leaves Using CNNs-Based Model. Plants 2021, 10, 1977. https://doi.org/10.3390/plants10101977