Vision Transformers in Optimization of AI-Based Early Detection of Botrytis cinerea
Abstract
1. Introduction
2. Materials and Methods
2.1. Experimental Setup and Inoculation Processes
2.2. Multi-Spectral Imaging and Annotation
2.3. Deep Learning Segmentation Models
2.4. Model Training
2.5. Evaluation Metrics
2.6. Cut-and-Paste Augmentation Technique
3. Results and Discussion
3.1. Assessment of Deep Learning Segmentation Models
3.2. Early-Stage Evaluation
3.3. Qualitative Results of Disease Severity Levels
3.4. Qualitative Results of Biotic and Abiotic Plant Stress Factors
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A
Architecture | Encoder | IoU (Healthy) | IoU (Invisible) | IoU (Visible) | Recall (Healthy) | Recall (Invisible) | Recall (Visible)
---|---|---|---|---|---|---|---
PAN | MobileViT-S | 0.742 | 0.247 | 0.360 | 0.848 | 0.435 | 0.612
PAN | MobileViTV2-1.25 | 0.600 | 0.135 | 0.197 | 0.710 | 0.253 | 0.391
MA-Net | MobileViT-S | 0.711 | 9.69 × 10⁻¹⁶ | 0.146 | 0.905 | 9.69 × 10⁻¹⁶ | 0.310
MA-Net | MobileViTV2-1.25 | 0.692 | 9.69 × 10⁻¹⁶ | 0.213 | 0.827 | 9.69 × 10⁻¹⁶ | 0.504
DeepLabV3+ | MobileViT-S | 0.790 | 0.279 | 0.470 | 0.883 | 0.370 | 0.761
U-Net++ | MobileViT-S | 0.778 | 0.293 | 0.470 | 0.880 | 0.303 | 0.635
U-Net++ 1 | MobileViTV2-1.25 | 0.658 | 0.429 | 0.490 | 0.770 | 0.503 | 0.782
U-Net++ 2 | MobileViTV2-1.25 | 0.752 | 0.671 | 0.604 | 0.865 | 0.795 | 0.781
Appendix B
References
- Williamson, B.; Tudzynski, B.; Tudzynski, P.; Van Kan, J.A.L. Botrytis cinerea: The cause of grey mould disease. Mol. Plant Pathol. 2007, 8, 561–580.
- Li, H.; Chen, Y.; Zhang, Z.; Li, B.; Qin, G.; Tian, S. Pathogenic mechanisms and control strategies of Botrytis cinerea causing post-harvest decay in fruits and vegetables. Food Qual. Saf. 2018, 2, 111–119.
- Latorre, B.A.; Elfar, K.; Ferrada, E.E. Gray mold caused by Botrytis cinerea limits grape production in Chile. Cienc. Investig. Agrar. 2015, 42, 305–330.
- Al-Sarayreh, M.; Reis, M.M.; Yan, W.Q.; Klette, R. Potential of deep learning and snapshot hyperspectral imaging for classification of species in meat. Food Control 2020, 117, 107332.
- Romanazzi, G.; Smilanick, J.L.; Feliziani, E.; Droby, S. Integrated management of postharvest gray mold on fruit crops. Postharvest Biol. Technol. 2016, 113, 69–76.
- Leroux, P.; Fritz, R.; Debieu, D.; Albertini, C.; Lanen, C.; Bach, J.; Gredt, M.; Chapeland, F. Mechanisms of resistance to fungicides in field strains of Botrytis cinerea. Pest Manag. Sci. 2002, 58, 876–888.
- Bilkiss, M.; Shiddiky, M.J.A.; Ford, R. Advanced Diagnostic Approaches for Necrotrophic Fungal Pathogens of Temperate Legumes with a Focus on Botrytis spp. Front. Microbiol. 2019, 10, 1889.
- Rosslenbroich, H.-J.; Stuebler, D. Botrytis cinerea—History of chemical control and novel fungicides for its management. Crop Prot. 2000, 19, 557–561.
- Wäldchen, J.; Mäder, P. Machine learning for image based species identification. Methods Ecol. Evol. 2018, 9, 2216–2225.
- Giakoumoglou, N.; Pechlivani, E.M.; Tzovaras, D. Generate-Paste-Blend-Detect: Synthetic dataset for object detection in the agriculture domain. Smart Agric. Technol. 2023, 5, 100258.
- Tsiakas, K.; Papadimitriou, A.; Pechlivani, E.M.; Giakoumis, D.; Frangakis, N.; Gasteratos, A.; Tzovaras, D. An Autonomous Navigation Framework for Holonomic Mobile Robots in Confined Agricultural Environments. Robotics 2023, 12, 146.
- Pechlivani, E.M.; Gkogkos, G.; Giakoumoglou, N.; Hadjigeorgiou, I.; Tzovaras, D. Towards Sustainable Farming: A Robust Decision Support System’s Architecture for Agriculture 4.0. In Proceedings of the 2023 24th International Conference on Digital Signal Processing (DSP), Rhodes (Rodos), Greece, 11–13 June 2023; IEEE: New York, NY, USA, 2023; pp. 1–5.
- Robertson, S.; Azizpour, H.; Smith, K.; Hartman, J. Digital image analysis in breast pathology—From image processing techniques to artificial intelligence. Transl. Res. 2018, 194, 19–35.
- Voulodimos, A.; Doulamis, N.; Doulamis, A.; Protopapadakis, E. Deep Learning for Computer Vision: A Brief Review. Comput. Intell. Neurosci. 2018, 2018, 7068349.
- Saleem, M.H.; Potgieter, J.; Arif, K.M. Plant Disease Detection and Classification by Deep Learning. Plants 2019, 8, 468.
- Shoaib, M.; Shah, B.; El-Sappagh, S.; Ali, A.; Ullah, A.; Alenezi, F.; Gechev, T.; Hussain, T.; Ali, F. An advanced deep learning models-based plant disease detection: A review of recent research. Front. Plant Sci. 2023, 14, 1158933.
- Khan, S.; Naseer, M.; Hayat, M.; Zamir, S.W.; Khan, F.S.; Shah, M. Transformers in Vision: A Survey. ACM Comput. Surv. 2022, 54, 1–41.
- Bahdanau, D.; Cho, K.; Bengio, Y. Neural Machine Translation by Jointly Learning to Align and Translate. arXiv 2016, arXiv:1409.0473.
- Jamil, S.; Piran, M.J.; Kwon, O.-J. A Comprehensive Survey of Transformers for Computer Vision. Drones 2023, 7, 287.
- Sykes, J.; Denby, K.; Franks, D.W. Computer vision for plant pathology: A review with examples from cocoa agriculture. Appl. Plant Sci. 2023, 12, e11559.
- Dhanya, V.G.; Subeesh, A.; Kushwaha, N.L.; Vishwakarma, D.K.; Kumar, T.N.; Ritika, G.; Singh, A.N. Deep learning based computer vision approaches for smart agricultural applications. Artif. Intell. Agric. 2022, 6, 211–229.
- Thisanke, H.; Deshan, C.; Chamith, K.; Seneviratne, S.; Vidanaarachchi, R.; Herath, D. Semantic segmentation using Vision Transformers: A survey. Eng. Appl. Artif. Intell. 2023, 126, 106669.
- Remez, T.; Huang, J.; Brown, M. Learning to Segment via Cut-and-Paste. arXiv 2018, arXiv:1803.06414.
- Dirr, J.; Bauer, J.C.; Gebauer, D.; Daub, R. Cut-paste image generation for instance segmentation for robotic picking of industrial parts. Int. J. Adv. Manuf. Technol. 2024, 130, 191–201.
- Omia, E.; Bae, H.; Park, E.; Kim, M.S.; Baek, I.; Kabenge, K.; Cho, B.-K. Remote Sensing in Field Crop Monitoring: A Comprehensive Review of Sensor Systems, Data Analyses and Recent Advances. Remote Sens. 2023, 15, 354.
- Pechlivani, E.M.; Papadimitriou, A.; Pemas, S.; Giakoumoglou, N.; Tzovaras, D. Low-Cost Hyperspectral Imaging Device for Portable Remote Sensing. Instruments 2023, 7, 32.
- Fahrentrapp, J.; Ria, F.; Geilhausen, M.; Panassiti, B. Detection of Gray Mold Leaf Infections Prior to Visual Symptom Appearance Using a Five-Band Multispectral Sensor. Front. Plant Sci. 2019, 10, 628.
- Sahin, H.M.; Miftahushudur, T.; Grieve, B.; Yin, H. Segmentation of weeds and crops using multispectral imaging and CRF-enhanced U-Net. Comput. Electron. Agric. 2023, 211, 107956.
- Giakoumoglou, N.; Pechlivani, E.M.; Katsoulas, N.; Tzovaras, D. White Flies and Black Aphids Detection in Field Vegetable Crops using Deep Learning. In Proceedings of the 2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS), Genova, Italy, 5–7 December 2022; pp. 1–6.
- Giakoumoglou, N.; Pechlivani, E.-M.; Frangakis, N.; Tzovaras, D. Enhancing Tuta absoluta Detection on Tomato Plants: Ensemble Techniques and Deep Learning. AI 2023, 4, 996–1009.
- Giakoumoglou, N.; Pechlivani, E.M.; Sakelliou, A.; Klaridopoulos, C.; Frangakis, N.; Tzovaras, D. Deep learning-based multi-spectral identification of grey mould. Smart Agric. Technol. 2023, 4, 100174.
- Bhujel, A.; Khan, F.; Basak, J.K.; Jaihuni, H.; Sihalath, T.; Moon, B.-E.; Park, J.; Kim, H.-T. Detection of gray mold disease and its severity on strawberry using deep learning networks. J. Plant Dis. Prot. 2022, 129, 579–592.
- Sánchez, M.G.; Miramontes-Varo, V.; Chocoteco, J.A.; Vidal, V. Identification and Classification of Botrytis Disease in Pomegranate with Machine Learning. In Intelligent Computing; Advances in Intelligent Systems and Computing, Vol. 1229; Arai, K., Kapoor, S., Bhatia, R., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 582–598.
- Ilyas, T.; Khan, A.; Umraiz, M.; Jeong, Y.; Kim, H. Multi-Scale Context Aggregation for Strawberry Fruit Recognition and Disease Phenotyping. IEEE Access 2021, 9, 124491–124504.
- Meng, L.; Audenaert, K.; Van Labeke, M.-C.; Höfte, M. Imaging Detection of Botrytis cinerea on Strawberry Leaves upon Mycelial Infection. SSRN 2023, preprint.
- Wang, C.; Du, P.; Wu, H.; Li, J.; Zhao, C.; Zhu, H. A cucumber leaf disease severity classification method based on the fusion of DeepLabV3+ and U-Net. Comput. Electron. Agric. 2021, 189, 106373.
- Qasrawi, R.; Amro, M.; Zaghal, R.; Sawafteh, M.; Polo, S.V. Machine Learning Techniques for Tomato Plant Diseases Clustering, Prediction and Classification. In Proceedings of the 2021 International Conference on Promising Electronic Technologies (ICPET), Deir El-Balah, Palestine, 17–18 November 2021; IEEE: New York, NY, USA, 2021; pp. 40–45.
- Giakoumoglou, N.; Kalogeropoulou, E.; Klaridopoulos, C.; Pechlivani, E.M.; Christakakis, P.; Markellou, E.; Frangakis, N.; Tzovaras, D. Early detection of Botrytis cinerea symptoms using deep learning multi-spectral image segmentation. Smart Agric. Technol. 2024, 8, 100481.
- O’Sullivan, C. U-Net Explained: Understanding Its Image Segmentation Architecture. Medium. Available online: https://towardsdatascience.com/u-net-explained-understanding-its-image-segmentation-architecture-56e4842e313a (accessed on 24 May 2024).
- Decognet, V.; Bardin, M.; Trottin-Caudal, Y.; Nicot, P.C. Rapid Change in the Genetic Diversity of Botrytis cinerea Populations After the Introduction of Strains in a Tomato Glasshouse. Phytopathology 2009, 99, 185–193.
- La Camera, S.; L’haridon, F.; Astier, J.; Zander, M.; Abou-Mansour, E.; Page, G.; Thurow, C.; Wendehenne, D.; Gatz, C.; Métraux, J.-P.; et al. The glutaredoxin ATGRXS13 is required to facilitate Botrytis cinerea infection of Arabidopsis thaliana plants: Role of ATGRXS13 during B. cinerea infection. Plant J. 2011, 68, 507–519.
- De Meyer, G.; Bigirimana, J.; Elad, Y.; Höfte, M. Induced systemic resistance in Trichoderma harzianum T39 biocontrol of Botrytis cinerea. Eur. J. Plant Pathol. 1998, 104, 279–286.
- Campbell, C.L.; Madden, L.V. Introduction to Plant Disease Epidemiology, 1st ed.; Wiley-Interscience: New York, NY, USA, 1990.
- IBM SPSS Statistics for Windows, Version 27.0; IBM Corp.: Armonk, NY, USA, 2020.
- Roboflow (Version 1.0) [Software]. Available online: https://roboflow.com (accessed on 3 March 2024).
- Zhou, Z.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J. UNet++: A Nested U-Net Architecture for Medical Image Segmentation. arXiv 2018, arXiv:1807.10165.
- Li, H.; Xiong, P.; An, J.; Wang, L. Pyramid Attention Network for Semantic Segmentation. arXiv 2018, arXiv:1805.10180.
- Fan, T.; Wang, G.; Li, Y.; Wang, H. MA-Net: A Multi-Scale Attention Network for Liver and Tumor Segmentation. IEEE Access 2020, 8, 179656–179665.
- Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. arXiv 2018, arXiv:1802.02611.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv 2015, arXiv:1505.04597.
- Chen, L.-C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking Atrous Convolution for Semantic Image Segmentation. arXiv 2017, arXiv:1706.05587.
- Mehta, S.; Rastegari, M. MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer. arXiv 2022, arXiv:2110.02178.
- Mehta, S.; Rastegari, M. Separable Self-attention for Mobile Vision Transformers. arXiv 2022, arXiv:2206.02680.
- Sudre, C.H.; Li, W.; Vercauteren, T.; Ourselin, S.; Cardoso, M.J. Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations. arXiv 2017, arXiv:1707.03237.
- Loshchilov, I.; Hutter, F. Decoupled Weight Decay Regularization. arXiv 2019, arXiv:1711.05101.
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
- Ghiasi, G.; Cui, Y.; Srinivas, A.; Qian, R.; Lin, T.-Y.; Cubuk, E.D.; Le, Q.V.; Zoph, B. Simple Copy-Paste is a Strong Data Augmentation Method for Instance Segmentation. arXiv 2021, arXiv:2012.07177.
- Dvornik, N.; Mairal, J.; Schmid, C. Modeling Visual Context is Key to Augmenting Object Detection Datasets. arXiv 2018, arXiv:1807.07428.
- Dwibedi, D.; Misra, I.; Hebert, M. Cut, Paste and Learn: Surprisingly Easy Synthesis for Instance Detection. arXiv 2017, arXiv:1708.01642.
- Gull, A.; Lone, A.A.; Wani, N.U.I. Biotic and abiotic stresses in plants. In Abiotic and Biotic Stress in Plants; IntechOpen: London, UK, 2019; pp. 1–19.
Class | Set | Count | Proportion
---|---|---|---
0-healthy | Train | 6405 | 88.30%
0-healthy | Valid | 2045 | 89.37%
1-invisible-early | Train | 112 | 1.54%
1-invisible-early | Valid | 56 | 2.44%
2-invisible-late | Train | 83 | 1.14%
2-invisible-late | Valid | 18 | 0.78%
3-visible-light | Train | 106 | 1.46%
3-visible-light | Valid | 20 | 0.87%
4-visible-moderate | Train | 123 | 1.69%
4-visible-moderate | Valid | 39 | 1.70%
5-visible-heavy | Train | 424 | 5.84%
5-visible-heavy | Valid | 110 | 4.80%
Class | Set | Count | Proportion
---|---|---|---
0-healthy | Train | 6405 | 88.30%
0-healthy | Valid | 2045 | 89.37%
1-invisible | Train | 195 | 2.68%
1-invisible | Valid | 74 | 3.23%
2-visible | Train | 653 | 9.00%
2-visible | Valid | 169 | 7.38%
Architecture | Encoder | Parameters | Accuracy | mDSC | mIoU | Recall | Epoch
---|---|---|---|---|---|---|---
PAN | MobileViT-S | 5.09 M | 0.875 | 0.626 | 0.603 | 0.733 | 85
PAN | MobileViTV2-1.25 | 7.16 M | 0.828 | 0.557 | 0.516 | 0.620 | 27
MA-Net | MobileViT-S | 18.67 M | 0.848 | 0.528 | 0.490 | 0.585 | 8
MA-Net | MobileViTV2-1.25 | 23.21 M | 0.847 | 0.546 | 0.501 | 0.613 | 74
DeepLabV3+ | MobileViTV2-1.25 | 8.13 M | 0.899 | 0.666 | 0.652 | 0.767 | 80
U-Net++ | MobileViT-S | 8.96 M | 0.905 | 0.653 | 0.718 | 0.790 | 88
U-Net++ 1 | MobileViTV2-1.25 | 17.84 M | 0.906 | 0.771 | 0.750 | 0.848 | 8
U-Net++ 2 | MobileViTV2-1.25 | 17.84 M | 0.919 | 0.792 | 0.816 | 0.885 | 89
Architecture | Encoder | IoU (Healthy) | IoU (Invisible) | IoU (Visible) | Recall (Healthy) | Recall (Invisible) | Recall (Visible)
---|---|---|---|---|---|---|---
U-Net++ 3 | MobileViTV2-1.25 | 0.658 | 0.429 | 0.490 | 0.770 | 0.503 | 0.782
U-Net++ 4 | MobileViTV2-1.25 | 0.752 | 0.671 | 0.604 | 0.865 | 0.795 | 0.781
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Christakakis, P.; Giakoumoglou, N.; Kapetas, D.; Tzovaras, D.; Pechlivani, E.-M. Vision Transformers in Optimization of AI-Based Early Detection of Botrytis cinerea. AI 2024, 5, 1301-1323. https://doi.org/10.3390/ai5030063