Search Results (5)

Search Parameters:
Keywords = tea disease image recognition

23 pages, 12237 KiB  
Article
Detection Model of Tea Disease Severity under Low Light Intensity Based on YOLOv8 and EnlightenGAN
by Rong Ye, Guoqi Shao, Ziyi Yang, Yuchen Sun, Quan Gao and Tong Li
Plants 2024, 13(10), 1377; https://doi.org/10.3390/plants13101377 - 15 May 2024
Cited by 8 | Viewed by 2165
Abstract
In response to the challenge of low recognition rates for similar phenotypic symptoms of tea diseases in low-light environments and the difficulty of detecting small lesions, a novel adaptive method for tea disease severity detection is proposed. This method integrates an image enhancement algorithm based on an improved EnlightenGAN network with an enhanced version of YOLOv8. The approach first enhances the EnlightenGAN network through non-paired training on low-light images of various tea diseases, guiding the generation of high-quality disease images. This step expands the dataset and improves lesion characteristics and texture details under low-light conditions. Subsequently, the YOLOv8 network incorporates ResNet50 as its backbone and integrates channel and spatial attention modules to extract key features from disease feature maps effectively. The introduction of adaptive spatial feature fusion (ASFF) in the neck of the YOLOv8 network further enhances detection accuracy, particularly for small disease targets in complex backgrounds. Additionally, the model architecture is optimized by replacing traditional Conv blocks with ODConv blocks, introducing a new ODC2f block to reduce parameters and improve performance, and switching the loss function from CIOU to EIOU for faster and more accurate recognition of small targets. Experimental results demonstrate that YOLOv8-ASFF achieves a tea disease detection accuracy of 87.47% and a mean average precision (mAP) of 95.26%. These results represent a 2.47-percentage-point improvement over YOLOv8 and a lead of 9.11, 9.55, and 7.08 percentage points over CornerNet, SSD, and YOLOv5, respectively. The ability to detect tea diseases swiftly and accurately offers robust theoretical support for assessing tea disease severity and managing tea growth, and the model's compatibility with edge computing devices further enhances its practical value in agriculture.
(This article belongs to the Special Issue Research on Plant Pathology and Disease Management)
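The abstract's switch from CIOU to EIOU can be illustrated with a minimal, self-contained sketch (not the authors' code): EIOU keeps the IoU and centre-distance terms and adds explicit width- and height-difference penalties, which is what accelerates box regression on small lesions.

```python
def eiou_loss(box_a, box_b):
    """EIOU loss between two boxes given as (x1, y1, x2, y2).

    Illustrative sketch: EIOU = 1 - IoU + centre-distance penalty
    + separate width and height penalties over the enclosing box.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection over union
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter)
    # Smallest enclosing box and squared centre distance
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    center_dist2 = ((ax1 + ax2) / 2 - (bx1 + bx2) / 2) ** 2 + \
                   ((ay1 + ay2) / 2 - (by1 + by2) / 2) ** 2
    diag2 = cw ** 2 + ch ** 2
    # Width/height penalties: the terms EIOU adds over DIoU/CIOU
    dw2 = ((ax2 - ax1) - (bx2 - bx1)) ** 2
    dh2 = ((ay2 - ay1) - (by2 - by1)) ** 2
    return 1 - iou + center_dist2 / diag2 + dw2 / cw ** 2 + dh2 / ch ** 2
```

For identical boxes the loss is zero; for disjoint boxes it exceeds one, since IoU is zero and the distance term is positive.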

17 pages, 3499 KiB  
Article
Study on the Tea Pest Classification Model Using a Convolutional and Embedded Iterative Region of Interest Encoding Transformer
by Baishao Zhan, Ming Li, Wei Luo, Peng Li, Xiaoli Li and Hailiang Zhang
Biology 2023, 12(7), 1017; https://doi.org/10.3390/biology12071017 - 17 Jul 2023
Cited by 6 | Viewed by 1936
Abstract
Tea diseases are one of the main causes of tea yield reduction, and the use of computer vision for classification and diagnosis is an effective means of tea disease management. However, the random location of lesions, high symptom similarity, and complex backgrounds make the recognition and classification of tea images difficult. Therefore, this paper proposes IterationVIT, a tea disease diagnosis model that integrates convolution with an iterative transformer. The convolution component consists of stacked bottleneck layers for extracting the local features of tea leaves. The iterative algorithm incorporates an attention mechanism and bilinear interpolation to obtain disease location information by continuously updating the region of interest. The transformer module uses multi-head attention for global feature extraction. A total of 3544 images of red leaf spot, algal leaf spot, bird’s eye disease, gray wilt, white spot, anthracnose, brown wilt, and healthy tea leaves collected under natural light were used as samples and input into the IterationVIT model for training. The results show that, with a patch size of 16, the model performed best, achieving a classification accuracy of 98% and an F1 measure of 96.5%, superior to mainstream methods such as ViT, EfficientNet, ShuffleNet, MobileNet, and VGG. To verify the robustness of the model, the original test-set images were blurred, had noise added, and were highlighted before being input into the IterationVIT model; classification accuracy still reached over 80%. When 60% of the training set was randomly selected, the test-set accuracy of the IterationVIT model was 8% higher than that of mainstream models, demonstrating its ability to learn from fewer samples.
Model generalizability was evaluated on three public plant leaf datasets, and the results achieved levels of generalizability comparable to those on the data in this paper. Finally, the model was visualized and interpreted using the CAM method to obtain pixel-level heat maps of tea diseases; the results show that the established IterationVIT model can accurately capture the location of diseases, further verifying its effectiveness.
(This article belongs to the Section Plant Science)
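The iterative region-of-interest refinement described in the abstract relies on bilinear interpolation to resample features at fractional coordinates. As a rough illustration of that single step, assuming a plain 2-D grid rather than the paper's actual feature maps:

```python
def bilinear_sample(img, y, x):
    """Sample a 2-D grid at fractional coordinates (y, x).

    Standard bilinear interpolation: blend the four integer-grid
    neighbours by their fractional offsets (illustrative only).
    """
    y0, x0 = int(y), int(x)  # top-left integer corner
    y1 = min(y0 + 1, len(img) - 1)
    x1 = min(x0 + 1, len(img[0]) - 1)
    dy, dx = y - y0, x - x0  # fractional offsets
    top = img[y0][x0] * (1 - dx) + img[y0][x1] * dx
    bottom = img[y1][x0] * (1 - dx) + img[y1][x1] * dx
    return top * (1 - dy) + bottom * dy
```

Sampling the centre of a 2×2 grid returns the mean of its four values, which is why the operation lets an ROI shift by sub-pixel amounts between iterations.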

16 pages, 10184 KiB  
Article
An Information Entropy Masked Vision Transformer (IEM-ViT) Model for Recognition of Tea Diseases
by Jiahong Zhang, Honglie Guo, Jin Guo and Jing Zhang
Agronomy 2023, 13(4), 1156; https://doi.org/10.3390/agronomy13041156 - 19 Apr 2023
Cited by 7 | Viewed by 2331
Abstract
Tea is one of the most popular drinks in the world, and the rapid, accurate recognition of tea diseases is of great significance for taking targeted preventive measures. In this paper, an information entropy masked vision transformer (IEM-ViT) model was proposed for the rapid and accurate recognition of tea diseases. The information entropy weighting (IEW) method was used to calculate the information entropy of each image patch, so that the model could learn the maximum amount of information more quickly and accurately. An asymmetric encoder–decoder architecture was used in the masked autoencoder (MAE): the encoder operated on only the subset of visible patches, while the decoder recovered the masked patches, reconstructing the missing pixels for parameter sharing and data augmentation. The experimental results showed that the proposed IEM-ViT achieved an accuracy of 93.78% in recognizing seven types of tea diseases. Compared with common image recognition algorithms including ResNet18, VGG16, and VGG19, recognition accuracy was improved by nearly 20%. Additionally, in comparison to six previously published tea disease recognition methods, the proposed IEM-ViT model recognized more types of tea diseases while simultaneously improving accuracy.
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)
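The entropy-weighting idea in the abstract can be sketched without any model code: compute the Shannon entropy of each patch's intensity histogram and rank patches so the most informative ones stay visible to the encoder. This is only the ranking step, with patches flattened to lists of intensities, and is not the paper's implementation.

```python
import math

def patch_entropy(patch):
    """Shannon entropy (bits) of a patch's pixel-intensity histogram."""
    counts = {}
    for v in patch:
        counts[v] = counts.get(v, 0) + 1
    n = len(patch)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def rank_patches_by_entropy(patches):
    """Return patch indices sorted from most to least informative.

    A constant patch has zero entropy and ranks last; a patch with
    many distinct intensities ranks first.
    """
    return sorted(range(len(patches)),
                  key=lambda i: patch_entropy(patches[i]),
                  reverse=True)
```

A uniform patch scores 0 bits, while a patch cycling through four equally frequent intensities scores exactly 2 bits, so lesion-bearing (textured) patches naturally rise to the top of the ranking.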

18 pages, 3749 KiB  
Article
YOLO-Tea: A Tea Disease Detection Model Improved by YOLOv5
by Zhenyang Xue, Renjie Xu, Di Bai and Haifeng Lin
Forests 2023, 14(2), 415; https://doi.org/10.3390/f14020415 - 17 Feb 2023
Cited by 128 | Viewed by 11177
Abstract
Diseases and insect pests of tea leaves cause huge economic losses to the tea industry every year, so their accurate identification is significant. Convolutional neural networks (CNNs) can automatically extract features from images of tea leaves suffering from insect and disease infestation. However, photographs of tea tree leaves taken in a natural environment suffer from problems such as leaf shading, uneven illumination, and small object size. Affected by these problems, traditional CNNs cannot achieve satisfactory recognition performance. To address this challenge, we propose YOLO-Tea, an improved model based on You Only Look Once version 5 (YOLOv5). First, we integrated the mixed self-attention and convolution module (ACmix) and the convolutional block attention module (CBAM) into YOLOv5 to allow the proposed model to better focus on tea leaf diseases and insect pests. Second, to enhance the feature extraction capability of the model, we replaced the spatial pyramid pooling fast (SPPF) module in the original YOLOv5 with the receptive field block (RFB) module. Finally, we reduced the resource consumption of the model by incorporating a global context network (GCNet), which is essential when the model operates on resource-constrained edge devices. Compared to YOLOv5s, the proposed YOLO-Tea improved by 0.3%–15.0% over all test data. YOLO-Tea’s AP@0.5, AP(TLB), and AP(GMB) outperformed those of Faster R-CNN by 5.5%, 1.8%, and 7.0%, and those of SSD by 7.7%, 7.8%, and 5.2%, respectively. YOLO-Tea has shown promising potential for application in real-world tea disease detection systems.
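CBAM's channel-attention branch, which this abstract adds to YOLOv5, gates each channel by a sigmoid of its pooled statistics. In the sketch below the shared MLP is reduced to a single hypothetical weight `w` to keep the example tiny; this is the spirit of the mechanism, not the YOLO-Tea implementation.

```python
import math

def channel_attention(features, w=1.0):
    """Scale each channel by a sigmoid gate built from its pooled stats.

    `features` is a list of channels, each a 2-D list of floats.
    Real CBAM feeds average- and max-pooled descriptors through a
    shared two-layer MLP; here that MLP is a single weight `w`
    (an illustrative assumption).
    """
    out = []
    for ch in features:
        flat = [v for row in ch for v in row]
        avg_pool = sum(flat) / len(flat)
        max_pool = max(flat)
        gate = 1 / (1 + math.exp(-w * (avg_pool + max_pool)))  # sigmoid
        out.append([[v * gate for v in row] for row in ch])
    return out
```

Channels whose pooled responses are strong receive a gate near 1 and pass through almost unchanged, while weakly responding channels are suppressed, which is how the attention module steers the detector toward lesion-bearing feature maps.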

13 pages, 1555 KiB  
Article
Visual Tea Leaf Disease Recognition Using a Convolutional Neural Network Model
by Jing Chen, Qi Liu and Lingwang Gao
Symmetry 2019, 11(3), 343; https://doi.org/10.3390/sym11030343 - 7 Mar 2019
Cited by 166 | Viewed by 21893
Abstract
The rapid, recent development of image recognition technologies has led to the widespread use of convolutional neural networks (CNNs) in automated image classification and in the recognition of plant diseases. Aims: The aim of the present study was to develop a deep CNN to identify tea plant disease types from leaf images. Materials: A CNN model named LeafNet was developed with feature extractor filters of different sizes that automatically extract the features of tea plant diseases from images. Dense scale-invariant feature transform (DSIFT) features were also extracted and used to construct a bag-of-visual-words (BOVW) model, which was then used to classify diseases via support vector machine (SVM) and multilayer perceptron (MLP) classifiers. The performance of the three classifiers in disease recognition was then individually evaluated. Results: The LeafNet algorithm identified tea leaf diseases most accurately, with an average classification accuracy of 90.16%, compared with 60.62% for the SVM algorithm and 70.77% for the MLP algorithm. Conclusions: LeafNet was clearly superior in the recognition of tea leaf diseases compared to the MLP and SVM algorithms, and can consequently be used in future applications to improve the efficiency and accuracy of disease diagnoses in tea plants.
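The BOVW baseline in this abstract turns a variable number of DSIFT descriptors into a fixed-length vector before the SVM/MLP sees it. The sketch below covers only that histogram step, taking each descriptor's cluster (visual-word) assignment as given rather than running SIFT and k-means:

```python
def bovw_histogram(descriptor_clusters, vocab_size):
    """Build a normalised bag-of-visual-words histogram.

    `descriptor_clusters` lists the visual-word index assigned to
    each local descriptor; the output is a length-`vocab_size`
    frequency vector suitable as SVM/MLP input (illustrative only).
    """
    hist = [0] * vocab_size
    for c in descriptor_clusters:
        hist[c] += 1
    total = len(descriptor_clusters)
    return [h / total for h in hist]
```

For example, four descriptors assigned to words [0, 0, 1, 2] over a 3-word vocabulary yield the vector [0.5, 0.25, 0.25]; normalising by descriptor count makes images of different sizes comparable.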
