Search Results (25)

Search Parameters:
Keywords = tea buds picking

24 pages, 10112 KiB  
Article
A Lightweight Tea Bud-Grading Detection Model for Embedded Applications
by Lingling Tang, Yang Yang, Chenyu Fan and Tao Pang
Agronomy 2025, 15(3), 582; https://doi.org/10.3390/agronomy15030582 - 26 Feb 2025
Viewed by 685
Abstract
The conventional hand-picking of tea buds is inefficient and leads to inconsistent quality. Innovations in tea bud identification and automated grading are essential for enhancing industry competitiveness. Key breakthroughs include detection accuracy and lightweight model deployment. Traditional image recognition struggles with variable weather conditions, while high-precision models are often too bulky for mobile applications. This study proposed a lightweight YOLOv5 model, which was tested on three tea types across different weather scenarios. It incorporated a lightweight convolutional network and a compact feature extraction layer, which significantly reduced parameter computation. The model achieved 92.43% precision and 87.25% mean average precision (mAP), weighed only 4.98 MB, and improved accuracy by 6.73% and 2.11% while reducing parameters by 2 MB and 141.02 MB compared with YOLOv5n6 and YOLOv5l6, respectively. Unlike networks that detect only one or two tea grades, this model offers refined grading with advantages in both precision and size, making it suitable for embedded devices with limited resources. Thus, the YOLOv5n6_MobileNetV3 model enhances tea bud recognition accuracy and supports intelligent harvesting research and technology.
(This article belongs to the Section Precision and Digital Agriculture)
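Much of the size reduction in such lightweight variants comes from depthwise separable convolutions, the building block of MobileNetV3-style backbones. Below is a minimal PyTorch sketch of one such block for illustration; the layer layout and channel sizes are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise + pointwise convolution, the basic building block of
    MobileNetV3-style lightweight backbones (illustrative sketch only)."""
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, k, s, k // 2, groups=c_in, bias=False)
        self.pointwise = nn.Conv2d(c_in, c_out, 1, 1, 0, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.Hardswish()  # activation used throughout MobileNetV3

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Parameter comparison against a standard 3x3 convolution (64 -> 128 channels)
std = nn.Conv2d(64, 128, 3, padding=1, bias=False)
dws = DepthwiseSeparableConv(64, 128)
print(sum(p.numel() for p in std.parameters()))  # 73,728
print(sum(p.numel() for p in dws.parameters()))  # 9,024
```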

21 pages, 4678 KiB  
Article
TBF-YOLOv8n: A Lightweight Tea Bud Detection Model Based on YOLOv8n Improvements
by Wenhui Fang and Weizhen Chen
Sensors 2025, 25(2), 547; https://doi.org/10.3390/s25020547 - 18 Jan 2025
Cited by 2 | Viewed by 1009
Abstract
Tea bud localization detection not only ensures tea quality, improves picking efficiency, and advances intelligent harvesting, but also fosters tea industry upgrades and enhances economic benefits. To solve the problem of the high computational complexity of deep learning detection models, we developed the Tea Bud DSCF-YOLOv8n (TBF-YOLOv8n) lightweight detection model. Improving the Cross Stage Partial Bottleneck Module with Two Convolutions (C2f) via efficient Distributed Shift Convolution (DSConv) yields the C2f module with DSConv (DSCf), which reduces the model’s size. Additionally, the coordinate attention (CA) mechanism is incorporated to mitigate interference from irrelevant factors, thereby improving mean accuracy. Furthermore, the SIoU (SCYLLA-IoU) loss function and the Dynamic Sample (DySample) up-sampling operator are implemented to accelerate convergence and enhance both average precision and detection accuracy. The experimental results show that, compared to the YOLOv8n model, the TBF-YOLOv8n model achieves a 3.7% increase in accuracy, a 1.1% increase in average accuracy, a 44.4% reduction in giga floating-point operations (GFLOPs), and a 13.4% reduction in the total number of parameters. In comparison experiments with a variety of lightweight detection models, TBF-YOLOv8n still performs well in terms of detection accuracy while remaining more lightweight. In conclusion, the TBF-YOLOv8n model achieves a commendable balance between efficiency and precision, offering valuable insights for advancing intelligent tea bud harvesting technologies.
(This article belongs to the Section Intelligent Sensors)
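Of the listed changes, the coordinate attention (CA) mechanism is the most self-contained to illustrate. The sketch below is a generic PyTorch implementation of coordinate attention (Hou et al., 2021), not the TBF-YOLOv8n source; channel counts and the reduction ratio are assumptions.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Coordinate attention: factorizes global pooling into two 1-D pools
    along height and width so the attention map keeps positional information.
    Generic illustrative sketch."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, 1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        n, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                       # (n, c, h, 1): pool along width
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (n, c, w, 1): pool along height
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                        # (n, c, h, 1) gate
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))    # (n, c, 1, w) gate
        return x * a_h * a_w

feat = torch.randn(1, 64, 40, 40)
print(CoordinateAttention(64)(feat).shape)  # torch.Size([1, 64, 40, 40])
```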

19 pages, 13021 KiB  
Article
GLS-YOLO: A Lightweight Tea Bud Detection Model in Complex Scenarios
by Shanshan Li, Zhe Zhang and Shijun Li
Agronomy 2024, 14(12), 2939; https://doi.org/10.3390/agronomy14122939 - 10 Dec 2024
Cited by 2 | Viewed by 1040
Abstract
The efficiency of tea bud harvesting has been greatly enhanced, and human labor intensity significantly reduced, through the mechanization and intelligent management of tea plantations. A key challenge for harvesting machinery is ensuring both the freshness of tea buds and the integrity of the tea plants. However, achieving precise harvesting requires complex computational models, which can limit practical deployment. To address the demand for high-precision yet lightweight tea bud detection, this study proposes the GLS-YOLO detection model, based on YOLOv8. The model leverages GhostNetV2 as its backbone network, replacing standard convolutions with depthwise separable convolutions, resulting in substantial reductions in computational load and memory consumption. Additionally, the C2f-LC module is integrated into the improved model, combining cross-covariance fusion with a lightweight contextual attention mechanism to enhance feature recognition and extraction quality. To tackle the challenges posed by varying poses and occlusions of tea buds, Shape-IoU was employed as the loss function to improve the scoring of similarly shaped objects, reducing false positives and false negatives while improving the detection of non-rectangular or irregularly shaped objects. Experimental results demonstrate the model’s superior performance, achieving an AP@0.5 of 90.55%. Compared to the original YOLOv8, the model size was reduced by 38.85%, and the number of parameters decreased by 39.95%. This study presents innovative advances in agricultural robotics by significantly improving the accuracy and efficiency of tea bud harvesting, simplifying the configuration process for harvesting systems, and effectively lowering the technological barriers for real-world applications.
(This article belongs to the Section Precision and Digital Agriculture)
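GhostNetV2 builds on the Ghost module, which generates part of each layer's feature maps with a cheap depthwise operation instead of a full convolution. The PyTorch sketch below shows the basic Ghost module for illustration only; it omits GhostNetV2's DFC attention, and the names and ratios are assumptions, not the GLS-YOLO code.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """GhostNet's core idea: produce some feature maps with a regular
    convolution and the rest with cheap depthwise 'ghost' operations."""
    def __init__(self, c_in, c_out, ratio=2, k=1, dw_k=3):
        super().__init__()
        c_primary = c_out // ratio
        c_ghost = c_out - c_primary
        self.primary = nn.Sequential(
            nn.Conv2d(c_in, c_primary, k, padding=k // 2, bias=False),
            nn.BatchNorm2d(c_primary), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(c_primary, c_ghost, dw_k, padding=dw_k // 2,
                      groups=c_primary, bias=False),
            nn.BatchNorm2d(c_ghost), nn.ReLU(inplace=True))

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

print(GhostModule(64, 128)(torch.randn(1, 64, 80, 80)).shape)  # (1, 128, 80, 80)
```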

19 pages, 4786 KiB  
Article
RT-DETR-Tea: A Multi-Species Tea Bud Detection Model for Unstructured Environments
by Yiyong Chen, Yang Guo, Jianlong Li, Bo Zhou, Jiaming Chen, Man Zhang, Yingying Cui and Jinchi Tang
Agriculture 2024, 14(12), 2256; https://doi.org/10.3390/agriculture14122256 - 10 Dec 2024
Cited by 3 | Viewed by 1477
Abstract
Accurate bud detection is a prerequisite for automatic tea picking and yield statistics; however, current research suffers from missed detections caused by the use of a single tea variety and from false detections under complex backgrounds. Traditional target detection models are mainly based on CNNs, but CNNs can only extract local feature information, which is a disadvantage for the accurate identification of targets in complex environments; the Transformer offers a good solution to this problem. Therefore, based on a multi-variety tea bud dataset, this study proposes RT-DETR-Tea, an improved object detection model under the real-time detection Transformer (RT-DETR) framework. This model uses cascaded group attention to replace the multi-head self-attention (MHSA) mechanism in the attention-based intra-scale feature interaction (AIFI) module, effectively optimizing deep features and enriching the semantic information of features. The original cross-scale feature-fusion module (CCFM) is improved into the gather-and-distribute-Tea (GD-Tea) mechanism for multi-level feature fusion, which can effectively fuse low-level and high-level semantic information as well as large and small tea bud features in natural environments. The DilatedReparamBlock submodule from UniRepLKNet was employed to improve RepC3, achieving an efficient fusion of tea bud feature information and ensuring the accuracy of the detection head. Ablation experiments show that the precision and mean average precision of the proposed RT-DETR-Tea model are 96.1% and 79.7%, respectively, increases of 5.2% and 2.4% over the original model, indicating the model’s effectiveness. The model also shows good detection performance on a newly constructed tea bud dataset. Compared with other detection algorithms, the improved RT-DETR-Tea model demonstrates superior tea bud detection performance, providing effective technical support for smart tea garden management and production.
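The motivation for the Transformer-based design is that self-attention mixes information across the entire feature map rather than a local kernel neighbourhood. The snippet below is a generic intra-scale self-attention sketch in PyTorch (in the spirit of RT-DETR's AIFI module), not the cascaded group attention used in RT-DETR-Tea; tensor sizes are assumptions.

```python
import torch
import torch.nn as nn

# A CNN kernel only mixes a local neighbourhood, while self-attention over the
# flattened feature map lets every spatial position attend to every other one.
feat = torch.randn(1, 256, 20, 20)                 # deepest backbone feature map (assumed size)
tokens = feat.flatten(2).permute(0, 2, 1)          # (1, 400, 256): one token per spatial position
attn = nn.MultiheadAttention(embed_dim=256, num_heads=8, batch_first=True)
out, weights = attn(tokens, tokens, tokens)        # global interaction across all 400 positions
print(out.shape, weights.shape)                    # (1, 400, 256) (1, 400, 400)
```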

14 pages, 3385 KiB  
Article
Tea Bud Detection Model in a Real Picking Environment Based on an Improved YOLOv5
by Hongfei Li, Min Kong and Yun Shi
Biomimetics 2024, 9(11), 692; https://doi.org/10.3390/biomimetics9110692 - 13 Nov 2024
Cited by 3 | Viewed by 1506
Abstract
The detection of tea bud targets is the foundation of automated picking of premium tea. This article proposes a high-performance tea bud detection model to address issues such as complex environments, small target tea buds, and blurry device focus in tea bud detection. During the spring tea-picking stage, we collected tea bud images from mountainous tea gardens and annotated them. YOLOv5 tea is an improvement on YOLOv5 that uses the efficient Simplified Spatial Pyramid Pooling Fast (SimSPPF) module in the backbone for easy deployment on tea bud-picking equipment. The neck network adopts the Bidirectional Feature Pyramid Network (BiFPN) structure, which fully integrates deep and shallow feature information, fuses features at different scales, and improves the detection accuracy of tea buds blurred by camera focus. The independent CBS convolution module of the traditional neck network is replaced with Omni-Dimensional Dynamic Convolution (ODConv), which learns different weights across the spatial size, input channel, output channel, and convolution kernel dimensions to improve the detection of small targets and occluded tea buds. The experimental results show that the improved model achieves improvements in precision, recall, and mean average precision of 4.4%, 2.3%, and 3.2%, respectively, compared to the initial model, and the inference speed of the model has also been improved. This study has theoretical and practical significance for tea bud harvesting in complex environments.
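SimSPPF is essentially YOLOv5's SPPF block (three stacked 5×5 max-pools whose concatenation emulates 5/9/13 pooling windows) with ReLU activations. The PyTorch sketch below illustrates that structure under those assumptions; it is not the authors' exact module.

```python
import torch
import torch.nn as nn

class SimSPPF(nn.Module):
    """Simplified SPPF: 1x1 reduce conv, three stacked 5x5 max-pools,
    concatenation, 1x1 projection, with ReLU activations (illustrative sketch)."""
    def __init__(self, c_in, c_out, k=5):
        super().__init__()
        c_mid = c_in // 2
        self.cv1 = nn.Sequential(nn.Conv2d(c_in, c_mid, 1, bias=False),
                                 nn.BatchNorm2d(c_mid), nn.ReLU(inplace=True))
        self.cv2 = nn.Sequential(nn.Conv2d(c_mid * 4, c_out, 1, bias=False),
                                 nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))
        self.pool = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)

    def forward(self, x):
        x = self.cv1(x)
        p1 = self.pool(x)
        p2 = self.pool(p1)
        p3 = self.pool(p2)
        return self.cv2(torch.cat([x, p1, p2, p3], dim=1))

print(SimSPPF(512, 512)(torch.randn(1, 512, 20, 20)).shape)  # (1, 512, 20, 20)
```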

18 pages, 7770 KiB  
Article
Vision-Based Localization Method for Picking Points in Tea-Harvesting Robots
by Jingwen Yang, Xin Li, Xin Wang, Leiyang Fu and Shaowen Li
Sensors 2024, 24(21), 6777; https://doi.org/10.3390/s24216777 - 22 Oct 2024
Cited by 3 | Viewed by 1707
Abstract
To address the issue of accurately recognizing and locating picking points for tea-picking robots in unstructured environments, a visual positioning method based on RGB-D information fusion is proposed. First, an improved T-YOLOv8n model is proposed, which improves detection and segmentation performance across multi-scale scenes through network architecture and loss function optimizations. In the far-view test set, the detection accuracy of tea buds reached 80.8%; for the near-view test set, the mAP@0.5 values for tea stem detection in bounding boxes and masks reached 93.6% and 93.7%, respectively, improvements of 9.1% and 14.1% over the baseline model. Second, a layered visual servoing strategy for near and far views was designed, integrating the RealSense depth sensor with robotic arm cooperation. This strategy identifies the region of interest (ROI) of the tea bud in the far view and fuses the stem mask information with depth data to calculate the three-dimensional coordinates of the picking point. The experiments show that this method achieved a picking point localization success rate of 86.4%, with a mean depth measurement error of 1.43 mm. The proposed method improves the accuracy of picking point recognition and reduces depth information fluctuations, providing technical support for the intelligent and rapid picking of premium tea.
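The depth-fusion step ultimately reduces to back-projecting a picking-point pixel and its depth into camera coordinates via the pinhole model. The sketch below shows that generic geometry only; the pixel, depth, and intrinsic values are hypothetical, and the paper's exact fusion procedure is not reproduced here.

```python
import numpy as np

def pixel_to_camera_xyz(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth (metres) to 3-D camera coordinates
    using the pinhole model; fx, fy, cx, cy come from the depth camera's
    calibration (generic RGB-D geometry, not the authors' code)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Hypothetical example: picking-point pixel taken from the stem mask centroid,
# depth taken as the median inside the mask (median is robust to depth holes).
print(pixel_to_camera_xyz(u=412, v=268, depth_m=0.35,
                          fx=615.0, fy=615.0, cx=320.0, cy=240.0))
```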

23 pages, 25042 KiB  
Article
Segmentation Network for Multi-Shape Tea Bud Leaves Based on Attention and Path Feature Aggregation
by Tianci Chen, Haoxin Li, Jinhong Lv, Jiazheng Chen and Weibin Wu
Agriculture 2024, 14(8), 1388; https://doi.org/10.3390/agriculture14081388 - 17 Aug 2024
Cited by 1 | Viewed by 1083
Abstract
Accurately detecting tea bud leaves is crucial for the automation of tea picking robots. However, challenges arise due to tea stem occlusion and the overlapping of buds and leaves, which present varied shapes of one bud–one leaf targets in the field of view and make precise segmentation of tea bud leaves difficult. To improve the segmentation accuracy of one bud–one leaf targets with different shapes and fine granularity, this study proposes a novel semantic segmentation model for tea bud leaves. The method introduces a hierarchical Transformer block based on a self-attention mechanism in the encoding network, which is beneficial for capturing long-range dependencies between features and enhancing the representation of common features. Then, a multi-path feature aggregation module is designed to effectively merge the feature outputs of encoder blocks with decoder outputs, thereby alleviating the loss of fine-grained features caused by downsampling. Furthermore, a refined polarized attention mechanism is employed after the aggregation module to perform polarized filtering on features in the channel and spatial dimensions, enhancing the output of fine-grained features. The experimental results demonstrate that the proposed Unet-Enhanced model performs well in segmenting one bud–one leaf targets with different shapes, with a mean intersection over union (mIoU) of 91.18% and a mean pixel accuracy (mPA) of 95.10%. The semantic segmentation network can accurately segment tea bud leaves, providing a decision-making basis for the spatial positioning of tea picking robots.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
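The two reported metrics, mIoU and mPA, follow standard definitions over the class-wise confusion matrix. Here is a small NumPy sketch of those definitions with toy binary masks; it is not the authors' evaluation code.

```python
import numpy as np

def miou_and_mpa(pred, target, num_classes):
    """Mean IoU and mean pixel accuracy from predicted/target label maps,
    using the standard confusion-matrix definitions."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(target.ravel(), pred.ravel()):
        cm[t, p] += 1
    tp = np.diag(cm).astype(float)
    iou = tp / (cm.sum(1) + cm.sum(0) - tp)   # per-class intersection over union
    pa = tp / cm.sum(1)                       # per-class pixel accuracy (recall)
    return iou.mean(), pa.mean()

pred = np.random.randint(0, 2, size=(64, 64))    # toy masks: background / bud-leaf
target = np.random.randint(0, 2, size=(64, 64))
print(miou_and_mpa(pred, target, num_classes=2))
```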

16 pages, 5187 KiB  
Article
Development of a Premium Tea-Picking Robot Incorporating Deep Learning and Computer Vision for Leaf Detection
by Luofa Wu, Helai Liu, Chun Ye and Yanqi Wu
Appl. Sci. 2024, 14(13), 5748; https://doi.org/10.3390/app14135748 - 1 Jul 2024
Cited by 7 | Viewed by 2647
Abstract
Premium tea holds a significant place in Chinese tea culture, enjoying immense popularity among domestic consumers and an esteemed reputation in the international market, thereby significantly impacting the Chinese economy. To tackle challenges associated with the labor-intensive and inefficient manual picking process of premium tea, and to elevate the competitiveness of the premium tea sector, our research team has developed and rigorously tested a premium tea-picking robot that harnesses deep learning and computer vision for precise leaf recognition. This innovative technology has been patented by the China National Intellectual Property Administration (ZL202111236676.7). In our study, we constructed a deep-learning model that, through comprehensive data training, enabled the robot to accurately recognize tea buds. By integrating computer vision techniques, we achieved exact positioning of the tea buds. From a hardware perspective, we employed a high-performance robotic arm to ensure stable and efficient picking operations even in complex environments. During the experimental phase, we conducted detailed validations on the practical application of the YOLOv8 algorithm in tea bud identification. When compared to the YOLOv5 algorithm, YOLOv8 exhibited superior accuracy and reliability. Furthermore, we performed comprehensive testing on the path planning for the picking robotic arm, evaluating various algorithms to determine the most effective path planning approach for the picking process. Ultimately, we conducted field tests to assess the robot’s performance. The results indicated a 62.02% success rate for the entire picking process of the premium tea-picking robot, with an average picking time of approximately 1.86 s per qualified tea bud. This study provides a solid foundation for further research, development, and deployment of premium tea-picking robots, serving as a valuable reference for the design of other crop-picking robots as well.
(This article belongs to the Special Issue Applied Computer Vision in Industry and Agriculture)
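As a rough illustration of the arm path-planning step, the sketch below generates a quintic-polynomial point-to-point joint trajectory, a common baseline planner; it is not necessarily one of the algorithms the authors compared, and the joint angles are hypothetical.

```python
import numpy as np

def quintic_trajectory(q0, qf, T, steps=50):
    """Point-to-point joint trajectory with quintic time scaling, giving zero
    start/end velocity and acceleration (generic baseline, illustrative only)."""
    t = np.linspace(0.0, T, steps)
    s = 10 * (t / T) ** 3 - 15 * (t / T) ** 4 + 6 * (t / T) ** 5  # time scaling in [0, 1]
    return np.outer(1 - s, q0) + np.outer(s, qf)                  # (steps, n_joints)

q_start = np.deg2rad([0, -30, 45, 0, 60, 0])    # hypothetical 6-DoF joint angles
q_pick = np.deg2rad([25, -10, 70, 5, 40, 15])
path = quintic_trajectory(q_start, q_pick, T=1.86)  # 1.86 s, matching the reported time per bud
print(path.shape, np.rad2deg(path[0]), np.rad2deg(path[-1]))
```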

16 pages, 8874 KiB  
Article
Recognition Model for Tea Grading and Counting Based on the Improved YOLOv8n
by Yuxin Xia, Zejun Wang, Zhiyong Cao, Yaping Chen, Limei Li, Lijiao Chen, Shihao Zhang, Chun Wang, Hongxu Li and Baijuan Wang
Agronomy 2024, 14(6), 1251; https://doi.org/10.3390/agronomy14061251 - 10 Jun 2024
Cited by 7 | Viewed by 1650
Abstract
Grading tea leaves efficiently in a natural environment is a crucial technological foundation for the automation of tea-picking robots. In this study, to solve the problems of dense distribution, limited feature-extraction ability, and false detection in the field of tea grading recognition, an improved YOLOv8n model for tea grading and counting recognition was proposed. Firstly, the SPD-Conv module was embedded into the backbone of the network to enhance deep feature extraction for the target. Secondly, the Super-Token Vision Transformer was integrated to reduce the model’s attention to redundant information, thus improving its perception of tea. Subsequently, the loss function was changed to MPDIoU, which accelerated convergence and optimized performance. Finally, a classification-positioning counting function was added to achieve classification counting. The experimental results showed that, compared to the original model, precision, recall, and average precision improved by 17.6%, 19.3%, and 18.7%, respectively. The average precisions for single bud, one bud with one leaf, and one bud with two leaves were 88.5%, 89.5%, and 89.1%, respectively. The improved model demonstrated strong robustness and proved suitable for tea grading and edge-picking equipment, laying a solid foundation for the mechanization of the tea industry.
(This article belongs to the Section Precision and Digital Agriculture)
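SPD-Conv replaces strided downsampling with a lossless space-to-depth rearrangement followed by a non-strided convolution. The PyTorch sketch below illustrates that idea using nn.PixelUnshuffle; it is a simplification for illustration, not the paper's module, and the channel sizes are assumptions.

```python
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    """Space-to-depth convolution: fold each 2x2 spatial block into the channel
    dimension (lossless downsampling), then apply a non-strided convolution."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.spd = nn.PixelUnshuffle(downscale_factor=2)  # (c, H, W) -> (4c, H/2, W/2)
        self.conv = nn.Sequential(nn.Conv2d(4 * c_in, c_out, 3, padding=1, bias=False),
                                  nn.BatchNorm2d(c_out), nn.SiLU(inplace=True))

    def forward(self, x):
        return self.conv(self.spd(x))

print(SPDConv(64, 128)(torch.randn(1, 64, 80, 80)).shape)  # (1, 128, 40, 40)
```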

16 pages, 5596 KiB  
Article
Optimizing Efficiency of Tea Harvester Leaf-Collection Pipeline: Numerical Simulation and Experimental Validation
by Zhe Du, Liyuan Zhang, Xinping Li, Xin Jin and Fan Yu
Agriculture 2024, 14(5), 653; https://doi.org/10.3390/agriculture14050653 - 23 Apr 2024
Viewed by 1562
Abstract
To address the challenges of missed and disorderly picking in tea harvesters, this study focused on the leaf-collection pipeline and utilized Fluent 19.0 simulation software. A single-factor test identified the key parameters affecting airflow velocity. An orthogonal test evaluated the main pipe taper, the number of branch pipes, and the branch pipe outlet diameter, with average outlet wind speed and wind speed non-uniformity as indicators. The optimal parameters were a main pipe taper of 25.5 mm, 10 branch pipes, and an outlet inner diameter of 17.10 mm, resulting in a 10.73 m/s average wind speed and 8.24% non-uniformity. Validation tests showed errors under 1%. Further optimization of the internal structure’s extension length led to an 11.02 m/s average wind speed and 8.04% non-uniformity. Field experiments demonstrated a 3.40% stalk leakage rate and a 90.36% bud leaf integrity rate; the optimized structure of the leaf-collection pipeline significantly improved airflow uniformity and picking efficiency. These findings offer valuable insights and practical benefits for enhancing the efficiency of tea harvesters.
(This article belongs to the Special Issue Smart Mechanization and Automation in Agriculture)
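The two orthogonal-test indicators can be computed directly from branch-outlet wind speed measurements. The sketch below uses the coefficient of variation as the non-uniformity measure, which is one common definition; the paper's exact formula and the readings shown are assumptions.

```python
import numpy as np

# Hypothetical branch-outlet wind speeds (m/s) for the 10 branch pipes.
outlet_speeds = np.array([10.9, 10.6, 10.8, 10.5, 10.7, 10.9, 10.6, 10.8, 10.7, 10.8])

mean_speed = outlet_speeds.mean()
# Non-uniformity taken here as the coefficient of variation (std / mean);
# the paper may define it differently.
non_uniformity = outlet_speeds.std(ddof=1) / mean_speed * 100
print(f"average outlet speed: {mean_speed:.2f} m/s, non-uniformity: {non_uniformity:.2f}%")
```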

18 pages, 1824 KiB  
Article
Design and Control Simulation Analysis of Tender Tea Bud Picking Manipulator
by Peng Xue, Qing Li and Guodong Fu
Appl. Sci. 2024, 14(2), 928; https://doi.org/10.3390/app14020928 - 22 Jan 2024
Cited by 7 | Viewed by 2081
Abstract
To address the complex problem of the mechanized, high-quality picking of tender tea buds, this paper designs a tender tea bud-picking manipulator. In the picking process, the quality of the petiole and leaf blade of the tender tea bud is crucial, as the traditional cutting-based picking method destroys the cell structure of the buds, resulting in rapid oxidation of the cuts and the loss of their bright green appearance and pure taste. For this reason, this paper draws on the quality requirements of tender tea buds and traditional manual picking technique, simulates the manual picking action, puts forward a ‘rotary pull-up’ clamping-and-ripping picking method, and designs the corresponding actuating structure. PVDF piezoelectric thin-film sensors are used to detect the clamping force during picking, and the corresponding sensor hardware circuit is designed. In addition, finite element analysis is used to carry out stress analysis of the mechanical fingers to verify the rationality of the actuating mechanism and ensure the high-quality picking of tender tea buds. For the control of the manipulator, an SMC-PID control method is designed and co-simulated using MATLAB/Simulink 2021 and Adams 2020. Feedback of the closed-loop angle and angular-velocity errors is used to adjust the PID parameters, which quickly drives the sliding-mode control onto the sliding surface. The simulation results show that the SMC-PID control method proposed in this paper can meet the demands of tender tea bud picking while providing high control accuracy, response speed, and stability.
(This article belongs to the Section Robotics and Automation)
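To make the SMC-PID idea concrete, the sketch below hand-rolls a single-joint simulation in Python in which a PID term is combined with a smoothed sliding-mode switching term on the surface s = c·e + ė. It is an illustrative toy model with made-up gains, not the authors' MATLAB/Simulink–Adams co-simulation.

```python
import numpy as np

# Toy joint model: inertia J * theta_ddot = u (friction and gravity ignored).
J, dt, T = 0.01, 0.001, 1.0
Kp, Ki, Kd = 8.0, 2.0, 0.5          # hypothetical PID gains
c, k_smc, phi = 20.0, 0.8, 0.05     # sliding-surface slope, switching gain, boundary layer

theta_ref = np.deg2rad(30.0)        # target picking angle
theta, omega, integ = 0.0, 0.0, 0.0
for _ in range(int(T / dt)):
    e = theta_ref - theta
    e_dot = -omega                  # reference angle is constant
    integ += e * dt
    s = c * e + e_dot               # sliding surface
    u_pid = Kp * e + Ki * integ + Kd * e_dot
    u_smc = k_smc * np.tanh(s / phi)   # smoothed sign(s) to reduce chattering
    u = u_pid + u_smc
    omega += (u / J) * dt           # integrate the joint dynamics
    theta += omega * dt
print(f"final angle: {np.rad2deg(theta):.2f} deg (target 30.00 deg)")
```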

17 pages, 7194 KiB  
Article
Detection and Localization of Tea Bud Based on Improved YOLOv5s and 3D Point Cloud Processing
by Lixue Zhu, Zhihao Zhang, Guichao Lin, Pinlan Chen, Xiaomin Li and Shiang Zhang
Agronomy 2023, 13(9), 2412; https://doi.org/10.3390/agronomy13092412 - 19 Sep 2023
Cited by 12 | Viewed by 2225
Abstract
Currently, the detection and localization of tea buds within the unstructured tea plantation environment are greatly challenged by their small size, significant variations in morphology and growth height, and dense spatial distribution. To solve this problem, this study applies an enhanced version of the YOLOv5 algorithm for tea bud detection over a wide field of view, together with small-size tea bud localization based on 3D point cloud technology, to facilitate the detection of tea buds and the identification of picking points for a premium tea-picking robot. To enhance the YOLOv5 network, the Efficient Channel Attention Network (ECANet) module and the Bi-directional Feature Pyramid Network (BiFPN) are incorporated. After acquiring the 3D point cloud for the region of interest in the detection results, the 3D point cloud of the tea bud is extracted using the DBSCAN clustering algorithm to determine the 3D coordinates of the tea bud picking points. Principal component analysis is then utilized to fit the minimum outer cuboid to the 3D point cloud of the tea buds, thereby solving for the 3D coordinates of the picking points. To evaluate the effectiveness of the proposed algorithm, an experiment is conducted using a collected tea image test set, resulting in a detection precision of 94.4% and a recall rate of 90.38%. Additionally, a field experiment is conducted in an experimental tea field to assess localization accuracy, with mean absolute errors of 3.159 mm, 6.918 mm, and 7.185 mm observed in the x, y, and z directions, respectively. The average time consumed for detection and localization is 0.129 s, which fulfills the requirement of premium tea-picking robots in outdoor tea gardens for the quick identification and exact positioning of small-sized tea shoots over a wide field of view.
(This article belongs to the Collection Advances of Agricultural Robotics in Sustainable Agriculture 4.0)
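The localization pipeline's last two steps, DBSCAN clustering of the ROI point cloud and PCA-based fitting of a minimum bounding cuboid, can be sketched with scikit-learn and NumPy. The point cloud, the eps/min_samples values, and the choice of picking point below are hypothetical; this illustrates the technique, not the paper's code.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA

# Hypothetical ROI point cloud (metres): a dense tea-bud blob plus sparse clutter.
rng = np.random.default_rng(0)
bud = rng.normal([0.02, 0.01, 0.40], [0.004, 0.004, 0.012], size=(300, 3))
noise = rng.uniform([-0.05, -0.05, 0.30], [0.08, 0.08, 0.55], size=(40, 3))
cloud = np.vstack([bud, noise])

labels = DBSCAN(eps=0.01, min_samples=10).fit_predict(cloud)
main = cloud[labels == np.bincount(labels[labels >= 0]).argmax()]  # keep the largest cluster

pca = PCA(n_components=3).fit(main)           # principal axes of the bud cluster
local = pca.transform(main)                   # points expressed in the principal-axis frame
centre = (local.min(0) + local.max(0)) / 2    # centre of the minimum bounding cuboid
pick_local = centre.copy()
pick_local[0] = local[:, 0].min()             # e.g. lower end of the longest axis as picking point
pick_point = pca.inverse_transform(pick_local[None, :])[0]
print("picking point (camera frame, m):", np.round(pick_point, 4))
```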

24 pages, 8173 KiB  
Article
Tea-YOLOv8s: A Tea Bud Detection Model Based on Deep Learning and Computer Vision
by Shuang Xie and Hongwei Sun
Sensors 2023, 23(14), 6576; https://doi.org/10.3390/s23146576 - 21 Jul 2023
Cited by 39 | Viewed by 4897
Abstract
Tea bud target detection is essential for mechanized selective harvesting. To address the challenges of low detection precision caused by the complex backgrounds of tea leaves, this paper introduces a novel model called Tea-YOLOv8s. First, multiple data augmentation techniques are employed to increase the amount of information in the images and improve their quality. Then, the Tea-YOLOv8s model combines deformable convolutions, attention mechanisms, and improved spatial pyramid pooling, thereby enhancing the model’s ability to learn complex object invariance, reducing interference from irrelevant factors, and enabling multi-feature fusion, resulting in improved detection precision. Finally, the improved YOLOv8 model is compared with other models to validate the effectiveness of the proposed improvements. The research results demonstrate that the Tea-YOLOv8s model achieves a mean average precision of 88.27% and an inference time of 37.1 ms, with the parameters and calculation amount increased by 15.4 M and 17.5 G, respectively. In conclusion, although the proposed approach increases the model’s parameters and calculation amount, it significantly improves on mainstream YOLO detection models in various respects and has the potential to be applied to mechanized tea bud picking equipment.
(This article belongs to the Special Issue Perception and Imaging for Smart Agriculture)
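Deformable convolution lets the kernel's sampling locations shift per position, which suits the irregular shapes of tea buds against cluttered backgrounds. Below is a minimal usage sketch of torchvision's DeformConv2d with offsets predicted by a small convolution; it illustrates the operator only and is not the Tea-YOLOv8s implementation.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformBlock(nn.Module):
    """Deformable 3x3 convolution whose per-location (dx, dy) offsets are
    predicted by an auxiliary convolution (illustrative sketch)."""
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.offset = nn.Conv2d(c_in, 2 * k * k, k, padding=k // 2)  # (dx, dy) per kernel tap
        self.deform = DeformConv2d(c_in, c_out, k, padding=k // 2)

    def forward(self, x):
        return self.deform(x, self.offset(x))

print(DeformBlock(64, 128)(torch.randn(1, 64, 40, 40)).shape)  # (1, 128, 40, 40)
```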

23 pages, 8157 KiB  
Article
Tea Bud Detection and 3D Pose Estimation in the Field with a Depth Camera Based on Improved YOLOv5 and the Optimal Pose-Vertices Search Method
by Zhiwei Chen, Jianneng Chen, Yang Li, Zhiyong Gui and Taojie Yu
Agriculture 2023, 13(7), 1405; https://doi.org/10.3390/agriculture13071405 - 14 Jul 2023
Cited by 5 | Viewed by 2341
Abstract
The precise detection and positioning of tea buds are among the major issues in tea picking automation. In this study, a novel algorithm for detecting tea buds and estimating their poses in a field environment was proposed by using a depth camera. This algorithm introduces some improvements to the YOLOv5l architecture. A Coordinate Attention Mechanism (CAM) was inserted into the neck part to accurately position the elements of interest, a BiFPN was used to enhance the small object detection ability, and a GhostConv module replaced the original Conv module in the backbone to reduce the model size and speed up model inference. After testing, the proposed detection model achieved an mAP of 85.2%, a speed of 87.71 FPS, a parameter count of 29.25 M, and a FLOPs value of 59.8 G, all better than those of the original model. Next, an optimal pose-vertices search method (OPVSM) was developed to estimate the pose of tea buds by constructing a graph model to fit the point cloud. This method could accurately estimate the poses of tea buds, with an overall accuracy of 90%, and it was more flexible and adaptive to variations in tea buds in terms of size, color, and shape features. Additionally, the experiments demonstrated that the OPVSM could correctly establish the pose of tea buds after the point cloud was downsampled by voxel filtering with a 2 mm × 2 mm × 1 mm grid; this process could effectively reduce the point cloud to fewer than 800 points, ensuring that the algorithm could run within 0.2 s. The results demonstrate the effectiveness of the proposed algorithm for tea bud detection and pose estimation in a field setting. Furthermore, the proposed algorithm has the potential to be used in tea picking robots and can also be extended to other crops and objects, making it a valuable tool for precision agriculture and robotic applications.
(This article belongs to the Special Issue Advances in Agricultural Engineering Technologies and Application)
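The voxel filtering step can be reproduced generically: keep one centroid per occupied 2 mm × 2 mm × 1 mm voxel. The plain-NumPy sketch below shows that operation on a synthetic cloud; the authors may have used a library implementation, and the toy cloud will not reproduce the under-800-point figure.

```python
import numpy as np

def voxel_downsample(points, voxel=(0.002, 0.002, 0.001)):
    """Voxel-grid downsampling: keep the centroid of each occupied voxel
    (2 mm x 2 mm x 1 mm grid by default)."""
    idx = np.floor(points / np.asarray(voxel)).astype(np.int64)
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)   # accumulate point coordinates per voxel
    return sums / counts[:, None]      # centroid of each occupied voxel

cloud = np.random.default_rng(1).normal([0, 0, 0.4], [0.01, 0.01, 0.02], size=(5000, 3))
down = voxel_downsample(cloud)
print(len(cloud), "->", len(down), "points")
```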

12 pages, 2568 KiB  
Article
Internode Length Is Correlated with GA3 Content and Is Crucial to the Harvesting Performance of Tea-Picking Machines
by Yao Luo, Qianqian Yu, Yinghua Xie, Chaojie Xu, Letian Cheng, Qing Shi, Yeyun Li, Xianchen Zhang and Zhougao Shen
Plants 2023, 12(13), 2508; https://doi.org/10.3390/plants12132508 - 30 Jun 2023
Cited by 4 | Viewed by 2314
Abstract
High labor costs and labor shortages are limiting factors affecting the tea industry in Anhui Province. Thus, exploiting the full mechanization of shoot harvesting is an urgent task for the tea industry. Tea quality is greatly influenced by the integrity rate of tea leaves; therefore, it is important to choose tea cultivars suitable for machine picking. In this study, seven tea cultivars were used to investigate the relationship of the internode length and blade angle of newly formed tea shoots with machine harvesting in field experiments (Kuiling Village, Xuancheng City) conducted across the year (in the autumn of 2021, the early spring of 2022, and the summer of 2022). Our results showed that the internode length (L2 or L4) had a significant and positive correlation with the integrity rate of tea buds and leaves in the seven tea cultivars over the three seasons. However, no significant correlation was found between the blade angle and the integrity rate of tea buds and leaves. In addition, strong and positive correlations were found between the levels of GA1 (R² > 0.7), GA3 (R² > 0.85), and IAA (R² > 0.6) in the internodes and the internode lengths of the seven tea cultivars. Moreover, the relative expression levels of CsGA20ox, CsGA3ox1, and CsGA3ox2 in Echa1 (the cultivar with longer internodes) were significantly higher than those in Zhenong113 (the cultivar with shorter internodes). Overall, our results show that internode length is an important factor for the machine harvesting of tea leaves and that the level of GA3 is strongly associated with internode length.
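The reported R² values are squared Pearson correlations between internode length and hormone content across cultivars. Here is a toy NumPy sketch of that computation with made-up numbers; the paper's data are not reproduced.

```python
import numpy as np

# Hypothetical values for seven cultivars (the paper reports R^2 > 0.85 for GA3).
internode_mm = np.array([18.2, 22.5, 25.1, 27.8, 30.4, 33.0, 36.5])  # internode length
ga3_content = np.array([4.1, 5.0, 5.8, 6.1, 7.2, 7.9, 8.8])          # GA3 content (arbitrary units)

r = np.corrcoef(internode_mm, ga3_content)[0, 1]   # Pearson correlation coefficient
print(f"Pearson r = {r:.3f}, R^2 = {r**2:.3f}")
```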