Search Results (13)

Search Parameters:
Keywords = tea bud target detection

21 pages, 3116 KB  
Article
Integrated Transcriptomic and Metabolomic Analysis Reveals Metabolic Heterosis in Hybrid Tea Plants (Camellia sinensis)
by Yu Lei, Jihua Duan, Feiyi Huang, Ding Ding, Yankai Kang, Yi Luo, Yingyu Chen, Nianci Xie and Saijun Li
Genes 2025, 16(12), 1457; https://doi.org/10.3390/genes16121457 - 5 Dec 2025
Viewed by 237
Abstract
Background: Heterosis (hybrid vigor) is a fundamental phenomenon in plant breeding, but its molecular basis remains poorly understood in perennial crops such as tea (Camellia sinensis). This study aimed to elucidate the molecular mechanisms underlying heterosis in tea and its hybrids by performing integrated transcriptomic and metabolomic analyses of F1 hybrids derived from two elite cultivars, Fuding Dabaicha (FD) and Baojing Huangjincha 1 (HJC). Methods: Comprehensive RNA sequencing and widely targeted metabolomic profiling were conducted on the parental lines and F1 hybrids at the one-bud-one-leaf stage. Primary metabolites (including amino acids, nucleotides, saccharides, and fatty acids) were quantified, and gene expression profiles were obtained. Transcriptomic and metabolomic datasets were integrated using KEGG pathway enrichment and co-expression network analysis to identify coordinated molecular changes underlying heterosis. Results: Metabolomic profiling detected 977 primary metabolites, many of which displayed non-additive accumulation patterns. Notably, linoleic acid derivatives (9(S)-HODE, 13(S)-HODE) and nucleotides (guanosine, uridine) exhibited significant positive mid-parent heterosis. Transcriptomic analysis revealed extensive non-additive gene expression in F1 hybrids, and upregulated genes were enriched in fatty acid metabolism, nucleotide biosynthesis, and stress signaling pathways. Integrated analysis demonstrated strong coordination between differential gene expression and metabolite accumulation, especially in linoleic acid metabolism, cutin/suberine biosynthesis, and pyrimidine metabolism. Positive correlations between elevated fatty acid levels and transcript abundance of lipid metabolism genes suggest that the transcriptional remodeling of lipid pathways contributes to heterosis. Conclusions: These findings provide novel insights into tea plant heterosis and identify potential molecular targets for breeding high-quality cultivars. Full article
(This article belongs to the Special Issue 5Gs in Crop Genetic and Genomic Improvement: 2025–2026)
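The positive mid-parent heterosis reported above for metabolites such as 9(S)-HODE and guanosine is conventionally computed as the deviation of the F1 value from the mean of the two parents, expressed as a percentage. A minimal sketch of that calculation; the metabolite names come from the abstract, but the abundance values and helper function are illustrative, not the study's data or code:

```python
# Mid-parent heterosis: MPH% = (F1 - MP) / MP * 100, where MP = (P1 + P2) / 2.
def mid_parent_heterosis(p1: float, p2: float, f1: float) -> float:
    """Return mid-parent heterosis as a percentage."""
    mid_parent = (p1 + p2) / 2.0
    return (f1 - mid_parent) / mid_parent * 100.0

# (parent FD, parent HJC, F1 hybrid) abundances -- placeholder numbers only.
abundances = {
    "9(S)-HODE": (1.20, 0.90, 1.60),
    "guanosine": (0.50, 0.70, 0.95),
}
for name, (fd, hjc, f1) in abundances.items():
    print(f"{name}: MPH = {mid_parent_heterosis(fd, hjc, f1):+.1f}%")
```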

20 pages, 10998 KB  
Article
A Novel Semi-Hydroponic Root Observation System Combined with Unsupervised Semantic Segmentation for Root Phenotyping
by Kunhong Li, Siyue Xu, Christoph Menz, Feng Yang, Helder Fraga, João A. Santos, Bing Liu and Chenyao Yang
Agronomy 2025, 15(12), 2794; https://doi.org/10.3390/agronomy15122794 - 4 Dec 2025
Viewed by 342
Abstract
Root system analysis remains methodologically challenging in plant research: traditional soil cultivation obstructs comprehensive root observation, whereas hydroponic visualization lacks ecological relevance due to soil environment exclusion—a critical limitation for crops like soybean. This manuscript developed a cost-effective hybrid imaging system integrating transparent acrylic plates, semi-permeable membranes, and natural soil substrates with high-resolution imaging and controlled illumination, enabling non-destructive root monitoring in quasi-natural soil conditions. Complementing this hardware innovation, this manuscript proposed an unsupervised semantic segmentation algorithm that synergizes path planning with an enhanced DBSCAN framework, achieving the precise extraction of primary and lateral root architectures. Experimental validation demonstrated superior performance in soybean root analysis, with segmentation metrics reaching 0.8444 accuracy, 0.9203 recall, 0.8743 F1-score, and 0.7921 mIoU—significantly outperforming existing unsupervised methods (p<0.01). Strong correlations (R2 > 0.94) with WinRHIZO in quantifying root length, projected area, dimensional parameters, and lateral root counts confirmed system reliability. This soil-compatible phenotyping platform establishes new opportunities for root research, with future developments targeting multi-crop adaptability and complex soil condition applications through modular hardware redesign and 3D reconstruction algorithm integration. Full article
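The segmentation metrics quoted above (accuracy, recall, F1-score, mIoU) follow the standard pixel-wise definitions and can be reproduced from a predicted and a ground-truth binary mask. A minimal NumPy sketch under that assumption; the toy masks are illustrative, not outputs of the proposed system:

```python
import numpy as np

def binary_segmentation_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Pixel-wise metrics for binary masks (True = root pixel)."""
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    precision = tp / (tp + fp + 1e-9)
    recall = tp / (tp + fn + 1e-9)
    iou_fg = tp / (tp + fp + fn + 1e-9)   # foreground (root) IoU
    iou_bg = tn / (tn + fp + fn + 1e-9)   # background IoU
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall + 1e-9),
        "mIoU": (iou_fg + iou_bg) / 2,    # mean over the two classes
    }

# Toy 4x4 masks, illustrative only.
pred = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 1], [0, 0, 1, 0]], dtype=bool)
gt   = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]], dtype=bool)
print(binary_segmentation_metrics(pred, gt))
```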

17 pages, 2475 KB  
Article
YOLO-LMTB: A Lightweight Detection Model for Multi-Scale Tea Buds in Agriculture
by Guofeng Xia, Yanchuan Guo, Qihang Wei, Yiwen Cen, Loujing Feng and Yang Yu
Sensors 2025, 25(20), 6400; https://doi.org/10.3390/s25206400 - 16 Oct 2025
Viewed by 675
Abstract
Tea bud targets are typically located in complex environments characterized by multi-scale variations, high density, and strong color resemblance to the background, which pose significant challenges for rapid and accurate detection. To address these issues, this study presents YOLO-LMTB, a lightweight multi-scale detection model based on the YOLOv11n architecture. First, a Multi-scale Edge-Refinement Context Aggregator (MERCA) module is proposed to replace the original C3k2 block in the backbone. MERCA captures multi-scale contextual features through hierarchical receptive field collaboration and refines edge details, thereby significantly improving the perception of fine structures in tea buds. Furthermore, a Dynamic Hyperbolic Token Statistics Transformer (DHTST) module is developed to replace the original PSA block. This module dynamically adjusts feature responses and statistical measures through attention weighting using learnable threshold parameters, effectively enhancing discriminative features while suppressing background interference. Additionally, a Bidirectional Feature Pyramid Network (BiFPN) is introduced to replace the original network structure, enabling the adaptive fusion of semantically rich and spatially precise features via bidirectional cross-scale connections while reducing computational complexity. On the self-built tea bud dataset, experimental results demonstrate that, compared with the original model, the YOLO-LMTB model achieves a 2.9% improvement in precision (P), along with increases of 1.6% and 2.0% in mAP50 and mAP50-95, respectively. Simultaneously, the number of parameters decreased by 28.3%, and the model size was reduced by 22.6%. To further validate the effectiveness of the improvement scheme, experiments were also conducted using public datasets. The results demonstrate that each enhancement module can boost the model’s detection performance and exhibits strong generalization capabilities. The model not only excels in multi-scale tea bud detection but also offers a valuable reference for reducing computational complexity, thereby providing a technical foundation for the practical application of intelligent tea-picking systems. Full article
(This article belongs to the Section Smart Agriculture)
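The BiFPN mentioned above fuses features from different scales with learnable non-negative weights normalised by their sum ("fast normalized fusion" in the EfficientDet paper that introduced BiFPN). A minimal PyTorch sketch of that single fusion step; the module name, tensor shapes, and usage are illustrative, not taken from the paper's code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FastNormalizedFusion(nn.Module):
    """Weighted fusion of n same-shaped feature maps: sum(w_i * x_i) / (sum(w_i) + eps)."""
    def __init__(self, n_inputs: int, eps: float = 1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(n_inputs))
        self.eps = eps

    def forward(self, xs):
        w = F.relu(self.weights)            # keep fusion weights non-negative
        w = w / (w.sum() + self.eps)        # normalise so they sum to roughly one
        return sum(wi * xi for wi, xi in zip(w, xs))

# Two feature maps already resized to a common resolution (illustrative shapes).
p_top_down = torch.randn(1, 64, 40, 40)
p_lateral  = torch.randn(1, 64, 40, 40)
fused = FastNormalizedFusion(n_inputs=2)([p_top_down, p_lateral])
print(fused.shape)  # torch.Size([1, 64, 40, 40])
```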

21 pages, 9034 KB  
Article
TeaBudNet: A Lightweight Framework for Robust Small Tea Bud Detection in Outdoor Environments via Weight-FPN and Adaptive Pruning
by Yi Li, Zhiyan Zhang, Jie Zhang, Jingsha Shi, Xiaoyang Zhu, Bingyu Chen, Yi Lan, Yanling Jiang, Wanyi Cai, Xianming Tan, Zhaohong Lu, Hailin Peng, Dandan Tang, Yaning Zhu, Liqiang Tan, Kunhong Li, Feng Yang and Chenyao Yang
Agronomy 2025, 15(8), 1990; https://doi.org/10.3390/agronomy15081990 - 19 Aug 2025
Cited by 1 | Viewed by 1015
Abstract
The accurate detection of tea buds in outdoor environments is crucial for the intelligent management of modern tea plantations. However, this task remains challenging due to the small size of tea buds and the limited computational capabilities of the edge devices commonly used in the field. Existing object detection models are typically burdened by high computational costs and parameter loads while often delivering suboptimal accuracy, thus limiting their practical deployment. To address these challenges, we propose TeaBudNet, a lightweight and robust detection framework tailored for small tea bud identification under outdoor conditions. Central to our approach is the introduction of Weight-FPN, an enhanced variant of the BiFPN designed to preserve fine-grained spatial information, thereby improving detection sensitivity to small targets. Additionally, we incorporate a novel P2 detection layer that integrates high-resolution shallow features, enhancing the network’s ability to capture detailed contour information critical for precise localization. To further optimize efficiency, we present a Group–Taylor pruning strategy, which leverages Taylor expansion to perform structured, non-global pruning. This strategy ensures a consistent layerwise evaluation while significantly reducing computational overhead. Extensive experiments on a self-built multi-category tea dataset demonstrate that TeaBudNet surpasses state-of-the-art models, achieving +5.0% gains in AP@50 while reducing parameters and computational cost by 50% and 3%, respectively. The framework has been successfully deployed on Huawei Atlas 200I DKA2 developer kits in real-world tea plantation settings, underscoring its practical value and scalability for accurate outdoor tea bud detection. Full article
(This article belongs to the Special Issue Application of Machine Learning and Modelling in Food Crops)
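The Group–Taylor pruning strategy above builds on the first-order Taylor criterion, in which a filter's importance is approximated by how much the loss would change if the filter were removed, estimated from the filter's weights and their gradients. The paper's specific grouping and layerwise evaluation are not detailed in the abstract; the sketch below only shows the generic first-order Taylor score per convolutional filter, with a toy model and an assumed helper name:

```python
import torch
import torch.nn as nn

def taylor_filter_scores(conv: nn.Conv2d) -> torch.Tensor:
    """First-order Taylor importance per output filter: sum of |w * dL/dw| over the filter.
    Assumes a backward pass has already populated conv.weight.grad."""
    with torch.no_grad():
        contrib = (conv.weight * conv.weight.grad).abs()   # element-wise |w * grad|
        return contrib.sum(dim=(1, 2, 3))                  # one score per output filter

# Illustrative toy network and loss, not the paper's detector.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 1))
x = torch.randn(2, 3, 32, 32)
model(x).mean().backward()

scores = taylor_filter_scores(model[0])
print("filters ranked least to most important:", torch.argsort(scores).tolist())
```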

20 pages, 2415 KB  
Article
Integrated Transcriptomic and Targeted Metabolomic Analyses Elucidate the Molecular Mechanism Underlying Dihydromyricetin Synthesis in Nekemias grossedentata
by Fuwen Wu, Zhi Feng, Zhi Yao, Peiling Zhang, Yiqiang Wang and Meng Li
Plants 2025, 14(10), 1561; https://doi.org/10.3390/plants14101561 - 21 May 2025
Cited by 1 | Viewed by 920
Abstract
Nekemias grossedentata (Hand.-Mazz.) J. Wen & Z. L. Nie is a medicinal and edible plant with a high dihydromyricetin (DHM) content in its bud tips. Vine tea made from its bud tips has served as a health tea and Chinese herbal medicine for nearly 700 years. However, the molecular mechanisms underlying the high DHM content in N. grossedentata bud tips remain inadequately elucidated. This study conducted qualitative and quantitative analyses of bud tip flavonoids utilizing HPLC and targeted metabolomics. Core genes influencing the substantial synthesis of DHM in N. grossedentata were identified through integrated transcriptome and metabolome analyses. The results revealed that 65 flavonoid metabolites were detected in bud tips, with DHM as the predominant flavonoid (37.5%), followed by myricetin (0.144%) and taxifolin (0.141%). Correlation analysis revealed a significant positive correlation between NgF3′5′H3 expression and DHM content. Co-expression analysis and qRT-PCR validation demonstrated a significant positive correlation between NgMYB71 and NgF3′5′H3, with consistent expression trends across three periods and four tissues. Consequently, NgF3′5′H3 and NgMYB71 were identified as core genes influencing the substantial synthesis of DHM in N. grossedentata. Elevated NgMYB71 expression in bud tips induced high NgF3′5′H3 expression, facilitating extensive DHM synthesis in bud tips. Molecular docking analysis revealed that NgF3′5′H3 had a strong binding affinity for taxifolin. NgF3′5′H3 was the pivotal core node gene in the dihydromyricetin biosynthesis pathway in N. grossedentata and was highly expressed in bud tips. The strong specific binding of NgF3′5′H3 to dihydromyricetin precursor metabolites catalyzed their conversion into DHM, resulting in higher DHM contents in bud tips than in other tissues or plants. This study aimed to elucidate the molecular mechanisms underlying the substantial synthesis of DHM in N. grossedentata, providing a theoretical foundation for enhancing DHM production and developing N. grossedentata resources. Full article
(This article belongs to the Section Plant Genetics, Genomics and Biotechnology)
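The correlation analyses described above (for example, between NgF3′5′H3 expression and DHM content) are typically Pearson correlations computed across samples. A minimal SciPy sketch of that step; the expression and content values below are placeholders, not data from the study:

```python
from scipy.stats import pearsonr

# Placeholder values across six samples, not real measurements.
ngf35h3_expression = [12.1, 15.4, 30.2, 8.7, 25.9, 33.0]   # e.g., normalised expression
dhm_content        = [18.0, 22.5, 36.1, 14.2, 31.0, 38.4]  # e.g., mg/g dry weight

r, p_value = pearsonr(ngf35h3_expression, dhm_content)
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")
```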

19 pages, 3114 KB  
Article
PC-YOLO11s: A Lightweight and Effective Feature Extraction Method for Small Target Image Detection
by Zhou Wang, Yuting Su, Feng Kang, Lijin Wang, Yaohua Lin, Qingshou Wu, Huicheng Li and Zhiling Cai
Sensors 2025, 25(2), 348; https://doi.org/10.3390/s25020348 - 9 Jan 2025
Cited by 30 | Viewed by 7611
Abstract
Compared with conventional targets, small objects often face challenges such as smaller size, lower resolution, weaker contrast, and more background interference, making their detection more difficult. To address this issue, this paper proposes an improved small object detection method based on the YOLO11 model—PC-YOLO11s. The core innovation of PC-YOLO11s lies in the optimization of the detection network structure, which includes the following aspects: Firstly, PC-YOLO11s has adjusted the hierarchical structure of the detection network and added a P2 layer specifically for small object detection. By extracting the feature information of small objects in the high-resolution stage of the image, the P2 layer helps the network better capture small objects. At the same time, in order to reduce unnecessary calculations and lower the complexity of the model, we removed the P5 layer. In addition, we have introduced the coordinate spatial attention mechanism, which can help the network more accurately obtain the spatial and positional features required for small targets, thereby further improving detection accuracy. On the VisDrone2019 dataset, experimental results show that PC-YOLO11s outperforms other existing YOLO-series models in overall performance. Compared with the baseline YOLO11s model, the mAP@0.5 of PC-YOLO11s increased from 39.5% to 43.8%, mAP@0.5:0.95 increased from 23.6% to 26.3%, and the parameter count decreased from 9.416M to 7.103M. We also applied PC-YOLO11s to tea bud datasets, and experiments showed that its performance is superior to other YOLO-series models. Overall, PC-YOLO11s exhibits excellent performance in small object detection tasks, with strong accuracy improvements and good generalization ability, meeting the needs of small object detection in practical applications. Full article
(This article belongs to the Section Smart Agriculture)
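The mAP@0.5 and mAP@0.5:0.95 figures above both rest on box IoU: a prediction counts as a true positive when its IoU with a matched ground-truth box exceeds the threshold (0.5, or each threshold from 0.5 to 0.95 in steps of 0.05, averaged). A minimal sketch of IoU for axis-aligned boxes in (x1, y1, x2, y2) form; the coordinates are illustrative:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

pred_box = (50, 40, 120, 110)   # illustrative small-object prediction
gt_box   = (55, 45, 125, 115)
iou = box_iou(pred_box, gt_box)
print(f"IoU = {iou:.3f}; counted as a true positive at IoU >= 0.5: {iou >= 0.5}")
```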

19 pages, 13021 KB  
Article
GLS-YOLO: A Lightweight Tea Bud Detection Model in Complex Scenarios
by Shanshan Li, Zhe Zhang and Shijun Li
Agronomy 2024, 14(12), 2939; https://doi.org/10.3390/agronomy14122939 - 10 Dec 2024
Cited by 4 | Viewed by 1452
Abstract
The efficiency of tea bud harvesting has been greatly enhanced, and human labor intensity significantly reduced, through the mechanization and intelligent management of tea plantations. A key challenge for harvesting machinery is ensuring both the freshness of tea buds and the integrity of the tea plants. However, achieving precise harvesting requires complex computational models, which can limit practical deployment. To address the demand for high-precision yet lightweight tea bud detection, this study proposes the GLS-YOLO detection model, based on YOLOv8. The model leverages GhostNetV2 as its backbone network, replacing standard convolutions with depthwise separable convolutions, resulting in substantial reductions in computational load and memory consumption. Additionally, the C2f-LC module is integrated into the improved model, combining cross-covariance fusion with a lightweight contextual attention mechanism to enhance feature recognition and extraction quality. To tackle the challenges posed by varying poses and occlusions of tea buds, Shape-IoU was employed as the loss function to improve the scoring of similarly shaped objects, reducing false positives and false negatives while improving the detection of non-rectangular or irregularly shaped objects. Experimental results demonstrate the model’s superior performance, achieving an AP@0.5 of 90.55%. Compared to the original YOLOv8, the model size was reduced by 38.85%, and the number of parameters decreased by 39.95%. This study presents innovative advances in agricultural robotics by significantly improving the accuracy and efficiency of tea bud harvesting, simplifying the configuration process for harvesting systems, and effectively lowering the technological barriers for real-world applications. Full article
(This article belongs to the Section Precision and Digital Agriculture)
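The GhostNetV2 backbone above relies on depthwise separable convolutions, which replace a standard convolution with a per-channel (depthwise) convolution followed by a 1x1 (pointwise) convolution, cutting parameters and computation. A minimal PyTorch comparison of parameter counts; the layer sizes are illustrative, not the GLS-YOLO configuration:

```python
import torch.nn as nn

def n_params(module: nn.Module) -> int:
    return sum(p.numel() for p in module.parameters())

in_ch, out_ch, k = 128, 256, 3

standard = nn.Conv2d(in_ch, out_ch, k, padding=1)
depthwise_separable = nn.Sequential(
    nn.Conv2d(in_ch, in_ch, k, padding=1, groups=in_ch),  # depthwise: one 3x3 filter per channel
    nn.Conv2d(in_ch, out_ch, 1),                          # pointwise: 1x1 channel mixing
)

print("standard conv parameters:           ", n_params(standard))             # ~295k
print("depthwise separable conv parameters:", n_params(depthwise_separable))  # ~34k
```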

19 pages, 4786 KB  
Article
RT-DETR-Tea: A Multi-Species Tea Bud Detection Model for Unstructured Environments
by Yiyong Chen, Yang Guo, Jianlong Li, Bo Zhou, Jiaming Chen, Man Zhang, Yingying Cui and Jinchi Tang
Agriculture 2024, 14(12), 2256; https://doi.org/10.3390/agriculture14122256 - 10 Dec 2024
Cited by 7 | Viewed by 2290
Abstract
Accurate bud detection is a prerequisite for automatic tea picking and yield statistics; however, current research suffers from missed detections caused by limited variety coverage and from false detections under complex backgrounds. Traditional target detection models are mainly based on CNNs, but CNNs can only extract local feature information, which puts them at a disadvantage when accurately identifying targets in complex environments; Transformer architectures offer a good solution to this problem. Therefore, based on a multi-variety tea bud dataset, this study proposes RT-DETR-Tea, an improved object detection model under the real-time detection Transformer (RT-DETR) framework. This model uses cascaded group attention to replace the multi-head self-attention (MHSA) mechanism in the attention-based intra-scale feature interaction (AIFI) module, effectively optimizing deep features and enriching the semantic information of features. The original cross-scale feature-fusion module (CCFM) mechanism is improved to establish the gather-and-distribute-Tea (GD-Tea) mechanism for multi-level feature fusion, which can effectively fuse low-level and high-level semantic information and large and small tea bud features in natural environments. The DilatedReparamBlock submodule from UniRepLKNet was employed to improve RepC3 to achieve an efficient fusion of tea bud feature information and ensure the accuracy of the detection head. Ablation experiments show that the precision and mean average precision of the proposed RT-DETR-Tea model are 96.1% and 79.7%, respectively, which are increased by 5.2% and 2.4% compared to those of the original model, indicating the model’s effectiveness. The model also shows good detection performance on the newly constructed tea bud dataset. Compared with other detection algorithms, the improved RT-DETR-Tea model demonstrates superior tea bud detection performance, providing effective technical support for smart tea garden management and production. Full article
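The multi-head self-attention (MHSA) that RT-DETR-Tea replaces with cascaded group attention computes, for each head, softmax(QK^T / sqrt(d)) V over the feature tokens; the cascaded group variant itself is not reproduced here. A minimal single-head NumPy sketch of that baseline operation; the token count, dimensions, and random projections are illustrative:

```python
import numpy as np

def self_attention(x: np.ndarray, wq: np.ndarray, wk: np.ndarray, wv: np.ndarray) -> np.ndarray:
    """Single-head scaled dot-product self-attention over tokens x of shape (n, d)."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)          # softmax over keys
    return weights @ v

rng = np.random.default_rng(0)
n_tokens, d = 6, 8                                           # illustrative sizes
x = rng.normal(size=(n_tokens, d))
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(x, wq, wk, wv).shape)                   # (6, 8)
```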

14 pages, 3385 KB  
Article
Tea Bud Detection Model in a Real Picking Environment Based on an Improved YOLOv5
by Hongfei Li, Min Kong and Yun Shi
Biomimetics 2024, 9(11), 692; https://doi.org/10.3390/biomimetics9110692 - 13 Nov 2024
Cited by 3 | Viewed by 2266
Abstract
The detection of tea bud targets is the foundation of automated picking of premium tea. This article proposes a high-performance tea bud detection model to address issues such as complex environments, small target tea buds, and blurry device focus in tea bud detection. During the spring tea-picking stage, we collected tea bud images from mountainous tea gardens and annotated them. YOLOv5 tea is an improved version of YOLOv5, which uses the efficient Simplified Spatial Pyramid Pooling Fast (SimSPPF) in the backbone for easy deployment on tea bud-picking equipment. The neck network adopts the Bidirectional Feature Pyramid Network (BiFPN) structure. It fully integrates deep and shallow feature information, fusing features at different scales and improving the detection accuracy of focused fuzzy tea buds. It replaces the independent CBS convolution module in traditional neck networks with Omni-Dimensional Dynamic Convolution (ODConv), which learns separate weights along the spatial, input-channel, output-channel, and kernel dimensions to improve the detection of small targets and occluded tea buds. The experimental results show that the improved model increases precision, recall, and mean average precision by 4.4%, 2.3%, and 3.2%, respectively, compared to the initial model, and the inference speed of the model has also been improved. This study has theoretical and practical significance for tea bud harvesting in complex environments. Full article
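The SimSPPF block cited above is a variant of YOLOv5's SPPF, which applies the same small max-pool repeatedly and concatenates the intermediate results, approximating pooling at several receptive-field sizes at low cost; the "simplified" change is usually a cheaper activation, and the exact configuration is not given in the abstract. A rough PyTorch sketch of the SPPF idea; the channel sizes and ReLU choice are assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class SPPFSketch(nn.Module):
    """SPPF-style block: repeated 5x5 max-pooling, concatenation, then 1x1 fusion."""
    def __init__(self, channels: int):
        super().__init__()
        self.reduce = nn.Sequential(nn.Conv2d(channels, channels // 2, 1), nn.ReLU())
        self.pool = nn.MaxPool2d(kernel_size=5, stride=1, padding=2)
        self.fuse = nn.Sequential(nn.Conv2d(channels * 2, channels, 1), nn.ReLU())

    def forward(self, x):
        x = self.reduce(x)
        p1 = self.pool(x)
        p2 = self.pool(p1)
        p3 = self.pool(p2)
        return self.fuse(torch.cat([x, p1, p2, p3], dim=1))

out = SPPFSketch(64)(torch.randn(1, 64, 20, 20))
print(out.shape)  # torch.Size([1, 64, 20, 20])
```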

23 pages, 25042 KB  
Article
Segmentation Network for Multi-Shape Tea Bud Leaves Based on Attention and Path Feature Aggregation
by Tianci Chen, Haoxin Li, Jinhong Lv, Jiazheng Chen and Weibin Wu
Agriculture 2024, 14(8), 1388; https://doi.org/10.3390/agriculture14081388 - 17 Aug 2024
Cited by 2 | Viewed by 1368
Abstract
Accurately detecting tea bud leaves is crucial for the automation of tea picking robots. However, tea stem occlusion and the overlapping of buds and leaves present one bud–one leaf targets with varied shapes in the field of view, making precise segmentation of tea bud leaves challenging. To improve the segmentation accuracy of one bud–one leaf targets with different shapes and fine granularity, this study proposes a novel semantic segmentation model for tea bud leaves. The method designs a hierarchical Transformer block based on a self-attention mechanism in the encoding network, which is beneficial for capturing long-range dependencies between features and enhancing the representation of common features. Then, a multi-path feature aggregation module is designed to effectively merge the feature outputs of encoder blocks with decoder outputs, thereby alleviating the loss of fine-grained features caused by downsampling. Furthermore, a refined polarized attention mechanism is employed after the aggregation module to perform polarized filtering on features in channel and spatial dimensions, enhancing the output of fine-grained features. The experimental results demonstrate that the proposed Unet-Enhanced model achieves good segmentation performance on one bud–one leaf targets with different shapes, with a mean intersection over union (mIoU) of 91.18% and a mean pixel accuracy (mPA) of 95.10%. The semantic segmentation network can accurately segment tea bud leaves, providing a decision-making basis for the spatial positioning of tea picking robots. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
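The mIoU and mPA figures above follow the standard confusion-matrix definitions: per-class IoU is TP/(TP+FP+FN) and per-class pixel accuracy is TP/(TP+FN), each averaged over classes. A minimal NumPy sketch of that bookkeeping; the class labels and toy label maps are illustrative, not outputs of the proposed network:

```python
import numpy as np

def miou_and_mpa(pred: np.ndarray, gt: np.ndarray, n_classes: int):
    """Mean IoU and mean pixel accuracy from integer label maps of the same shape."""
    conf = np.zeros((n_classes, n_classes), dtype=np.int64)
    for g, p in zip(gt.ravel(), pred.ravel()):
        conf[g, p] += 1                       # rows: ground truth, columns: prediction
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    iou = tp / (tp + fp + fn + 1e-9)
    pixel_acc = tp / (tp + fn + 1e-9)
    return iou.mean(), pixel_acc.mean()

# Toy 3-class label maps (0 = background, 1 = bud, 2 = leaf), illustrative only.
gt   = np.array([[0, 0, 1, 1], [0, 2, 2, 1], [0, 2, 2, 0]])
pred = np.array([[0, 0, 1, 1], [0, 2, 1, 1], [0, 2, 2, 0]])
print(miou_and_mpa(pred, gt, n_classes=3))
```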

16 pages, 8874 KB  
Article
Recognition Model for Tea Grading and Counting Based on the Improved YOLOv8n
by Yuxin Xia, Zejun Wang, Zhiyong Cao, Yaping Chen, Limei Li, Lijiao Chen, Shihao Zhang, Chun Wang, Hongxu Li and Baijuan Wang
Agronomy 2024, 14(6), 1251; https://doi.org/10.3390/agronomy14061251 - 10 Jun 2024
Cited by 12 | Viewed by 2031
Abstract
Grading tea leaves efficiently in a natural environment is a crucial technological foundation for the automation of tea-picking robots. In this study, to solve the problems of dense distribution, limited feature-extraction ability, and false detection in the field of tea grading recognition, an improved YOLOv8n model for tea grading and counting recognition was proposed. Firstly, the SPD-Conv module was embedded into the backbone of the network model to enhance the deep feature-extraction ability of the target. Secondly, the Super-Token Vision Transformer was integrated to reduce the model’s attention to redundant information, thus improving its perception ability for tea. Subsequently, the loss function was improved to MPDIoU, which accelerated the convergence speed and optimized the performance. Finally, a classification-positioning counting function was added to enable classification counting. The experimental results showed that, compared to the original model, the precision, recall and average precision improved by 17.6%, 19.3%, and 18.7%, respectively. The average precision values for single bud, one bud with one leaf, and one bud with two leaves were 88.5%, 89.5%, and 89.1%, respectively. In this study, the improved model demonstrated strong robustness and proved suitable for tea grading and edge-picking equipment, laying a solid foundation for the mechanization of the tea industry. Full article
(This article belongs to the Section Precision and Digital Agriculture)
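My reading of the MPDIoU loss mentioned above is that it augments IoU with the normalised squared distances between the corresponding top-left and bottom-right corners of the predicted and ground-truth boxes, with the loss taken as 1 - MPDIoU; treat the sketch below as an approximation under that assumption rather than the authors' implementation:

```python
def mpdiou(pred, gt, img_w, img_h):
    """MPDIoU for boxes (x1, y1, x2, y2): IoU minus normalised squared corner distances."""
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((pred[2] - pred[0]) * (pred[3] - pred[1])
             + (gt[2] - gt[0]) * (gt[3] - gt[1]) - inter)
    iou = inter / (union + 1e-9)
    d1 = (pred[0] - gt[0]) ** 2 + (pred[1] - gt[1]) ** 2   # top-left corner distance^2
    d2 = (pred[2] - gt[2]) ** 2 + (pred[3] - gt[3]) ** 2   # bottom-right corner distance^2
    norm = img_w ** 2 + img_h ** 2
    return iou - d1 / norm - d2 / norm

# Illustrative boxes on an assumed 640x640 input.
print("MPDIoU loss:", 1.0 - mpdiou((100, 80, 220, 200), (110, 90, 230, 210), 640, 640))
```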

24 pages, 8173 KB  
Article
Tea-YOLOv8s: A Tea Bud Detection Model Based on Deep Learning and Computer Vision
by Shuang Xie and Hongwei Sun
Sensors 2023, 23(14), 6576; https://doi.org/10.3390/s23146576 - 21 Jul 2023
Cited by 49 | Viewed by 5625
Abstract
Tea bud target detection is essential for mechanized selective harvesting. To address the challenges of low detection precision caused by the complex backgrounds of tea leaves, this paper introduces a novel model called Tea-YOLOv8s. First, multiple data augmentation techniques are employed to increase the amount of information in the images and improve their quality. Then, the Tea-YOLOv8s model combines deformable convolutions, attention mechanisms, and improved spatial pyramid pooling, thereby enhancing the model’s ability to learn complex object invariance, reducing interference from irrelevant factors, and enabling multi-feature fusion, resulting in improved detection precision. Finally, the improved YOLOv8 model is compared with other models to validate the effectiveness of the proposed improvements. The research results demonstrate that the Tea-YOLOv8s model achieves a mean average precision of 88.27% and an inference time of 37.1 ms, with the parameters and computation increasing by 15.4 M and 17.5 G, respectively. In conclusion, although the proposed approach increases the model’s parameters and calculation amount, it significantly improves various aspects compared to mainstream YOLO detection models and has the potential to be applied to mechanized tea bud picking equipment. Full article
(This article belongs to the Special Issue Perception and Imaging for Smart Agriculture)
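The mean average precision reported above is, per class, the area under the precision-recall curve built from confidence-ranked detections, then averaged over classes. A rough sketch of the per-class AP computation in the all-point-interpolation style; the detections, match flags, and ground-truth count are illustrative, and a real evaluation also handles per-image matching at a chosen IoU threshold:

```python
import numpy as np

def average_precision(scores, is_tp, n_gt):
    """AP as the area under the precision-recall curve (all-point interpolation)."""
    order = np.argsort(scores)[::-1]                  # rank detections by confidence
    hits = np.asarray(is_tp, dtype=float)[order]
    tp = np.cumsum(hits)
    fp = np.cumsum(1.0 - hits)
    recall = np.concatenate(([0.0], tp / n_gt))
    precision = np.concatenate(([1.0], tp / (tp + fp)))
    precision = np.maximum.accumulate(precision[::-1])[::-1]   # monotone envelope
    return float(np.sum(np.diff(recall) * precision[1:]))

# Illustrative detections for one class (matched at IoU >= 0.5), not data from the paper.
scores = [0.95, 0.90, 0.80, 0.70, 0.60]
is_tp  = [1, 1, 0, 1, 0]
print(f"AP = {average_precision(scores, is_tp, n_gt=4):.4f}")   # 0.6875 for this toy case
```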

24 pages, 13562 KB  
Article
Tea Bud and Picking Point Detection Based on Deep Learning
by Junquan Meng, Yaxiong Wang, Jiaming Zhang, Siyuan Tong, Chongchong Chen, Chenxi Zhang, Yilin An and Feng Kang
Forests 2023, 14(6), 1188; https://doi.org/10.3390/f14061188 - 8 Jun 2023
Cited by 14 | Viewed by 2957
Abstract
The tea industry is one of China’s most important industries. The picking of famous tea still relies on manual methods, with low efficiency, labor shortages and high labor costs, which restrict the development of the tea industry. These labor-intensive picking methods urgently need to be transformed into intelligent and automated picking. In response to difficulties in identification of tea buds and positioning of picking points, this study took the one bud with one leaf grade of the Fuyun 6 tea species under complex backgrounds as the research object, and proposed a method based on deep learning, combining object detection and semantic segmentation networks, to first detect the tea buds, then segment the picking area from the tea bud detection box, and then obtain the picking point from the picking area. An improved YOLOX-tiny model and an improved PSP-net model were used to detect tea buds and their picking areas, respectively; the two models were combined at the inference end, and the centroid of the picking area was taken as the picking point. The YOLOX-tiny model for tea bud detection was modified by replacing its activation function with the Mish function and using a content-aware reassembly of features module to implement the upsampling operation. The detection effects of the YOLOX-tiny model were improved, and the mean average precision and recall rate of the improved model reached 97.42% and 95.09%, respectively. This study also proposed an improved PSP-net semantic segmentation model for segmenting the picking area inside a detection box. The PSP-net was modified by replacing its backbone network with the lightweight network MobileNetV2 and by replacing conventional convolution in its feature fusion part with Omni-Dimensional Dynamic Convolution. The model’s lightweight characteristics were significantly improved and its segmentation accuracy for the picking area was also improved. The mean intersection over union and mean pixel accuracy of the improved PSP-net model are 88.83% and 92.96%, respectively, while its computation and parameter amounts are reduced by 95.71% and 96.10%, respectively, compared to the original PSP-net. The method proposed in this study achieves a mean intersection over union and mean pixel accuracy of 83.27% and 86.51% for the overall picking area segmentation, respectively, and the detection rate of picking point identification reaches 95.6%. Moreover, its detection speed satisfies the requirements of real-time detection, providing a theoretical basis for the automated picking of famous tea. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
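The last step described above, taking the centroid of the segmented picking area as the picking point, reduces to averaging the coordinates of the mask's foreground pixels. A minimal NumPy sketch; the toy mask stands in for the PSP-net output inside a detection box and is illustrative only:

```python
import numpy as np

def mask_centroid(mask: np.ndarray) -> tuple:
    """Return the (row, col) centroid of the True pixels in a binary mask."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        raise ValueError("empty mask: no picking area detected")
    return float(ys.mean()), float(xs.mean())

# Toy picking-area mask inside a detection box, illustrative only.
mask = np.zeros((8, 8), dtype=bool)
mask[3:6, 2:5] = True
print("picking point (row, col):", mask_centroid(mask))
```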
