Search Results (25)

Search Parameters:
Keywords = grape picking

21 pages, 12646 KB  
Article
A Vision-Based Information Processing Framework for Vineyard Grape Picking Using Two-Stage Segmentation and Morphological Perception
by Yifei Peng, Jun Sun, Zhaoqi Wu, Jinye Gao, Lei Shi and Zhiyan Shi
Horticulturae 2025, 11(9), 1039; https://doi.org/10.3390/horticulturae11091039 - 2 Sep 2025
Viewed by 104
Abstract
To achieve efficient vineyard grape picking, a vision-based information processing framework integrating two-stage segmentation with morphological perception is proposed. In the first stage, an improved YOLOv8s-seg model is employed for coarse segmentation, incorporating two key enhancements: first, a dynamic deformation feature aggregation module (DDFAM), which facilitates the extraction of complex structural and morphological features; and second, an efficient asymmetric decoupled head (EADHead), which improves boundary awareness while reducing parameter redundancy. Compared with mainstream segmentation models, the improved model achieves superior performance, attaining the highest mAP@0.5 of 86.75%, a lightweight structure with 10.34 M parameters, and a real-time inference speed of 10.02 ms per image. In the second stage, the fine segmentation of fruit stems is performed using an improved OTSU thresholding algorithm, which is applied to a single-channel image derived from the hue component of the HSV color space, thereby enhancing robustness under complex lighting conditions. Morphological features extracted from the preprocessed fruit stem, including centroid coordinates and a skeleton constructed via medial axis transform (MAT), are further utilized to establish the spatial relationships with a picking point and cutting axis. The visualization analysis confirms the high feasibility and adaptability of the proposed framework, providing essential technical support for the automation of grape harvesting. Full article
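
The second-stage thresholding described above can be illustrated with a minimal sketch of the standard Otsu algorithm; note the paper uses an improved variant applied to the hue channel of an HSV conversion, while here a synthetic single-channel image stands in for the real data:

```python
import numpy as np

def otsu_threshold(channel: np.ndarray) -> int:
    """Return the Otsu threshold (maximizing between-class variance)
    for a uint8 single-channel image."""
    hist = np.bincount(channel.ravel(), minlength=256).astype(float)
    probs = hist / hist.sum()
    mu_total = np.dot(np.arange(256), probs)
    best_t, best_var = 0, -1.0
    cum_w = 0.0   # cumulative class-0 probability
    cum_mu = 0.0  # cumulative class-0 mean numerator
    for t in range(256):
        cum_w += probs[t]
        cum_mu += t * probs[t]
        if cum_w == 0.0 or cum_w >= 1.0:
            continue
        mu0 = cum_mu / cum_w
        mu1 = (mu_total - cum_mu) / (1.0 - cum_w)
        between = cum_w * (1.0 - cum_w) * (mu0 - mu1) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# Hypothetical hue image: dark stem pixels (~20) on a bright background (~200)
img = np.full((32, 32), 200, dtype=np.uint8)
img[10:22, 14:18] = 20          # 12x4 stem-like region
t = otsu_threshold(img)
mask = img <= t                 # binary fruit-stem mask
```

In practice the single-channel input would be the hue plane of an HSV conversion (e.g. `cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[..., 0]`); the medial axis transform and picking-point geometry are omitted here.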

17 pages, 5705 KB  
Article
Cherry Tomato Bunch and Picking Point Detection for Robotic Harvesting Using an RGB-D Sensor and a StarBL-YOLO Network
by Pengyu Li, Ming Wen, Zhi Zeng and Yibin Tian
Horticulturae 2025, 11(8), 949; https://doi.org/10.3390/horticulturae11080949 - 11 Aug 2025
Viewed by 585
Abstract
For fruit harvesting robots, rapid and accurate detection of fruits and picking points is one of the main challenges for their practical deployment. Several fruits typically grow in clusters or bunches, such as grapes, cherry tomatoes, and blueberries. For such clustered fruits, it is preferable to pick them by the bunch rather than individually. This study proposes utilizing a low-cost off-the-shelf RGB-D sensor mounted on the end effector and a lightweight improved YOLOv8-Pose neural network to detect cherry tomato bunches and picking points for robotic harvesting. The problem of occlusion and overlap is alleviated by merging RGB and depth images from the RGB-D sensor. To enhance detection robustness in complex backgrounds and reduce the complexity of the model, the Starblock module from StarNet and the coordinate attention mechanism are incorporated into the YOLOv8-Pose network, termed StarBL-YOLO, to improve the efficiency of feature extraction and reinforce spatial information. Additionally, we replaced the original OKS loss function with the L1 loss function for keypoint loss calculation, which improves the accuracy of picking point localization. The proposed method has been evaluated on a dataset with 843 cherry tomato RGB-D image pairs acquired by a harvesting robot at a commercial greenhouse farm. Experimental results demonstrate that the proposed StarBL-YOLO model achieves a 12% reduction in model parameters compared to the original YOLOv8-Pose while improving detection accuracy for cherry tomato bunches and picking points. Specifically, the model shows significant improvements across all metrics: for computational efficiency, model size (−11.60%) and GFLOPs (−7.23%); for pickable bunch detection, mAP50 (+4.4%) and mAP50-95 (+4.7%); for non-pickable bunch detection, mAP50 (+8.0%) and mAP50-95 (+6.2%); and for picking point detection, mAP50 (+4.3%), mAP50-95 (+4.6%), and RMSE (−23.98%). 
These results validate that StarBL-YOLO substantially enhances detection accuracy for cherry tomato bunches and picking points while improving computational efficiency, which is valuable for resource-constrained edge-computing deployment for harvesting robots. Full article
(This article belongs to the Special Issue Advanced Automation for Tree Fruit Orchards and Vineyards)
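
The keypoint-loss substitution described above (L1 in place of OKS) reduces, in essence, to a mean absolute error over annotated keypoints. A minimal sketch with hypothetical coordinates, assuming the loss is averaged over visible keypoints only (the real loss sits inside the YOLOv8-Pose training pipeline):

```python
import numpy as np

def l1_keypoint_loss(pred: np.ndarray, target: np.ndarray,
                     visible: np.ndarray) -> float:
    """Mean L1 error over visible keypoints.

    pred, target: (N, K, 2) arrays of (x, y) keypoint coordinates.
    visible:      (N, K) boolean mask of annotated keypoints.
    """
    diff = np.abs(pred - target).sum(axis=-1)  # L1 distance per keypoint
    return float(diff[visible].mean())

# One image, two keypoints (e.g. bunch center and picking point)
pred = np.array([[[10.0, 12.0], [30.0, 31.0]]])
target = np.array([[[11.0, 12.0], [28.0, 30.0]]])
vis = np.array([[True, True]])
loss = l1_keypoint_loss(pred, target, vis)  # (1 + 3) / 2 = 2.0
```

Unlike OKS, which normalizes error by object scale and a per-keypoint falloff constant, the plain L1 objective penalizes pixel offsets directly, which is plausibly why it sharpens picking-point localization here.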

30 pages, 92065 KB  
Article
A Picking Point Localization Method for Table Grapes Based on PGSS-YOLOv11s and Morphological Strategies
by Jin Lu, Zhongji Cao, Jin Wang, Zhao Wang, Jia Zhao and Minjie Zhang
Agriculture 2025, 15(15), 1622; https://doi.org/10.3390/agriculture15151622 - 26 Jul 2025
Viewed by 418
Abstract
During the automated picking of table grapes, the automatic recognition and segmentation of grape pedicels, along with the positioning of picking points, are vital to all subsequent operations of the harvesting robot. In a real plantation scene, however, it is extremely difficult to accurately and efficiently identify and segment grape pedicels and then reliably locate the picking points. This is attributable to the low distinguishability between grape pedicels and the surrounding environment, such as branches, and to the impacts of other conditions like weather, lighting, and occlusion, coupled with the requirement to deploy the model on edge devices with limited computing resources. To address these issues, this study proposes a novel picking point localization method for table grapes based on an instance segmentation network called Progressive Global-Local Structure-Sensitive Segmentation (PGSS-YOLOv11s) and a simple combination strategy of morphological operators. More specifically, the PGSS-YOLOv11s network is composed of the original YOLOv11s-seg backbone, a spatial feature aggregation module (SFAM), an adaptive feature fusion module (AFFM), and a detail-enhanced convolutional shared detection head (DE-SCSH). The network was trained on a new grape segmentation dataset, Grape-⊥, which includes 4455 pixel-level grape instances annotated with ⊥-shaped regions. After PGSS-YOLOv11s segments the ⊥-shaped regions of grapes, morphological operations such as erosion, dilation, and skeletonization are combined to extract grape pedicels and locate picking points. Finally, several experiments have been conducted to confirm the validity, effectiveness, and superiority of the proposed method. 
Compared with the other state-of-the-art models, the main metrics F1 score and mask mAP@0.5 of the PGSS-YOLOv11s reached 94.6% and 95.2% on the Grape-⊥ dataset, as well as 85.4% and 90.0% on the Winegrape dataset. Multi-scenario tests indicated that the success rate of positioning the picking points reached up to 89.44%. In orchards, real-time tests on the edge device demonstrated the practical performance of our method. Nevertheless, for grapes with short pedicels or occluded pedicels, the designed morphological algorithm exhibited the loss of picking point calculations. In future work, we will enrich the grape dataset by collecting images under different lighting conditions, from various shooting angles, and including more grape varieties to improve the method’s generalization performance. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
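
The erosion and dilation steps mentioned above can be sketched in plain NumPy; this is a generic 3×3 opening for mask cleanup, not the authors' exact operator sequence, and skeletonization is omitted:

```python
import numpy as np

def erode(mask: np.ndarray) -> np.ndarray:
    """Binary erosion with a 3x3 cross structuring element."""
    p = np.pad(mask, 1)
    return (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
            & p[1:-1, :-2] & p[1:-1, 2:])

def dilate(mask: np.ndarray) -> np.ndarray:
    """Binary dilation with a 3x3 cross structuring element."""
    p = np.pad(mask, 1)
    return (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
            | p[1:-1, :-2] | p[1:-1, 2:])

def opening(mask: np.ndarray) -> np.ndarray:
    """Erosion then dilation: removes speckle noise, keeps larger regions."""
    return dilate(erode(mask))

mask = np.zeros((9, 9), dtype=bool)
mask[2:7, 3:6] = True   # pedicel-like blob
mask[0, 0] = True       # isolated noise pixel
clean = opening(mask)   # noise pixel gone, blob core preserved
```

A production pipeline would more likely call `scipy.ndimage.binary_opening` or OpenCV's `cv2.morphologyEx`, then thin the cleaned mask to a skeleton before choosing the cut point.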

29 pages, 10327 KB  
Article
Simulation and Testing of Grapevine Branch Crushing and Collection Components
by Lei He, Zhimin Wang, Long Song, Pengyu Bao and Silin Cao
Agriculture 2024, 14(9), 1583; https://doi.org/10.3390/agriculture14091583 - 11 Sep 2024
Cited by 2 | Viewed by 1167
Abstract
To address the low resource-utilization rate of the large quantities of pruned grape branches and the high cost of removing them from the vineyard, we designed a grape branch picking and crushing collection machine that integrates windrowing (strip collection), pick-up, crushing, and collecting operations. The crushing and collecting components of the machine were simulated, analyzed, and tested. Using numerical simulation combined with prior measurements of the branch material properties, the branch crushing process was simulated in LS-DYNA. The analysis showed that branch failure involves not only knife cutting but also bending fracture on the side opposite the cut. As the knife roller speed increases, the cutting resistance of the tool increases, reaching 2690 N at 2500 r/min. In cutting simulations under different tool edge angles, the cutting resistance is smallest, at 1860 N, when the edge angle is 55°, making this edge angle more suitable for branch crushing and cutting. Fluent software was used to analyze the airflow field of the pulverizing device: as the knife roller speed increases, the inlet flow and negative pressure of the pulverizing chamber increase. At a knife roller speed of 2500 r/min, the inlet flow rate and negative pressure are 1.92 kg/s and 37.16 Pa, respectively, which favors the feeding of the branches; however, an excessively high speed also strengthens the vortex in some areas within the pulverizing device, which in turn hinders both the feeding of the branches and the discharge of pulverized material. 
Therefore, the speed range of the pulverizing knife roller was finally determined to be 1800~2220 r/min. A modal analysis of the crushing knife roller in the ANSYS Modal module yielded its first six natural frequencies and mode shapes; the lowest natural frequency, 137.42 Hz, is much higher than the knife roller's operating frequency of 37 Hz, so the machine will not resonate during operation. The research results can provide a theoretical basis and technical support for the crushing and collection of other similar crops. Full article
(This article belongs to the Section Agricultural Technology)
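
The resonance check reported above is simple arithmetic: the knife roller's rotational frequency at the top of the selected speed range must stay well below the lowest modal natural frequency. A sketch using the figures from the abstract:

```python
def operating_freq_hz(rpm: float) -> float:
    """Rotational frequency of the knife roller in Hz (revolutions per second)."""
    return rpm / 60.0

f_op = operating_freq_hz(2220)  # top of the 1800~2220 r/min range -> 37.0 Hz
f_nat = 137.42                  # lowest modal natural frequency from the abstract (Hz)
safe = f_nat > f_op             # large margin, so no resonance is expected
```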

19 pages, 15752 KB  
Article
Research on a Trellis Grape Stem Recognition Method Based on YOLOv8n-GP
by Tong Jiang, Yane Li, Hailin Feng, Jian Wu, Weihai Sun and Yaoping Ruan
Agriculture 2024, 14(9), 1449; https://doi.org/10.3390/agriculture14091449 - 25 Aug 2024
Cited by 7 | Viewed by 1607
Abstract
Grapes are an important cash crop that contributes to the rapid development of the agricultural economy. The harvesting of ripe fruits is one of the crucial steps in the grape production process. However, at present, the picking methods are mainly manual, resulting in wasted time and high costs. Therefore, it is particularly important to implement intelligent grape picking, in which the accurate detection of grape stems is a key step to achieve intelligent harvesting. In this study, a trellis grape stem detection model, YOLOv8n-GP, was proposed by combining the SENetV2 attention module and CARAFE upsampling operator with YOLOv8n-pose. Specifically, this study first embedded the SENetV2 attention module at the bottom of the backbone network to enhance the model’s ability to extract key feature information. Then, we utilized the CARAFE upsampling operator to replace the upsampling modules in the neck network, expanding the sensory field of the model without increasing its parameters. Finally, to validate the detection performance of YOLOv8n-GP, we examined the effectiveness of the various keypoint detection models constructed with YOLOv8n-pose, YOLOv5-pose, YOLOv7-pose, and YOLOv7-Tiny-pose. Experimental results show that the precision, recall, mAP, and mAP-kp of YOLOv8n-GP reached 91.6%, 91.3%, 97.1%, and 95.4%, which improved by 3.7%, 3.6%, 4.6%, and 4.0%, respectively, compared to YOLOv8n-pose. Furthermore, YOLOv8n-GP exhibits superior detection performance compared with the other keypoint detection models in terms of each evaluation indicator. The experimental results demonstrate that YOLOv8n-GP can detect trellis grape stems efficiently and accurately, providing technical support for advancing intelligent grape harvesting. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

21 pages, 10587 KB  
Article
Detection and Instance Segmentation of Grape Clusters in Orchard Environments Using an Improved Mask R-CNN Model
by Xiang Huang, Dongdong Peng, Hengnian Qi, Lei Zhou and Chu Zhang
Agriculture 2024, 14(6), 918; https://doi.org/10.3390/agriculture14060918 - 10 Jun 2024
Cited by 12 | Viewed by 2198
Abstract
Accurately segmenting grape clusters and detecting grape varieties in orchards helps orchard staff understand the distribution, yield, and growth information of different grapes and supports efficient mechanical harvesting. However, factors such as lighting changes, grape overlap, branch and leaf occlusion, similarity between fruit and background colors, and the high similarity between some grape varieties make the identification and segmentation of grape clusters of different varieties very difficult. To resolve these difficulties, this study proposed an improved Mask R-CNN model by assembling an efficient channel attention (ECA) module into the residual layer of the backbone network and a dual attention network (DANet) into the mask branch. The experimental results showed that the improved Mask R-CNN model can accurately segment clusters of eight grape varieties under various conditions. The bbox_mAP and mask_mAP on the test set were 0.905 and 0.821, respectively, 1.4% and 1.5% higher than those of the original Mask R-CNN model. The effectiveness of the ECA and DANet modules on other instance segmentation models was also explored for comparison, providing a reference for model improvement and optimization. The results of the improved Mask R-CNN model were superior to those of other classic instance segmentation models, indicating that the improved model can effectively, rapidly, and accurately segment grape clusters and detect grape varieties in orchards. This study provides technical support for orchard staff and grape-picking robots to pick grapes intelligently. Full article
(This article belongs to the Special Issue Advanced Image Processing in Agricultural Applications)
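
The ECA idea referenced above (global average pooling, local 1-D cross-channel interaction, sigmoid gating) can be sketched in NumPy; a fixed averaging kernel stands in for the learned 1-D convolution weights of the real module:

```python
import numpy as np

def eca(x: np.ndarray, k: int = 3) -> np.ndarray:
    """Efficient Channel Attention sketch on a (C, H, W) feature map.

    Global average pooling yields one descriptor per channel; a 1-D
    convolution of size k across the channel axis models local
    cross-channel interaction; a sigmoid gate then rescales each channel.
    """
    c = x.shape[0]
    desc = x.mean(axis=(1, 2))            # (C,) per-channel descriptors
    pad = k // 2
    padded = np.pad(desc, pad, mode="edge")
    w = np.ones(k) / k                    # illustrative fixed kernel;
                                          # learned in the actual module
    conv = np.array([np.dot(padded[i:i + k], w) for i in range(c)])
    gate = 1.0 / (1.0 + np.exp(-conv))    # sigmoid, values in (0, 1)
    return x * gate[:, None, None]        # channel-wise reweighting

x = np.random.rand(8, 4, 4)
y = eca(x)
```

The appeal of ECA over heavier attention blocks is that it adds only k weights per layer, which is why it can be dropped into a residual backbone at little cost.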

15 pages, 22709 KB  
Article
Lightweight-Improved YOLOv5s Model for Grape Fruit and Stem Recognition
by Junhong Zhao, Xingzhi Yao, Yu Wang, Zhenfeng Yi, Yuming Xie and Xingxing Zhou
Agriculture 2024, 14(5), 774; https://doi.org/10.3390/agriculture14050774 - 17 May 2024
Cited by 8 | Viewed by 1833
Abstract
Mechanized harvesting is the key technology to solving the high cost and low efficiency of manual harvesting, and the key to realizing mechanized harvesting lies in the accurate and fast identification and localization of targets. In this paper, a lightweight YOLOv5s model is improved for efficiently identifying grape fruits and stems. On the one hand, it improves the CSP module in YOLOv5s using the Ghost module, reducing model parameters through ghost feature maps and cost-effective linear operations. On the other hand, it replaces traditional convolutions with deep convolutions to further reduce the model’s computational load. The model is trained on datasets under different environments (normal light, low light, strong light, noise) to enhance the model’s generalization and robustness. The model is applied to the recognition of grape fruits and stems, and the experimental results show that the overall accuracy, recall rate, mAP, and F1 score of the model are 96.8%, 97.7%, 98.6%, and 97.2% respectively. The average detection time on a GPU is 4.5 ms, with a frame rate of 221 FPS, and the weight size generated during training is 5.8 MB. Compared to the original YOLOv5s, YOLOv5m, YOLOv5l, and YOLOv5x models under the specific orchard environment of a grape greenhouse, the proposed model improves accuracy by 1%, decreases the recall rate by 0.2%, increases the F1 score by 0.4%, and maintains the same mAP. In terms of weight size, it is reduced by 61.1% compared to the original model, and is only 1.8% and 5.5% of the Faster-RCNN and SSD models, respectively. The FPS is increased by 43.5% compared to the original model, and is 11.05 times and 8.84 times that of the Faster-RCNN and SSD models, respectively. On a CPU, the average detection time is 23.9 ms, with a frame rate of 41.9 FPS, representing a 31% improvement over the original model. 
The test results demonstrate that the lightweight-improved YOLOv5s model proposed in the study, while maintaining accuracy, significantly reduces the model size, enhances recognition speed, and can provide fast and accurate identification and localization for robotic harvesting. Full article
(This article belongs to the Section Agricultural Technology)
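
The parameter savings from the Ghost module come from producing only a fraction of the output channels with a standard convolution and generating the rest as cheap "ghost" maps. A rough parameter count under the standard GhostNet formulation (ratio s = 2, cheap-op kernel d = 3 are assumed here; the paper's exact configuration is not stated in the abstract):

```python
def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weights in a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def ghost_params(c_in: int, c_out: int, k: int, s: int = 2, d: int = 3) -> int:
    """Ghost module: a primary conv produces c_out/s intrinsic maps;
    cheap d x d depthwise ops generate the remaining (s-1) ghost maps each."""
    primary = c_in * (c_out // s) * k * k
    cheap = (c_out // s) * (s - 1) * d * d
    return primary + cheap

std = conv_params(64, 128, 3)     # 73728 weights
ghost = ghost_params(64, 128, 3)  # 36864 + 576 = 37440 weights, ~0.51x
```

With s = 2 the module roughly halves the convolution weights, which is consistent with the large model-size reductions reported for Ghost-based CSP variants.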

15 pages, 7267 KB  
Article
An Unstructured Orchard Grape Detection Method Utilizing YOLOv5s
by Wenhao Wang, Yun Shi, Wanfu Liu and Zijin Che
Agriculture 2024, 14(2), 262; https://doi.org/10.3390/agriculture14020262 - 6 Feb 2024
Cited by 14 | Viewed by 2265
Abstract
Rising labor costs and a workforce shortage have impeded the development and economic benefits of the global grape industry. Research and development of intelligent grape harvesting technologies is desperately needed. Therefore, rapid and accurate identification of grapes is crucial for intelligent grape harvesting. However, object detection algorithms encounter multiple challenges in unstructured vineyards, such as similar background colors, light obstruction from greenhouses and leaves, and fruit occlusion. All of these factors contribute to the difficulty of correctly identifying grapes. The GrapeDetectNet (GDN), based on the YOLO (You Only Look Once) v5s, is proposed to improve grape detection accuracy and recall in unstructured vineyards. Dual-channel feature extraction attention (DCFE) is a new attention structure introduced in GDN. We also use dynamic snake convolution (DS-Conv) in the backbone network. We collected an independent dataset of 1280 images after a strict selection process to evaluate GDN’s performance. The dataset encompasses examples of Shine Muscat and unripe Kyoho grapes, covering a range of complex outdoor situations. The results of the experiment demonstrate that GDN performed outstandingly on this dataset. Compared to YOLOv5s, the model improved mAP0.5:0.95 by 2.02%, mAP0.5 by 2.5%, precision by 1.4%, recall by 1.6%, and F1 score by 1.5%. Finally, we test the method on a grape-picking robot, and the results show that our algorithm works remarkably well in harvesting experiments. The results indicate that the GDN grape detection model in this study exhibits high detection accuracy. It is proficient in identifying grapes and demonstrates good robustness in unstructured vineyards, providing a valuable empirical reference for the practical application of intelligent grape harvesting technology. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

24 pages, 12270 KB  
Article
Realtime Picking Point Decision Algorithm of Trellis Grape for High-Speed Robotic Cut-and-Catch Harvesting
by Zhujie Xu, Jizhan Liu, Jie Wang, Lianjiang Cai, Yucheng Jin, Shengyi Zhao and Binbin Xie
Agronomy 2023, 13(6), 1618; https://doi.org/10.3390/agronomy13061618 - 15 Jun 2023
Cited by 27 | Viewed by 3500
Abstract
For high-speed robotic cut-and-catch harvesting, efficient trellis grape recognition and picking point positioning are crucial factors. In this study, a new method for the rapid positioning of picking points based on synchronous inference for multiple grapes was proposed. Firstly, a three-dimensional region of interest for a finite number of grapes was constructed according to the “eye to hand” configuration. Then, a feature-enhanced recognition deep learning model called YOLO v4-SE, combined with multi-channel inputs of RGB and depth images, was put forward to identify occluded or overlapping grapes and synchronously infer picking points above the prediction boxes of the grapes imaged completely in the three-dimensional region of interest (ROI). Finally, the accuracy of each dimension of the picking points was corrected, and the global continuous picking sequence was planned in the three-dimensional ROI. The recognition experiment in the field showed that YOLO v4-SE has good detection performance on various samples with different interference. The positioning experiment, using different numbers of grape bunches from the field, demonstrated that the average recognition success rate is 97% and the average positioning success rate is 93.5%; the average recognition time is 0.0864 s; and the average positioning time is 0.0842 s. The average positioning errors in the x, y, and z directions are 2.598, 2.012, and 1.378 mm, respectively. The average Euclidean distance between the true picking point and the predicted picking point is 7.69 mm. In field synchronous harvesting experiments with different fruiting densities, the average recognition success rate is 97%; the average positioning success rate is 93.6%; and the average picking success rate is 92.78%. The average picking speed is 6.18 s·bunch⁻¹, which meets the harvesting requirements for high-speed cut-and-catch harvesting robots. 
This method is promising for overcoming time-consuming harvesting caused by the problematic positioning of the grape stem. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)

16 pages, 7812 KB  
Article
Experimental Tests in Production of Ready-to-Drink Primitive Wine with Different Modes of Circulation of the Fermenting Must
by Filippo Catalano, Roberto Romaniello, Michela Orsino, Claudio Perone, Biagio Bianchi and Ferruccio Giametta
Appl. Sci. 2023, 13(10), 5941; https://doi.org/10.3390/app13105941 - 11 May 2023
Cited by 5 | Viewed by 2207
Abstract
Energy efficiency is an increasingly important issue in the wine industry worldwide. The focus on quality in wine production has led to increased attention being paid to the product at all stages of processing. The interaction with mechanical components is considered one of the possible critical points in the vinification process, and it becomes fundamental to optimize specific points in the wine production line using the best extraction technique. Therefore, in this work, experimental monitoring of two types of product circulation systems in fermentation was carried out in a winery in Puglia (Italy). In particular, the functional performance and energy consumption of two identical vinification lines were monitored, in which the only variables were two types of circulating systems for the fermenting must: pump-over and pneumatic cap breaking. During the trials, a homogeneous batch of Primitivo grapes was processed, hand-picked and taken to the winery within 1 h of harvesting, where a “ready-to-drink” wine production line was set up. A net quantity of 1000 hL of destemmed grapes was placed in two identical vertical steel tanks. Both wine tanks were monitored and equipped with an automated assembly system and a pneumatic marc breaker. Once both tanks were filled, a first break of the cap was carried out using a pneumatic system in one tank and an automatic pump-over in the other. For the grapes and type of wine studied, the pneumatic system showed better functional performance in terms of vinification speed and energy consumption; on the other hand, the pump-over system performed better in analytical terms. Finally, the results obtained highlight the need for further studies on equipment design to obtain significant benefits in terms of wine production costs while maintaining the quality standards required for “ready-to-drink” wines. Full article
(This article belongs to the Special Issue Innovations in Agri-Food Plants)

19 pages, 6641 KB  
Article
Grape-Bunch Identification and Location of Picking Points on Occluded Fruit Axis Based on YOLOv5-GAP
by Tao Zhang, Fengyun Wu, Mei Wang, Zhaoyi Chen, Lanyun Li and Xiangjun Zou
Horticulturae 2023, 9(4), 498; https://doi.org/10.3390/horticulturae9040498 - 16 Apr 2023
Cited by 26 | Viewed by 3421
Abstract
Due to the short fruit axis, many leaves, and complex background of grapes, most grape cluster axes are blocked from view, which increases robot positioning difficulty in harvesting. This study discussed the location method for picking points in the case of partial occlusion and proposed a grape cluster-detection algorithm “You Only Look Once v5-GAP” based on “You Only Look Once v5”. First, the Conv layer of the first layer of the YOLOv5 algorithm Backbone was changed to the Focus layer, then a convolution attention operation was performed on the first three C3 structures, the C3 structure layer was changed, and the Transformer in the Bottleneck module of the last layer of the C3 structure was used to reduce the computational amount and execute a better extraction of global feature information. Second, on the basis of bidirectional feature fusion, jump links were added and variable weights were used to strengthen the fusion of feature information for different resolutions. Then, the adaptive activation function was used to learn and decide whether neurons needed to be activated, such that the dynamic control of the network nonlinear degree was realized. Finally, the combination of a digital image processing algorithm and mathematical geometry was used to segment grape bunches identified by YOLOv5-GAP, and picking points were determined after finding centroid coordinates. Experimental results showed that the average precision of YOLOv5-GAP was 95.13%, which was 16.13%, 4.34%, and 2.35% higher than YOLOv4, YOLOv5, and YOLOv7 algorithms, respectively. The average positioning pixel error of the point was 6.3 pixels, which verified that the algorithm effectively detected grapes quickly and accurately. Full article
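
The centroid step used above to anchor the picking point can be computed directly from the binary mask produced by segmentation. A minimal sketch with a synthetic mask (the paper's full geometric construction of the picking point is omitted):

```python
import numpy as np

def centroid(mask: np.ndarray) -> tuple:
    """Centroid (row, col) of a binary segmentation mask."""
    rows, cols = np.nonzero(mask)
    return float(rows.mean()), float(cols.mean())

mask = np.zeros((10, 10), dtype=bool)
mask[2:6, 3:7] = True        # 4x4 segmented cluster region
cy, cx = centroid(mask)      # (3.5, 4.5)
```

From the centroid, a picking point is typically projected upward along the fruit axis toward the stem region identified by the detector.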

22 pages, 9698 KB  
Article
GA-YOLO: A Lightweight YOLO Model for Dense and Occluded Grape Target Detection
by Jiqing Chen, Aoqiang Ma, Lixiang Huang, Yousheng Su, Wenqu Li, Hongdu Zhang and Zhikui Wang
Horticulturae 2023, 9(4), 443; https://doi.org/10.3390/horticulturae9040443 - 28 Mar 2023
Cited by 23 | Viewed by 3798
Abstract
Picking robots have become an important development direction of smart agriculture, and the position detection of fruit is the key to realizing robot picking. However, the existing detection models have the shortcomings of missed detections and slow detection speed when detecting dense and occluded grape targets. Meanwhile, the parameters of the existing models are too large, which makes it difficult to deploy them to mobile terminals. In this paper, a lightweight GA-YOLO model is proposed. Firstly, a new backbone network, SE-CSPGhostnet, is designed, which greatly reduces the parameters of the model. Secondly, an adaptive spatial feature fusion mechanism is used to address the difficult detection of dense and occluded grapes. Finally, a new loss function is constructed to improve detection efficiency. In 2022, a detection experiment was carried out on image data collected in the Bagui rural area of Guangxi Zhuang Autonomous Region. The results demonstrate that the GA-YOLO model has an mAP of 96.87%, a detection speed of 55.867 FPS, and 11.003 M parameters. In comparison to the model before improvement, the GA-YOLO model improved mAP by 3.69% and detection speed by 20.245 FPS, while reducing parameters by 82.79%. The GA-YOLO model not only improves the detection accuracy of dense and occluded targets but also lessens model parameters and accelerates detection speed. Full article
(This article belongs to the Special Issue Application of Smart Technology and Equipment in Horticulture)

Figure 1
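The parameter savings that a Ghost-style backbone such as SE-CSPGhostnet obtains can be illustrated with simple parameter arithmetic: a Ghost module replaces part of an ordinary convolution with cheap depthwise operations. The sketch below assumes the standard Ghost-module formulation; the function names and the settings s = 2, d = 3 are illustrative assumptions, not values from the paper.

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def ghost_params(c_in, c_out, k, s=2, d=3):
    """Approximate parameters of a Ghost module: a primary convolution
    producing c_out // s intrinsic feature maps, followed by cheap
    d x d depthwise operations generating the remaining maps."""
    intrinsic = c_out // s
    primary = c_in * intrinsic * k * k   # ordinary convolution
    cheap = intrinsic * (s - 1) * d * d  # depthwise "cheap" operations
    return primary + cheap

standard = conv_params(256, 256, 3)  # 589,824 parameters
ghost = ghost_params(256, 256, 3)    # 294,912 + 1,152 = 296,064
print(f"reduction: {1 - ghost / standard:.1%}")  # -> reduction: 49.8%
```

Stacking such modules through a backbone compounds the per-layer saving, which is how reductions on the order of the reported 82.79% become plausible once the detection head and neck are slimmed as well.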

17 pages, 4709 KB  
Article
Multidirectional Dynamic Response and Swing Shedding of Grapes: An Experimental and Simulation Investigation under Vibration Excitation
by Po Zhang, De Yan, Xiaona Cai, Youbin Chen, Lufeng Luo, Yaoqiang Pan and Xiangjun Zou
Agronomy 2023, 13(3), 869; https://doi.org/10.3390/agronomy13030869 - 16 Mar 2023
Cited by 4 | Viewed by 2669
Abstract
During mechanized table grape harvesting, berries are subjected to vibration and collision, which can cause shedding and damage to the fruit. Research on table grape berry shedding has primarily focused on macroscopic swing modes, reflected in the integrated grape cluster structure and idealized particle interactions, as well as on static response treatments. However, these approaches cannot accurately explain the characteristics of berry wobbling during picking, predict shedding-prone areas, or identify the factors affecting shedding. In this paper, we study the dynamic response characteristics of grape berries in the X, Y, and Z directions by establishing a dynamic model and combining harmonic response and random vibration characteristics with finite element analysis. Our studies revealed that grape berries exhibit various behaviors (swinging and rebounding) under the same stimulus during harvesting. The berry amplitudes in the X, Y, and Z directions were 14.71, 12.46, and 27.10 mm, respectively, with the most pronounced response in the Z direction and the flattest in the Y direction. Berries in the lower part of the cob system were relatively stable, while those in the upper right side were more prone to swinging and falling, so the areas most likely to shed are concentrated in the upper right side. The model accurately predicted the dynamic response characteristics of fruit during vibration harvesting and provides a sound theoretical basis for mechanized grape harvesting; optimization and research on fruit collection equipment may benefit from it. Full article
(This article belongs to the Section Precision and Digital Agriculture)

Figure 1
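The direction-dependent amplitudes reported above come from harmonic response analysis, whose simplest form is the textbook single-degree-of-freedom forced oscillator. The sketch below is a minimal illustration of that model only; the mass, stiffness, damping, and forcing values are illustrative assumptions, not the berry and stem properties identified in the paper.

```python
import math

def steady_state_amplitude(f0, m, k, c, omega):
    """Steady-state amplitude of m*x'' + c*x' + k*x = f0*cos(omega*t):
    X(omega) = f0 / sqrt((k - m*omega^2)^2 + (c*omega)^2)."""
    return f0 / math.sqrt((k - m * omega ** 2) ** 2 + (c * omega) ** 2)

m, k, c, f0 = 0.005, 50.0, 0.05, 0.1  # illustrative berry-scale values (SI units)
omega_n = math.sqrt(k / m)            # undamped natural frequency, 100 rad/s

# the response peaks near the natural frequency and flattens far from it,
# which is why excitation direction and frequency determine shedding risk
print(steady_state_amplitude(f0, m, k, c, omega_n))  # 0.02 m at resonance
print(steady_state_amplitude(f0, m, k, c, 10.0))     # ~0.002 m well below it
```

In this reading, the large Z-direction amplitude simply indicates that the excitation couples most strongly to a mode in that direction, while the flat Y response sits far from any resonance.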

21 pages, 8363 KB  
Article
Design of a Virtual Multi-Interaction Operation System for Hand–Eye Coordination of Grape Harvesting Robots
by Jizhan Liu, Jin Liang, Shengyi Zhao, Yingxing Jiang, Jie Wang and Yucheng Jin
Agronomy 2023, 13(3), 829; https://doi.org/10.3390/agronomy13030829 - 12 Mar 2023
Cited by 27 | Viewed by 3123
Abstract
In harvesting operations, simulation verification of hand–eye coordination in a virtual canopy is critical for harvesting robot research. More realistic scenarios, vision-driven motion, and cross-platform interaction information are needed to achieve such simulations, which is very challenging. Current simulations focus mainly on path-planning operations in consistent scenarios, which falls far short of these requirements. To this end, a new approach of visual-servo multi-interaction simulation in real scenarios is proposed, using a laboratory dual-arm grape harvesting robot as an example. To overcome these challenges, a multi-software federation is first proposed to establish communication and cross-software exchange of image information, coordinate information, and control commands. Then, the fruit recognition and positioning algorithm and the forward and inverse kinematic models are embedded in OpenCV and MATLAB, respectively, to drive the simulation model of the robot in V-REP, thus realizing multi-interaction simulation of hand–eye coordination in a virtual trellis vineyard. Finally, the simulation is verified; the results show that the average running time of a cluster-picking simulation cycle is 6.5 s, and the success rate of accurately grasping the picking point reaches 83.3%. A complex closed loop of "scene–image recognition–grasping" is formed by the processing and transmission of the various information, effectively realizing continuous hand–eye coordination multi-interaction simulation of the harvesting robot in the virtual environment. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)

Figure 1
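The forward and inverse kinematic models that drive the simulated arms can be sketched, in miniature, with a two-link planar arm. The closed-form solution below is a generic textbook formulation; the link lengths and the elbow-down branch choice are assumptions for illustration, not the kinematics of the dual-arm robot itself.

```python
import math

def fk(l1, l2, t1, t2):
    """Forward kinematics of a 2-link planar arm:
    joint angles (t1, t2) -> end-effector position (x, y)."""
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y

def ik(l1, l2, x, y):
    """Closed-form inverse kinematics (elbow-down solution)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    t2 = math.acos(max(-1.0, min(1.0, c2)))  # clamp for numerical safety
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2

# round trip: the IK of an FK pose reproduces the end-effector position,
# the basic consistency check a hand-eye simulation loop relies on
x, y = fk(0.4, 0.3, 0.6, 0.8)
t1, t2 = ik(0.4, 0.3, x, y)
```

In the federation described above, a vision module would supply (x, y) from image coordinates and the kinematic module would return joint targets to the simulator.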

17 pages, 5507 KB  
Article
Grape Maturity Detection and Visual Pre-Positioning Based on Improved YOLOv4
by Chang Qiu, Guangzhao Tian, Jiawei Zhao, Qin Liu, Shangjie Xie and Kui Zheng
Electronics 2022, 11(17), 2677; https://doi.org/10.3390/electronics11172677 - 26 Aug 2022
Cited by 21 | Viewed by 3395
Abstract
To guide grape picking robots to recognize and classify grapes of different maturity quickly and accurately in the complex orchard environment, and to obtain the spatial position information of grape clusters, an algorithm for grape maturity detection and visual pre-positioning based on improved YOLOv4 is proposed in this study. The detection algorithm uses MobileNetv3 as the backbone feature extraction network, replaces ordinary convolution with depthwise separable convolution, and replaces the swish function with the h-swish function to reduce the number of model parameters and improve detection speed. At the same time, the SENet attention mechanism is added to improve detection accuracy, yielding the SM-YOLOv4 algorithm based on improved YOLOv4. The maturity detection experiments showed that the overall average accuracy of the trained SM-YOLOv4 detector on the validation set reached 93.52%, with an average detection time of 10.82 ms. The spatial position of grape clusters is obtained by a pre-positioning method based on binocular stereo vision. In the pre-positioning experiment, the maximum error was 32 mm, the mean error was 27 mm, and the mean error ratio was 3.89%. Compared with YOLOv5, YOLOv4-Tiny, Faster R-CNN, and other target detection algorithms, SM-YOLOv4 has greater advantages in accuracy and speed, shows good robustness and real-time performance in the complex orchard environment, and simultaneously meets the requirements of grape maturity recognition accuracy and detection speed, as well as the visual pre-positioning requirements of grape picking robots. It can reliably indicate the growth stage of grapes, enabling picking at the optimal time, and can guide the robot to the picking position, a prerequisite for precise picking in the complex orchard environment. Full article

Figure 1
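For rectified cameras, the binocular pre-positioning step reduces to depth-from-disparity triangulation. The sketch below assumes an idealized pinhole stereo model; the focal length, baseline, and principal point are illustrative assumptions, not the calibration of the paper's cameras.

```python
def triangulate(u_l, u_r, v, f, baseline, cx, cy):
    """Recover a 3-D point (metres, left-camera frame) from a rectified
    stereo pair: depth Z = f * B / disparity, then back-project pixels."""
    d = u_l - u_r                # disparity in pixels (left minus right)
    z = f * baseline / d         # depth along the optical axis
    x = (u_l - cx) * z / f       # lateral offset from the principal point
    y = (v - cy) * z / f         # vertical offset from the principal point
    return x, y, z

# f = 800 px, baseline = 0.1 m, principal point (640, 360)
print(triangulate(700, 660, 400, 800, 0.1, 640, 360))  # approx. (0.15, 0.1, 2.0)
```

Because depth error grows with distance for a fixed disparity error, millimetre-level mean errors such as the reported 27 mm are typical at the short working distances of a picking arm.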
