Search Results (193)

Search Parameters:
Keywords = fruit-harvesting robot

31 pages, 11649 KiB  
Article
Development of Shunt Connection Communication and Bimanual Coordination-Based Smart Orchard Robot
by Bin Yan and Xiameng Li
Agronomy 2025, 15(8), 1801; https://doi.org/10.3390/agronomy15081801 - 25 Jul 2025
Viewed by 155
Abstract
This research addresses the enhancement of operational efficiency in apple-picking robots through the design of a bimanual spatial configuration enabling obstacle avoidance in contemporary orchard environments. A parallel coordinated harvesting paradigm for dual-arm systems was introduced, leading to the construction and validation of a six-degree-of-freedom bimanual apple-harvesting robot. Leveraging the kinematic architecture of the AUBO-i5 manipulator, three spatial layout configurations for dual-arm systems were evaluated, culminating in the adoption of a “workspace-overlapping Type B” arrangement. A functional prototype of the bimanual apple-harvesting system was subsequently fabricated. The study further involved developing control architectures for two end-effector types: a compliant gripper and a vacuum-based suction mechanism, with corresponding operational protocols established. A networked communication framework for parallel arm coordination was implemented via Ethernet switching technology, enabling both independent and synchronized bimanual operation. Additionally, an intersystem communication protocol was formulated to integrate the robotic vision system with the dual-arm control architecture, establishing a modular parallel execution model between visual perception and motion control modules. A coordinated bimanual harvesting strategy was formulated, incorporating real-time trajectory and pose monitoring of the manipulators. Kinematic simulations were executed to validate the feasibility of this strategy. Field evaluations in modern Red Fuji apple orchards assessed multidimensional harvesting performance, revealing 85.6% and 80% success rates for the suction and gripper-based arms, respectively. Single-fruit retrieval averaged 7.5 s per arm, yielding an overall system efficiency of 3.75 s per fruit. 
These findings advance the technological foundation for intelligent apple-harvesting systems, offering methodologies for the evolution of precision agronomic automation. Full article
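The parallel coordination described above rests on a networked link between the vision system and two independent arm controllers. As a minimal sketch of that idea, assuming a simple TCP message scheme (the ports, JSON payload, and "ACK" protocol are illustrative, not the paper's actual protocol), a vision module can dispatch one fruit pose to each arm in parallel:

```python
import json
import socket
import threading

def arm_controller(server_sock, received):
    """One picking arm: accept a target from the vision module and acknowledge."""
    conn, _ = server_sock.accept()
    with conn:
        received.append(json.loads(conn.recv(1024).decode()))
        conn.sendall(b"ACK")

# Two listening "arm controllers" on ephemeral localhost ports.
received, servers, workers = [], [], []
for _ in range(2):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    t = threading.Thread(target=arm_controller, args=(srv, received))
    t.start()
    servers.append(srv)
    workers.append(t)

# The "vision module" dispatches one detected fruit pose to each arm.
targets = [{"arm": "suction", "xyz": [0.42, 0.10, 0.95]},
           {"arm": "gripper", "xyz": [0.38, -0.12, 1.01]}]
acks = []
for srv, tgt in zip(servers, targets):
    with socket.create_connection(srv.getsockname()) as cli:
        cli.sendall(json.dumps(tgt).encode())
        acks.append(cli.recv(16))

for t in workers:
    t.join()
for srv in servers:
    srv.close()
```

Because each arm runs behind its own socket, the arms can accept targets independently or in lockstep, which mirrors the independent/synchronized operating modes the abstract describes.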
(This article belongs to the Special Issue Smart Farming: Advancing Techniques for High-Value Crops)

25 pages, 8282 KiB  
Article
Performance Evaluation of Robotic Harvester with Integrated Real-Time Perception and Path Planning for Dwarf Hedge-Planted Apple Orchard
by Tantan Jin, Xiongzhe Han, Pingan Wang, Yang Lyu, Eunha Chang, Haetnim Jeong and Lirong Xiang
Agriculture 2025, 15(15), 1593; https://doi.org/10.3390/agriculture15151593 - 24 Jul 2025
Viewed by 218
Abstract
Apple harvesting faces increasing challenges owing to rising labor costs and the limited seasonal workforce availability, highlighting the need for robotic harvesting solutions in precision agriculture. This study presents a 6-DOF robotic arm system designed for harvesting in dwarf hedge-planted orchards, featuring a lightweight perception module, a task-adaptive motion planner, and an adaptive soft gripper. A lightweight approach was introduced by integrating the Faster module within the C2f module of the You Only Look Once (YOLO) v8n architecture to optimize the real-time apple detection efficiency. For motion planning, a Dynamic Temperature Simplified Transition Adaptive Cost Bidirectional Transition-Based Rapidly Exploring Random Tree (DSA-BiTRRT) algorithm was developed, demonstrating significant improvements in the path planning performance. The adaptive soft gripper was evaluated for its detachment and load-bearing capacities. Field experiments revealed that the direct-pull method at 150 mN·m torque outperformed the rotation-pull method at both 100 mN·m and 150 mN·m. A custom control system integrating all components was validated in partially controlled orchards, where obstacle clearance and thinning were conducted to ensure operation safety. Tests conducted on 80 apples showed a 52.5% detachment success rate and a 47.5% overall harvesting success rate, with average detachment and full-cycle times of 7.7 s and 15.3 s per apple, respectively. These results highlight the system’s potential for advancing robotic fruit harvesting and contribute to the ongoing development of autonomous agricultural technologies. Full article
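The DSA-BiTRRT planner above belongs to the transition-based RRT family; the abstract does not give its exact temperature-adaptation rule, but the classic transition test these planners build on can be sketched as follows (the cooling/heating rate `t_rate` is an assumption):

```python
import math
import random

def transition_test(cost_near, cost_new, temperature, t_rate=2.0):
    """Classic T-RRT-style transition test: downhill cost moves are always
    accepted; uphill moves pass with probability exp(-dc/T). The temperature
    cools after an accepted uphill move and heats after a rejection."""
    dc = cost_new - cost_near
    if dc <= 0:
        return True, temperature
    if random.random() < math.exp(-dc / temperature):
        return True, temperature / t_rate   # cool: be stricter next time
    return False, temperature * t_rate      # heat: allow more exploration
```

Downhill moves always pass, and at a very low temperature a large cost increase is effectively always rejected, which is what steers the tree through low-cost regions of the orchard workspace.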
(This article belongs to the Special Issue Agricultural Machinery and Technology for Fruit Orchard Management)

20 pages, 3688 KiB  
Article
Intelligent Fruit Localization and Grasping Method Based on YOLO VX Model and 3D Vision
by Zhimin Mei, Yifan Li, Rongbo Zhu and Shucai Wang
Agriculture 2025, 15(14), 1508; https://doi.org/10.3390/agriculture15141508 - 13 Jul 2025
Viewed by 486
Abstract
Recent years have seen significant interest among agricultural researchers in using robotics and machine vision to enhance intelligent orchard harvesting efficiency. This study proposes an improved hybrid framework integrating YOLO VX deep learning, 3D object recognition, and SLAM-based navigation for harvesting ripe fruits in greenhouse environments, achieving servo control of robotic arms with flexible end-effectors. The method comprises three key components: First, a fruit sample database containing varying maturity levels and morphological features is established, interfaced with an optimized YOLO VX model for target fruit identification. Second, a 3D camera acquires the target fruit’s spatial position and orientation data in real time, and these data are stored in the collaborative robot’s microcontroller. Finally, employing binocular calibration and triangulation, the SLAM navigation module guides the robotic arm to the designated picking location via unobstructed target positioning. Comprehensive comparative experiments between the improved YOLO v12n model and earlier versions were conducted to validate its performance. The results demonstrate that the optimized model surpasses traditional recognition and harvesting methods, offering superior target fruit identification response (minimum 30.9 ms) and significantly higher accuracy (91.14%). Full article

21 pages, 10356 KiB  
Article
Autonomous Greenhouse Cultivation of Dwarf Tomato: Performance Evaluation of Intelligent Algorithms for Multiple-Sensor Feedback
by Stef C. Maree, Pinglin Zhang, Bart M. van Marrewijk, Feije de Zwart, Monique Bijlaard and Silke Hemming
Sensors 2025, 25(14), 4321; https://doi.org/10.3390/s25144321 - 10 Jul 2025
Viewed by 394
Abstract
Greenhouse horticulture plays an important role globally by producing nutritious fruits and vegetables with high resource use efficiency. Modern greenhouses are large-scale high-tech production factories that are increasingly data-driven and where climate and irrigation control are gradually becoming more autonomous. This is enabled by technological developments and driven by shortages in skilled labor and the demand for improved resource use efficiency. In the Autonomous Greenhouse Challenge, it has been shown that controlling greenhouse cultivation can be done efficiently with intelligent algorithms. For an optimal strategy, however, it is essential that control algorithms properly account for crop responses, which requires appropriate sensors, reliable data, and accurate models. This paper presents the results of the 4th Autonomous Greenhouse Challenge, in which international teams developed six intelligent algorithms that fully controlled a dwarf tomato cultivation, a crop that is well-suited for robotic harvesting, but for which little prior cultivation data exists. Nevertheless, the analysis of the experiment showed that all teams managed to obtain a profitable strategy, and the best algorithm resulted in a production equivalent to 45 kg/m2/year, higher than in the commercial practice of high-wire cherry tomato growing. The predominant factor was found to be the much higher plant density that can be achieved in the applied growing system. More difficult challenges were found to be related to measuring crop status to determine the harvest moment. Finally, this experiment shows the potential for novel greenhouse cultivation systems that are inherently well-suited for autonomous control, and results in a unique and rich dataset to support future research. Full article
(This article belongs to the Special Issue AI, IoT and Smart Sensors for Precision Agriculture: 2nd Edition)

18 pages, 4447 KiB  
Article
Ripe-Detection: A Lightweight Method for Strawberry Ripeness Detection
by Helong Yu, Cheng Qian, Zhenyang Chen, Jing Chen and Yuxin Zhao
Agronomy 2025, 15(7), 1645; https://doi.org/10.3390/agronomy15071645 - 6 Jul 2025
Viewed by 357
Abstract
Strawberry (Fragaria × ananassa), a nutrient-dense fruit with significant economic value in commercial cultivation, faces critical detection challenges in automated harvesting due to complex growth conditions such as foliage occlusion and variable illumination. To address these limitations, this study proposes Ripe-Detection, a novel lightweight object detection framework integrating three key innovations: a PEDblock detection head architecture with depth-adaptive feature learning capability, an ADown downsampling method for enhanced detail perception with reduced computational overhead, and BiFPN-based hierarchical feature fusion with learnable weighting mechanisms. Developed using a purpose-built dataset of 1021 annotated strawberry images (Fragaria × ananassa ‘Red Face’ and ‘Sachinoka’ varieties) from Changchun Xiaohongmao Plantation and augmented through targeted strategies to enhance model robustness, the framework demonstrates superior performance over existing lightweight detectors, achieving mAP50 improvements of 13.0%, 9.2%, and 3.9% against YOLOv7-tiny, YOLOv10n, and YOLOv11n, respectively. Remarkably, the architecture attains 96.4% mAP50 with only 1.3M parameters (57% reduction from baseline) and 4.4 GFLOPs (46% lower computation), simultaneously enhancing accuracy while significantly reducing resource requirements, thereby providing a robust technical foundation for automated ripeness assessment and precision harvesting in agricultural robotics. Full article
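The BiFPN fusion with learnable weights mentioned above is usually implemented as "fast normalized fusion": non-negative weights, normalized to sum to roughly one, blend feature maps of the same shape. A minimal NumPy sketch (the epsilon value and two-input case are illustrative):

```python
import numpy as np

def fast_normalized_fusion(features, weights, eps=1e-4):
    """BiFPN-style fusion: ReLU keeps the learnable weights non-negative,
    then they are normalized and used to blend equal-shape feature maps."""
    w = np.maximum(np.asarray(weights, dtype=float), 0.0)
    w = w / (w.sum() + eps)
    out = np.zeros_like(features[0], dtype=float)
    for wi, f in zip(w, features):
        out += wi * f
    return out

a = np.ones((2, 2))        # stand-in for a shallow feature map
b = 3 * np.ones((2, 2))    # stand-in for a deeper, upsampled feature map
fused = fast_normalized_fusion([a, b], [1.0, 1.0])
```

With equal weights the output is close to the plain average; during training the weights shift so that the more informative scale dominates.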
(This article belongs to the Section Precision and Digital Agriculture)

21 pages, 33500 KiB  
Article
Location Research and Picking Experiment of an Apple-Picking Robot Based on Improved Mask R-CNN and Binocular Vision
by Tianzhong Fang, Wei Chen and Lu Han
Horticulturae 2025, 11(7), 801; https://doi.org/10.3390/horticulturae11070801 - 6 Jul 2025
Viewed by 431
Abstract
With the advancement of agricultural automation technologies, apple-harvesting robots have gradually become a focus of research. As their “perceptual core,” machine vision systems directly determine picking success rates and operational efficiency. However, existing vision systems still exhibit significant shortcomings in target detection and positioning accuracy in complex orchard environments (e.g., uneven illumination, foliage occlusion, and fruit overlap), which hinders practical applications. This study proposes a visual system for apple-harvesting robots based on improved Mask R-CNN and binocular vision to achieve more precise fruit positioning. The binocular camera (ZED2i) carried by the robot acquires dual-channel apple images. An improved Mask R-CNN is employed to implement instance segmentation of apple targets in binocular images, followed by a template-matching algorithm with parallel epipolar constraints for stereo matching. Four pairs of feature points from corresponding apples in binocular images are selected to calculate disparity and depth. Experimental results demonstrate average coefficients of variation and positioning accuracy of 5.09% and 99.61%, respectively, in binocular positioning. During harvesting operations with a self-designed apple-picking robot, the single-image processing time was 0.36 s, the average single harvesting cycle duration reached 7.7 s, and the comprehensive harvesting success rate achieved 94.3%. This work presents a novel high-precision visual positioning method for apple-harvesting robots. Full article
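The disparity-to-depth step in the binocular pipeline above follows the standard pinhole stereo relation Z = f·B/d. A small sketch, with illustrative focal length and baseline values (the ZED2i calibration parameters are not given in the abstract), including the four-point averaging the authors describe:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard pinhole stereo relation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def apple_depth(focal_px, baseline_m, disparities_px):
    """Average the depths of several matched feature points on one apple,
    mirroring the four-point scheme described in the abstract."""
    depths = [depth_from_disparity(focal_px, baseline_m, d) for d in disparities_px]
    return sum(depths) / len(depths)
```

For example, with a 700 px focal length and a 0.12 m baseline, a 42 px disparity corresponds to a depth of 2.0 m; averaging several feature points damps the effect of a single noisy match.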
(This article belongs to the Section Fruit Production Systems)

18 pages, 5274 KiB  
Article
DRFW-TQC: Reinforcement Learning for Robotic Strawberry Picking with Dynamic Regularization and Feature Weighting
by Anping Zheng, Zirui Fang, Zixuan Li, Hao Dong and Ke Li
AgriEngineering 2025, 7(7), 208; https://doi.org/10.3390/agriengineering7070208 - 2 Jul 2025
Viewed by 414
Abstract
Strawberry harvesting represents a labor-intensive agricultural operation where existing end-effector pose control algorithms frequently exhibit insufficient precision in fruit grasping, often resulting in unintended damage to target fruits. Concurrently, deep learning-based pose control algorithms suffer from inherent training instability, slow convergence rates, and inefficient learning processes in complex environments characterized by high-density fruit clusters and occluded picking scenarios. To address these challenges, this paper proposes an enhanced reinforcement learning framework, DRFW-TQC, that integrates Dynamic L2 Regularization for adaptive model stabilization and a Group-Wise Feature Weighting Network for discriminative feature representation. The methodology further incorporates a picking posture traction mechanism to optimize end-effector orientation control. The experimental results demonstrate the superior performance of DRFW-TQC compared to the baseline. The proposed approach achieves a 16.0% higher picking success rate and a 20.3% reduction in angular error with four target strawberries. Most notably, the framework’s transfer strategy effectively addresses the efficiency challenge in complex environments, maintaining an 89.1% success rate in eight-strawberry scenarios while reducing the timeout count by 60.2% compared to non-adaptive methods. These results confirm that DRFW-TQC successfully resolves the tripartite challenge of operational precision, training stability, and environmental adaptability in robotic fruit harvesting systems. Full article
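The abstract names Dynamic L2 Regularization but does not specify its schedule, so the following is only a hypothetical illustration of the general idea: regularize strongly early in training for stability, then relax the coefficient as learning settles. The schedule and constants are assumptions, not the paper's method:

```python
def dynamic_l2_penalty(params, step, lam0=1e-3, decay=1e-3):
    """Hypothetical dynamic L2 schedule: the coefficient lam0 / (1 + decay * step)
    shrinks over training, so early updates are damped more than late ones."""
    lam = lam0 / (1.0 + decay * step)
    return lam * sum(p * p for p in params)
```

Added to the critic or policy loss, such a term penalizes large weights most when the value estimates are still unreliable, which is one plausible route to the training stability the authors report.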

21 pages, 7766 KiB  
Article
An Intelligent Operation Area Allocation and Automatic Sequential Grasping Algorithm for Dual-Arm Horticultural Smart Harvesting Robot
by Bin Yan and Xiameng Li
Horticulturae 2025, 11(7), 740; https://doi.org/10.3390/horticulturae11070740 - 26 Jun 2025
Viewed by 375
Abstract
Aiming to solve the problem that most existing apple-picking robots operate with a single arm and that the overall efficiency of the machine needs to be further improved, a prototype of a dual-arm picking robot was built, and its picking operation planning method was studied. Firstly, based on the configuration and motion mode of the AUBO-i5 robotic arm, the overlapping dual-arm layout of the workspace was determined. Then, a prototype of a dual-arm apple-picking robot was built, and, based on the designed dual-arm spatial layout, a dual-arm picking operation zoning planning method was proposed. The experimental results showed that in the four simulation experiments, the highest value of the maximum parallel operation proportion of the dual arms was 83%, and the lowest value was 50.6%. The highest value of the maximum operation length of the single arm was 7323 mm, and the lowest value was 5654 mm. The highest value of the total dual-arm operation path length was 12,705 mm, and the lowest value was 8770 mm. Furthermore, a fruit-picking sequence planning method based on dual robotic arm operation was proposed. Fruit traversal simulation verification experiments were conducted. The results showed that there was no conflict between the left and right arms during the motion of the dual robotic arms. Finally, the proposed dual-arm robot operation zoning and picking sequence planning method was validated at the apple experimental station. The results showed that the proportion of dual-arm parallel operations was the lowest at 50.7% and the highest at 72.4%. The total length of the dual-arm operation path was the highest at 8604 mm and the lowest at 6511 mm. Full article
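The zoning idea behind the overlapping dual-arm layout can be illustrated with a toy partition of detected fruits by lateral coordinate: fruits clearly on one side go to that arm, and fruits inside the overlap band are shared and must be sequenced to avoid arm conflicts. The boundary and overlap width below are illustrative, not the paper's parameters:

```python
def assign_fruits(fruit_xs, boundary=0.0, overlap=0.1):
    """Illustrative dual-arm zoning: partition fruit x-coordinates into a
    left-arm zone, a right-arm zone, and a shared overlap band."""
    left, right, shared = [], [], []
    for x in fruit_xs:
        if x < boundary - overlap / 2:
            left.append(x)
        elif x > boundary + overlap / 2:
            right.append(x)
        else:
            shared.append(x)
    return left, right, shared
```

Maximizing the parallel-operation proportion then amounts to balancing the left and right workloads while scheduling the shared band so the arms never enter it simultaneously.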
(This article belongs to the Special Issue New Trends in Smart Horticulture)

16 pages, 12771 KiB  
Article
Application of AI in Date Fruit Detection—Performance Analysis of YOLO and Faster R-CNN Models
by Seweryn Lipiński, Szymon Sadkowski and Paweł Chwietczuk
Computation 2025, 13(6), 149; https://doi.org/10.3390/computation13060149 - 13 Jun 2025
Viewed by 923
Abstract
The presented study evaluates and compares two deep learning models, i.e., YOLOv8n and Faster R-CNN, for automated detection of date fruits in natural orchard environments. Both models were trained and tested using a publicly available annotated dataset. YOLO, a single-stage detector, achieved a mAP@0.5 of 0.942 with a training time of approximately 2 h. It demonstrated strong generalization, especially in simpler conditions, and is well-suited for real-time applications due to its speed and lower computational requirements. Faster R-CNN, a two-stage detector using a ResNet-50 backbone, reached comparable accuracy (mAP@0.5 = 0.94) with slightly higher precision and recall. However, its training required significantly more time (approximately 19 h) and resources. Deep learning metrics analysis confirmed both models performed reliably, with YOLO favoring inference speed and Faster R-CNN offering improved robustness under occlusion and variable lighting. Practical recommendations are provided for model selection based on application needs: YOLO for mobile or field robotics and Faster R-CNN for high-accuracy offline tasks. Additional conclusions highlight the benefits of GPU acceleration and high-resolution inputs. The study contributes to the growing body of research on AI deployment in precision agriculture and provides insights into the development of intelligent harvesting and crop monitoring systems. Full article
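The mAP@0.5 metric both models are scored on counts a detection as a true positive when its intersection-over-union (IoU) with a ground-truth box is at least 0.5. The matching core can be sketched directly:

```python
def iou(box_a, box_b):
    """Intersection-over-union for axis-aligned (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def is_true_positive(pred, gt, threshold=0.5):
    """mAP@0.5 convention: a prediction matches when IoU >= 0.5."""
    return iou(pred, gt) >= threshold
```

Averaging precision over recall levels for each class, and then over classes, yields the reported mAP values.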
(This article belongs to the Section Computational Engineering)

26 pages, 11251 KiB  
Article
Design and Testing of a Four-Arm Multi-Joint Apple Harvesting Robot Based on Singularity Analysis
by Xiaojie Lei, Jizhan Liu, Houkang Jiang, Baocheng Xu, Yucheng Jin and Jianan Gao
Agronomy 2025, 15(6), 1446; https://doi.org/10.3390/agronomy15061446 - 13 Jun 2025
Viewed by 530
Abstract
The use of multi-joint arms in a high-spindle environment can solve complex problems, but the singularity problem of the manipulator related to the structure of the serial manipulator is prominent. Therefore, based on the general mathematical model of fruit spatial distribution in high-spindle apple orchards, this study proposes two harvesting system architecture schemes that can meet the constraints of fruit spatial distribution and reduce the singularity of harvesting robot operation: a four-arm dual-module independent moving scheme (Scheme A) and a four-arm single-module parallel moving scheme (Scheme B). Based on the link-joint method, the analytical expression of the singular configuration of the redundant degree-of-freedom arm group system under the two schemes is obtained. Then, an inverse kinematics solution method for the redundant arm group and a singularity-avoidance picking trajectory planning strategy are proposed to realize the judgment and resolution of singular configurations in the complex working environment of the high spindle. The singularity rate of Scheme A in the simulation environment is 17.098%, while the singularity rate of Scheme B is only 6.74%. In the field experiment, the singularity rate of Scheme A is 26.18%, while that of Scheme B is 13.22%. The success rates of Schemes A and B are 80.49% and 72.33%, respectively. Through experimental comparison and analysis, Scheme B is more effective in avoiding singularities but still needs an improved success rate in future research. This paper can provide a reference for solving singularity problems in the complex working environment of high spindles. Full article
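The singular configurations analyzed above are, in general, the joint configurations where the manipulator Jacobian loses rank. The paper derives this for redundant arm groups; the underlying test is easiest to see on a planar 2-link arm, whose Jacobian determinant reduces to l1·l2·sin(q2) (a textbook result, used here purely as an illustration):

```python
import math

def jacobian_det_2link(q2, l1=1.0, l2=1.0):
    """Determinant of the planar 2-link arm Jacobian: l1 * l2 * sin(q2).
    It vanishes when the arm is fully stretched or folded (q2 = 0 or pi)."""
    return l1 * l2 * math.sin(q2)

def is_singular(q2, tol=1e-6, l1=1.0, l2=1.0):
    """Flag configurations where the determinant is numerically zero."""
    return abs(jacobian_det_2link(q2, l1, l2)) < tol
```

A singularity-avoidance planner of the kind described would steer trajectories away from joint values where such a determinant (or a manipulability measure built from the full 6-DOF Jacobian) approaches zero.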

21 pages, 5511 KiB  
Article
LGVM-YOLOv8n: A Lightweight Apple Instance Segmentation Model for Standard Orchard Environments
by Wenkai Han, Tao Li, Zhengwei Guo, Tao Wu, Wenlei Huang, Qingchun Feng and Liping Chen
Agriculture 2025, 15(12), 1238; https://doi.org/10.3390/agriculture15121238 - 6 Jun 2025
Viewed by 595
Abstract
Accurate fruit target identification is crucial for autonomous harvesting robots in complex orchards, where image segmentation using deep learning networks plays a key role. To address the trade-off between segmentation accuracy and inference efficiency, this study proposes LGVM-YOLOv8n, a lightweight instance segmentation model based on YOLOv8n-seg. LGVM is an acronym for lightweight, GSConv, VoVGSCSP, and MPDIoU, highlighting the key improvements incorporated into the model. The proposed model integrates three key improvements: (1) the GSConv module, which enhances feature interaction and reduces computational cost; (2) the VoVGSCSP module, which optimizes multi-scale feature representation for small objects; and (3) the MPDIoU loss function, which improves target localization accuracy, particularly for occluded fruits. Experimental results show that LGVM-YOLOv8n reduces computational cost by 9.17%, decreases model weight by 7.89%, and improves inference speed by 16.9% compared to the original YOLOv8n-seg. Additionally, segmentation accuracy under challenging conditions (front-light, back-light, and occlusion) improves by 3.28% to 4.31%. Deployment tests on an edge computing platform demonstrate real-time performance, with inference speed accelerated to 0.084 s per image and frame rate increased to 28.73 FPS. These results validated the model’s robustness and adaptability, providing a practical solution for apple-picking robots in complex orchard environments. Full article
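The MPDIoU loss named above penalizes IoU with the squared distances between the two corner pairs of predicted and ground-truth boxes, normalized by the image diagonal. A sketch following the commonly published formulation (the image size is an illustrative parameter):

```python
def mpdiou(box_a, box_b, img_w, img_h):
    """MPDIoU: IoU minus the normalized squared distances between the
    top-left and bottom-right corner pairs of the two boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    iou = inter / (area_a + area_b - inter)
    d1 = (box_a[0] - box_b[0]) ** 2 + (box_a[1] - box_b[1]) ** 2  # top-left corners
    d2 = (box_a[2] - box_b[2]) ** 2 + (box_a[3] - box_b[3]) ** 2  # bottom-right corners
    norm = img_w ** 2 + img_h ** 2
    return iou - d1 / norm - d2 / norm
```

Unlike plain IoU, the corner-distance terms still provide a gradient when boxes barely overlap, which is why it helps localize occluded fruits.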
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

32 pages, 10515 KiB  
Article
E-CLIP: An Enhanced CLIP-Based Visual Language Model for Fruit Detection and Recognition
by Yi Zhang, Yang Shao, Chen Tang, Zhenqing Liu, Zhengda Li, Ruifang Zhai, Hui Peng and Peng Song
Agriculture 2025, 15(11), 1173; https://doi.org/10.3390/agriculture15111173 - 29 May 2025
Viewed by 564
Abstract
With the progress of agricultural modernization, intelligent fruit harvesting is gaining importance. While fruit detection and recognition are essential for robotic harvesting, existing methods suffer from limited generalizability, including adapting to complex environments and handling new fruit varieties. This problem stems from their reliance on unimodal visual data, which creates a semantic gap between image features and contextual understanding. To solve these issues, this study proposes a multi-modal fruit detection and recognition framework based on visual language models (VLMs). By integrating multi-modal information, the proposed model enhances robustness and generalization across diverse environmental conditions and fruit types. The framework accepts natural language instructions as input, facilitating effective human–machine interaction. Through its core module, Enhanced Contrastive Language–Image Pre-Training (E-CLIP), which employs image–image and image–text contrastive learning mechanisms, the framework achieves robust recognition of various fruit types and their maturity levels. Experimental results demonstrate the excellent performance of the model, achieving an F1 score of 0.752, and an mAP@0.5 of 0.791. The model also exhibits robustness under occlusion and varying illumination conditions, attaining a zero-shot mAP@0.5 of 0.626 for unseen fruits. In addition, the system operates at an inference speed of 54.82 FPS, effectively balancing speed and accuracy, and shows practical potential for smart agriculture. This research provides new insights and methods for the practical application of smart agriculture. Full article
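E-CLIP's image-text contrastive mechanism builds on the standard CLIP objective: a symmetric InfoNCE loss in which matched image-text pairs sit on the diagonal of a cosine-similarity matrix. The E-CLIP-specific enhancements are not reproduced here; this is only the base objective, sketched in NumPy:

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss used by CLIP-style models: each image should
    score highest against its own caption, and vice versa."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature
    n = logits.shape[0]

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(p[np.arange(n), np.arange(n)]).mean()

    return 0.5 * (xent(logits) + xent(logits.T))

aligned = clip_contrastive_loss(np.eye(2), np.eye(2))       # perfectly matched pairs
shuffled = clip_contrastive_loss(np.eye(2), np.eye(2)[::-1])  # captions swapped
```

Perfectly matched pairs give a near-zero loss, while swapped captions are heavily penalized; it is this text-side supervision that enables the zero-shot recognition of unseen fruits reported above.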
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

19 pages, 5648 KiB  
Article
An Object Feature-Based Recognition and Localization Method for Wolfberry
by Renwei Wang, Dingzhong Tan, Xuerui Ju and Jianing Wang
Sensors 2025, 25(11), 3365; https://doi.org/10.3390/s25113365 - 27 May 2025
Viewed by 358
Abstract
To improve the object recognition and localization capabilities of wolfberry harvesting robots, this study introduces an object feature-based image segmentation algorithm designed for the segmentation and localization of wolfberry fruits and branches in unstructured lighting environments. Firstly, based on the a-channel of the Lab color space and the I-channel of the YIQ color space, a feature fusion algorithm combined with wavelet transformation is proposed to achieve pixel-level fusion of the two feature images, significantly enhancing the image segmentation effect. Experimental results show that this method achieved a 78% segmentation accuracy for wolfberry fruits in 500 test image samples under complex lighting and occlusion conditions, demonstrating good robustness. Secondly, addressing the issue of branch colors being similar to the background, a K-means clustering segmentation algorithm based on the Lab color space is proposed, combined with morphological processing and length filtering strategies, effectively achieving precise segmentation of branches and localization of gripping point coordinates. Experiments validated the high accuracy of the improved algorithm in branch localization. The results indicate that the algorithm proposed in this paper can effectively address illumination changes and occlusion issues in complex harvesting environments. Compared with traditional segmentation methods, it significantly improves the segmentation accuracy of wolfberry fruits and the localization accuracy of branches, providing technical support for the vision system of field-based wolfberry harvesting robots and offering a theoretical basis and practical reference for research on agricultural automated harvesting operations. Full article
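The K-means step at the heart of the branch segmentation can be illustrated on scalar channel values (the full pipeline operates on Lab pixels with morphological post-processing, which is not reproduced here). A minimal 1-D k-means over, say, a-channel intensities:

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Minimal k-means on scalar pixel values (e.g. the Lab a-channel):
    alternate nearest-center assignment and center re-estimation."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, np.sort(centers)

pixels = np.array([12.0, 14.0, 15.0, 130.0, 132.0, 135.0])  # two tonal groups
labels, centers = kmeans_1d(pixels)
```

On two well-separated tonal groups the clusters converge to the group means; in the paper's setting this separates branch-colored pixels from the background before length filtering selects the actual branches.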
(This article belongs to the Section Sensors and Robotics)

21 pages, 7067 KiB  
Article
A Lightweight and Rapid Dragon Fruit Detection Method for Harvesting Robots
by Fei Yuan, Jinpeng Wang, Wenqin Ding, Song Mei, Chenzhe Fang, Sunan Chen and Hongping Zhou
Agriculture 2025, 15(11), 1120; https://doi.org/10.3390/agriculture15111120 - 23 May 2025
Cited by 1 | Viewed by 597
Abstract
Dragon fruit detection in natural environments remains challenged by limited accuracy and deployment difficulties, primarily due to variable lighting and occlusions from branches. To enhance detection accuracy and satisfy the deployment constraints of edge devices, we propose YOLOv10n-CGD, a lightweight and efficient dragon fruit detection method designed for robotic harvesting applications. The method builds upon YOLOv10 and integrates Gated Convolution (gConv) into the C2f module, forming a novel C2f-gConv structure that effectively reduces model parameters and computational complexity. In addition, a Global Attention Mechanism (GAM) is inserted between the backbone and the feature fusion layers to enrich semantic representations and improve the detection of occluded fruits. Furthermore, the neck network integrates a Dynamic Sample (DySample) operator to enhance the spatial restoration of high-level semantic features. The experimental results demonstrate that YOLOv10n-CGD significantly improves performance while reducing model size from 5.8 MB to 4.5 MB—a 22.4% decrease. The mAP improves from 95.1% to 98.1%, with precision and recall reaching 97.1% and 95.7%, respectively. The observed improvements are statistically significant (p < 0.05). Moreover, detection speeds of 44.9 FPS and 17.2 FPS are achieved on Jetson AGX Orin and Jetson Nano, respectively, demonstrating strong real-time capabilities and suitability for deployment. In summary, YOLOv10n-CGD enables high-precision, real-time dragon fruit detection while preserving model compactness, offering robust technical support for future robotic harvesting systems and smart agricultural terminals. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
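The gated convolution behind the C2f-gConv block follows a common pattern: a feature branch is modulated element-wise by a sigmoid-activated gate branch computed from the same input, letting the network suppress uninformative responses. A minimal 1-D pure-Python sketch of this pattern (the kernels and 1-D setting are illustrative assumptions, not the paper's learned weights or exact module):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_conv1d(signal, feat_kernel, gate_kernel):
    """Gated convolution on a 1-D signal (valid padding):
    each output is the feature-branch response scaled by a
    sigmoid gate computed from the same input window."""
    k = len(feat_kernel)
    out = []
    for i in range(len(signal) - k + 1):
        window = signal[i:i + k]
        feat = sum(w * x for w, x in zip(feat_kernel, window))
        gate = sigmoid(sum(w * x for w, x in zip(gate_kernel, window)))
        out.append(feat * gate)
    return out

# With an all-zero gate kernel the gate is sigmoid(0) = 0.5,
# so the output is the plain convolution scaled by 0.5:
print(gated_conv1d([1, 2, 3, 4], [1, 1], [0, 0]))  # [1.5, 2.5, 3.5]
```

In the real module the two branches are learned 2-D convolutions over feature maps; the gating mechanism itself is what reduces the need for wider, more parameter-heavy layers.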
26 pages, 10969 KiB  
Article
TQVGModel: Tomato Quality Visual Grading and Instance Segmentation Deep Learning Model for Complex Scenarios
by Peichao Cong, Kun Wang, Ji Liang, Yutao Xu, Tianheng Li and Bin Xue
Agronomy 2025, 15(6), 1273; https://doi.org/10.3390/agronomy15061273 - 22 May 2025
Viewed by 600
Abstract
To address the challenges of poor instance segmentation accuracy, real-time performance trade-offs, high miss rates, and imprecise edge localization in tomato grading and harvesting robots operating in complex scenarios (e.g., dense growth, occluded fruits, and dynamic viewing conditions), an accurate, efficient, and robust visual instance segmentation network is urgently needed. This paper proposes TQVGModel (Tomato Quality Visual Grading Model), a Mask R-CNN-based instance segmentation network for tomato quality grading. First, TQVGModel employs a multi-branch IncepConvV2 backbone, reconstructed via the ConvNeXt architecture and large-kernel convolution decomposition, to enhance instance segmentation accuracy while maintaining real-time performance. Second, the Class Balanced Focal Loss is adopted in the classification branch to prioritize sparse or challenging classes, reducing miss rates in complex scenes. Third, an Enhanced Sobel (E-Sobel) operator integrates boundary prediction with an edge loss function, improving edge localization precision for quality assessment. Additionally, a quality grading subsystem is designed to automate tomato evaluation, supporting subsequent harvesting and growth monitoring. A high-quality benchmark dataset, Tomato-Seg, is constructed for complex-scene tomato instance segmentation. Experiments show that the TQVGModel-Tiny variant achieves an 80.05% mAP (7.04% higher than Mask R-CNN), with 33.98 M parameters (10.2 M fewer) and a 53.38 ms inference time (16.6 ms faster). These results demonstrate TQVGModel's high accuracy, real-time capability, reduced miss rates, and precise edge localization, providing a theoretical foundation for tomato grading and harvesting in complex environments.
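The E-Sobel edge loss builds on the classical Sobel operator, which estimates image gradients with two fixed 3×3 kernels and uses their magnitude to highlight boundaries. A minimal pure-Python sketch of the standard Sobel gradient magnitude (the paper's Enhanced Sobel variant is not specified here; this shows only the baseline operator it extends):

```python
def sobel_magnitude(img):
    """Gradient magnitude via the 3x3 Sobel kernels.
    `img` is a 2-D list of intensities; only interior pixels
    are computed (no padding), borders stay 0."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal gradient
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical gradient
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge produces a strong horizontal gradient:
step = [[0, 0, 1, 1],
        [0, 0, 1, 1],
        [0, 0, 1, 1]]
print(sobel_magnitude(step)[1])  # [0.0, 4.0, 4.0, 0.0]
```

An edge loss of the kind the abstract describes would compare such gradient maps of the predicted and ground-truth masks, penalizing disagreement along object boundaries.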