Search Results (559)

Search Parameters:
Keywords = harvesting robots

39 pages, 5498 KB  
Article
A Review of Key Technologies and Recent Advances in Intelligent Fruit-Picking Robots
by Tao Lin, Fuchun Sun, Xiaoxiao Li, Xi Guo, Jing Ying, Haorong Wu and Hanshen Li
Horticulturae 2026, 12(2), 158; https://doi.org/10.3390/horticulturae12020158 - 30 Jan 2026
Viewed by 122
Abstract
Intelligent fruit-picking robots have emerged as a promising solution to labor shortages and the increasing costs of manual harvesting. This review provides a systematic and critical overview of recent advances in three core domains: (i) vision-based fruit and peduncle detection, (ii) motion planning and obstacle-aware navigation, and (iii) robotic manipulation technologies for diverse fruit types. We summarize the evolution of deep learning-based perception models, highlighting improvements in occlusion robustness, 3D localization accuracy, and real-time performance. Various planning frameworks—from classical search algorithms to optimization-driven and swarm-intelligent methods—are compared in terms of efficiency and adaptability in unstructured orchard environments. Developments in multi-DOF manipulators, soft and adaptive grippers, and end-effector control strategies are also examined. Despite these advances, critical challenges remain, including heavy dependence on large annotated datasets; sensitivity to illumination and foliage occlusion; limited generalization across fruit varieties; and the difficulty of integrating perception, planning, and manipulation into reliable field-ready systems. Finally, this review outlines emerging research trends such as lightweight multimodal networks, deformable-object manipulation, embodied intelligence, and system-level optimization, offering a forward-looking perspective for autonomous harvesting technologies. Full article

15 pages, 556 KB  
Review
Robotic Rectus Muscle Flap Reconstruction After Pelvic Exenteration in Gynecological Oncology: Current and Future Perspectives—A Narrative Review
by Gurhan Guney, Ritchie M. Delara, Johnny Yi, Evrim Erdemoglu and Kristina A. Butler
Cancers 2026, 18(3), 375; https://doi.org/10.3390/cancers18030375 - 25 Jan 2026
Viewed by 180
Abstract
Background/Objectives: Pelvic exenteration is a radical procedure performed for recurrent gynecologic cancers. The goal of exenteration is to prolong survival, but this procedure also results in extensive tissue loss and consequently high morbidity. Reconstruction using vascularized flaps, particularly the VRAM flap, is crucial to restoring pelvic integrity and decreasing complications resulting from extensive tissue loss. With the rise of minimally invasive surgery, the traditionally open abdominal approach to exenteration and reconstruction can now be performed with the assistance of robotic platforms. This review aims to summarize available evidence, describe techniques, and propose future directions for robotic rectus flap reconstruction after pelvic exenteration. Methods: This narrative review was conducted following the SANRA guidelines for narrative synthesis. A comprehensive search of PubMed, Embase, Scopus, and Web of Science was conducted for studies published between January 2000 and November 2025 on pelvic exenteration followed by robotic rectus abdominis flap reconstruction in gynecologic oncology. Eligible studies were retrospective or prospective reports, technical descriptions, case series, or comparative analyses. Non-robotic techniques and animal studies were excluded. Although the primary focus was gynecologic oncology, technically relevant studies from other oncologic disciplines were included when the reconstructive approach was directly applicable to pelvic exenteration. Extracted data included patient demographics, surgical details, and perioperative and oncologic outcomes. Results: The literature search identified primarily case reports and small single-center series describing robot-assisted rectus muscle-based flap reconstruction after pelvic exenteration. Reported cases demonstrated technical feasibility and successful flap harvest using robotic platforms, with adequate pelvic defect coverage. Potential benefits, such as reduced wound morbidity and preservation of a minimally invasive workflow, have been described. However, patient numbers were small, techniques varied, and standardized outcome measures or comparative data with open approaches were lacking. Conclusions: Robotic rectus flap reconstruction represents a promising advancement in pelvic exenteration surgery, potentially reducing morbidity and improving recovery. Further research, including multicenter prospective studies, is needed to validate these findings and establish standardized protocols. Full article

24 pages, 5280 KB  
Article
MA-DeepLabV3+: A Lightweight Semantic Segmentation Model for Jixin Fruit Maturity Recognition
by Leilei Deng, Jiyu Xu, Di Fang and Qi Hou
AgriEngineering 2026, 8(2), 40; https://doi.org/10.3390/agriengineering8020040 - 23 Jan 2026
Viewed by 239
Abstract
Jixin fruit (Malus domestica ‘Jixin’) is a high-value specialty fruit of significant economic importance in northeastern and northwestern China. Automatic recognition of fruit maturity is a critical prerequisite for intelligent harvesting. However, challenges inherent to field environments—including heterogeneous ripeness levels among fruits on the same plant, gradual color transitions during maturation that result in ambiguous boundaries, and occlusion by branches and foliage—render traditional image recognition methods inadequate for simultaneously achieving high recognition accuracy and computational efficiency. Although existing deep learning models can improve recognition accuracy, their substantial computational demands and high hardware requirements preclude deployment on resource-constrained embedded devices such as harvesting robots. To achieve the rapid and accurate identification of Jixin fruit maturity, this study proposes Multi-Attention DeepLabV3+ (MA-DeepLabV3+), a streamlined semantic segmentation framework derived from an enhanced DeepLabV3+ model. First, a lightweight backbone network is adopted to replace the original complex structure, substantially reducing computational burden. Second, a Multi-Scale Self-Attention Module (MSAM) is proposed to replace the traditional Atrous Spatial Pyramid Pooling (ASPP) structure, reducing network computational cost while enhancing the model’s perception capability for fruits of different scales. Finally, an Attention and Convolution Fusion Module (ACFM) is introduced in the decoding stage to significantly improve boundary segmentation accuracy and small target recognition ability. Experimental results on a self-constructed Jixin fruit dataset demonstrated that the proposed MA-DeepLabV3+ model achieves an mIoU of 86.13%, mPA of 91.29%, and F1 score of 90.05%, while reducing the number of parameters by 89.8% and computational cost by 55.3% compared to the original model. The inference speed increased from 41 frames per second (FPS) to 81 FPS, representing an approximately two-fold improvement. The model memory footprint is only 21 MB, demonstrating potential for deployment on embedded devices such as harvesting robots. Experimental results demonstrate that the proposed model achieves significant reductions in computational complexity while maintaining high segmentation accuracy, exhibiting robust performance particularly in complex scenarios involving color gradients, ambiguous boundaries, and occlusion. This study provides technical support for the development of intelligent Jixin fruit harvesting equipment and offers a valuable reference for the application of lightweight deep learning models in smart agriculture. Full article
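
The reported mIoU, mPA, and F1 figures are standard segmentation metrics. As a hedged illustration only (not the authors' evaluation code), per-class IoU can be computed from a pixel confusion matrix as TP / (TP + FP + FN) and averaged into mIoU, with mPA as the mean of per-class pixel accuracy:

```python
import numpy as np

def segmentation_metrics(conf: np.ndarray):
    """Per-class IoU and pixel accuracy from a KxK confusion matrix.

    conf[i, j] counts pixels whose ground-truth class is i and whose
    predicted class is j. Illustrative only; not the paper's code.
    """
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp            # predicted as class k but wrong
    fn = conf.sum(axis=1) - tp            # class-k pixels that were missed
    iou = tp / np.maximum(tp + fp + fn, 1e-9)
    pa = tp / np.maximum(tp + fn, 1e-9)   # per-class pixel accuracy (recall)
    return iou.mean(), pa.mean()          # mIoU, mPA

# toy 3-class example (background / unripe / ripe)
conf = np.array([[900, 20, 10],
                 [ 15, 80,  5],
                 [  5, 10, 85]])
miou, mpa = segmentation_metrics(conf)
print(f"mIoU={miou:.3f}, mPA={mpa:.3f}")
```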

18 pages, 2924 KB  
Article
Path Planning for a Cartesian Apple Harvesting Robot Using the Improved Grey Wolf Optimizer
by Dachen Wang, Huiping Jin, Chun Lu, Xuanbo Wu, Qing Chen, Lei Zhou, Xuesong Jiang and Hongping Zhou
Agronomy 2026, 16(2), 272; https://doi.org/10.3390/agronomy16020272 - 22 Jan 2026
Viewed by 136
Abstract
As a high-value fruit crop grown worldwide, apples require efficient harvesting solutions to maintain a stable supply. Intelligent harvesting robots represent a promising approach to address labour shortages. This study introduced a Cartesian robot integrated with a continuous-picking end-effector, providing a cost-effective and mechanically simpler alternative to complex articulated arms. The system employed a hand–eye calibration model to enhance positioning accuracy. To overcome the inefficiencies resulting from disordered harvesting sequences and excessive motion trajectories, the harvesting process was treated as a travelling salesman problem (TSP). The conventional fixed-plane return trajectory of Cartesian robots was enhanced using a three-dimensional continuous picking path strategy based on a fixed retraction distance (H). The value of H was determined through mechanical characterization of the apple stem’s brittle fracture, which eliminated redundant horizontal displacements and improved operational efficiency. Furthermore, an improved grey wolf optimizer (IGWO) was proposed for multi-fruit path planning. Simulations demonstrated that the IGWO achieved shorter path lengths compared to conventional algorithms. Laboratory experiments validated that the system successfully achieved vision-based localization and fruit harvesting through optimal path planning, with a fruit picking success rate of 89%. The proposed methodology provides a practical framework for automated continuous harvesting systems. Full article
(This article belongs to the Section Precision and Digital Agriculture)
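
The abstract's key framing is casting the picking sequence as a travelling salesman problem over detected fruit coordinates. The sketch below does not reproduce the paper's IGWO; it is only a minimal nearest-neighbour baseline, with hypothetical fruit positions, illustrating the tour that such an optimizer would shorten:

```python
import numpy as np

def nearest_neighbour_order(points: np.ndarray, start: int = 0):
    """Greedy TSP-style visiting order over fruit coordinates (meters).

    Baseline for illustration only; the paper searches for shorter
    sequences with an improved grey wolf optimizer (IGWO).
    """
    unvisited = set(range(len(points))) - {start}
    order, total, current = [start], 0.0, start
    while unvisited:
        nxt = min(unvisited, key=lambda j: np.linalg.norm(points[j] - points[current]))
        total += np.linalg.norm(points[nxt] - points[current])
        order.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    return order, total

# hypothetical detected apple positions (x, y, z) in the robot frame
fruits = np.array([[0.2, 0.5, 1.1], [0.6, 0.4, 1.3],
                   [0.1, 0.8, 1.2], [0.5, 0.9, 1.0]])
order, length = nearest_neighbour_order(fruits)
print(order, round(length, 3))
```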

14 pages, 4270 KB  
Article
Dual-Arm Coordination of a Tomato Harvesting Robot with Subtask Decoupling and Synthesizing
by Binhao Chen, Liang Gong, Shenghan Xie, Xuhao Zhao, Peixin Gao, Hefei Luo, Cheng Luo, Yanming Li and Chengliang Liu
Agriculture 2026, 16(2), 267; https://doi.org/10.3390/agriculture16020267 - 21 Jan 2026
Viewed by 121
Abstract
Robotic harvesters have the potential to substantially reduce the physical workload of agricultural laborers. However, in complex agricultural environments, traditional single-arm robot path planning methods often struggle to accomplish fruit harvesting tasks due to the presence of collision avoidance requirements and orientation constraints during grasping. In this work, we design a dual-arm tomato harvesting robot and propose a reinforcement learning-based cooperative control algorithm tailored to the dual-arm system. First, a deep learning-based semantic segmentation network is employed to extract the spatial locations of tomatoes and branches from sensory data. Building upon this perception module, we develop a reinforcement learning-based cooperative path planning approach to address inter-arm collision avoidance and end-effector orientation constraints during the harvesting process. Furthermore, a task-driven policy network architecture is introduced to decouple the complex harvesting task into structured subproblems, thereby enabling more efficient learning and improved performance. Simulation and experimental results demonstrate that the proposed method can generate collision-free harvesting trajectories that satisfy dual-arm orientation constraints, significantly improving the tomato harvesting success rate. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

24 pages, 69667 KB  
Article
YOLO-ELS: A Lightweight Cherry Tomato Maturity Detection Algorithm
by Zhimin Tong, Yu Zhou, Changhao Li, Changqing Cai and Lihong Rong
Appl. Sci. 2026, 16(2), 1043; https://doi.org/10.3390/app16021043 - 20 Jan 2026
Viewed by 130
Abstract
Within the domain of intelligent picking robotics, fruit recognition and positioning are essential. Challenging conditions such as varying light, occlusion, and limited edge-computing power compromise fruit maturity detection. To tackle these issues, this paper proposes a lightweight algorithm YOLO-ELS based on YOLOv8n. Specifically, we reconstruct the backbone by replacing the bottlenecks in the C2f structure with Edge-Information-Enhanced Modules (EIEM) to prioritize morphological cues and filter background redundancy. Furthermore, a Large Separable Kernel Attention (LSKA) mechanism is integrated into the SPPF layer to expand the effective receptive field for multi-scale targets. To mitigate occlusion-induced errors, a Spatially Enhanced Attention Module (SEAM) is incorporated into the decoupled detection head to enhance feature responses in obscured regions. Finally, the Inner-GIoU loss is adopted to refine bounding box regression and accelerate convergence. Experimental results demonstrate that compared to the YOLOv8n baseline, the proposed YOLO-ELS achieves a 14.8% reduction in GFLOPs and a 2.3% decrease in parameters, while attaining a precision, recall, and mAP@50% of 92.7%, 83.9%, and 92.0%, respectively. When compared with mainstream models such as DETR, Faster-RCNN, SSD, TOOD, YOLOv5s, and YOLO11n, the mAP@50% is improved by 7.0%, 4.7%, 11.4%, 8.6%, 3.1%, and 3.2%. Deployment tests on the NVIDIA Jetson Orin Nano Super edge platform yield an inference latency of 25.2 ms and a detection speed of 28.2 FPS, successfully meeting the real-time operational requirements of automated harvesting systems. These findings confirm that YOLO-ELS effectively balances high detection accuracy with lightweight architecture, providing a robust technical foundation for intelligent fruit picking in resource-constrained greenhouse environments. Full article
(This article belongs to the Section Agricultural Science and Technology)
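
Inner-GIoU extends the GIoU loss; the sketch below implements plain GIoU for axis-aligned (x1, y1, x2, y2) boxes as a hedged reference point, without the Inner-IoU box rescaling used in the paper:

```python
def giou_loss(box_a, box_b):
    """GIoU loss for two (x1, y1, x2, y2) boxes.

    Plain GIoU only; the paper's Inner-GIoU additionally rescales the
    boxes before computing the overlap term.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union if union > 0 else 0.0
    # smallest enclosing box penalizes distant, non-overlapping boxes
    cx1, cy1 = min(ax1, bx1), min(ay1, by1)
    cx2, cy2 = max(ax2, bx2), max(ay2, by2)
    c_area = (cx2 - cx1) * (cy2 - cy1)
    giou = iou - (c_area - union) / c_area if c_area > 0 else iou
    return 1.0 - giou  # loss in [0, 2]

print(giou_loss((10, 10, 50, 50), (30, 30, 80, 90)))
```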

18 pages, 14158 KB  
Article
Vision-Based Perception and Execution Decision-Making for Fruit Picking Robots Using Generative AI Models
by Yunhe Zhou, Chunjiang Yu, Jiaming Zhang, Yuanhang Liu, Jiangming Kan, Xiangjun Zou, Kang Zhang, Hanyan Liang, Sheng Zhang and Fengyun Wu
Machines 2026, 14(1), 117; https://doi.org/10.3390/machines14010117 - 19 Jan 2026
Viewed by 177
Abstract
At present, fruit picking mainly relies on manual operation. Taking the litchi (Litchi chinensis Sonn.) picking robot as an example, visual perception is often affected by illumination variations, low recognition accuracy, complex maturity judgment, and occlusion, which lead to inaccurate fruit localization. This study aims to establish an embodied perception mechanism based on “perception-reasoning-execution” to enhance the visual perception and decision-making capability of the robot in complex orchard environments. First, a Y-LitchiC instance segmentation method is proposed to achieve high-precision segmentation of litchi clusters. Second, a generative artificial intelligence model is introduced to intelligently assess fruit maturity and occlusion, providing auxiliary support for automatic picking. Based on the auxiliary judgments provided by the generative AI model, two types of dynamic harvesting decisions are formulated for subsequent operations. For unoccluded main fruit-bearing branches, a skeleton thinning algorithm is applied within the segmented region to extract the skeleton line, and the midpoint of the skeleton is used to perform the first type of localization and harvesting decision. In contrast, for main fruit-bearing branches occluded by leaves, threshold-based segmentation combined with maximum connected component extraction is employed to obtain the target region, followed by skeleton thinning, thereby completing the second type of dynamic picking decision. Experimental results show that the Y-LitchiC model improves the mean average precision (mAP) by 1.6% compared with the YOLOv11s-seg model, achieving higher accuracy in litchi cluster segmentation and recognition. The generative artificial intelligence model provides higher-level reasoning and decision-making capabilities for automatic picking. Overall, the proposed embodied perception mechanism and dynamic picking strategies effectively enhance the autonomous perception and decision-making of the picking robot in complex orchard environments, providing a reliable theoretical basis and technical support for accurate fruit localization and precision picking. Full article
(This article belongs to the Special Issue Control Engineering and Artificial Intelligence)
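
For unoccluded branches, the described localization reduces to skeletonizing the segmented branch mask and taking the skeleton midpoint as the picking point. A hedged sketch with scikit-image, using a synthetic mask in place of the Y-LitchiC segmentation output:

```python
import numpy as np
from skimage.morphology import skeletonize

def branch_midpoint(mask: np.ndarray):
    """Approximate picking point: midpoint of the skeletonized branch mask.

    `mask` is a binary image of the main fruit-bearing branch (e.g. from an
    instance-segmentation model). Simplified illustration; the paper's
    pipeline also handles occlusion via connected-component extraction.
    """
    skeleton = skeletonize(mask.astype(bool))
    ys, xs = np.nonzero(skeleton)
    if len(xs) == 0:
        return None
    # order skeleton pixels roughly along the branch by vertical position
    order = np.argsort(ys)
    mid = order[len(order) // 2]
    return int(xs[mid]), int(ys[mid])  # (u, v) pixel coordinates

# toy vertical "branch": a 3-pixel-wide stripe in a 100x100 mask
mask = np.zeros((100, 100), dtype=np.uint8)
mask[20:90, 48:51] = 1
print(branch_midpoint(mask))
```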

26 pages, 925 KB  
Review
Integrating Artificial Intelligence and Machine Learning for Sustainable Development in Agriculture and Allied Sectors of the Temperate Himalayas
by Arnav Saxena, Mir Faiq, Shirin Ghatrehsamani and Syed Rameem Zahra
AgriEngineering 2026, 8(1), 35; https://doi.org/10.3390/agriengineering8010035 - 19 Jan 2026
Viewed by 290
Abstract
The temperate Himalayan states of Jammu and Kashmir, Himachal Pradesh, Uttarakhand, Ladakh, Sikkim, and Arunachal Pradesh in India face unique agro-ecological challenges across agriculture and allied sectors, including pest and disease pressures, inefficient resource use, post-harvest losses, and fragmented supply chains. This review systematically examines 21 critical problem areas, with three key challenges identified per sector across agriculture, agricultural engineering, fisheries, forestry, horticulture, sericulture, and animal husbandry. Artificial Intelligence (AI) and Machine Learning (ML) interventions, including computer vision, predictive modeling, Internet of Things (IoT)-based monitoring, robotics, and blockchain-enabled traceability, are evaluated for their regional applicability, pilot-level outcomes, and operational limitations under temperate Himalayan conditions. The analysis highlights that AI-enabled solutions demonstrate strong potential for early pest and disease detection, improved resource-use efficiency, ecosystem monitoring, and market integration. However, large-scale adoption remains constrained by limited digital infrastructure, data scarcity, high capital costs, low digital literacy, and fragmented institutional frameworks. The novelty of this review lies in its cross-sectoral synthesis of AI/ML applications tailored to the Himalayan context, combined with a sector-wise revenue-loss assessment to quantify economic impacts and guide prioritization. Based on the identified gaps, the review proposes feasible, context-aware strategies, including lightweight edge-AI models, localized data platforms, capacity-building initiatives, and policy-aligned implementation pathways. Collectively, these recommendations aim to enhance sustainability, resilience, and livelihood security across agriculture and allied sectors in the temperate Himalayan region. Full article

19 pages, 4498 KB  
Article
Research and Implementation of Peach Fruit Detection and Growth Posture Recognition Algorithms
by Linjing Xie, Wei Ji, Bo Xu, Donghao Wu and Jiaxin Ao
Agriculture 2026, 16(2), 193; https://doi.org/10.3390/agriculture16020193 - 12 Jan 2026
Viewed by 211
Abstract
Robotic peach harvesting represents a pivotal strategy for reducing labor costs and improving production efficiency. The fundamental prerequisite for a harvesting robot to successfully complete picking tasks is the accurate recognition of fruit growth posture subsequent to target identification. This study proposes a novel methodology for peach growth posture recognition by integrating an enhanced YOLOv8 algorithm with the RTMpose keypoint detection framework. Specifically, the conventional Neck network in YOLOv8 was replaced by an Atrous Feature Pyramid Network (AFPN) to bolster multi-scale feature representation. Additionally, the Soft Non-Maximum Suppression (Soft-NMS) algorithm was implemented to suppress redundant detections. The RTMpose model was further employed to locate critical morphological landmarks, including the stem and apex, to facilitate precise growth posture recognition. Experimental results indicated that the refined YOLOv8 model attained precision, recall, and mean average precision (mAP) of 98.62%, 96.3%, and 98.01%, respectively, surpassing the baseline model by 8.5%, 6.2%, and 3.0%. The overall accuracy for growth posture recognition achieved 89.60%. This integrated approach enables robust peach detection and reliable posture recognition, thereby providing actionable guidance for the end-effector of an autonomous harvesting robot. Full article
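
Growth posture here is inferred from the relative positions of the detected stem and apex keypoints. The following sketch is purely illustrative (hypothetical keypoints and thresholds, not the paper's classification scheme): it converts the stem-to-apex vector into a tilt angle relative to the vertical image axis, which could inform the end-effector's approach direction:

```python
import math

def growth_posture(stem_xy, apex_xy, upright_thresh_deg=30.0):
    """Tilt of the fruit axis relative to the vertical image axis.

    stem_xy / apex_xy are (u, v) pixel keypoints, e.g. from RTMpose.
    Threshold and labels are illustrative, not the paper's scheme.
    """
    du = apex_xy[0] - stem_xy[0]
    dv = apex_xy[1] - stem_xy[1]                          # image v grows downward
    angle = math.degrees(math.atan2(abs(du), abs(dv)))    # 0 deg = axis aligned with vertical
    label = "upright/hanging" if angle <= upright_thresh_deg else "tilted"
    return angle, label

print(growth_posture(stem_xy=(320, 180), apex_xy=(335, 260)))
```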

16 pages, 254 KB  
Review
Robotic Horizons in Plastic Surgery: A Look Toward the Future
by Ali Foroutan, Diwakar Phuyal, Georgia Babb, Julia Ting, Ghazal Mashhadiagha, Niayesh Najafi, Risal Djohan, Sarah N. Bishop and Graham S. Schwarz
J. Clin. Med. 2026, 15(2), 602; https://doi.org/10.3390/jcm15020602 - 12 Jan 2026
Viewed by 333
Abstract
Background/Objectives: Robotic technology has transformed several surgical specialties, offering enhanced precision, visualization, and dexterity. In plastic and reconstructive surgery, robotic systems are increasingly utilized across a range of procedures, though their applications remain in early development. Methods: A review of the literature was performed to identify studies reporting robot-assisted procedures in plastic and reconstructive surgery. The literature was synthesized thematically to characterize current procedural applications, emerging technologies, and areas of active clinical investigation. Results: Robotic systems have been reported in a broad range of plastic and reconstructive procedures, including flap harvest, microsurgery, breast reconstruction, craniofacial and head and neck reconstruction, esthetic surgery, and gender-affirming surgery. The existing studies primarily consist of case series and case reports with substantial variability in reported indications, techniques, and technological platforms. Comparative clinical outcomes and long-term data are limited. Conclusions: Robot-assisted reconstruction continues to expand across multiple procedural domains. However, current evidence remains largely descriptive, underscoring the need for standardized reporting and prospective studies to better define clinical value, safety, and appropriate indications. Full article
(This article belongs to the Special Issue Plastic Surgery: Challenges and Future Directions)
17 pages, 11104 KB  
Article
Lightweight Improvements to the Pomelo Image Segmentation Method for Yolov8n-seg
by Zhen Li, Baiwei Cao, Zhengwei Yu, Qingting Jin, Shilei Lyu, Xiaoyi Chen and Danting Mao
Agriculture 2026, 16(2), 186; https://doi.org/10.3390/agriculture16020186 - 12 Jan 2026
Viewed by 335
Abstract
Instance segmentation in agricultural robotics requires a balance between real-time performance and accuracy. This study proposes a lightweight pomelo image segmentation method based on the YOLOv8n-seg model integrated with the RepGhost module. A pomelo dataset consisting of 5076 samples was constructed through systematic image acquisition, annotation, and data augmentation. The RepGhost architecture was incorporated into the C2f module of the YOLOv8-seg backbone network to enhance feature reuse capabilities while reducing computational complexity. Experimental results demonstrate that the YOLOv8-seg-RepGhost model enhances efficiency without compromising accuracy: parameter count is reduced by 16.5% (from 3.41 M to 2.84 M), computational load decreases by 14.8% (from 12.8 GFLOPs to 10.9 GFLOPs), and inference time is shortened by 6.3% (to 15 ms). The model maintains excellent detection performance with bounding box mAP50 at 97.75% and mask mAP50 at 97.51%. The research achieves both high segmentation efficiency and detection accuracy, offering core support for developing visual systems in harvesting robots and providing an effective solution for deep learning-based fruit target recognition and automated harvesting applications. Full article
(This article belongs to the Special Issue Advances in Precision Agriculture in Orchard)

28 pages, 9738 KB  
Article
Design and Evaluation of an Underactuated Rigid–Flexible Coupled End-Effector for Non-Destructive Apple Harvesting
by Zeyi Li, Zhiyuan Zhang, Jingbin Li, Gang Hou, Xianfei Wang, Yingjie Li, Huizhe Ding and Yufeng Li
Agriculture 2026, 16(2), 178; https://doi.org/10.3390/agriculture16020178 - 10 Jan 2026
Viewed by 279
Abstract
In response to the growing need for efficient, stable, and non-destructive gripping in apple harvesting robots, this study proposes a novel rigid–flexible coupled end-effector. The design integrates an underactuated mechanism with a real-time force feedback control system. First, compression tests on ‘Red Fuji’ apples determined the minimum damage threshold to be 24.33 N. A genetic algorithm (GA) was employed to optimize the geometric parameters of the finger mechanism for uniform force distribution. Subsequently, a rigid–flexible coupled multibody dynamics model was established to simulate the grasping of small (70 mm), medium (80 mm), and large (90 mm) apples. Additionally, a harvesting experimental platform was constructed to verify the performance. Results demonstrated that by limiting the contact force of the distal phalange region silicone (DPRS) to 24 N via active feedback, the peak contact forces on the proximal phalange region silicone (PPRS) and middle phalange region silicone (MPRS) were effectively maintained below the damage threshold across all three sizes. The maximum equivalent stress remained significantly below the fruit’s yield limit, ensuring no mechanical damage occurred, with an average enveloping time of approximately 1.30 s. The experimental data showed strong agreement with the simulation, with a mean absolute percentage error (MAPE) of 5.98% for contact force and 5.40% for enveloping time. These results confirm that the proposed end-effector successfully achieves high adaptability and reliability in non-destructive harvesting, offering a valuable reference for agricultural robotics. Full article
(This article belongs to the Section Agricultural Technology)
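
The 24.33 N damage threshold and active force feedback suggest a simple grip routine: close the fingers incrementally until the measured contact force approaches the limit. The sketch below is a hypothetical control loop with placeholder sensor and actuator hooks, not the paper's controller:

```python
import time

DAMAGE_THRESHOLD_N = 24.33   # minimum damage force reported for 'Red Fuji'
FORCE_LIMIT_N = 24.0         # distal-phalange force limit used in the paper

def grasp(read_contact_force, step_finger, max_steps=200, period_s=0.01):
    """Close the gripper until the contact force reaches the limit.

    read_contact_force() -> float and step_finger(delta) are hypothetical
    hardware hooks; a real system would add filtering and per-phalange checks.
    """
    for _ in range(max_steps):
        if read_contact_force() >= FORCE_LIMIT_N:
            return True                      # grip secured below the damage threshold
        step_finger(delta=0.5)               # close by a small increment (deg or mm)
        time.sleep(period_s)
    return False                             # never reached a stable grip

# dry run with a fake sensor that ramps up as the finger closes
state = {"force": 0.0}
ok = grasp(lambda: state["force"],
           lambda delta: state.update(force=state["force"] + 3.0 * delta))
print("grasp ok:", ok)
```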

20 pages, 59455 KB  
Article
ACDNet: Adaptive Citrus Detection Network Based on Improved YOLOv8 for Robotic Harvesting
by Zhiqin Wang, Wentao Xia and Ming Li
Agriculture 2026, 16(2), 148; https://doi.org/10.3390/agriculture16020148 - 7 Jan 2026
Viewed by 328
Abstract
To address the challenging requirements of citrus detection in complex orchard environments, this paper proposes ACDNet (Adaptive Citrus Detection Network), a novel deep learning framework specifically designed for automated citrus harvesting. The proposed method introduces three key innovations: (1) Citrus-Adaptive Feature Extraction (CAFE) module that combines fruit-aware partial convolution with illumination-adaptive attention mechanisms to enhance feature representation with improved efficiency; (2) Dynamic Multi-Scale Sampling (DMS) operator that adaptively focuses sampling points on fruit regions while suppressing background interference through content-aware offset generation; and (3) Fruit-Shape Aware IoU (FSA-IoU) loss function that incorporates citrus morphological priors and occlusion patterns to improve localization accuracy. Extensive experiments on our newly constructed CitrusSet dataset, which comprises 2887 images capturing diverse lighting conditions, occlusion levels, and fruit overlapping scenarios, demonstrate that ACDNet achieves superior performance with mAP@0.5 of 97.5%, precision of 92.1%, and recall of 92.8%, while maintaining real-time inference at 55.6 FPS. Compared to the baseline YOLOv8n model, ACDNet achieves improvements of 1.7%, 3.4%, and 3.6% in mAP@0.5, precision, and recall, respectively, while reducing model parameters by 11% (to 2.67 M) and computational cost by 20% (to 6.5 G FLOPs), making it highly suitable for deployment in resource-constrained robotic harvesting systems. However, the current study is primarily validated on citrus fruits, and future work will focus on extending ACDNet to other spherical fruits and exploring its generalization under extreme weather conditions. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

33 pages, 14779 KB  
Article
A Vision-Based Robot System with Grasping-Cutting Strategy for Mango Harvesting
by Qianling Liu and Zhiheng Lu
Agriculture 2026, 16(1), 132; https://doi.org/10.3390/agriculture16010132 - 4 Jan 2026
Viewed by 538
Abstract
Mango is the second most widely cultivated tropical fruit in the world. Its harvesting mainly relies on manual labor. During the harvest season, the hot weather leads to low working efficiency and high labor costs. Current research on automatic mango harvesting mainly focuses on locating the fruit stem harvesting point, followed by stem clamping and cutting. However, these methods are less effective when the stem is occluded. To address these issues, this study first acquires images of four mango varieties in a mixed cultivation orchard and builds a dataset. Mango detection and occlusion-state classification models are then established based on YOLOv11m and YOLOv8l-cls, respectively. The detection model achieves an AP0.5–0.95 (average precision at IoU = 0.50:0.05:0.95) of 90.21%, and the accuracy of the classification model is 96.9%. Second, based on the mango growth characteristics, detected mango bounding boxes and binocular vision, we propose a spatial localization method for the mango grasping point. Building on this, a mango-grasping and stem-cutting end-effector is designed. Finally, a mango harvesting robot system is developed, and verification experiments are carried out. The experimental results show that the harvesting method and procedure are well-suited for situations where the fruit stem is occluded, as well as for fruits with no occlusion or partial occlusion. The mango grasping success rate reaches 96.74%, the stem cutting success rate is 91.30%, and the fruit injury rate is less than 5%. The average image processing time is 119.4 ms. The results prove the feasibility of the proposed methods. Full article
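
Localizing the grasping point from bounding boxes and binocular vision ultimately rests on stereo triangulation: for a rectified pair with focal length f and baseline B, depth is Z = f·B / disparity. A hedged sketch with made-up calibration values (not the paper's cameras):

```python
def stereo_point(u_left, v_left, u_right, fx, fy, cx, cy, baseline_m):
    """3D point in the left-camera frame from a rectified stereo match.

    (u_left, v_left) and u_right are matched pixel coordinates, e.g. the
    centers of the mango bounding box in the left and right images.
    Assumes rectified images; parameters below are illustrative.
    """
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: bad match or point at infinity")
    z = fx * baseline_m / disparity
    x = (u_left - cx) * z / fx
    y = (v_left - cy) * z / fy
    return x, y, z

# hypothetical calibration: 640x480 images, 60 mm baseline
print(stereo_point(u_left=350, v_left=260, u_right=322,
                   fx=700.0, fy=700.0, cx=320.0, cy=240.0, baseline_m=0.06))
```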

20 pages, 4952 KB  
Article
Star Lightweight Convolution and NDT-RRT: An Integrated Path Planning Method for Walnut Harvesting Robots
by Xiangdong Liu, Xuan Li, Bangbang Chen, Jijing Lin, Kejia Zhuang and Baojian Ma
Sensors 2026, 26(1), 305; https://doi.org/10.3390/s26010305 - 2 Jan 2026
Viewed by 536
Abstract
To address issues such as slow response speed and low detection accuracy in fallen walnut picking robots in complex orchard environments, this paper proposes a detection and path planning method that integrates star-shaped lightweight convolution with NDT-RRT. The method includes the improved lightweight detection model YOLO-FW and an efficient path planning algorithm NDT-RRT. YOLO-FW enhances feature extraction by integrating star-shaped convolution (Star Blocks) and the C3K2 module in the backbone network, while the introduction of a multi-level scale pyramid structure (CA_HSFPN) in the neck network improves multi-scale feature fusion. Additionally, the loss function is replaced with the PIoU loss, which incorporates the concept of Inner-IoU, thus improving regression accuracy while maintaining the model’s lightweight nature. The NDT-RRT path planning algorithm builds upon the RRT algorithm by employing node rejection strategies, dynamic step-size adjustment, and target-bias sampling, which reduces planning time while maintaining path quality. Experiments show that, compared to the baseline model, the YOLO-FW model achieves precision, recall, and mAP@0.5 of 90.6%, 90.4%, and 95.7%, respectively, with a volume of only 3.62 MB and a 30.65% reduction in the number of parameters. The NDT-RRT algorithm reduces search time by 87.71% under conditions of relatively optimal paths. Furthermore, a detection and planning system was developed based on the PySide6 framework on an NVIDIA Jetson Xavier NX embedded device. On-site testing demonstrated that the system exhibits good robustness, high precision, and real-time performance in real orchard environments, providing an effective technological reference for the intelligent operation of fallen walnut picking robots. Full article
(This article belongs to the Special Issue Robotic Systems for Future Farming)
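
Target-bias sampling, one of the three NDT-RRT modifications listed, simply means sampling the goal with some probability instead of a uniform random point. A minimal 2D goal-biased RRT with circular obstacles and a fixed step size is sketched below; it is a baseline, not the authors' NDT-RRT:

```python
import math, random

def rrt(start, goal, obstacles, step=0.2, goal_bias=0.1,
        goal_tol=0.2, bounds=(0.0, 5.0), max_iter=5000):
    """Goal-biased RRT in 2D. obstacles = list of (cx, cy, radius).

    Illustrative baseline only; NDT-RRT adds node rejection and dynamic
    step-size adjustment on top of this scheme.
    """
    def collides(p):
        return any(math.hypot(p[0] - cx, p[1] - cy) <= r for cx, cy, r in obstacles)

    nodes, parent = [start], {0: None}
    for _ in range(max_iter):
        # target-bias sampling: occasionally steer straight at the goal
        sample = goal if random.random() < goal_bias else \
            (random.uniform(*bounds), random.uniform(*bounds))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        d = math.dist(nodes[i], sample)
        if d == 0:
            continue
        t = min(step / d, 1.0)
        new = (nodes[i][0] + t * (sample[0] - nodes[i][0]),
               nodes[i][1] + t * (sample[1] - nodes[i][1]))
        if collides(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:
            path, k = [], len(nodes) - 1
            while k is not None:           # backtrack to the start node
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

path = rrt((0.5, 0.5), (4.5, 4.5), obstacles=[(2.5, 2.5, 0.8)])
print("path nodes:", len(path) if path else "not found")
```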
