Search Results (96)

Search Parameters:
Keywords = fruit motion

35 pages, 8415 KB  
Article
Research on Three-Dimensional Positioning Method for Automatic Strawberry Fruit Picking Based on Vision–IMU Fusion
by Bowen Liu, Chuhan Chen, Junqiu Li, Qinghui Zhang and Yinghao Meng
Agriculture 2026, 16(8), 893; https://doi.org/10.3390/agriculture16080893 - 17 Apr 2026
Viewed by 393
Abstract
Accurate fruit localization and efficient harvesting are key challenges for agricultural robots, especially in dynamic orchard environments, where platform vibration, fruit occlusion, and computational resource limitations of embedded devices significantly impact system performance. To address these issues, this paper proposes a lightweight “fruit detection + harvesting” framework. First, by integrating MobileNetV4 and Triplet Attention mechanisms, an improved YOLOv8n network is designed, reaching 98.148% precision and 30 FPS on a Jetson Nano and achieving a good balance between detection accuracy and computational efficiency suitable for edge deployment. Second, a strawberry three-dimensional coordinate reconstruction method based on weighted 3D centroid reconstruction is proposed, utilizing depth bias adjustment coefficients to improve spatial accuracy. Third, to address localization errors caused by vibration and platform motion, a dynamic compensation and temporal fusion strategy based on an Inertial Measurement Unit (IMU) is proposed. The rotation matrix estimated from IMU data is first used to correct camera pose variations. Then, an adaptive sliding window is employed to smooth the coordinate sequence. Finally, an Extended Kalman Filter (EKF) is applied to further refine the fused results by incorporating temporal dynamics, ensuring that the reconstructed three-dimensional coordinates in the robotic arm reference frame achieve higher stability and continuity. Experimental results in orchard scenarios show that compared with traditional methods, the system has higher localization accuracy, stronger robustness to dynamic disturbances, and higher harvesting efficiency. This work provides a practical and deployable solution for advancing intelligent fruit-harvesting robots. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
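The IMU compensation and sliding-window smoothing steps described in this abstract can be sketched in a few lines. This is a minimal illustration only: it assumes a yaw-only rotation and a fixed (rather than adaptive) window, and all names are hypothetical, not the authors' code.

```python
import math
from collections import deque

def rot_z(yaw):
    """3x3 rotation about the z-axis (one factor of an IMU attitude estimate)."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def compensate(point_cam, R_imu):
    """Rotate a camera-frame fruit coordinate to undo the estimated pose change."""
    return [sum(R_imu[i][j] * point_cam[j] for j in range(3)) for i in range(3)]

class SlidingWindowSmoother:
    """Average the last `size` 3D coordinates to suppress vibration jitter."""
    def __init__(self, size=5):
        self.buf = deque(maxlen=size)

    def update(self, p):
        self.buf.append(p)
        n = len(self.buf)
        return [sum(q[i] for q in self.buf) / n for i in range(3)]
```

An EKF would then fuse the smoothed coordinates over time; that stage is omitted here.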

28 pages, 3527 KB  
Article
Autonomous Tomato Harvesting System Integrating AI-Controlled Robotics in Greenhouses
by Mihai Gabriel Matache, Florin Bogdan Marin, Catalin Ioan Persu, Robert Dorin Cristea, Florin Nenciu and Atanas Z. Atanasov
Agriculture 2026, 16(8), 847; https://doi.org/10.3390/agriculture16080847 - 11 Apr 2026
Viewed by 1112
Abstract
Labor shortages and the need for increased productivity have accelerated the development of robotic harvesting systems for greenhouse crops; however, reliable operation under fruit occlusion and clustered arrangements remains a major challenge, particularly due to the limited integration between perception and motion planning modules. The paper presents the design and experimental validation of an autonomous robotic system for greenhouse tomato harvesting. The proposed platform integrates a rail-guided mobile base, a six-degrees-of-freedom robotic manipulator, and an adaptive end effector with a hybrid vision framework that combines convolutional neural networks and watershed-based segmentation to enable robust fruit detection and localization under occluded conditions. The proposed approach enables improved separation of overlapping fruits and provides accurate spatial localization through stereo vision combined with IMU-assisted camera-to-robot coordinate transformation. An occlusion-aware trajectory planning strategy was developed to generate collision-free manipulation paths in the presence of leaves and stems, enhancing harvesting safety and reliability. The system was trained and evaluated using a dataset of real greenhouse images supplemented with synthetic data augmentation. Experimental trials conducted under practical greenhouse conditions demonstrated a fruit detection precision of 96.9%, recall of 93.5%, and mean Intersection-over-Union of 79.2%. The robotic platform achieved an overall harvesting success rate of 78.5%, reaching 85% for unobstructed fruits, with an average cycle time of 15 s per fruit in direct harvesting scenarios. The rail-guided mobility significantly improved positioning stability and repeatability during manipulation compared with fully mobile platforms. 
The results confirm that integrating hybrid perception with occlusion-aware motion planning can substantially improve the functionality of robotic harvesting systems in protected cultivation environments. The proposed solution contributes to the advancement of automation technologies for greenhouse vegetable production and supports the transition toward more sustainable and labor-efficient agricultural practices. Full article
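The IMU-assisted camera-to-robot coordinate transformation mentioned in this abstract reduces to a rigid transform. A minimal sketch, where the rotation R and translation t are assumed outputs of hand-eye calibration plus the IMU attitude (not the paper's implementation):

```python
def to_robot_frame(p_cam, R, t):
    """Rigid transform from camera to robot base frame: p_robot = R @ p_cam + t.
    R is a 3x3 rotation, t a 3-vector; both assumed known from calibration."""
    return [sum(R[i][j] * p_cam[j] for j in range(3)) + t[i] for i in range(3)]
```

Chained frames (camera to end effector to base) compose by applying the same operation repeatedly.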

28 pages, 7047 KB  
Article
Design and Performance Evaluation of a Vacuum-Based Twist–Bend End-Effector for Automated Mushroom Harvesting with Vision-Based Damage Assessment
by Kittiphum Pawikhum, Yanqiu Yang, Long He, John A. Pecchia and Paul Heinemann
AgriEngineering 2026, 8(4), 151; https://doi.org/10.3390/agriengineering8040151 - 10 Apr 2026
Viewed by 425
Abstract
Manual harvesting of white button mushrooms involves coordinated bending and twisting motions to detach the fruiting body while minimizing surface damage; however, replicating these actions in automated systems remains challenging. In this study, a vacuum-based end-effector that mimics manual twist–bend detachment using a single-point contact was designed and evaluated to reduce mechanical damage. Key detachment parameters, including the friction coefficient (mean 0.62), bending angle (average 5.72°), and twisting torque (average 2.56 N·m), were experimentally analyzed to determine the minimum vacuum pressures required for effective bending and twisting, which were −8.64 ± 2.21 kPa and −8.91 ± 2.45 kPa, respectively, with no significant difference observed between the two motions (p = 0.51). A customized vision-based image processing algorithm was developed to quantify postharvest surface damage using a whiteness index (WI). An optimal vacuum pressure of −17.17 kPa was identified, together with a bending angle of 10° and a twisting angle of 90°, balancing high harvesting success with preservation of mushroom quality. The results highlight the influence of end-effector design parameters, including vacuum cup material, contact area, bending direction, and vacuum application duration, on harvesting performance and product marketability, supporting the development of robotic systems for fresh mushroom harvesting. Full article
(This article belongs to the Section Agricultural Mechanization and Machinery)
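A whiteness-index damage check of the kind described can be sketched as follows. The paper's exact WI definition is not reproduced here; this uses one common CIELAB-based formulation (WI = 100 for a perfect white), and the threshold is illustrative.

```python
import math

def whiteness_index(L, a, b):
    """CIE-style whiteness from CIELAB values: 100 at L=100, a=b=0,
    dropping as the mushroom surface bruises or browns (one common formulation)."""
    return 100.0 - math.sqrt((100.0 - L) ** 2 + a * a + b * b)

def damage_fraction(lab_pixels, wi_threshold=70.0):
    """Fraction of surface pixels whose whiteness falls below a quality threshold."""
    flagged = sum(1 for (L, a, b) in lab_pixels if whiteness_index(L, a, b) < wi_threshold)
    return flagged / len(lab_pixels)
```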

34 pages, 13959 KB  
Article
Geo-Referenced Factor-Graph SLAM for Orchard-Scale 3D Apple Reconstruction and Yield Estimation
by Dheeraj Bharti, Lilian Nogueira de Faria, Luciano Vieira Koenigkan, Luciano Gebler, Andrea de Rossi and Thiago Teixeira Santos
Agriculture 2026, 16(7), 764; https://doi.org/10.3390/agriculture16070764 - 30 Mar 2026
Viewed by 548
Abstract
Accurate and spatially resolved yield estimation is a critical requirement for precision agriculture and orchard management. This paper presents a geometrically consistent, orchard-scale apple yield estimation framework that integrates GNSS–visual-inertial odometry (VIO) fusion, deep learning-based object detection, multi-frame tracking, three-dimensional triangulation, and incremental factor-graph optimization. Camera poses are obtained using ZED GNSS–VIO fusion and subsequently refined using an iSAM2-based nonlinear smoothing approach that incorporates strong relative-motion constraints and soft global ENU (East-North-Up) translation priors. Apples are detected using a YOLO-based model and associated across frames via CoTracker3, enabling robust multi-view landmark reconstruction. Reprojection factors and landmark priors are incorporated into a unified nonlinear factor graph to jointly optimize camera trajectories and 3D apple positions. The reconstructed apples are spatially aggregated into a grid-based mass map, where individual fruit volumes are estimated assuming spherical geometry and converted to mass using density models. The resulting ENU-referenced yield plot provides a structured representation of orchard production variability. Experimental results demonstrate significant reductions in reprojection error after optimization and improved global consistency of the trajectory, leading to stable and spatially coherent 3D reconstructions. The proposed pipeline bridges perception, geometry, and optimization, providing a scalable solution for orchard-scale yield mapping and decision support in precision agriculture. Full article
(This article belongs to the Special Issue Application of Smart Technologies in Orchard Management)
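The multi-view landmark reconstruction step can be illustrated with the classic two-ray midpoint triangulation, a simplification of the paper's factor-graph optimization; all names here are hypothetical.

```python
def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two viewing rays o + t*d,
    i.e. the least-squares 3D position of a fruit seen from two camera poses."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    w0 = [x - y for x, y in zip(o1, o2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b  # approaches 0 for near-parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = [o + t1 * u for o, u in zip(o1, d1)]
    p2 = [o + t2 * u for o, u in zip(o2, d2)]
    return [(x + y) / 2.0 for x, y in zip(p1, p2)]
```

In the paper's pipeline such points would enter the factor graph as landmarks and be refined jointly with the camera trajectory.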

29 pages, 8910 KB  
Article
Field Evaluation of a Robotic Apple Harvester with Negative-Pressure Driven End-Effectors on a Simplified 4-DoF Manipulator
by Guangrui Hu, Jianguo Zhou, Shiwei Wen, Ning Chen, Chen Chen, Fangmin Cheng, Yu Chen and Jun Chen
Agriculture 2026, 16(7), 717; https://doi.org/10.3390/agriculture16070717 - 24 Mar 2026
Viewed by 500
Abstract
Apple picking is an inherently labor-intensive, time-consuming, and costly task, and robotic harvesting represents a potential alternative to address this challenge. This study presents the development and field evaluation of an integrated robotic system for apple harvesting, which combines machine vision, a dual four-degree-of-freedom (DoF) manipulator, and a mobile platform. The harvesting mechanism employed a streamlined 4-DoF manipulator driven by closed-loop stepper motors, incorporating a differential gear mechanism to execute yaw and pitch motions. Trajectory planning utilized linear interpolation with a harmonic acceleration/deceleration profile to ensure smooth end-effector movement. Fruit detection and localization within the canopy were performed by a stereo vision system running a lightweight deep neural network, achieving a mean hand-eye calibration accuracy of 4.7 ± 2.7 mm. Three negative-pressure driven soft end-effector designs—a suction soft end-effector (SSE), a grasping soft end-effector (GSE), and a suction-grasping soft end-effector (SGSE)—were assessed for their harvesting performance. Field trials conducted in a commercial spindle orchard demonstrated that the GSE achieved the highest performance, with a harvesting success rate of 80.80% among reachable fruits, a full-process success rate (from detection to collection) of 61.59%, an overall fruit damage rate of 10.89%, and an average single-fruit cycle time of 5.27 s. In contrast, the SSE and SGSE showed lower success rates (49.21% and 64.71%, respectively). This work provides a practical robotic harvesting solution. It validates the feasibility of a zoned, multi-manipulator harvesting strategy and delivers comparative data to guide the development of more efficient and robust harvesting robots. Full article
(This article belongs to the Section Agricultural Technology)
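Linear interpolation under a harmonic acceleration/deceleration profile, as described in this abstract, can be sketched as follows; the cosine form of the profile and the function names are assumptions, not the authors' code.

```python
import math

def harmonic_profile(t, T):
    """Smooth 0-to-1 progress with zero velocity at both ends (harmonic accel/decel)."""
    return 0.5 * (1.0 - math.cos(math.pi * t / T))

def interpolate(p0, p1, t, T):
    """End-effector position at time t on a straight line from p0 to p1."""
    s = harmonic_profile(t, T)
    return [a + s * (b - a) for a, b in zip(p0, p1)]
```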

25 pages, 7474 KB  
Article
Push-or-Avoid: Deep Reinforcement Learning of Obstacle-Aware Harvesting for Orchard Robots
by Heng Fu, Tao Li, Qingchun Feng and Liping Chen
Agriculture 2026, 16(6), 670; https://doi.org/10.3390/agriculture16060670 - 16 Mar 2026
Viewed by 640
Abstract
In structured orchard environments, harvesting robots operate where rigid bodies (e.g., trunks, poles, and wires) coexist with flexible foliage. Strict avoidance of all obstacles significantly compromises operational efficiency. To address this, this study proposes an end-to-end autonomous harvesting framework characterized by an “avoid-rigid, push-through-soft” strategy. This framework explicitly propagates uncertainties from sensor data and reconstruction processes into the planning and policy phases. First, a multi-task perception network acquires 2D semantic masks of fruits and branches. Class probabilities and instance IDs are back-projected onto a 3D Gaussian Splatting (3DGS) representation to construct a decision-oriented, semantically enhanced 3D scene model. The policy network accepts multi-channel 3DGS rendered observations and proprioceptive states as inputs, outputting a continuous preference vector over eight predefined motion primitives. This approach unifies path planning and action decision-making within a single closed loop. Additionally, a dynamic action shielding module was designed to perform look-ahead collision risk assessments on candidate discrete actions. By employing an action mask to block actions potentially colliding with rigid obstacles, high-risk behaviors are effectively suppressed during both training and execution, thereby enhancing the robustness and reliability of robotic manipulation. The proposed method was validated in both simulation and real-world scenarios. In complex orchard scenarios, the proposed AE-TD3 algorithm achieves a harvesting success rate of 77.1%, outperforming existing RRT (53.3%), DQN (60.9%), and TD3 (63.8%) methods. Furthermore, the method demonstrates superior safety and real-time performance, with a collision rate reduced to 16.2% and an average operation time of only 12.4 s. Results indicate that the framework effectively supports efficient harvesting operations while ensuring safety. Full article
(This article belongs to the Section Agricultural Technology)
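The dynamic action-shielding idea (mask motion primitives whose look-ahead collision risk with rigid obstacles is too high, then choose among the rest by policy preference) can be sketched as follows; the threshold and names are illustrative only.

```python
def masked_action(preferences, collision_risk, risk_threshold=0.5):
    """Pick the highest-preference primitive whose predicted collision risk is
    acceptable; return None if every primitive is shielded (stop and replan)."""
    candidates = [i for i, r in enumerate(collision_risk) if r < risk_threshold]
    if not candidates:
        return None
    return max(candidates, key=lambda i: preferences[i])
```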

22 pages, 4152 KB  
Article
Vacuum-Driven 3D Printable Soft Actuators with Foldable Contraction Capabilities
by Caiyang E, Jianming Li, Bin Wang, Danfang Guo and Qiping Xu
Actuators 2026, 15(3), 136; https://doi.org/10.3390/act15030136 - 28 Feb 2026
Viewed by 705
Abstract
In nature, structures such as earwig wings and mimosa leaves exhibit remarkable folding and unfolding capabilities. Inspired by these biological mechanisms, this work investigates soft foldable and torsional actuators based on the Kresling crease pattern, fabricated using soft TPE 85A material through 3D printing. These actuators enable both foldable grasping and torsional motions. An analytical geometric model is developed to characterize the relationship between structural parameters and the inscribed circle area of a single-layer soft actuator, thereby elucidating their influence on contraction magnitude and relative deflection angle. Treating the soft actuator as an equivalent spring system, a mechanical model relating vacuum pressure to contraction ratio is further established, revealing an approximately linear relationship. The actuators are subsequently integrated with suction cups to form two end-effectors, a foldable soft gripper and a torsional soft gripper, and mounted onto a UR5 robotic arm via a customized flange. Demonstration experiments show that the foldable gripper achieves gentle, adaptive grasping of diverse objects, while the torsional gripper replicates human-like twisting motion, such as opening a bottle cap. This study highlights the potential of Kresling-based soft grippers for practical deployment in automated production tasks, including precision assembly and fruit harvesting. Full article
(This article belongs to the Section Actuators for Robotics)
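The reported approximately linear pressure-to-contraction relationship can be recovered from bench data with an ordinary least-squares line fit. A sketch with hypothetical measurements (not the paper's data):

```python
def fit_linear(pressures, contractions):
    """Least-squares fit c = k*p + b for a pressure-contraction curve;
    k plays the role of an equivalent spring compliance."""
    n = len(pressures)
    mp = sum(pressures) / n
    mc = sum(contractions) / n
    k = (sum((p - mp) * (c - mc) for p, c in zip(pressures, contractions))
         / sum((p - mp) ** 2 for p in pressures))
    b = mc - k * mp
    return k, b
```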

25 pages, 15267 KB  
Article
3D Semantic Map Reconstruction for Orchard Environments Using Multi-Sensor Fusion
by Quanchao Wang, Yiheng Chen, Jiaxiang Li, Yongxing Chen and Hongjun Wang
Agriculture 2026, 16(4), 455; https://doi.org/10.3390/agriculture16040455 - 15 Feb 2026
Cited by 1 | Viewed by 953
Abstract
Semantic point cloud maps play a pivotal role in smart agriculture. They provide not only core three-dimensional data for orchard management but also empower robots with environmental perception, enabling safer and more efficient navigation and planning. However, traditional point cloud maps primarily model surrounding obstacles from a geometric perspective, failing to capture distinctions and characteristics between individual obstacles. In contrast, semantic maps encompass semantic information and even topological relationships among objects in the environment. Furthermore, existing semantic map construction methods are predominantly vision-based, making them ill-suited to handle rapid lighting changes in agricultural settings that can cause positioning failures. Therefore, this paper proposes a positioning and semantic map reconstruction method tailored for orchards. It integrates visual, LiDAR, and inertial sensors to obtain high-precision pose and point cloud maps. By combining open-vocabulary detection and semantic segmentation models, it projects two-dimensional detected semantic information onto the three-dimensional point cloud, ultimately generating a point cloud map enriched with semantic information. The resulting 2D occupancy grid map is utilized for robotic motion planning. Experimental results demonstrate that on a custom dataset, the proposed method achieves 74.33% mIoU for semantic segmentation accuracy, 12.4% relative error for fruit recall rate, and 0.038803 m mean translation error for localization. The deployed semantic segmentation network Fast-SAM achieves a processing speed of 13.36 ms per frame. These results demonstrate that the proposed method combines high accuracy with real-time performance in semantic map reconstruction. 
This exploratory work provides theoretical and technical references for future research on more precise localization and more complete semantic mapping, offering broad application prospects and providing key technological support for intelligent agriculture. Full article
(This article belongs to the Special Issue Advances in Robotic Systems for Precision Orchard Operations)
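Projecting 2D semantic labels onto a 3D point cloud, as described above, amounts to a pinhole-model lookup: each 3D point is projected into the image and inherits the label of the pixel it lands on. A minimal sketch, with camera intrinsics fx, fy, cx, cy assumed calibrated (not the paper's implementation):

```python
def label_points(points, mask, fx, fy, cx, cy):
    """Attach to each camera-frame 3D point the semantic label of its pixel.
    `mask` is a 2D grid of class IDs from the segmentation network."""
    labeled = []
    h, w = len(mask), len(mask[0])
    for (X, Y, Z) in points:
        if Z <= 0:
            continue  # behind the camera / invalid depth
        u = int(round(fx * X / Z + cx))
        v = int(round(fy * Y / Z + cy))
        if 0 <= u < w and 0 <= v < h:
            labeled.append(((X, Y, Z), mask[v][u]))
    return labeled
```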

23 pages, 6344 KB  
Article
Visual Perception and Robust Autonomous Following for Orchard Transportation Robots Based on DeepDIMP-ReID
by Renyuan Shen, Yong Wang, Huaiyang Liu, Haiyang Gu, Changxing Geng and Yun Shi
Mach. Learn. Knowl. Extr. 2026, 8(2), 39; https://doi.org/10.3390/make8020039 - 8 Feb 2026
Viewed by 723
Abstract
Dense foliage, severe illumination variations, and interference from multiple individuals with similar appearances in complex orchard environments pose significant challenges for vision-based following robots in maintaining persistent target perception and identity consistency, thereby compromising the stability and safety of fruit transportation operations. To address these challenges, we propose a novel framework, DeepDIMP-ReID, which integrates the Deep Implicit Model Prediction (DIMP) tracker with a person re-identification (ReID) module based on EfficientNet. This visual perception and autonomous following framework is designed for differential-drive orchard transportation robots, aiming to achieve robust target perception and reliable identity maintenance in unstructured orchard settings. The proposed framework adopts a hierarchical perception–verification–control architecture. Visual tracking and three-dimensional localization are jointly achieved using synchronized color and depth data acquired from a RealSense camera, where target regions are obtained via the discriminative model prediction (DIMP) method and refined through an elliptical-mask-based depth matching strategy. Front obstacle detection is performed using DBSCAN-based point cloud clustering techniques. To suppress erroneous following caused by occlusion, target switching, or target reappearance after occlusion, an enhanced HOReID person re-identification module with an EfficientNet backbone is integrated for identity verification at critical decision points. Based on the verified perception results, a state-driven motion control strategy is employed to ensure safe and continuous autonomous following. Extensive long-term experiments conducted in real orchard environments demonstrate that the proposed system achieves a correct tracking rate exceeding 94% under varying human walking speeds, with an average localization error of 0.071 m. 
In scenarios triggering re-identification, a target discrimination success rate of 93.3% is obtained. These results confirm the effectiveness and robustness of the proposed framework for autonomous fruit transportation in complex orchard environments. Full article
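The elliptical-mask depth-matching step can be sketched as a robust median over depth pixels inside an ellipse fitted to the tracked bounding box; a simplification with hypothetical parameter names.

```python
def target_depth(depth, cx, cy, rx, ry):
    """Median of valid depths inside an axis-aligned ellipse centred at (cx, cy)
    with radii (rx, ry); the median rejects background and dropout outliers."""
    vals = []
    for v, row in enumerate(depth):
        for u, d in enumerate(row):
            if d > 0 and ((u - cx) / rx) ** 2 + ((v - cy) / ry) ** 2 <= 1.0:
                vals.append(d)
    vals.sort()
    return vals[len(vals) // 2] if vals else None
```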

39 pages, 5498 KB  
Article
A Review of Key Technologies and Recent Advances in Intelligent Fruit-Picking Robots
by Tao Lin, Fuchun Sun, Xiaoxiao Li, Xi Guo, Jing Ying, Haorong Wu and Hanshen Li
Horticulturae 2026, 12(2), 158; https://doi.org/10.3390/horticulturae12020158 - 30 Jan 2026
Cited by 2 | Viewed by 1338
Abstract
Intelligent fruit-picking robots have emerged as a promising solution to labor shortages and the increasing costs of manual harvesting. This review provides a systematic and critical overview of recent advances in three core domains: (i) vision-based fruit and peduncle detection, (ii) motion planning and obstacle-aware navigation, and (iii) robotic manipulation technologies for diverse fruit types. We summarize the evolution of deep learning-based perception models, highlighting improvements in occlusion robustness, 3D localization accuracy, and real-time performance. Various planning frameworks—from classical search algorithms to optimization-driven and swarm-intelligent methods—are compared in terms of efficiency and adaptability in unstructured orchard environments. Developments in multi-DOF manipulators, soft and adaptive grippers, and end-effector control strategies are also examined. Despite these advances, critical challenges remain, including heavy dependence on large annotated datasets; sensitivity to illumination and foliage occlusion; limited generalization across fruit varieties; and the difficulty of integrating perception, planning, and manipulation into reliable field-ready systems. Finally, this review outlines emerging research trends such as lightweight multimodal networks, deformable-object manipulation, embodied intelligence, and system-level optimization, offering a forward-looking perspective for autonomous harvesting technologies. Full article

18 pages, 2924 KB  
Article
Path Planning for a Cartesian Apple Harvesting Robot Using the Improved Grey Wolf Optimizer
by Dachen Wang, Huiping Jin, Chun Lu, Xuanbo Wu, Qing Chen, Lei Zhou, Xuesong Jiang and Hongping Zhou
Agronomy 2026, 16(2), 272; https://doi.org/10.3390/agronomy16020272 - 22 Jan 2026
Cited by 1 | Viewed by 543
Abstract
As a high-value fruit crop grown worldwide, apples require efficient harvesting solutions to maintain a stable supply. Intelligent harvesting robots represent a promising approach to address labour shortages. This study introduced a Cartesian robot integrated with a continuous-picking end-effector, providing a cost-effective and mechanically simpler alternative to complex articulated arms. The system employed a hand–eye calibration model to enhance positioning accuracy. To overcome the inefficiencies resulting from disordered harvesting sequences and excessive motion trajectories, the harvesting process was treated as a travelling salesman problem (TSP). The conventional fixed-plane return trajectory of Cartesian robots was enhanced using a three-dimensional continuous picking path strategy based on a fixed retraction distance (H). The value of H was determined through mechanical characterization of the apple stem’s brittle fracture, which eliminated redundant horizontal displacements and improved operational efficiency. Furthermore, an improved grey wolf optimizer (IGWO) was proposed for multi-fruit path planning. Simulations demonstrated that the IGWO achieved shorter path lengths compared to conventional algorithms. Laboratory experiments validated that the system successfully achieved vision-based localization and fruit harvesting through optimal path planning, with a fruit picking success rate of 89%. The proposed methodology provides a practical framework for automated continuous harvesting systems. Full article
(This article belongs to the Section Precision and Digital Agriculture)
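Treating multi-fruit picking as a travelling salesman problem can be illustrated with a greedy nearest-neighbour ordering, the kind of simple baseline an improved grey wolf optimizer is compared against. This is not the paper's IGWO; names are hypothetical.

```python
import math

def picking_order(fruits, start=(0.0, 0.0, 0.0)):
    """Greedy nearest-neighbour visit order over 3D fruit coordinates:
    always move to the closest unpicked fruit."""
    remaining = list(range(len(fruits)))
    order, cur = [], start
    while remaining:
        i = min(remaining, key=lambda k: math.dist(cur, fruits[k]))
        order.append(i)
        cur = fruits[i]
        remaining.remove(i)
    return order
```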

20 pages, 5061 KB  
Article
Research on Orchard Navigation Technology Based on Improved LIO-SAM Algorithm
by Jinxing Niu, Jinpeng Guan, Tao Zhang, Le Zhang, Shuheng Shi and Qingyuan Yu
Agriculture 2026, 16(2), 192; https://doi.org/10.3390/agriculture16020192 - 12 Jan 2026
Viewed by 646
Abstract
To address the challenges in unstructured orchard environments, including high geometric similarity between fruit trees (with the measured average Euclidean distance difference between point cloud descriptors of adjacent trees being less than 0.5 m), significant dynamic interference (e.g., interference from pedestrians or moving equipment can occur every 5 min), and uneven terrain, this paper proposes an improved mapping algorithm named OSC-LIO (Orchard Scan Context Lidar Inertial Odometry via Smoothing and Mapping). The algorithm designs a dynamic point filtering strategy based on Euclidean clustering and spatiotemporal consistency within a 5-frame sliding window to reduce the interference of dynamic objects in point cloud registration. By integrating local semantic features such as fruit tree trunk diameter and canopy height difference, a two-tier verification mechanism combining “global and local information” is constructed to enhance the distinctiveness and robustness of loop closure detection. Motion compensation is achieved by fusing data from an Inertial Measurement Unit (IMU) and a wheel odometer to correct point cloud distortion. A three-level hierarchical indexing structure—”path partitioning, time window, KD-Tree (K-Dimension Tree)”—is built to reduce the time required for loop closure retrieval and improve the system’s real-time performance. Experimental results show that the improved OSC-LIO system reduces the Absolute Trajectory Error (ATE) by approximately 23.5% compared to the original LIO-SAM (Tightly coupled Lidar Inertial Odometry via Smoothing and Mapping) in a simulated orchard environment, while enabling stable and reliable path planning and autonomous navigation. This study provides a high-precision, lightweight technical solution for autonomous navigation in orchard scenarios. Full article
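The sliding-window dynamic point filter can be sketched as a centroid-drift test per cluster: a cluster whose centroid moves more than a threshold across the window is treated as dynamic and dropped before registration. This simplifies the paper's Euclidean-clustering-plus-spatiotemporal-consistency check; the threshold is illustrative.

```python
import math

def is_dynamic(centroid_track, drift_thresh=0.3):
    """True if a cluster centroid drifts beyond the threshold (in metres)
    between the first and last frames of the sliding window."""
    return math.dist(centroid_track[0], centroid_track[-1]) > drift_thresh

def filter_dynamic(clusters, drift_thresh=0.3):
    """Keep only the IDs of clusters whose windowed centroid tracks are static."""
    return [cid for cid, track in clusters.items()
            if not is_dynamic(track, drift_thresh)]
```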

30 pages, 6797 KB  
Article
Voxel-Based Leaf Area Estimation in Trellis-Grown Grapevines: A Destructive Validation and Comparison with Optical LAI Methods
by Poching Teng, Hiroyoshi Sugiura, Tomoki Date, Unseok Lee, Takeshi Yoshida, Tomohiko Ota and Junichi Nakagawa
Remote Sens. 2026, 18(2), 198; https://doi.org/10.3390/rs18020198 - 7 Jan 2026
Cited by 1 | Viewed by 707
Abstract
This study develops a voxel-based leaf area estimation framework and validates it using a three-year multi-temporal dataset (2022–2024) of pergola-trained grapevines. The workflow integrates 2D image analysis, ExGR-based leaf segmentation, and 3D reconstruction using Structure-from-Motion (SfM). Multi-angle canopy images were collected repeatedly during the growing seasons, and destructive leaf sampling was conducted to quantify true leaf area across multiple vines and years. After removing non-leaf structures with ExGR filtering, the point clouds were voxelized at a 1 cm3 resolution to derive structural occupancy metrics. Voxel-based leaf area showed strong within-vine correlations with destructively measured values (R2 = 0.77–0.95), while cross-vine variability was influenced by canopy complexity, illumination, and point-cloud density. In contrast, optical LAI tools (DHP and LAI–2000) exhibited negligible correspondence with true leaf area due to multilayer occlusion and lateral light contamination typical of pergola systems. This expanded, multi-year analysis demonstrates that voxel occupancy provides a robust and scalable indicator of canopy structural density and leaf area, offering a practical foundation for remote-sensing-based phenotyping, yield estimation, and data-driven management in perennial fruit crops. Full article
(This article belongs to the Section Forest Remote Sensing)
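The ExGR filtering and 1 cm³ voxelization steps described in the abstract can be sketched in a few lines. This is an illustrative sketch, not the authors' pipeline: the index definitions (ExG = 2g − r − b, ExR = 1.4r − g on chromatic coordinates, with ExGR = ExG − ExR > 0 as the leaf threshold) are the standard ones from the vegetation-index literature, and all function names are ours.

```python
import math

def exgr(r, g, b):
    """ExGR vegetation index for one pixel (any consistent RGB scale)."""
    s = (r + g + b) or 1e-9          # guard against a pure-black pixel
    r, g, b = r / s, g / s, b / s    # chromatic coordinates
    exg = 2 * g - r - b              # excess green
    exr = 1.4 * r - g                # excess red
    return exg - exr

def is_leaf(r, g, b):
    """Classify a pixel as leaf when ExGR > 0 (the usual threshold)."""
    return exgr(r, g, b) > 0

def voxel_count(points, size=0.01):
    """Occupied-voxel count of a 3-D point cloud at edge length `size` (m)."""
    return len({tuple(math.floor(c / size) for c in p) for p in points})
```

Applied per pixel to the SfM-reconstructed point cloud's colors, `is_leaf` removes non-leaf structures; `voxel_count` then yields the structural occupancy metric that the study correlates with destructively measured leaf area.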
27 pages, 1217 KB  
Article
Immersive Virtual Reality for Stroke Rehabilitation: Linking Clinical and Digital Measures of Motor Recovery—A Pilot Study
by Livia-Alexandra Ion, Miruna Ioana Săndulescu, Claudia-Gabriela Potcovaru, Daniela Poenaru, Andrei Doru Comișel, Ștefan Ștefureac, Andrei Cristian Lambru, Alin Moldoveanu, Ana Magdalena Anghel and Delia Cinteză
Bioengineering 2026, 13(1), 59; https://doi.org/10.3390/bioengineering13010059 - 4 Jan 2026
Cited by 1 | Viewed by 1628
Abstract
Background: Immersive virtual reality (VR) has emerged as a promising tool to enhance neuroplasticity, motivation, and engagement during post-stroke motor rehabilitation. However, evidence on its feasibility and data-driven integration into clinical practice remains limited. Objective: This pilot study aimed to evaluate the feasibility, usability, and short-term motor outcomes of an immersive VR-assisted rehabilitation program using the Travee-VR system. Methods: Fourteen adults with post-stroke upper-limb paresis completed a 10-day hybrid rehabilitation program combining conventional therapy with immersive VR sessions. Feasibility and tolerability were assessed through adherence, adverse events, the System Usability Scale (SUS), and the Simulator Sickness Questionnaire (SSQ). Motor outcomes included active and passive range of motion (AROM, PROM) and a derived GAP index (PROM–AROM). Correlations between clinical changes and in-game performance metrics were explored to identify potential digital performance metrics of recovery. Results: All participants completed the program without adverse events. Usability was rated as high (mean SUS = 79 ± 11.3), and cybersickness remained mild (SSQ < 40). Significant improvements were observed in shoulder abduction (+7.3°, p < 0.01) and elbow flexion (+5.8°, p < 0.05), with moderate-to-large effect sizes. Performance gains in the Fire and Fruits games correlated with clinical improvement in shoulder AROM (ρ = 0.45, p = 0.041). Cluster analysis identified distinct responder profiles, reflecting individual variability in neuroplastic adaptation. Conclusions: The Travee-VR system proved feasible, well tolerated, and associated with measurable short-term improvements in upper-limb function. By linking clinical outcomes with real-time kinematic data, this study supports the role of immersive, feedback-driven VR as a catalyst for data-informed neuroplastic recovery. 
These results lay the groundwork for adaptive, clinic-to-home rehabilitation models integrating clinical and exploratory digital performance metrics. Full article
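The reported link between game performance and shoulder AROM (ρ = 0.45) is a Spearman rank correlation. As a minimal pure-Python sketch for readers who want to reproduce the statistic on their own data (helper names are ours; the study presumably used standard statistical software):

```python
def rankdata(xs):
    """1-based average ranks; tied values share the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend over the tie group
        avg = (i + j) / 2 + 1           # mean of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

The tie-aware ranking matters for clinical data, where repeated AROM readings are common.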
19 pages, 2314 KB  
Article
Occlusion Avoidance for Harvesting Robots: A Lightweight Active Perception Model
by Tao Zhang, Jiaxi Huang, Jinxing Niu, Zhengyi Liu, Le Zhang and Huan Song
Sensors 2026, 26(1), 291; https://doi.org/10.3390/s26010291 - 2 Jan 2026
Cited by 2 | Viewed by 587
Abstract
Addressing the issue of fruit recognition and localization failures in harvesting robots due to severe occlusion by branches and leaves in complex orchard environments, this paper proposes an occlusion avoidance method that combines a lightweight YOLOv8n model, developed by Ultralytics in the United States, with active perception. Firstly, to meet the stringent real-time requirements of the active perception system, a lightweight YOLOv8n model was developed. This model reduces computational redundancy by incorporating the C2f-FasterBlock module and enhances key feature representation by integrating the SE attention mechanism, significantly improving inference speed while maintaining high detection accuracy. Secondly, an end-to-end active perception model based on ResNet50 and multi-modal fusion was designed. This model can intelligently predict the optimal movement direction for the robotic arm based on the current observation image, actively avoiding occlusions to obtain a more complete field of view. The model was trained using a matrix dataset constructed through the robot’s dynamic exploration in real-world scenarios, achieving a direct mapping from visual perception to motion planning. Experimental results demonstrate that the proposed lightweight YOLOv8n model achieves a mAP of 0.885 in apple detection tasks, a frame rate of 83 FPS, a parameter count reduced to 1,983,068, and a model weight file size reduced to 4.3 MB, significantly outperforming the baseline model. In active perception experiments, the proposed method effectively guided the robotic arm to quickly find observation positions with minimal occlusion, substantially improving the success rate of target recognition and the overall operational efficiency of the system. The current research outcomes provide preliminary technical validation and a feasible exploratory pathway for developing agricultural harvesting robot systems suitable for real-world complex environments. 
It should be noted that the validation of this study was primarily conducted in controlled environments. Subsequent work still requires large-scale testing in diverse real-world orchard scenarios, as well as further system optimization and performance evaluation in more realistic application settings, which include natural lighting variations, complex weather conditions, and actual occlusion patterns. Full article
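The SE (squeeze-and-excitation) attention mechanism the authors integrate into YOLOv8n can be illustrated with a minimal NumPy sketch. This is a toy forward pass, not the paper's implementation: the shapes, random weights, and reduction ratio r = 4 are our illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def se_attention(x, w1, w2):
    """Squeeze-and-Excitation channel attention.

    x : (C, H, W) feature map
    w1: (C//r, C) squeeze projection;  w2: (C, C//r) excitation projection
    Returns the channel-rescaled feature map, same shape as x.
    """
    z = x.mean(axis=(1, 2))              # squeeze: global average pool -> (C,)
    h = np.maximum(w1 @ z, 0.0)          # reduce + ReLU -> (C//r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))  # restore + sigmoid gate -> (C,)
    return x * s[:, None, None]          # rescale each channel by its gate

# toy usage: 8 channels, reduction ratio r = 4
C, r = 8, 4
x = rng.standard_normal((C, 6, 6))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
y = se_attention(x, w1, w2)
```

Because each gate lies in (0, 1), the block can only attenuate channels, letting the network emphasize informative feature maps at negligible parameter cost, which is why it pairs well with the lightweight C2f-FasterBlock backbone described above.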