A GNSS–Vision Integrated Autonomous Navigation System for Trellis Orchard Transportation Robots
Abstract
1. Introduction
- The differences between in-row and out-of-row scenarios in trellis orchards are systematically analyzed, and a scene-oriented visual perception strategy is established to provide explicit guidance for navigation-mode selection.
- A robust visual perception method is designed to improve navigation reliability under severe occlusion and illumination variation conditions.
- Extensive field experiments are conducted across different orchard layouts and growth stages, demonstrating that the proposed approach can consistently support continuous navigation in trellis orchards.
2. Materials and Methods
2.1. System Overview
2.2. Analysis of Trellis Orchard Navigation Scenarios and System Workflow
2.2.1. Experimental Scenarios and Navigation Mode Classification
2.2.2. Workflow Design for Inter-Row and Intra-Row Navigation
- Step 1: Robot at warehouse (Point A). After system startup, the GNSS and vision sensors and the control modules are initialized, and the inter-row navigation algorithm and the inter-/intra-row judgment module enter a standby state.
- Step 2: Segment A–D. The system interpolates the pre-planned waypoints to generate the inter-row navigation trajectory in real time and acquires the relevant navigation parameters.
- Step 3: Trajectory tracking. The robot executes path-following control using GNSS positioning information to complete the inter-row navigation task.
- Step 4: Approaching the row entry (Point E). As the robot nears the intra-row transition, the system decides whether to enter the orchard row based on position information and vision-triggered conditions, while performing the necessary attitude adjustments.
- Step 5: Segment E–G. Within the intra-row region, the system detects structural feature points of the grapevine supports in real time using vision perception, constructs the row structure model, and fits the navigation line.
- Step 6: Intra-row navigation. The robot follows the generated visual navigation line to perform path tracking and maintain stable intra-row movement.
- Step 7: End-of-row detection (Point G). Upon detecting the end-of-row area, the system switches the robot's motion direction and camera configuration according to the end-of-row judgment, supporting subsequent operations.
- Step 8: Return to warehouse. The return path mirrors the entry process, completing the system's state transitions across inter-row and intra-row scenarios (a minimal state-machine sketch of these transitions is given after this list).
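The eight steps above behave like a small finite state machine driven by GNSS waypoints, vision triggers, and end-of-row detection. The sketch below illustrates one way such mode transitions could be encoded; the state names and trigger flags (e.g., `near_row_entry`, `row_end_detected`) are illustrative assumptions rather than the authors' implementation.

```python
# A minimal sketch (not the paper's code) of the inter-/intra-row mode switching
# described in Steps 1-8. All identifiers here are illustrative assumptions.
from enum import Enum, auto


class NavigationMode(Enum):
    STANDBY = auto()           # Step 1: sensors initialized, waiting at the warehouse
    INTER_ROW_GNSS = auto()    # Steps 2-3: waypoint-based GNSS path tracking
    ROW_ENTRY = auto()         # Step 4: attitude adjustment at the row entrance
    INTRA_ROW_VISION = auto()  # Steps 5-6: vision-based row-line following
    ROW_EXIT = auto()          # Step 7: end-of-row handling (direction / camera switch)
    RETURN = auto()            # Step 8: mirrored return to the warehouse


def next_mode(mode, near_row_entry, vision_trigger, row_end_detected, task_done):
    """Advance the navigation state machine from position and vision cues."""
    if mode is NavigationMode.STANDBY:
        return NavigationMode.INTER_ROW_GNSS
    if mode is NavigationMode.INTER_ROW_GNSS and near_row_entry and vision_trigger:
        return NavigationMode.ROW_ENTRY
    if mode is NavigationMode.ROW_ENTRY:
        return NavigationMode.INTRA_ROW_VISION
    if mode is NavigationMode.INTRA_ROW_VISION and row_end_detected:
        return NavigationMode.ROW_EXIT
    if mode is NavigationMode.ROW_EXIT:
        return NavigationMode.RETURN if task_done else NavigationMode.INTER_ROW_GNSS
    return mode
```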
2.3. Multi-Modal Perception and Navigation Mode Coordination Based on Scenario Recognition
2.3.1. GNSS-Based Inter-Row Global Navigation Mode
2.3.2. Vision-Based Intra-Row Navigation
- 1. Vision data acquisition and preprocessing
- 2. Tree trunk and trellis support detection
- 3. Geometrically constrained feature point selection
- (1) Far-field points (large image Y-coordinate values): constraints are applied only along the image width;
- (2) Near-field points (small image Y-coordinate values): constraints are applied along both the image width and height.
- 4. Orchard row line modeling with temporal stabilization
- (1) Minimal sample set: Two candidate feature points are randomly sampled in each iteration;
- (2) Maximum iteration number: Set to 80 to enhance robustness under sparse features, occlusion, and potential outliers at row ends;
- (3) Inlier distance threshold: 12 px. A candidate point is considered an inlier if its perpendicular distance to the fitted line is ≤ 12 px;
- (4) Confidence level and minimum inlier ratio: With confidence $p$ and minimum inlier ratio $w$, the theoretical minimum iteration count is $N = \lceil \log(1-p)/\log(1-w^{s}) \rceil$, where $s$ is the minimal sample size;
- (5) Early stopping criteria: Iteration stops when the current model's inlier ratio exceeds 0.70 or the best inlier count does not increase over 10 consecutive iterations;
- (6) Random seed control: Seed fixed at 2024 to ensure reproducibility across experiments;
- (7) Computational complexity: For a single frame with $n$ candidate feature points and at most $k$ iterations, the RANSAC complexity is $O(kn)$. Introducing temporal constraints adds only constant-time operations (reading the previous-frame model and a threshold comparison) and has minimal impact on real-time performance (a minimal sketch of the fitting procedure is given after this list).
- 5. Generation of the intra-row center navigation line
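The following Python sketch shows one way the RANSAC configuration listed above could be implemented: a minimal sample of two points, at most 80 iterations, a 12 px inlier threshold, early stopping at a 0.70 inlier ratio or after 10 stagnant iterations, and a fixed seed of 2024, together with the theoretical minimum-iteration formula. The temporal gate that compares the current line against the previous frame's model uses an assumed 15° tolerance; the function names and that tolerance are illustrative, not the authors' exact code.

```python
# A minimal sketch of RANSAC row-line fitting with the parameters listed above.
# The temporal constraint (15 deg gate against the previous frame's line) is an
# assumption about how such a term could be added, not the paper's implementation.
import numpy as np


def ransac_row_line(points, prev_line=None, max_iters=80, thresh_px=12.0,
                    early_ratio=0.70, patience=10, seed=2024):
    """Fit a 2D line (a, b, c) with a*x + b*y + c = 0 to candidate feature points."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)            # shape (n, 2): image (x, y)
    n = len(pts)
    best_line, best_inliers, stagnant = None, 0, 0

    for _ in range(max_iters):
        i, j = rng.choice(n, size=2, replace=False)  # minimal sample set of two points
        p, q = pts[i], pts[j]
        # Implicit line through p and q, normalized so distances are in pixels.
        a, b = q[1] - p[1], p[0] - q[0]
        norm = np.hypot(a, b)
        if norm < 1e-9:
            continue
        a, b = a / norm, b / norm
        c = -(a * p[0] + b * p[1])

        dists = np.abs(pts @ np.array([a, b]) + c)   # point-to-line distances (px)
        inliers = int(np.sum(dists <= thresh_px))

        # Optional temporal constraint: reject lines whose orientation deviates
        # strongly from the previous frame's accepted model (assumed 15 deg gate).
        if prev_line is not None:
            d_ang = np.arctan2(b, a) - np.arctan2(prev_line[1], prev_line[0])
            d_ang = (d_ang + np.pi / 2) % np.pi - np.pi / 2   # normal sign ambiguity
            if abs(np.degrees(d_ang)) > 15.0:
                continue

        if inliers > best_inliers:
            best_line, best_inliers, stagnant = (a, b, c), inliers, 0
        else:
            stagnant += 1

        # Early stopping: sufficient inlier ratio or no improvement for 10 iterations.
        if best_inliers / n >= early_ratio or stagnant >= patience:
            break
    return best_line, best_inliers


def min_iterations(confidence, inlier_ratio, sample_size=2):
    """Theoretical minimum RANSAC iterations: N = log(1-p) / log(1 - w**s)."""
    return int(np.ceil(np.log(1.0 - confidence) /
                       np.log(1.0 - inlier_ratio ** sample_size)))


# Example with assumed values p = 0.99 and w = 0.5 (the paper's values are not
# reproduced here): min_iterations(0.99, 0.5) evaluates to 17 iterations.
```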
2.3.3. Scene-Aware Navigation Mode Switching Mechanism
3. Results
3.1. Experimental Setup and Evaluation Metrics
3.1.1. Experimental Platform and Scene Configuration
3.1.2. Navigation Control and Experimental Procedure
- Tree-row line detection: The visual perception module captures the intra-row environment in real time and fits the left and right tree-row lines.
- Center navigation line generation: A center navigation line is generated based on the geometric relationship between the two tree-row lines.
- Motion control: A Pure Pursuit controller is applied to the differential-drive chassis using the navigation parameters, enabling autonomous path tracking along the center navigation line (a minimal steering-law sketch is given after this list).
- Look-ahead distance: In in-row visual navigation mode, it is set based on corridor width, vehicle kinematics, and operational speed: 0.9 m at 0.4 m/s, 1.0 m at 0.6 m/s, and 1.1 m at 0.8 m/s. In out-of-row GNSS navigation mode, it is set to 1.2 m.
- Controller update frequency: 20 Hz.
- Speed control strategy: Constant linear velocity reference according to the experimental speed settings.
- Steering-rate limitation: Angular velocity is saturated at ±0.75 rad/s to prevent overshoot and oscillations in segments with sharp curvature changes.
- Perception-to-control latency: The end-to-end delay from the visual perception module to the controller input is approximately 0.1 s, which is sufficient to maintain real-time navigation at the low experimental speeds.
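For reference, a minimal sketch of the Pure Pursuit steering law with the parameters reported above (speed-dependent look-ahead, 20 Hz update, ±0.75 rad/s angular-velocity saturation) is given below; the function signature and the robot-frame look-ahead target are assumptions for illustration.

```python
# A minimal Pure Pursuit sketch for a differential-drive chassis using the
# parameters reported above. Names and the robot-frame target are assumptions.
LOOKAHEAD_BY_SPEED = {0.4: 0.9, 0.6: 1.0, 0.8: 1.1}  # in-row visual mode (m)
LOOKAHEAD_GNSS = 1.2                                   # out-of-row GNSS mode (m)
OMEGA_MAX = 0.75                                       # angular-velocity saturation (rad/s)
CONTROL_DT = 1.0 / 20.0                                # 20 Hz controller period (s)


def pure_pursuit(target_x, target_y, v):
    """Angular-velocity command from a look-ahead target in the robot frame.

    target_x: forward distance to the look-ahead point on the navigation line (m)
    target_y: lateral offset of the look-ahead point (m, left positive)
    v:        commanded linear velocity (m/s)
    """
    ld2 = target_x ** 2 + target_y ** 2        # squared look-ahead distance
    if ld2 < 1e-6:
        return 0.0
    curvature = 2.0 * target_y / ld2           # pure-pursuit curvature: kappa = 2*y / Ld^2
    omega = v * curvature                      # omega = v * kappa for a differential drive
    return max(-OMEGA_MAX, min(OMEGA_MAX, omega))


# Example: at 0.6 m/s, a look-ahead point 1.0 m ahead and 0.08 m to the left
# yields a mild turn command (~0.10 rad/s), well inside the +/-0.75 rad/s envelope.
print(pure_pursuit(1.0, 0.08, 0.6))
```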
3.1.3. Evaluation Metrics and Analysis Methods
- 1. Perception level: The focus was on the fitting quality of tree-row lines and the generated center navigation line. The proportion of valid fitted frames was statistically analyzed to quantify algorithm robustness under noise interference and end-of-row conditions. A valid fitted frame is defined as follows: two researchers independently annotated the left and right tree-row reference lines in the original images, based on the geometric boundaries of trunks or trellis supports near the inner side of the row. In cases of significant disagreement, consensus annotations were obtained through discussion. A frame is considered valid if the angular error of both left and right fitted lines relative to the human-annotated reference lines does not exceed 5° and the mean point-to-line distance of candidate feature points is less than 15 px. Fitting accuracy is defined as the ratio of valid fitted frames to the total number of tested frames. This definition was consistently applied for the “Correct Fits/Accuracy (%)” values reported in Tables 5–9.
- 2. Control level: Lateral deviation and heading deviation were analyzed to evaluate the robot's tracking performance. Heading deviation is defined as the absolute angle difference between the robot's current heading and the reference heading tangent to the center navigation line at the look-ahead point: $\Delta\theta = \lvert \theta_{r} - \theta_{ref} \rvert$, where $\Delta\theta$ is the heading deviation, $\theta_{r}$ is the robot's current heading angle, and $\theta_{ref}$ is the reference heading computed from the center navigation line at the look-ahead point. Smaller values indicate better alignment with the desired navigation direction. Lateral deviation is further defined to characterize the robot's position relative to the row centerline. Using camera imaging geometry, the horizontal pixel offset of the center navigation line at the look-ahead depth $Z$ is converted to a physical lateral deviation: $d = (u_{c} - c_{x})\,Z / f_{x}$, where $d$ is the lateral deviation, $u_{c}$ is the horizontal pixel coordinate of the center navigation line at the look-ahead depth, $c_{x}$ is the camera principal point in the x-direction, and $f_{x}$ is the camera focal length in pixels. This conversion gives the lateral deviation in meters, providing a clear physical meaning for in-row navigation errors (a minimal computation sketch of both metrics follows).
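The two control-level metrics can be computed directly from the definitions above. The sketch below assumes a pinhole camera model with the focal length and principal point as defined in the text; the variable names and the example calibration values are illustrative assumptions.

```python
# A minimal sketch of the two control-level metrics defined above. The pinhole
# back-projection follows the definition in the text; names and the example
# calibration values are assumptions for illustration.
import math


def heading_deviation(theta_robot, theta_ref):
    """Absolute heading error |theta_robot - theta_ref|, wrapped to [0, pi] (rad)."""
    d = (theta_robot - theta_ref + math.pi) % (2.0 * math.pi) - math.pi
    return abs(d)


def lateral_deviation(u_center, c_x, f_x, depth_m):
    """Convert the pixel offset of the center line at the look-ahead depth to meters.

    u_center: horizontal pixel coordinate of the center navigation line (px)
    c_x:      principal point x-coordinate (px)
    f_x:      focal length in pixels
    depth_m:  look-ahead depth along the optical axis (m)
    """
    return (u_center - c_x) * depth_m / f_x


# Example with assumed calibration (fx = 600 px, cx = 320 px, look-ahead depth 1.0 m):
print(lateral_deviation(352.0, 320.0, 600.0, 1.0))   # ~0.053 m lateral offset
print(math.degrees(heading_deviation(0.12, 0.05)))   # ~4 deg heading error
```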
3.2. Comparative Experiments on Vineyard Row Line Fitting Methods
3.2.1. Vineyard Row Line Fitting Based on Least Squares
3.2.2. Vineyard Row Line Fitting Based on RANSAC
3.2.3. RANSAC Incorporating Temporal Frame Information
3.3. Row Line Fitting Experiments Across Growth Stages
3.4. Orchard In-Row and Out-of-Row Navigation Experiments
3.4.1. Out-of-Row GNSS-Based Navigation Experiments
3.4.2. In-Row Vision-Based Navigation Experiments
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| GNSS | Global Navigation Satellite System |
| RANSAC | Random Sample Consensus |
| LiDAR | Light Detection and Ranging |
| SLAM | Simultaneous Localization and Mapping |
| IMU | Inertial Measurement Unit |
| EKF | Extended Kalman Filtering |
| MPC | Model Predictive Control |
| 2D | Two-Dimensional |
| CNN | Convolutional Neural Network |
| YOLOv7 | You Only Look Once Version 7 |
| VIO | Visual-Inertial Odometry |
| NN | Neural Network |
| UAV | Unmanned Aerial Vehicle |
| DC | Direct Current |
| CUDA | Compute Unified Device Architecture |
| mAP | Mean Average Precision |
References
- Li, H.; Huang, K.; Sun, Y.; Lei, X.; Yuan, Q.; Zhang, J.; Lv, X. An autonomous navigation method for orchard mobile robots based on octree 3D point cloud optimization. Front. Plant Sci. 2024, 15, 1510683. [Google Scholar] [CrossRef]
- Wu, H.; Wang, X.; Chen, X.; Zhang, Y.; Zhang, Y. Review on Key Technologies for Autonomous Navigation in Field Agricultural Machinery. Agriculture 2025, 15, 1297. [Google Scholar] [CrossRef]
- Xia, Y.; Lei, X.; Pan, J.; Chen, L.; Zhang, Z.; Lyu, X. Research on orchard navigation method based on fusion of 3D SLAM and point cloud positioning. Front. Plant Sci. 2023, 14, 1207742. [Google Scholar] [CrossRef] [PubMed]
- Jiang, A.; Ahamed, T.J. Navigation of an Autonomous Spraying Robot for Orchard Operations Using LiDAR for Tree Trunk Detection. Sensors 2023, 23, 4808. [Google Scholar] [CrossRef] [PubMed]
- Wang, W.; Qin, J.; Huang, D.; Zhang, F.; Liu, Z.; Wang, Z.; Yang, F. Integrated Navigation Method for Orchard-Dosing Robot Based on LiDAR/IMU/GNSS. Agronomy 2024, 14, 2541. [Google Scholar] [CrossRef]
- Li, Q.; Zhu, H. Performance evaluation of 2D LiDAR SLAM algorithms in simulated orchard environments. Comput. Electron. Agric. 2024, 221, 108994. [Google Scholar] [CrossRef]
- Gasparino, M.V.; Higuti, V.A.; Sivakumar, A.N.; Velasquez, A.E.; Becker, M.; Chowdhary, G. Cropnav: A framework for autonomous navigation in real farms. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 11824–11830. [Google Scholar]
- Jiang, S.; Qi, P.; Han, L.; Liu, L.; Li, Y.; Huang, Z.; Liu, Y.; He, X. Navigation system for orchard spraying robot based on 3D LiDAR SLAM with NDT_ICP point cloud registration. Comput. Electron. Agric. 2024, 220, 108870. [Google Scholar] [CrossRef]
- Shen, Y.; Xiao, X.; Liu, H. Real-time localization and mapping method for agricultural robot in orchards based on LiDAR/IMU tight-coupling. Trans. Chin. Soc. Agric. Mach. 2023, 54, 20–28. [Google Scholar]
- Su, Z.; Zou, W.; Zhai, S.Q.; Tan, H.; Yang, S.; Qin, X. Design of an Autonomous Orchard Navigation System Based on Multi-Sensor Fusion. Agronomy 2024, 14, 2825. [Google Scholar] [CrossRef]
- Xu, X.; Liang, J.; Li, J.; Wu, G.; Duan, J.; Jin, M.; Fu, H. Stereo visual-inertial localization algorithm for orchard robots based on point-line features. Comput. Electron. Agric. 2024, 224, 15. [Google Scholar] [CrossRef]
- Jin, P.; Li, T.; Pan, Y.; Hu, K.; Xu, N.; Ying, W.; Jin, Y.; Kang, H. A Context-Aware Navigation Framework for Ground Robots in Horticultural Environments. Sensors 2024, 24, 3663. [Google Scholar] [CrossRef]
- Li, Y.; Feng, Q.; Ji, C.; Sun, J.; Sun, Y. GNSS and LiDAR Integrated Navigation Method in Orchards with Intermittent GNSS Dropout. Appl. Sci. 2024, 14, 3231. [Google Scholar] [CrossRef]
- Pan, Y.; Hu, K.; Cao, H.; Kang, H.; Wang, X. A novel perception and semantic mapping method for robot autonomy in orchards. Comput. Electron. Agric. 2024, 219, 108769. [Google Scholar] [CrossRef]
- Rapado-Rincon, D.; Kootstra, G. Tree-SLAM: Semantic object SLAM for efficient mapping of individual trees in orchards. Smart Agric. Technol. 2025, 12, 101439. [Google Scholar] [CrossRef]
- Shi, Z.; Bai, Z.; Yi, K.; Qiu, B.; Dong, X.; Wang, Q.; Jiang, C.; Zhang, X.; Huang, X. Vision and 2D LiDAR Fusion-Based Navigation Line Extraction for Autonomous Agricultural Robots in Dense Pomegranate Orchards. Sensors 2025, 25, 5432. [Google Scholar]
- Ma, Z.; Yang, S.; Li, J.; Qi, J. Research on SLAM Localization Algorithm for Orchard Dynamic Vision Based on YOLOD-SLAM2. Agriculture 2024, 14, 1622. [Google Scholar] [CrossRef]
- Qu, J.; Gu, Y.; Qiu, Z.; Guo, K.; Zhu, Q. Development of an Orchard Inspection Robot: A ROS-Based LiDAR-SLAM System with Hybrid A*-DWA Navigation. Sensors 2025, 25, 6662. [Google Scholar] [CrossRef]
- Wang, Z.; Huang, P.; Wu, X.; Liu, J. Field-validated VIO-MPC fusion for autonomous headland turning in GNSS-denied orchards. Smart Agric. Technol. 2025, 12, 101373. [Google Scholar] [CrossRef]
- Usuelli, M.; Rapado-Rincon, D.; Kootstra, G.; Matteucci, M. AgriGS-SLAM: Orchard Mapping Across Seasons via Multi-View Gaussian Splatting SLAM. arXiv 2025, arXiv:2510.26358. [Google Scholar]
- Zhou, H.; Wang, J.; Chen, Y.; Hu, L.; Li, Z.; Xie, F.; He, J.; Wang, P. Neural Network-Based SLAM/GNSS Fusion Localization Algorithm for Agricultural Robots in Orchard GNSS-Degraded or Denied Environments. Agriculture 2025, 15, 1612. [Google Scholar] [CrossRef]
- Syed, T.N.; Zhou, J.; Lakhiar, I.A.; Marinello, F.; Gemechu, T.T.; Rottok, L.T.; Jiang, Z. Enhancing Autonomous Orchard Navigation: A Real-Time Convolutional Neural Network-Based Obstacle Classification System for Distinguishing ‘Real’ and ‘Fake’ Obstacles in Agricultural Robotics. Agriculture 2025, 15, 827. [Google Scholar] [CrossRef]
- Zhu, X.; Zhao, X.; Liu, J.; Feng, W.; Fan, X. Autonomous Navigation and Obstacle Avoidance for Orchard Spraying Robots: A Sensor-Fusion Approach with ArduPilot, ROS, and EKF. Agronomy 2025, 15, 1373. [Google Scholar] [CrossRef]
- Cheng, B.; He, X.; Li, X.; Zhang, N.; Song, W.; Wu, H. Research on positioning and navigation system of greenhouse mobile robot based on multi-sensor fusion. Sensors 2024, 24, 4998. [Google Scholar] [CrossRef] [PubMed]
- Xu, S.; Rai, R. Vision-based autonomous navigation stack for tractors operating in peach orchards. Comput. Electron. Agric. 2024, 217, 108558. [Google Scholar] [CrossRef]
- Liu, E.; Monica, J.; Gold, K.; Cadle-Davidson, L.; Combs, D.; Jiang, Y. Vision-based Vineyard Navigation Solution with Automatic Annotation. In Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Detroit, MI, USA, 1–5 October 2023; pp. 4234–4241. [Google Scholar]
- Yan, Y.; Zhang, B.; Zhou, J.; Zhang, Y.; Liu, X. Real-Time Localization and Mapping Utilizing Multi-Sensor Fusion and Visual–IMU–Wheel Odometry for Agricultural Robots in Unstructured, Dynamic and GPS-Denied Greenhouse Environments. Agronomy 2022, 12, 1740. [Google Scholar] [CrossRef]
- Nazate-Burgos, P.; Torres-Torriti, M.; Aguilera-Marinovic, S.; Arévalo, T.; Huang, S.; Cheein, F.A. Robust 2D lidar-based SLAM in arboreal environments without IMU/GNSS. arXiv 2025, arXiv:2505.10847. [Google Scholar] [CrossRef]
- Shen, S.; Meng, J. A Review of Autonomous Navigation Technology for Orchard Robots Based on Visual SLAM. Asian Res. J. Agric. 2025, 18, 261–271. [Google Scholar] [CrossRef]
- Zheng, S. A Review of Navigation and SLAM Technologies in Orchard Environments. Asian Res. J. Agric. 2025, 18, 13–21. [Google Scholar] [CrossRef]
- Jiang, A.; Ahamed, T. Development of an autonomous navigation system for orchard spraying robots integrating a thermal camera and LiDAR using a deep learning algorithm under low- and no-light conditions. Comput. Electron. Agric. 2025, 235, 110359. [Google Scholar] [CrossRef]
- Gu, H.; Wang, Y.; Liu, H.; Tian, T.; Geng, C.; Shi, Y. SkySeg-Net: Sky Segmentation-Based Row-Terminal Recognition in Trellised Orchards. Mach. Learn. Knowl. Extr. 2026, 8, 46. [Google Scholar] [CrossRef]
- Wang, C.-Y.; Bochkovskiy, A.; Liao, H.-Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 7464–7475. [Google Scholar]
| Ref. | Primary Sensors | Core Method | Application Scenario | System Complexity | Key Characteristics |
|---|---|---|---|---|---|
| [1] | 3D LiDAR | 3D point-cloud mapping | Orchard mapping and navigation | High | Multi-season robust |
| [2] | 3D SLAM + Point Cloud Localization | SLAM-based localization | Orchard localization and navigation | High | Robust under occlusion |
| [4] | LiDAR | Tree-trunk detection | Tree-trunk detection | Medium | Stable in structured orchards |
| [5] | LiDAR + IMU + GNSS | Multi-sensor fusion localization | Orchard fusion navigation | High | Adaptable to GNSS-degraded environments |
| [6] | LiDAR + IMU + GNSS | Integrated navigation | Orchard dosing robot navigation | High | Maintains positioning under GNSS occlusion |
| [7] | Multi-sensor (GNSS + vision + inertial) | Hybrid navigation framework | Real farm autonomous navigation (CropNav) | High | Automatic mode switching under GNSS failure |
| [8] | 3D LiDAR SLAM | NDT-ICP SLAM mapping | Spraying robot navigation | High | Robust to point-cloud variations |
| [9] | LiDAR + IMU | LiDAR-IMU SLAM | Orchard localization and mapping | Medium | Stable under GNSS occlusion |
| [10] | Multi-sensor Fusion | EKF-based localization | Orchard navigation | Medium | Adaptable to diverse environments |
| [11] | Visual–Inertial | Visual-inertial odometry | Orchard localization | Medium | Robust to illumination changes |
| [12] | Semantic Perception | Context-aware navigation | Horticultural robot navigation | High | Strong environmental understanding |
| [13] | GNSS + LiDAR | Sensor fusion localization | Intermittent GNSS environments | High | Robust to GNSS loss |
| [14] | Semantic SLAM | Semantic mapping | Orchard mapping | High | Robust to semantic environment changes |
| [15] | Tree-SLAM | Tree-level mapping | Single-tree mapping | High | Cross-season stability |
| [16] | Vision + 2D LiDAR | Navigation line extraction | Dense orchard navigation | Medium | Robust to occlusion |
| [17] | LiDAR SLAM + Hybrid A* | Path planning and control | Inspection robots | High | Stable in dynamic environments |
| [18] | YOLO + SLAM | Visual SLAM navigation | Visual SLAM navigation | High | Robust to visual environment changes |
| [19] | VIO + MPC | Control-oriented navigation | GNSS-denied steering and path tracking | High | Stable under GNSS loss |
| [20] | Gaussian SLAM | Multi-season SLAM mapping | Multi-season mapping | High | Cross-season stability |
| [21] | NN SLAM + GNSS | Learning-based fusion localization | GNSS-degraded environments | High | Adaptive sensor fusion |
| [22] | CNN-based Visual Recognition | Visual obstacle detection | Inter-row obstacle avoidance | Medium | Dynamic obstacle detection |
| [23] | LiDAR + Vision + IMU + GNSS | Multi-sensor navigation | Spraying robot navigation and obstacle avoidance | High | Robust to multi-source environments |
| [24] | Multi-sensor Fusion | GNSS-denied localization | Greenhouse navigation | Medium | Stable under GNSS-denied conditions |
| [25] | Vision-based Navigation | Row detection | Peach orchard navigation | Medium | Adaptable to row-structure environments |
| [26] | RGB-D Vision | Depth-based navigation | Vineyard navigation | Medium | Stable in inter-row structures |
| [27] | Vision + IMU + Wheel Odometry | Visual-inertial localization | Greenhouse navigation | Medium | Adaptable to dynamic environments |
| [28] | 2D LiDAR SLAM | Tree-structured SLAM | Tree-structured SLAM | Medium | GNSS-independent operation |
| Proposed method | Vision + GNSS | Vision-triggered navigation framework | Trellis orchard transportation navigation | Medium | Verified across multiple growth stages |
| Parameter | Specification | Parameter | Specification |
|---|---|---|---|
| Weight | 90 kg | Power | 48 V DC brushless motor |
| Dimensions | 1160 × 815 × 620 mm | Operating Temperature | −20 °C to 50 °C |
| Wheelbase | 740 mm | Maximum Climbing Angle | 20° |
| Track Width | 700 mm | Maximum Obstacle Height | 150 mm |
| Wheel Diameter | 300 mm | Maximum Speed | <5 km/h |
| Minimum Turning Radius | On-the-spot turning | Endurance | 4 h |
| Growth Stage | Sprouting Stage | | | Growing Stage | | | Ripening Stage | | |
|---|---|---|---|---|---|---|---|---|---|
| Time of day | Noon | Afternoon | Evening | Noon | Afternoon | Evening | Noon | Afternoon | Evening |
| Number of images | 470 | 520 | 510 | 519 | 504 | 477 | 500 | 515 | 515 |
| Growth Stage | Illumination Condition | mAP (%) |
|---|---|---|
| Germination stage | High illumination (noon) | 93.4 |
| | Moderate illumination (afternoon) | 95.7 |
| | Low illumination (dusk) | 91.9 |
| Vegetative stage | High illumination (noon) | 87.3 |
| | Moderate illumination (afternoon) | 93.2 |
| | Low illumination (dusk) | 89.6 |
| Mature stage | High illumination (noon) | 88.5 |
| | Moderate illumination (afternoon) | 92.2 |
| | Low illumination (dusk) | 90.6 |
| Row | Row 1 | Row 2 | Row 3 | Row 4 |
|---|---|---|---|---|
| Number of images | 470 | 510 | 490 | 485 |
| Correct fits | 382 | 389 | 410 | 389 |
| Accuracy (%) | 81.3 | 76.3 | 83.7 | 80.2 |
| Row | Row 1 | Row 2 | Row 3 | Row 4 |
|---|---|---|---|---|
| Number of images | 470 | 510 | 490 | 485 |
| Correct fits | 434 | 456 | 448 | 454 |
| Accuracy (%) | 92.3 | 89.6 | 91.4 | 93.5 |
| Row | Row 1 | Row 2 | Row 3 | Row 4 |
|---|---|---|---|---|
| Number of images | 470 | 510 | 490 | 485 |
| Correct fits | 453 | 498 | 473 | 477 |
| Accuracy (%) | 96.2 | 97.6 | 98.5 | 98.2 |
| Method | Valid Fitted Frames | Total Frames | Overall Valid Rate (%) | 95% CI (%) | Relative Improvement vs. LS (pct. points) |
|---|---|---|---|---|---|
| Least Squares | 1570 | 1955 | 80.31 | 78.49–82.01 | — |
| RANSAC | 1792 | 1955 | 91.66 | 90.35–92.81 | +11.35 |
| Temporal RANSAC | 1901 | 1955 | 97.24 | 96.41–97.88 | +16.93 |
| Row | Budburst Stage (%) | Growth Stage (%) | Maturity Stage (%) |
|---|---|---|---|
| Row 1 | 98.5 | 97.6 | 96.2 |
| Row 2 | 98.0 | 98.1 | 97.6 |
| Row 3 | 97.2 | 97.9 | 98.5 |
| Row 4 | 99.1 | 98.7 | 98.2 |
| Growth Stage | Row | Images | Correct (Frame) | Accuracy (%) | Mean Accuracy (%) |
|---|---|---|---|---|---|
| Budding stage | 1 | 470 | 435 | 92.6 | 94.35 |
| | 2 | 510 | 488 | 95.7 | |
| | 3 | 490 | 462 | 94.3 | |
| | 4 | 485 | 460 | 94.8 | |
| Growing stage | 1 | 470 | 425 | 90.5 | 92.68 |
| | 2 | 510 | 480 | 94.1 | |
| | 3 | 490 | 457 | 93.3 | |
| | 4 | 485 | 450 | 92.8 | |
| Maturing stage | 1 | 470 | 429 | 91.2 | 92.23 |
| | 2 | 510 | 478 | 93.7 | |
| | 3 | 490 | 452 | 92.3 | |
| | 4 | 485 | 445 | 91.7 | |
| Growth Stage | Correct (Frame) | Total Frames | Overall Accuracy (%) | 95% CI (%) |
|---|---|---|---|---|
| Budding stage | 1845 | 1955 | 94.37 | 93.26–95.31 |
| Growing stage | 1812 | 1955 | 92.69 | 91.45–93.76 |
| Maturing stage | 1804 | 1955 | 92.28 | 91.01–93.38 |
| Path Type | Trial | Maximum Deviation (m) | Minimum Deviation (m) | Average Deviation (m) | Mean of 3 Trials (m) | Standard Deviation (m) | Coefficient of Variation (%) |
|---|---|---|---|---|---|---|---|
| Straight path | 1 | 1.128 | 0.023 | 0.061 | 0.094 | 0.032 | 34.04 |
| | 2 | 0.926 | 0.040 | 0.096 | | | |
| | 3 | 1.322 | 0.043 | 0.124 | | | |
| Curved path | 1 | 1.035 | 0.038 | 0.182 | 0.164 | 0.019 | 11.59 |
| | 2 | 1.564 | 0.032 | 0.164 | | | |
| | 3 | 1.256 | 0.043 | 0.145 | | | |
| Z-shaped path | 1 | 1.416 | 0.070 | 0.214 | 0.221 | 0.040 | 18.10 |
| | 2 | 1.680 | 0.021 | 0.263 | | | |
| | 3 | 1.347 | 0.055 | 0.185 | | | |
| Speed (m·s−1) | Maximum Heading Deviation (°) | Minimum Heading Deviation (°) | Average Heading Deviation (°) | Range (°) |
|---|---|---|---|---|
| 0.4 | 10.52 | 0.12 | 3.24 | 10.40 |
| 0.6 | 20.3 | 0.23 | 5.23 | 20.07 |
| 0.8 | 12.6 | 0.38 | 8.12 | 12.22 |