Vision and 2D LiDAR Fusion-Based Navigation Line Extraction for Autonomous Agricultural Robots in Dense Pomegranate Orchards
Abstract
1. Introduction
2. Materials and Methods
2.1. Experimental Platform
2.2. Monocular Camera and 2D LiDAR Data Fusion
2.2.1. Coordinate System Definition and Establishment
2.2.2. Spatial Fusion
2.2.3. Temporal Fusion
2.3. Algorithm Framework
2.4. YOLOv8-ResCBAM-Based Trunk Detection
2.4.1. YOLOv8-ResCBAM Model Applicability Analysis
2.4.2. YOLOv8-ResCBAM Training and Implementation
2.5. Vision and LiDAR Fusion-Based Navigation Line Fitting
2.5.1. DBSCAN Trunk Clustering Algorithm
2.5.2. Vision and LiDAR Data Association
Line-of-Sight and LiDAR Plane Intersection Calculation
Association Algorithm
2.5.3. Navigation Line Fitting
Kalman Filter Smoothing
2.6. Experimental Methods
- (1) The sensor calibration procedure was as follows: Zhang's method was used for camera intrinsic calibration, capturing 20 images of an 8 × 5 checkerboard (27 mm square side length) in different poses; the PnP algorithm was then used for camera–2D LiDAR extrinsic calibration, based on multiple pairs of calibration-board center points expressed in the LiDAR coordinate system and in pixel coordinates. Evaluation metrics included the reprojection error and the calibration accuracy (see the calibration sketch after this list).
- (2) The visual detection performance evaluation procedure was as follows: 4150 images were collected under different lighting conditions (sunny, cloudy, dusk, and strong light) and divided into training, validation, and test sets at a 7:2:1 ratio, and the YOLOv8-ResCBAM model was trained and tested for trunk detection. Evaluation metrics included mAP50, recall, precision, and inference time (a training sketch follows the list).
- (3) The data association verification procedure was as follows: five 20 m inter-row segments of the orchard were selected, each containing 25–32 fruit trees, and each group experiment was repeated 10 times, using the reverse ray-projection algorithm to associate vision detections with LiDAR data. Evaluation metrics included the association count and the association success rate (see the ray-projection sketch after this list).
- (4) The geometric constraint navigation line fitting simulation procedure was as follows: six groups of simulation comparison experiments were designed with fruit-tree point quantities ranging from 19 to 27, and the navigation line was fitted with the proposed GCA and with the traditional RANSAC algorithm, respectively. Evaluation metrics included the fitting accuracy (RMSE), the inlier ratio, and the computation time (a RANSAC baseline sketch follows the list).
- (5) The navigation-line tracking comparison was designed as follows: five representative inter-row segments were selected in the orchard. Each segment (20 m) was manually surveyed to obtain the ground-truth centerline and included operating conditions such as straight rows, slight lateral offset, cross-slope, local occlusions, and non-uniform illumination. For each segment, 10 independent navigation trials were conducted at speeds from 0.5 to 2.0 m·s−1. Four navigation line extraction methods were compared: (i) single 2D LiDAR with RANSAC, (ii) single vision detection with RANSAC, (iii) DeepLab v3+ semantic segmentation with RANSAC, and (iv) the proposed fusion-based method. The mobile chassis position was recorded using a “taut-string–centerline-marking–tape-measurement” protocol. Specifically, a cotton string was stretched taut between the midpoints equidistant from the left and right rows at the two ends of the test segment to define the ground-truth centerline, and sampling stations were marked every 1 m along the string. A marker was mounted at the geometric center of the mobile chassis; after the vehicle passed, a measuring tape was used at each station to measure the perpendicular distance from the trajectory to the centerline, yielding the lateral deviation (the metric sketch after this list shows how these per-station deviations are reduced to the reported statistics).
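For clarity, a minimal sketch of the calibration workflow in procedure (1) is given below, assuming OpenCV. The file paths, the interpretation of 8 × 5 as the inner-corner grid, and the stored LiDAR/pixel point sets are illustrative assumptions, not the authors' implementation.

```python
# Sketch of procedure (1): Zhang's intrinsic calibration followed by PnP-based
# camera-LiDAR extrinsic calibration. Paths and point files are hypothetical.
import glob
import cv2
import numpy as np

PATTERN = (8, 5)   # checkerboard inner-corner grid (assumed interpretation)
SQUARE = 0.027     # square side length in metres (27 mm)

# Board corner coordinates in the board frame (Z = 0 plane)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):        # ~20 poses, per the text
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Zhang's method: intrinsic matrix K, distortion coefficients, RMS reprojection error
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print(f"reprojection RMS error: {rms:.3f} px")

# Extrinsic calibration via PnP: board centres measured in the LiDAR frame (N x 3)
# paired with their pixel coordinates (N x 2); both files are placeholders.
lidar_pts = np.load("board_centers_lidar.npy").astype(np.float32)
pixel_pts = np.load("board_centers_pixel.npy").astype(np.float32)
ok, rvec, tvec = cv2.solvePnP(lidar_pts, pixel_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)   # rotation LiDAR -> camera; tvec is the translation
```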
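Procedure (2) can be sketched with the Ultralytics training interface as shown below. The custom model definition file (“yolov8-rescbam.yaml”), the dataset description file (“trunk.yaml”), and the training hyperparameters are assumptions; only the 7:2:1 split and the evaluated metrics follow the text.

```python
# Sketch of procedure (2): training and evaluating the trunk detector with the
# Ultralytics API. Model/dataset YAML files and hyperparameters are assumed.
from ultralytics import YOLO

model = YOLO("yolov8-rescbam.yaml")     # hypothetical custom architecture file
model.train(data="trunk.yaml",          # 4150 images split 7:2:1 (per the text)
            epochs=300, imgsz=640, batch=16)

metrics = model.val(split="test")       # mAP50, precision, recall on the test set
print(metrics.box.map50, metrics.box.mp, metrics.box.mr)
```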
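The reverse ray-projection association of procedure (3) (see also Section 2.5.2) amounts to intersecting the camera line of sight through a detected trunk pixel with the 2D LiDAR scanning plane and gating against the nearest DBSCAN cluster centre. The sketch below assumes the extrinsics R and t from procedure (1); the function names and the 0.30 m gating radius are illustrative.

```python
# Sketch of procedure (3): back-project a detection pixel to the LiDAR plane and
# associate it with the nearest trunk cluster. The gate radius is an assumed value.
import numpy as np

def pixel_to_lidar_plane(u, v, K, R, t):
    """Intersect the camera ray through pixel (u, v) with the LiDAR plane z_l = 0.

    R (3 x 3) and t (length 3) map LiDAR to camera coordinates: X_c = R @ X_l + t.
    Returns the (x, y) intersection in the LiDAR frame, or None if the ray is
    (near-)parallel to the scanning plane or points behind the camera.
    """
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction, camera frame
    d_lid = R.T @ d_cam                                 # direction in the LiDAR frame
    o_lid = -R.T @ t                                    # camera centre in the LiDAR frame
    if abs(d_lid[2]) < 1e-9:
        return None
    s = -o_lid[2] / d_lid[2]                            # solve o_z + s * d_z = 0
    if s <= 0:
        return None
    return (o_lid + s * d_lid)[:2]

def associate(box_centers, cluster_centers, K, R, t, gate=0.30):
    """Greedy nearest-neighbour association between detections and LiDAR clusters."""
    pairs = []
    for i, (u, v) in enumerate(box_centers):
        p = pixel_to_lidar_plane(u, v, K, R, t)
        if p is None or len(cluster_centers) == 0:
            continue
        d = np.linalg.norm(cluster_centers - p, axis=1)
        j = int(np.argmin(d))
        if d[j] < gate:                                 # accept only within the gate
            pairs.append((i, j, float(d[j])))
    return pairs
```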
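The proposed GCA is not reproduced here, but the RANSAC baseline and the two fit metrics of procedure (4) can be sketched as follows; the inlier threshold, iteration budget, RMSE convention, and synthetic tree points are assumptions.

```python
# Sketch of procedure (4): RANSAC line fitting on fruit-tree points and the
# RMSE / inlier-ratio metrics. Threshold and iteration count are assumed.
import numpy as np

def ransac_line(points, thresh=0.10, iters=200, rng=None):
    """Fit a 2D line ax + by + c = 0 (with a^2 + b^2 = 1) using RANSAC."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best_model = None, None
    for _ in range(iters):
        p1, p2 = points[rng.choice(len(points), 2, replace=False)]
        n = np.array([-(p2 - p1)[1], (p2 - p1)[0]])     # normal of the candidate line
        if np.linalg.norm(n) < 1e-9:
            continue
        n = n / np.linalg.norm(n)
        c = -n @ p1
        inliers = np.abs(points @ n + c) < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n[0], n[1], c)
    return best_model, best_inliers

def fit_metrics(points, model, inliers):
    a, b, c = model
    dist = np.abs(points @ np.array([a, b]) + c)        # point-to-line distances
    rmse = float(np.sqrt(np.mean(dist[inliers] ** 2)))  # RMSE over inliers (one convention)
    return rmse, float(inliers.mean())                  # inlier ratio

# Synthetic example: ~22 tree points along a slightly inclined row with one outlier
rng = np.random.default_rng(1)
x = np.linspace(0.0, 20.0, 22)
pts = np.c_[x, 0.05 * x + rng.normal(0.0, 0.05, x.size)]
pts[3] += [0.0, 0.8]
model, inliers = ransac_line(pts, rng=rng)
print(fit_metrics(pts, model, inliers))
```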
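Finally, the per-station tape measurements of procedure (5) can be reduced to the statistics reported in Section 3.5 as sketched below; the 10 cm success threshold is an assumption, since the text does not state how the success rate is defined.

```python
# Sketch of procedure (5): reduce per-station lateral deviations (cm) to the mean
# error, RMS error, and a success rate. The 10 cm success threshold is assumed.
import numpy as np

def tracking_metrics(deviations_cm, success_thresh_cm=10.0):
    d = np.abs(np.asarray(deviations_cm, dtype=float))
    mean_err = d.mean()                        # average lateral error
    rms_err = np.sqrt(np.mean(d ** 2))         # lateral error RMS
    success = (d <= success_thresh_cm).mean() * 100.0
    return mean_err, rms_err, success

# One hypothetical trial: 21 tape-measured offsets at the 1 m stations of a 20 m segment
trial = [3.2, 4.1, 5.0, 6.3, 4.8, 5.5, 7.1, 6.0, 4.4, 3.9, 5.2,
         6.8, 5.9, 4.7, 5.1, 6.2, 7.3, 5.6, 4.9, 5.4, 6.1]
print(tracking_metrics(trial))
```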
3. Results
3.1. Calibration Experimental Results
3.2. Visual Detection Performance Evaluation
3.3. Data Association Results
3.4. Geometric Constraint Navigation Line Fitting Simulation Results
3.5. Navigation Line Tracking Results
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Ghasemi-Soloklui, A.A.; Kordrostami, M.; Gharaghani, A. Environmental and geographical conditions influence color, physical properties, and physiochemical composition of pomegranate fruits. Sci. Rep. 2023, 13, 15447.
- Yazdanpanah, P.; Jonoubi, P.; Zeinalabedini, M.; Rajaei, H.; Ghaffari, M.R.; Vazifeshenas, M.R.; Abdirad, S. Seasonal metabolic investigation in pomegranate (Punica granatum L.) highlights the role of amino acids in genotype- and organ-specific adaptive responses to freezing stress. Front. Plant Sci. 2021, 12, 699139.
- Wang, S.; Song, J.; Qi, P.; Yuan, C.; Wu, H.; Zhang, L.; Liu, W.; Liu, Y.; He, X. Design and development of orchard autonomous navigation spray system. Front. Plant Sci. 2022, 13, 960686.
- Blok, P.M.; van Boheemen, K.; van Evert, F.K.; IJsselmuiden, J.; Kim, G.H. Robot navigation in orchards with localization based on Particle filter and Kalman filter. Comput. Electron. Agric. 2019, 157, 261–269.
- Jiang, A.; Noguchi, R.; Ahamed, T. Tree Trunk Recognition in Orchard Autonomous Operations under Different Light Conditions Using a Thermal Camera and Faster R-CNN. Sensors 2022, 22, 2065.
- Xia, Y.; Lei, X.; Pan, J.; Chen, L.; Zhang, Z.; Lyu, X. Research on orchard navigation method based on fusion of 3D SLAM and point cloud positioning. Front. Plant Sci. 2023, 14, 1207742.
- Bai, Y.; Zhang, B.; Xu, N.; Zhou, J.; Shi, J.; Diao, Z. Vision-based navigation and guidance for agricultural autonomous vehicles and robots: A review. Comput. Electron. Agric. 2023, 205, 107724.
- Liu, H.; Zeng, X.; Shen, Y.; Xu, J.; Khan, Z. A Single-Stage Navigation Path Extraction Network for agricultural robots in orchards. Comput. Electron. Agric. 2025, 229, 109687.
- Cao, Z.; Gong, C.; Meng, J.; Liu, L.; Rao, Y.; Hou, W. Orchard Vision Navigation Line Extraction Based on YOLOv8-Trunk Detection. IEEE Access 2024, 12, 89156–89168.
- Yang, Z.; Ouyang, L.; Zhang, Z.; Duan, J.; Yu, J.; Wang, H. Visual navigation path extraction of orchard hard pavement based on scanning method and neural network. Comput. Electron. Agric. 2022, 197, 106964.
- De Silva, R.; Cielniak, G.; Wang, G.; Gao, J. Deep learning-based crop row detection for infield navigation of agri-robots. J. Field Robot. 2024, 41, 2299–2321.
- Gai, J.; Xiang, L.; Tang, L. Using a depth camera for crop row detection and mapping for under-canopy navigation of agricultural robotic vehicle. Comput. Electron. Agric. 2021, 188, 106301.
- Winterhalter, W.; Fleckenstein, F.V.; Dornhege, C.; Burgard, W. Crop row detection on tiny plants with the pattern hough transform. IEEE Robot. Autom. Lett. 2018, 3, 3394–3401.
- Gai, J.; Tang, L.; Steward, B.L. Automated crop plant detection based on the fusion of color and depth images for robotic weed control. J. Field Robot. 2020, 37, 35–60.
- Wang, T.H.; Chen, B.; Zhang, Z.Q.; Li, H.; Zhang, M. Applications of machine vision in agricultural robot navigation: A review. Comput. Electron. Agric. 2022, 198, 107085.
- Jiang, A.; Ahamed, T. Navigation of an Autonomous Spraying Robot for Orchard Operations Using LiDAR for Tree Trunk Detection. Sensors 2023, 23, 4808.
- Li, H.; Huang, K.; Sun, Y.; Lei, X. An autonomous navigation method for orchard mobile robots based on octree 3D point cloud optimization. Front. Plant Sci. 2025, 15, 1510683.
- Abanay, A.; Masmoudi, L.; El Ansari, M.; Gonzalez-Jimenez, J.; Moreno, F.-A. LIDAR-based autonomous navigation method for an agricultural mobile robot in strawberry greenhouse: AgriEco Robot. AIMS Electron. Electr. Eng. 2022, 6, 317–328.
- Liu, W.; Li, W.; Feng, H.; Xu, J.; Yang, S.; Zheng, Y.; Liu, X.; Wang, Z.; Yi, X.; He, Y.; et al. Overall integrated navigation based on satellite and LiDAR in the standardized tall spindle apple orchards. Comput. Electron. Agric. 2024, 216, 108489.
- Malavazi, F.B.; Guyonneau, R.; Fasquel, J.B.; Lagrange, S.; Mercier, F. LiDAR-only based navigation algorithm for an autonomous agricultural robot. Comput. Electron. Agric. 2018, 154, 71–79.
- Jiang, S.; Qi, P.; Han, L.; Liu, L.; Li, Y.; Huang, Z.; Liu, Y.; He, X. Navigation system for orchard spraying robot based on 3D LiDAR SLAM with NDT_ICP point cloud registration. Comput. Electron. Agric. 2024, 220, 108870.
- Jiang, J.; Zhang, T.; Li, K. LiDAR-based 3D SLAM for autonomous navigation in stacked cage farming houses: An evaluation. Comput. Electron. Agric. 2024, 230, 109885.
- Firkat, E.; An, F.; Peng, B.; Zhang, J.; Mijit, T.; Ahat, A.; Zhu, J.; Hamdulla, A. FGSeg: Field-ground segmentation for agricultural robot based on LiDAR. Comput. Electron. Agric. 2023, 210, 107923.
- Ringdahl, O.; Hohnloser, P.; Hellström, T.; Holmgren, J.; Lindroos, O. Enhanced Algorithms for Estimating Tree Trunk Diameter Using 2D Laser Scanner. Remote Sens. 2013, 5, 4839–4856.
- Jiang, A.; Ahamed, T. Development of an autonomous navigation system for orchard spraying robots integrating a thermal camera and LiDAR using a deep learning algorithm under low- and no-light conditions. Comput. Electron. Agric. 2025, 235, 110359.
- Ban, C.; Wang, L.; Chi, R.; Su, T.; Ma, Y. A Camera-LiDAR-IMU fusion method for real-time extraction of navigation line between maize field rows. Comput. Electron. Agric. 2024, 223, 109114.
- Kang, H.W.; Wang, X.; Cheng, C. Accurate fruit localisation using high resolution LiDAR-camera fusion and instance segmentation. Comput. Electron. Agric. 2022, 203, 107450.
- Han, C.; Wu, W.; Luo, X.; Li, J. Visual Navigation and Obstacle Avoidance Control for Agricultural Robots via LiDAR and Camera. Remote Sens. 2023, 15, 5402.
- Shalal, N.; Low, T.; McCarthy, C.; Hancock, N. Orchard mapping and mobile robot localisation using on-board camera and laser scanner data fusion—Part A: Tree detection. Comput. Electron. Agric. 2015, 119, 254–266.
- Xue, J.L.; Fan, B.W.; Yan, J.; Dong, S.X.; Ding, Q.S. Trunk detection based on laser radar and vision data fusion. Int. J. Agric. Biol. Eng. 2018, 11, 20–26.
- Yu, S.; Liu, X.; Tan, Q.; Wang, Z.; Zhang, B. Sensors, systems and algorithms of 3D reconstruction for smart agriculture and precision farming: A review. Comput. Electron. Agric. 2024, 224, 109229.
- Ji, Y.; Li, S.; Peng, C.; Xu, H.; Cao, R.; Zhang, M. Obstacle detection and recognition in farmland based on fusion point cloud data. Comput. Electron. Agric. 2021, 189, 106409.
- Zhang, Y.Y.; Zhang, B.; Shen, C.; Liu, H.L.; Huang, J.C.; Tian, K.P.; Tang, Z. Review of the field environmental sensing methods based on multi-sensor information fusion technology. Int. J. Agric. Biol. Eng. 2024, 17, 1–13.
- Zhou, J.; Geng, S.; Qiu, Q.; Shao, Y.; Zhang, M. A deep-learning extraction method for orchard visual navigation lines. Agriculture 2022, 12, 1650.
- Chien, C.T.; Ju, R.Y.; Chou, K.Y.; Xieerke, E.; Chiang, J.-S. YOLOv8-AM: YOLOv8 Based on Effective Attention Mechanisms for Pediatric Wrist Fracture Detection. arXiv 2024, arXiv:2402.09329.
- Liu, H.; Wu, C.; Wang, H. Real time object detection using LiDAR and camera fusion for autonomous driving. Sci. Rep. 2023, 13, 8056.
- Huang, X.; Dong, X.; Ma, J.; Liu, K.; Ahmed, S.; Lin, J.; Qiu, B. The Improved A* Obstacle Avoidance Algorithm for the Plant Protection UAV with Millimeter Wave Radar and Monocular Camera Data Fusion. Remote Sens. 2021, 13, 3364.
- Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
- Zhuang, Y.; Chen, W.; Jin, T.; Chen, B.; Zhang, H.; Zhang, W. A Review of Computer Vision-Based Structural Deformation Monitoring in Field Environments. Sensors 2022, 22, 3789.
- Lin, K.-Y.; Tseng, Y.-H.; Chiang, K.-W. Interpretation and Transformation of Intrinsic Camera Parameters Used in Photogrammetry and Computer Vision. Sensors 2022, 22, 9602.
- Duan, J. Study on Multi-Heterogeneous Sensor Data Fusion Method Based on Millimeter-Wave Radar and Camera. Sensors 2023, 23, 6044.
- Iqra; Giri, K.J. SO-YOLOv8: A novel deep learning-based approach for small object detection with YOLO beyond COCO. Expert Syst. Appl. 2025, 280, 127447.
- Ju, R.Y.; Chien, C.T.; Chiang, J.S. YOLOv8-ResCBAM: YOLOv8 Based on an Effective Attention Module for Pediatric Wrist Fracture Detection. arXiv 2024, arXiv:2409.18826.
- Zhang, Y.; Tan, Y.; Onda, Y.; Hashimoto, A.; Gomi, T.; Chiu, C.; Inokoshi, S. A tree detection method based on trunk point cloud section in dense plantation forest using drone LiDAR data. For. Ecosyst. 2023, 10, 100088.
- Lee, S.; An, S.; Kim, J.; Namkung, H.; Park, J.; Kim, R.; Lee, S.E. Grid-Based DBSCAN Clustering Accelerator for LiDAR’s Point Cloud. Electronics 2024, 13, 3395.
- Shi, J.; Bai, Y.; Diao, Z.; Zhou, J.; Yao, X.; Zhang, B. Row detection based navigation and guidance for agricultural robots and autonomous vehicles in row-crop fields: Methods and applications. Agronomy 2023, 13, 1780.
- Zhou, M.; Wang, W.; Shi, S.; Huang, Z.; Wang, T. Research on Global Navigation Operations for Rotary Burying of Stubbles Based on Machine Vision. Agronomy 2025, 15, 114.
- Lv, M.; Wei, H.; Fu, X.; Wang, W.; Zhou, D. A loosely coupled extended Kalman filter algorithm for agricultural scene-based multi-sensor fusion. Front. Plant Sci. 2022, 13, 849260.
- Francq, B.G.; Berger, M.; Boachie, C. To Tolerate or to Agree: A Tutorial on Tolerance Intervals in Method Comparison Studies with BivRegBLS R Package. Stat. Med. 2020, 39, 4334–4349.
| Component | Parameter | Value |
| --- | --- | --- |
| Intel RealSense D455 | RGB Resolution | Up to 1280 × 800 |
|  | RGB Frame Rate/fps | Up to 90 |
|  | RGB FOV/(°) | 86 × 57 (±3) |
| SLAMTEC 2D LiDAR | Measurement Radius/m | 0.2–12 |
|  | Sampling Frequency/kHz | 16 |
|  | Scanning Frequency/Hz | 5–15 |
|  | Angular Resolution/(°) | 0.225 |
|  | Scanning Range/(°) | 360 |
|  | Ranging Accuracy | 2% of actual distance (≤5 m) |
| NVIDIA Jetson Xavier NX | CPU | 6-core NVIDIA Carmel ARM v8.2 64-bit CPU |
|  | GPU | 384-core NVIDIA Volta™ GPU with 48 Tensor Cores |
|  | Acceleration Unit | 2 × NVDLA (NVIDIA Deep Learning Accelerator) |
|  | AI Performance/TOPS | 21 |
|  | Memory | 8 GB 128-bit LPDDR4 |
| Tracked Mobile Platform | Dimensions/mm | 750 × 550 × 850 |
|  | Motion Controller | STM32 |
|  | Number of Motors | 2 |
|  | Rated Power/W | 322 |
|  | Rated Speed/r·min−1 | 75 |
Detection Model | mAP50 (%) | Recall Rate (%) | Precision (%) | Inference Time (ms) |
---|---|---|---|---|
YOLOv8 | 95.5 | 91.6 | 91.4 | 12.96 |
YOLOv8-ECA | 95.5 | 90.9 | 91.4 | 13.04 |
YOLOv8-GAM | 95.2 | 91.1 | 91.6 | 15.64 |
YOLOv8-SA | 95.6 | 91.6 | 92.0 | 13.71 |
YOLOv8-ResCBAM (Proposed) | 95.7 | 92.5 | 91.0 | 15.36 |
Test | Number of Fruit Trees | Average Association Count | Association Success Rate (%) |
---|---|---|---|
Test 1 | 25 | 23.0 | 92.1 |
Test 2 | 27 | 24.8 | 91.9 |
Test 3 | 30 | 27.7 | 92.3 |
Test 4 | 28 | 26.4 | 94.3 |
Test 5 | 32 | 29.6 | 92.5 |
Average | 28.4 | 26.3 | 92.6 |
| Test Case | Tree Points | Algorithm | RMSE (m) | Inlier Ratio (%) | Runtime (s) |
| --- | --- | --- | --- | --- | --- |
| Test 1 | 19 | GCA | 0.1469 | 79.44 | 0.0010 |
|  |  | RANSAC | 0.1395 | 79.44 | 0.0087 |
| Test 2 | 21 | GCA | 0.0900 | 95.45 | 0.0020 |
|  |  | RANSAC | 0.1062 | 85.45 | 0.0050 |
| Test 3 | 27 | GCA | 0.0982 | 96.43 | 0.0090 |
|  |  | RANSAC | 0.0947 | 89.29 | 0.0040 |
| Test 4 | 23 | GCA | 0.0756 | 95.83 | 0.0017 |
|  |  | RANSAC | 0.0794 | 91.29 | 0.0050 |
| Test 5 | 20 | GCA | 0.0775 | 95.00 | 0.0012 |
|  |  | RANSAC | 0.0786 | 95.00 | 0.0030 |
| Test 6 | 20 | GCA | 0.1018 | 90.00 | 0.0013 |
|  |  | RANSAC | 0.1059 | 85.00 | 0.0050 |
| Average | 21.7 | GCA | 0.0983 | 92.03 | 0.0027 |
|  |  | RANSAC | 0.1007 | 87.58 | 0.0051 |
| Improvement | – | GCA vs. RANSAC | 2.40% | 5.10% | 47.06% |
Method | Average Lateral Error ± Standard Deviation/cm | Average Lateral Error RMS ± Standard Deviation/cm | Average Success Rate ± Standard Deviation (%) |
---|---|---|---|
LiDAR RANSAC | 7.0 ± 0.7 | 9.0 ± 0.7 | 75.9 ± 4.2 |
Vision RANSAC | 6.6 ± 0.6 | 8.2 ± 0.5 | 89.8 ± 2.5 |
DeepLab v3+ | 6.9 ± 0.7 | 8.6 ± 0.6 | 87.0 ± 2.8 |
Proposed Method | 5.2 ± 0.5 | 6.6 ± 0.6 | 95.4 ± 2.1 |
Method | Conservative Upper Bound of Lateral Error/cm | Conservative Upper Bound of Lateral Error RMS/cm | Conservative Lower Bound of Success Rate (%) |
---|---|---|---|
LiDAR RANSAC | 8.37 | 10.37 | 67.68 |
Vision RANSAC | 7.78 | 9.18 | 84.90 |
DeepLab v3+ | 8.27 | 9.78 | 81.51 |
Proposed Method | 6.18 | 7.78 | 91.28 |