Review

Advances in Crop Row Detection for Agricultural Robots: Methods, Performance Indicators, and Scene Adaptability

by Zhen Ma 1, Xinzhong Wang 1,2,*, Xuegeng Chen 1,2, Bin Hu 2 and Jingbin Li 2
1 School of Agricultural Engineering, Jiangsu University, Zhenjiang 212013, China
2 College of Mechanical and Electrical Engineering, Shihezi University, Shihezi 832003, China
* Author to whom correspondence should be addressed.
Agriculture 2025, 15(20), 2151; https://doi.org/10.3390/agriculture15202151
Submission received: 27 August 2025 / Revised: 13 October 2025 / Accepted: 14 October 2025 / Published: 16 October 2025
(This article belongs to the Section Agricultural Technology)

Abstract

Crop row detection is one of the key technologies that enable agricultural robots to achieve autonomous navigation and precise operation; it directly affects the precision and stability of agricultural machinery and, more broadly, the pace at which intelligent agriculture develops. This paper first summarizes the mainstream technical methods for crop row detection, the systems used to evaluate their performance, and their adaptability to typical agricultural scenes. It then explains the technical principles and characteristics of traditional methods based on visual sensors, LiDAR-based point cloud preprocessing, row structure extraction and 3D feature calculation methods, and multi-sensor fusion methods. Next, performance evaluation criteria such as accuracy, efficiency, robustness, and practicality are reviewed, and the applicability of different methods is analyzed and compared for typical scenarios such as open fields, facility agriculture, orchards, and special terrains. The multidimensional analysis shows that any single technology has specific limits to its environmental adaptability, whereas multi-sensor fusion improves robustness in complex scenarios, with the benefit of fusion growing as more sensors are combined. Suggestions for the development of agricultural robot navigation technology are made based on the state of technological applications over the past five years and future development needs. This review systematically summarizes crop row detection technology, providing a clear technical framework and scenario adaptation reference for research in this field and helping to advance the precision and efficiency of agricultural production.

1. Introduction

1.1. Research Background

Crop row detection is an important prerequisite for robots to achieve precise positioning and navigation, and it plays a decisive role in ensuring the accuracy and stability of agricultural machinery during sowing, weeding, fertilization, and other operations. However, adverse factors such as harsh farmland environments, weeds and occluding branches and leaves that are difficult to avoid during detection, and poor lighting conditions can significantly degrade crop row detection. It is therefore crucial to conduct an in-depth analysis of the research status, technical difficulties, and research directions of crop row detection.
Shi et al. [1] conducted a comprehensive analysis of crop row detection technology for agricultural machinery navigation. They compared the advantages and disadvantages of traditional techniques and deep-learning-based methods and explained the effectiveness of different methods in different application scenarios, concluding that accurate detection of crop rows requires matching sensors and designing target models for each scenario. Zhang et al. [2] briefly introduced the advantages and disadvantages of visual inspection methods for crop row detection. These methods have improved the autonomous positioning ability of agricultural equipment and to some extent compensated for poor Global Navigation Satellite System (GNSS) signals. They summarized a typical visual inspection pipeline consisting of image acquisition, feature extraction, and centerline detection, and listed the corresponding equipment selection, model design, and algorithm optimization. They pointed out that current research gives insufficient consideration to complex detection conditions.
Yao et al. [3] classified agricultural machinery automatic navigation into three categories: GNSS, machine vision, and Light Detection and Ranging (LiDAR). By comparing the structural characteristics, advantages, disadvantages, and technical difficulties of these three navigation methods, they found that the integration of GNSS and machine vision is the most mature path. They described four types of path planning algorithms and 22 scene matching algorithms as references for technology implementation, and summarized the challenges that remain to be overcome. Bai et al. [4] also emphasized the important role of machine vision in agricultural robot navigation; they summarized the characteristics of visual sensors and focused on the use of key technologies such as filtering, segmentation, and row detection in environmental perception, positioning, and path planning. Wang et al. [5], noting the increasing emphasis on autonomous navigation in intelligent agriculture and the high benefit and low cost of machine vision, reviewed the different visual sensors and algorithms used to achieve autonomous navigation and explained the scope and limitations of each.
From these studies it is clear that each sensor has limitations in specific scenarios. A sensor used alone may suffer from weak GNSS signal coverage, camera susceptibility to light interference and vibration, or LiDAR occlusion, and single-sensor solutions may therefore lack accuracy. Bonacini et al. [6] proposed a dynamic selection method based on sensor data features, which lets the system choose the better of camera or LiDAR in crop areas and use GNSS elsewhere; the method achieves ≥87% accuracy and a 20 s GNSS transition adaptation. Xie et al. [7] used an extended Kalman filter and a particle filter to fuse vehicle motion, GNSS, and other information such as ultra-wideband, improving vehicle positioning accuracy. Wang et al. [8] proposed LiDAR/IMU/GNSS fusion navigation to improve the positioning accuracy of agricultural pesticide robots in orchard environments, enabling orchard spraying robots to achieve centimeter-level positioning accuracy with an average lateral offset of 2.215 cm. Qu et al. [9] addressed LiDAR SLAM map distortion and optimized the map with an improved SLAM, reducing the distortion error to 0.205 m.
Research on path planning and tracking optimization has also matured considerably. Wen et al. [10] addressed low path tracking accuracy in agricultural machinery by adding a speed–forward-distance correlation function to the algorithm to tune parameter optimization; the measured average lateral offset was 3 cm, and the yaw angle was less than 5°. Su et al. [11] proposed a multi-sensor fusion method to address the limitations of GNSS navigation in orchards. The system was built from two-dimensional LiDAR, an electronic compass, an encoder, and other devices, and an extended Kalman filter (EKF) was used to fuse pose data; experiments showed high positioning and navigation accuracy and strong robustness. Kaewkorn et al. [12] showed that a positioning method based on a low-cost IMU coupled with three-laser guidance (TLG) achieves an accuracy of 1.68/0.59 cm in the horizontal/vertical direction and 0.76–0.90° in yaw angle, at a cost far below that of commercial GNSS-INS systems.
Researchers have also continued to optimize technology for particular scenarios. Cui et al. [13] combined the deep learning model DoubleNet with a multi-head self-attention mechanism and an improved activation function; experiments showed that this method achieves a localization accuracy of 95.75% at 71.16 fps. Chen et al. [14] proposed a path planning method based on a genetic algorithm (GA) and Bezier curves, balancing minimal navigation error against maximal land use efficiency. Jin et al. [15] proposed a field ridge navigation line extraction method based on Res2Net50 to address the poor real-time performance and light sensitivity of field navigation path recognition; the pixel error was approximately 8.27 pixels under different lighting conditions. Gao et al. [16] proposed a GRU–Transformer hybrid model for GNSS/INS fusion navigation to improve accuracy when GNSS signals in orchards are obstructed, reducing the root mean square position error. Other scholars have proposed a YOLOv5-based crop–weed recognition model for an unmanned plant protection robot, with an accuracy of 87.7% and an average lateral error of 0.036 m for straight-line operations [17].

1.2. Research Content

Although mature technological methods are currently being applied in many scenarios, in practice, there are still problems such as poor technical adaptability, incomplete multi-sensor fusion mechanisms, and weak robustness to complex scenarios. Therefore, starting from the current mainstream crop row detection algorithms, the paper briefly describes the principles of each algorithm and provides corresponding performance evaluation indicators. Based on this, combined with typical farmland scenarios, the applicability of each method is analyzed and discussed, in order to provide some reference materials for relevant researchers.

1.3. Main Contributions

Numerous research studies have been conducted on single- or multi-source vision, LiDAR, GNSS, etc., but there are limitations in practical application, such as low scene applicability, immature multi-sensor fusion methods, and poor robustness.
This paper reviews the mainstream crop row detection technologies of the past five years, and, for the first time, sorts out the technical spectrum from the following three dimensions: visual sensor technology, LiDAR technology, and multi-sensor fusion. It analyzes the relevant technical principles and characteristics of visual artificial feature extraction, deep learning automatic feature learning, LiDAR point cloud preprocessing, structured extraction, and multi-sensor information complementary fusion technology under various methods, and elaborates on the development trends and application boundaries of various technologies.
This paper constructs a multidimensional evaluation framework covering accuracy, efficiency, robustness, and practicality, elaborating on detection accuracy, positioning error, real-time performance, computational cost, environmental adaptability, scene fault tolerance, hardware cost, and deployment difficulty. Drawing on research from the past five years, it analyzes the strengths and weaknesses of the crop row detection technologies compared by scholars.
This paper combines four typical agricultural scenarios, including open-air farmland, facility agriculture, orchards, and special terrain. Starting from the characteristics of each scenario, it analyzes the detection technologies applicable to each scenario and proposes corresponding technical selection suggestions to guide the selection of detection schemes in practical application scenarios.
Based on technology applications over the past five years and the development needs of intelligent agriculture, this paper analyzes current technological shortcomings, including poor robustness in extreme environments, low data consistency among multiple sensors, and an inability to meet low-cost deployment requirements. Drawing on the extensive literature, it identifies the development trends of several research directions, including multimodal deep fusion, model lightweighting and edge deployment, and data-driven scene generalization, to promote the development of intelligent agriculture.

2. Conventional Technical Methods and Principles for Crop Row Detection

2.1. Detection Methods Based on Visual Sensors

Visual sensors have become the core technology carrier for crop row detection due to their low cost and rich data. Their methods can be divided into traditional visual methods that rely on manual features and data-driven deep learning methods, each with its own emphasis in different scenarios.

2.1.1. Traditional Visual Methods

Traditional visual methods use manually extracted features and geometric techniques to detect crop rows; they offer high accuracy and stability in structured farmland scenes, together with low computational complexity and strong real-time performance. Gai et al. [18] proposed a frequency-domain processing pipeline that identifies crop row spacing and direction from the frequency-domain components of crop rows via the two-dimensional discrete Fourier transform (DFT), with positioning accuracy improved through frequency-domain interpolation and a weighted Hanning window optimization. Combined with an LQG controller, the average absolute tracking error of the robot is only 3.74 cm, and the method remains stable and robust under complex lighting conditions. As shown in Figure 1, to handle soybean seedlings covered with straw after lodging during the seedling stage, the Hue–Saturation–Value (HSV) color model and Otsu thresholding (OTSU) were used to segment the image, and improved adaptive DBSCAN clustering extracted the feature points; Zhang et al. [19] validated this navigation line extraction method on 20 images drawn from a multi-level sample. The least squares method was used to fit the navigation path, with an average distance deviation of 7.38 cm, an average angle deviation of 0.32°, and an accuracy rate of 96.77%.
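To make the classic pipeline concrete, the following Python sketch (assuming OpenCV and NumPy; the HSV thresholds, strip size, and minimum pixel count are illustrative and not taken from the cited works) segments vegetation with an HSV mask refined by Otsu thresholding, collects strip-wise feature points, and fits a row centerline by least squares:

```python
import cv2
import numpy as np

def detect_row_centerline(bgr_image):
    """Minimal classic pipeline: HSV green segmentation + Otsu refinement
    + least-squares fit of a row centerline (illustrative thresholds)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Rough green mask in HSV; the hue range is scene-dependent.
    green_mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))
    # Otsu thresholding on the saturation channel refines the vegetation mask.
    _, otsu_mask = cv2.threshold(hsv[:, :, 1], 0, 255,
                                 cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.bitwise_and(green_mask, otsu_mask)
    # Feature points: centroids of vegetation pixels in horizontal strips.
    ys, xs = [], []
    strip_h = 20
    for y0 in range(0, mask.shape[0], strip_h):
        strip = mask[y0:y0 + strip_h]
        pts = np.column_stack(np.nonzero(strip))
        if len(pts) > 50:                      # skip nearly empty strips
            ys.append(y0 + pts[:, 0].mean())
            xs.append(pts[:, 1].mean())
    if len(xs) < 2:
        return None
    # Least-squares fit x = a*y + b, i.e. the navigation (center) line.
    a, b = np.polyfit(ys, xs, deg=1)
    return a, b
```

In practice the same skeleton accommodates other color indices (e.g., excess green) or clustering steps before the line fit, which is where the cited works differ.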
The rice guidance system studied by Ruangurai et al. [20] exploits the trajectory of the tractor wheels and combines principal component analysis with a Hough transform optimized by initial value estimation, keeping the row position error below 28.25 mm and enabling automated precision rice sowing. Zhou et al. [21] proposed feature extraction based on the a* component of the CIELAB color space for multi-period high-ridge broad-leaved crops, using adaptive clustering and Huber-loss linear regression to fit the centerline; the average image processing time is only 38.53 ms, which suits long-duration crops. Gai et al. [22] proposed a canopy navigation system based on a ToF depth camera that builds an occupancy grid map of the crop field from depth maps. The average absolute errors of lateral positioning in corn and sorghum fields are 5.0 cm and 4.2 cm, respectively, providing a solution for weak-GPS-signal scenarios.
The stereoscopic vision guidance method proposed by Yun et al. [23] introduces dynamic pitch and roll compensation and anti-ghosting algorithms to track ridges and furrows effectively; the lateral root mean square error (RMSE) is 2.5 cm on a flat plot and 6.2 cm on a field with a 20% slope, achieving practical operation guidance. For a jujube harvester, the tree line was fitted with the least squares method after Lab color segmentation and noise-reduction filtering; the average missed detection rate was 3.98%, the average detection rate was 95.07%, and the average processing speed was 41 fps, fully meeting the requirements of autonomous driving [24].

2.1.2. Deep Learning Methods

Because they balance accuracy and speed, YOLO-series object detection models are currently the main choice for real-time field detection. Li et al. [25] added DyFasterNet and a dynamic FM IoU loss to the D3-YOLOv10 framework, applying them at 11 points in the model, and achieved a tomato detection mAP0.5 of 91.18%. Compared with the original model, D3-YOLOv10 reduces the number of parameters by 54.0% and the computational complexity by 64.9%, and reaches a frame rate of 80.1 fps, meeting the real-time requirements of facility scenarios. The LBDC-YOLO lightweight model applies GSConv Slim-Neck and Triplet Attention designs to broccoli detection, achieving an average accuracy of 94.44%; compared with YOLOv8n, it reduces parameters by 47.8%, making it better suited to resource-constrained field harvesting [26]. Duan et al. [27] proposed a YOLOv8-GCI model based on RepGFPN backbone feature-map fusion and CA attention optimization; gradient descent combined with a population approximation algorithm was used for accurate detection of pepper phytophthora, improving accuracy by 0.9% and recall by 1.8% while exceeding 60 fps.
End-to-end detection and semantic segmentation also perform well for extracting the geometric features of crop rows. Yang et al. [28] proposed autonomous ROI extraction, using YOLO to predict the drivable area and narrow the detection range; cornrows are then extracted through excess-green (ExG) operator segmentation and FAST corner detection. Processing an image takes only 25 ms, the frame rate exceeds 40 fps, and the angle error is 1.88°. Quan et al. [29] used a Row-and-Column Anchor Points Selecting Classification (RCASC) algorithm with an end-to-end Convolutional Neural Network (CNN) for prediction; the F1-score reached 92.6% while the frame rate remained above 100 fps, with good results across various cornrows. Li et al.'s [30] E2CropDet end-to-end network models crop rows directly as independent objects, with a centerline lateral deviation of 5.945 pixels and a detection frame rate of 166 fps, requiring no additional post-processing. Luo et al. [31] combined CAD-UNet with CBAM and DCNv2, introduced into UNet to detect rice seedling rows; the detection accuracy reached 91.14% and the F1-score 89.52%, enabling assessment of rice transplanter operation quality.
Gomez et al. [32] showed that a YOLO-NAS method with whole-leaf labeling can achieve an mAP of 97.9% and a recall of 98.8%, and that the YOLOv7/8-based bean detection approach is superior. The ST-YOLOv8s network proposed by Diao et al. [33] uses a Swin Transformer as the backbone, improving cornrow detection accuracy by 4.03% to 11.92% over traditional methods at different growth stages and reducing the angle error to 0.7° to 3.78°. Liu et al. [34] proposed an improved YOLOv5 model that adds a small-object detection layer to the neck and optimizes the loss function; clustering the pineapple-row feature points and fitting the centerline achieves an accuracy of 89.13% on sunny days and 85.32% on cloudy days, with a row recognition accuracy of 89.29% and an angle error of 3.54°, sufficient for harvester navigation.
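As a hedged illustration of how such detectors feed row extraction, the sketch below assumes the Ultralytics YOLO Python API and a hypothetical plant-detection weight file ("crop_plants.pt" is a placeholder); it simply turns per-plant bounding boxes into a least-squares row centerline and is not the pipeline of any specific cited paper:

```python
import numpy as np
from ultralytics import YOLO  # assumes the ultralytics package is installed

def fit_row_from_detections(image_path, weights="crop_plants.pt"):
    """Sketch: detect individual plants with a YOLO model, then fit a
    crop-row centerline through the box centers by least squares.
    The weight file and confidence threshold are placeholders."""
    model = YOLO(weights)
    result = model.predict(image_path, conf=0.4, verbose=False)[0]
    boxes = result.boxes.xyxy.cpu().numpy()        # (N, 4) pixel coordinates
    if len(boxes) < 2:
        return None
    cx = (boxes[:, 0] + boxes[:, 2]) / 2.0
    cy = (boxes[:, 1] + boxes[:, 3]) / 2.0
    # Fit x = a*y + b so the line stays well defined for near-vertical rows.
    a, b = np.polyfit(cy, cx, deg=1)
    return a, b
```

When multiple rows appear in one image, the cited works typically cluster the box centers (e.g., by lateral position) before fitting one line per cluster.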
The lightweight VC-UNet model uses VGG16 as the backbone and introduces CBAM and channel pruning of the convolutional layers during upsampling. The ROI is determined by vertical projection and the centerline of the rapeseed row is fitted with the least squares method, yielding an intersection rate of 84.67% and a mean accuracy of 94.11%; segmentation is optimized for minimum mean error at a processing speed of 24.47 fps. After transfer learning, the recognition rate was 90.76% for soybeans and 91.57% for corn [35]. As shown in Figure 2, Zhou et al. [36] improved the YOLOv8 backbone by integrating the CBAM attention mechanism and used a BiFPN feature fusion module in the neck for multi-scale aggregation to improve the detection of field obstacles; the recognition accuracy for field obstacles reached 85.5%, with an average accuracy of 82.5% and a frame rate of 52 fps, fully meeting real-time obstacle-avoidance needs. To improve environmental adaptability and ease lightweight deployment, Shi et al. [37] developed the DCGA-YOLOv8 algorithm by fusing DCGA with YOLOv8, and target detection results on cabbage and rice crops both exceeded 95.9%. Liu et al. [38] proposed a crop root row detection method that first extracts the canopy ROI through semantic segmentation, then obtains canopy row detection lines through horizontal stripe division and midpoint clustering, and finally uses a root representation learning model to obtain root row detection lines corrected by an alignment equation. The method achieves a single-frame processing time of 30.49 ms and an accuracy of 97.1%, reducing the risk of crushing tall-stalked crops. Wang et al. [39] built the MV3-DeepLabV3 model on MobileNetV3_Large and obtained the best segmentation results across different wheat plant heights, maturity levels, and occlusion conditions, with an accuracy of 98.04% and an IoU of 95.02%.
Here we discuss the CNN dual-branch architecture proposed by Osco and Lucas [40], which simultaneously calculates crop quantity and detects planting rows, and optimizes detection and multi-stage refinement based on row information feedback. The detection accuracy of cornrows is 0.913, the recall rate is 0.941, and the F1-score is 0.925. Lv et al. [41] improved the YOLOv8 segmentation algorithm by using MobileNetV4 as the backbone and ShuffleNetV2 as the feature module. The average accuracy of the model was increased to 90.4%, the model size was reduced to 1.8 M, and the frame rate was 49.5 fps, achieving real-time recognition of crop-free ridge navigation routes and improving recognition speed and real-time performance.
In each specific field, models adapt strongly to the task. In the broccoli seedling replacement scenario, Zhang et al. [42] used Seedling-YOLO to complete the replacement work, adding the Efficient Layer Aggregation Network-Path (ELAN_P) module and a coordinate attention module to enhance hard-to-detect samples; they obtained AP values of 94.2% and 92.2% for the "exposed seedlings" and "missing holes" categories, respectively. Wang et al. [43] proposed the YOLOv8-TEA instance segmentation algorithm, an improvement on YOLOv8-seg that replaces some of the cross-stage partial with focus (C2f) modules with MobileViT Blocks (MVB), adds C2PSA and CoTAttention, and uses dynamic upsampling and dilated convolution. Experiments show the algorithm achieves an mAP (Box) of 86.9%, an mAP (Mask) of 68.8%, and a computational load of 52.7 Giga Floating-point Operations (GFLOPs), effectively recognizing tea buds and supporting intelligent tea picking.
For mature lotus seed detection, Ma et al. [44] introduced the CBAM module using modified YOLOv8-seg, achieving an mAP mask of 97.4% in detection with an average time of only 25.9 ms, and applied this method to path planning for lotus pond robots. In addition, in order to solve the problems of low navigation speed and poor accuracy of most Automated Guided Vehicles (AGVs) relying on reflectors, Wang et al. [45] borrowed the MTNet network model and combined feature offset aggregation with multi-task collaboration to improve driving speed and positioning accuracy through feature offset aggregation. Compared to the original driving speed of You Only Look for Panoptic Driving Perception (YOLOP), this result has more advantages.

2.1.3. Summary of Detection Methods Based on Visual Sensors

Crop row detection based on visual sensors follows two main technical routes: traditional vision and deep learning. Traditional vision relies on manually extracted features or geometric methods; in structured agricultural scenes it achieves high accuracy and stability with relatively low computational complexity, enabling real-time detection. Processing the target with methods such as the HSV color model and OTSU thresholding allows the travel path to be fitted quickly, which suits simple crop row detection. However, this route adapts poorly to unstructured environments and is prone to failure with dense weeds or large changes in light intensity. Deep learning methods, represented by the YOLO series, SSD, and Faster R-CNN, learn features automatically to improve detection accuracy, and multi-scale information fusion and attention mechanisms further improve model performance. For resource-constrained scenes, lightweight designs improve detection efficiency while preserving acceptable accuracy across different crop types. These methods, however, rely on large amounts of annotated data, require certain hardware conditions for deployment, and in harsh environments need continual debugging and retraining to meet the corresponding requirements. Visual sensor methods are cheap and data-rich, but a single visual modality cannot capture the three-dimensional shape of crop rows and is unstable in complex agricultural environments; combining it with other modalities such as LiDAR helps compensate for these shortcomings.

2.2. LiDAR-Based Detection Methods

Compared to visual sensors, LiDAR has great advantages in utilizing the precise positioning and depth information contained in three-dimensional point cloud data. It is more effective than visual sensors in poor lighting environments or severe tree canopy occlusion, and is also a supplementary technology for crop row detection. Below are some common tools and techniques used for LiDAR-based detection models.

2.2.1. Point Cloud Preprocessing Technology

Point cloud preprocessing is a core step in crop row detection based on LiDAR. Point cloud preprocessing, represented by ground filtering, denoising, and downsampling, can remove redundant point cloud data, preserve important crop information, and prepare for the next step of line structure extraction. Karim et al. [46] measured the R2 of wheat plants after airborne LiDAR preprocessing to be 0.97, and the relative error of soybean canopy estimation was only 5.14%. The accurate recognition rate of plants can reach 100%.
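A minimal preprocessing sketch is shown below, assuming the Open3D library; the voxel size, outlier parameters, and plane threshold are illustrative and would need tuning for a real field dataset:

```python
import open3d as o3d  # assumes Open3D is available

def preprocess_field_cloud(path, voxel=0.05):
    """Sketch of typical LiDAR preprocessing: downsampling, statistical
    denoising, and RANSAC ground removal. Parameter values are illustrative."""
    pcd = o3d.io.read_point_cloud(path)
    pcd = pcd.voxel_down_sample(voxel_size=voxel)            # reduce redundancy
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    # Ground filtering: fit the dominant plane and keep the non-ground points.
    _, ground_idx = pcd.segment_plane(distance_threshold=0.03,
                                      ransac_n=3, num_iterations=1000)
    crops = pcd.select_by_index(ground_idx, invert=True)
    return crops
```

The output cloud, nominally containing only crop returns, is what the row structure extraction step in Section 2.2.2 operates on.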
The 2D LiDAR tree crown sensing system developed by Baltazar et al. [47] detects tree crown boundaries after preprocessing for precise spraying, reducing over-spraying by 28% and saving 78% of water, as shown in Figure 3. Li et al. [48] used LiDAR filtering and downsampling to recognize obstacles between cornrows, achieving a downsampling rate of 17.91% within the YOLOv5 framework; after supplementing visual information, navigation accuracy improved significantly. Yan et al. optimized the flight parameters of Unmanned Aerial Vehicle (UAV) LiDAR and proposed a downsampling scheme based on flight path and a phenotype index p, suggesting that the phenotype estimation accuracy after downsampling a single path should satisfy p > 0.75, with efficiency improved by about 42% to 44% [49]. Bhattarai et al. [50] set an optimal flight height of 24.4 m and 20 cm grid sampling to balance accuracy and efficiency for cotton UAV LiDAR, providing a data processing standard.

2.2.2. Row Structure Extraction Methods

Using fitting algorithms and cluster analysis, crop row characteristics are extracted from the preprocessed point cloud to separate crops from the background. Since this step determines the exported navigation lines and their positioning accuracy, the algorithm should be chosen according to the actual conditions. For dense wheat point clouds, Zou et al. [51] preprocessed the data with an octree-segmentation voxel-grid merging method and extracted wheat ear point clouds with a clustering algorithm to construct a linear regression model; the model's R2 is 0.97, and against actual field measurements the R2 is 0.93. Liu et al. [52] combined 3D LiDAR observations with EKF localization and corrected trajectory errors through crown center detection, finding that the average lateral and vertical positioning offset in the orchard was less than 0.1 m and the heading offset less than 6.75°. Nehme et al. [53] showed that their grape row structure tracking method requires no prior measurements, fits row features with the Hough transform and Random Sample Consensus (RANSAC), and was verified with RTK-GNSS to suit various vineyard layouts.
Ban et al. [54] combined monocular cameras with 3D LiDAR and used adaptive radius filtering and DBSCAN clustering methods to determine the accuracy of navigation lines in extracting stem point clouds, achieving a rate of 93.67%. The average absolute error of the obtained heading angle was 1.53°. Luo et al. [55] showed that the cotton variable spray system based on the UAV LiDAR can use the alpha shape algorithm to calculate the canopy volume (R2 = 0.89), which can adjust the flow in real time and save 43.37% of the dosage in the field test. Nazeri et al.’s [56] PointCleanNet deep learning method achieved an improvement in leaf area index (LAI) prediction value R2 by removing outliers, while reducing RMSE. Cruz et al. [57] found that the fertilization positioning error can be reduced from 13 mm to 11 mm by comparing the general point cloud map (G-PC) with the local point cloud map (L-PC), providing robust positioning support for precision fertilization.
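The following sketch illustrates the general clustering-plus-fitting idea discussed above, assuming scikit-learn and ground-projected crop points; DBSCAN groups points into candidate rows and RANSAC fits a line to each cluster, with parameters chosen only for illustration:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.linear_model import RANSACRegressor

def extract_row_lines(xy_points, eps=0.3, min_samples=10):
    """Sketch of row structure extraction: DBSCAN groups ground-projected
    crop points into candidate rows, RANSAC fits a line to each cluster.
    eps/min_samples depend on planting density and are illustrative."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(xy_points)
    lines = []
    for lab in set(labels) - {-1}:                  # -1 marks noise points
        cluster = xy_points[labels == lab]
        # Fit x = a*y + b; RANSAC suppresses weeds and stray returns.
        ransac = RANSACRegressor().fit(cluster[:, 1].reshape(-1, 1),
                                       cluster[:, 0])
        lines.append((float(ransac.estimator_.coef_[0]),
                      float(ransac.estimator_.intercept_)))
    return lines
```

The cited works replace either stage with scene-specific variants (adaptive radius filtering, Hough voting, crown-center tracking) while keeping this overall structure.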

2.2.3. Three-Dimensional Feature Calculation

Three-dimensional (3D) feature calculation is a quantitative reflection of crop growth and guidance. Nazeri et al. [58] used LiDAR point cloud statistical features to establish a regression model: the predicted values R2 of sorghum LAI ranged from 0.4 to 0.6, and the height difference was used to infer the growth status. Lin et al. [59] proposed a multi-temporal UAV LiDAR framework for automatic recognition of row position and orientation angle; the net vertical and planimetric discrepancies between multi-temporal point clouds are ±3 cm and ±8 cm, adapted to different planting orientations and crop types.
Karim et al. [60] used machine learning models to quantify the morphological features of apple orchards; the mean absolute error (MAE) of tree height was 0.08 m, the MAE of canopy volume was 0.57 m3, the RMSE of row spacing was 0.07 m, and R2 was 0.94. Escolà et al. [61] compared multi-temporal laser scanning (MTLS) LiDAR with Unmanned Aerial Vehicle Digital Aerial Photography (UAV-DAP) and found that combining both sources of canopy stereo data yields more detailed canopy information and provides complementary coverage when large areas are occluded. The high degree of spatial redundancy also makes the approach suitable for large-scale monitoring, with overall canopy volumes of 19–25 m3 observed.
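A simple illustration of such 3D feature calculation is sketched below, assuming SciPy and a preprocessed single-row or single-tree point cloud with the ground at z = 0; the percentile- and hull-based estimates are coarse stand-ins for the published methods:

```python
import numpy as np
from scipy.spatial import ConvexHull

def canopy_features(points_xyz):
    """Sketch of simple 3D feature calculation for one crop row or tree:
    canopy height from a height percentile and volume from a convex hull.
    Ground is assumed to be at z = 0 after preprocessing."""
    z = points_xyz[:, 2]
    height = float(np.percentile(z, 99))            # robust top-of-canopy height
    volume = float(ConvexHull(points_xyz).volume)   # coarse canopy volume in m^3
    return {"height_m": height, "volume_m3": volume}
```

Published work typically refines the volume estimate with alpha shapes or voxel counting, which handle concave canopies better than a convex hull.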

2.2.4. Summary of LiDAR-Based Detection Methods

The core of LiDAR-based crop row detection is the accurate positioning and depth perception provided by 3D point cloud data, which gives it strong advantages in low light or under severe canopy occlusion. The technical pipeline starts with point cloud preprocessing, which uses filtering, denoising, sampling, and similar operations to remove noise and redundant data, and applies optimizations such as dynamic clustering to preserve the key points, laying the foundation for subsequent processing and meeting the real-time requirements of assisted driving for agricultural machinery. Row structure extraction then applies fitting and clustering algorithms to the preprocessed point cloud to segment crops from the background and, together with positioning information, generates low-offset navigation lines close to the actual path. Three-dimensional feature calculation quantifies crop growth parameters and can recover relatively complete canopy information even under large-scale occlusion. Nevertheless, LiDAR-based methods have limitations: point clouds can be sparse, equipment costs are high and hinder adoption, objects may be fragmented in complex scenes, and extraction quality degrades where ground objects occlude one another or point clouds overlap.

2.3. Multi-Sensor Fusion Method

As outlined above, a single sensor is not robust to environmental influences; used alone, it is prone to low recognition rates and inaccurate positioning. Multi-sensor fusion, by contrast, combines the strengths of multiple sensors so that they complement one another and achieve a high detection rate.

2.3.1. Visual–LiDAR Fusion

Vision and LiDAR are combined to achieve texture and geometric feature co-extraction. Xie et al. [62] pointed out that multisensor fusion and edge computing will become the future development trend, which can achieve centimeter-level positioning. By using drones for obstacle detection and parameter extraction, incorporating K-means parameter extraction and Markov chain dynamic programming, the collaborative operation of agricultural machinery has improved compared to traditional agricultural machinery collaborative operation methods [63]. The system design based on 2D LiDAR-AHRS-RTK-GNSS adopts a variable threshold segmentation method to obtain experimental data of fragmented dryland and paddy fields, with MAE of 59.41 mm and 69.36 mm, respectively, and an angle error of 0.6489° [64].
Shi et al. [65] proposed a visual and LiDAR fusion navigation line extraction method based on the YOLOv8 ResCBAM tree trunk detection model in a pomegranate orchard environment. Field experiments have shown that the average lateral error is 5.2 cm, the RMS is 6.6 cm, and the navigation success rate is 95.4%, demonstrating the effectiveness of this approach. Song et al. [66] used RGB-D cameras and LiDAR registration to obtain point clouds with R2 values of 0.9–0.96 for corn plant height and 0.8–0.86 for LAI during the seedling stage. Li et al. [67] reduced the seeding depth error by 10.7% to 22.9% by combining laser ranging with array LiDAR. Guan et al. [68] combined the RGB and depth features of RGB images and depth images using the YOLO-GS algorithm, and achieved a water pump recognition accuracy of 95.7% with a positioning error of less than 5.01 mm. Hu et al. [69] improved the method of apple detection by combining YOLOX and RGB-D images, and obtained an F1-score of 93% and a 3D positioning error of less than 7 mm for apple detection.
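The geometric core shared by these visual–LiDAR pipelines is projecting LiDAR points into the image plane so that texture and depth can be associated; the sketch below assumes known intrinsic (K) and extrinsic (R, t) calibration and is purely illustrative:

```python
import numpy as np

def project_lidar_to_image(points_xyz, K, R, t):
    """Sketch of the geometric core of visual-LiDAR fusion: transform LiDAR
    points into the camera frame with extrinsics (R, t), then project with
    the intrinsic matrix K. Calibration values must come from the real rig."""
    cam = R @ points_xyz.T + t.reshape(3, 1)        # 3 x N points in camera frame
    in_front = cam[2] > 0.1                         # keep points ahead of the camera
    cam = cam[:, in_front]
    uv = K @ cam
    uv = uv[:2] / uv[2]                             # perspective division -> pixels
    return uv.T, cam[2]                             # pixel coordinates and depths
```

Once detections (e.g., trunks or crop masks) and projected points share the image frame, each detection can be assigned a depth and placed in the robot's navigation frame.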

2.3.2. Visual–GNSS/IMU Fusion

This fusion method combines the local accuracy of vision with the stability of GNSS/IMU, addressing the limitations of single-sensor localization. Wu et al. [70] proposed a visual–unmanned aerial vehicle GNSS surveying scheme, and the RMS accuracy of navigation for curved rice seedlings reached 5.9 cm. The three schemes proposed by Mwitta et al. [71] for cotton fields integrate GPS and vision, with an average lateral error of 9.5 cm for local global planning. The fusion strategy designed by Li et al. [72] for phenotype robots has a lateral error of 0.8 cm and a crop positioning error of 6.4 cm in continuous scenes without GNSS. Li et al. [73] combined RTK-GNSS and odometer to solve the problems of leakage at the breakpoint of agricultural machinery spray operation and large flow control error.
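A minimal sketch of this kind of fusion is given below: a two-state Kalman filter on a lateral-error model in which IMU/odometry drive the prediction and vision- or GNSS-derived lateral offsets provide the updates. The model and noise values are illustrative assumptions, not the formulation of any cited work:

```python
import numpy as np

class LateralErrorKF:
    """Minimal sketch of visual-GNSS/IMU fusion on a lateral-error model.
    State x = [lateral offset (m), heading error (rad)]; IMU/odometry drive
    the prediction, vision and GNSS both observe the lateral offset.
    Noise values are illustrative placeholders."""

    def __init__(self):
        self.x = np.zeros(2)
        self.P = np.eye(2)
        self.Q = np.diag([0.01, 0.005])    # process noise (assumed)
        self.H = np.array([[1.0, 0.0]])    # both sensors measure the offset

    def predict(self, speed, yaw_rate, dt):
        # Linearized kinematics: the offset grows with heading error, and the
        # heading error integrates the gyro yaw rate relative to the path.
        F = np.array([[1.0, speed * dt],
                      [0.0, 1.0]])
        self.x = F @ self.x + np.array([0.0, yaw_rate * dt])
        self.P = F @ self.P @ F.T + self.Q

    def update_offset(self, measured_offset, meas_var):
        # Shared update for a vision- or GNSS-derived lateral offset, with the
        # measurement variance reflecting which sensor produced it.
        S = self.H @ self.P @ self.H.T + meas_var
        K = self.P @ self.H.T / S
        self.x = self.x + (K * (measured_offset - self.H @ self.x)).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
```

In use, vision updates (small variance, drift-free relative to the crop row) and GNSS updates (absolute but noisier or intermittent) simply call the same update with different variances, which is the essence of the complementarity described above.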

2.3.3. Other Multi-Sensor Fusion Methods

Chen et al. [74] fused visual and infrared/tactile sensing to achieve high-precision F1-scores of 91.80% and 91.82% for visual and tactile recognition of rice lodging. The average accuracy of fused weed density detection in paddy fields is 91.7%, higher than the 64.17% achieved by visual methods alone [75]. Tactile sensing complementing visual acquisition balances the accuracy and reliability of rice recognition devices [76]. Gronewold et al. [77] proposed a hybrid tactile–visual detection and positioning system for navigation that can be used for obstacle avoidance with an accuracy of 97%, enabling autonomous driving over distances of more than 30 m.

2.3.4. Summary of Multi-Sensor Fusion Method

Multi-sensor fusion addresses the above issues by fully exploiting the complementary advantages of each sensor, greatly improving the robustness and accuracy of crop row detection and largely overcoming the tendency of a single sensor toward distortion, low recognition rates, and large positioning deviation in harsh environments. The combination of vision and LiDAR is the most common: the two modalities complement each other, extract texture and geometric features, achieve centimeter-level positioning, and measure crop plant height accurately, reducing operational errors. Integrating vision with GNSS/IMU captures the local accuracy of vision and the global stability of GNSS/IMU, keeping positioning error low even when GNSS coverage is interrupted. In specific scenes, fusing vision with infrared breaks through the limits of a single modality and improves the weed recognition and obstacle avoidance rates to some extent. These methods nonetheless have clear drawbacks: (1) acquiring multi-source data simultaneously and keeping it accurately synchronized is difficult; (2) the algorithms are comparatively complex; and (3) system cost is higher than for a single sensor.

2.4. Summary of Conventional Crop Row Detection Methods and Principles

This section has summarized the mainstream technical methods for crop row detection: vision-based methods, LiDAR-based methods, and multi-sensor fusion. Traditional vision-based methods rely on manual feature extraction such as threshold segmentation and the Hough transform, which suits structured backgrounds with broken seedling rows or lighting changes, but their generalization to environmental complexity is poor. Deep-learning-based methods use models such as CNNs or YOLO to learn features automatically and achieve good accuracy in high-complexity scenarios such as dense weeds and curved rows, and lightweight designs can further balance efficiency and performance. LiDAR-based technology preprocesses point clouds, extracts row structure, and computes 3D features to obtain accurate geometric information, providing a precise geometric reference where vision fails, such as under canopy occlusion or weak GNSS signals, and is an important supplement. Multi-sensor fusion exploits the texture–geometry complementarity of vision and LiDAR and the long/short-term localization complementarity of vision and GNSS/IMU to improve robustness. Table 1 compares the advantages and disadvantages of these methods by principle type, application scenario, and robustness characteristics so that an appropriate method can be chosen for a given practical application.
The mainstream technologies for crop row detection fall into three categories: visual sensors, LiDAR, and multi-sensor fusion. Traditional visual technology extracts manually designed features and localizes with geometric algorithms; it runs quickly, achieves high accuracy in structured farmland with very short latency, and can keep localization error within a few centimeters, but it is not robust in unstructured fields with dense weeds and large lighting changes. Deep learning methods such as the YOLO series achieve good accuracy by introducing attention mechanisms and multi-scale fusion; most detection results exceed 90%, a few models reach 166 fps, and lightweight designs adapt well to low-compute environments, but they require large-scale annotated data. LiDAR technology based on three-dimensional point clouds offers accurate ranging, good stability, and low sensitivity to weak light; it can quantify crop growth parameters with a positioning error below 0.5 mm, but it suffers from sparse point clouds, high cost, and reduced extraction accuracy in complex scenes. Multi-sensor fusion has become a research focus: fusing vision with LiDAR combines texture and geometric information for centimeter-level positioning, while fusing vision with GNSS/IMU delivers both local accuracy and global stability and keeps the error low even after GNSS loss of lock. Fusion, however, must address data synchronization, algorithmic complexity, and system cost. Different technologies suit different scenarios, and fusion can compensate for the deficiencies of any single sensor.

3. Performance Evaluation Indicators for Crop Row Detection Methods

3.1. Accuracy Indicators

Accuracy is the core indicator for measuring the practicality of crop row detection methods, directly reflecting the reliability of navigation and operations. This section summarizes the performance of mainstream methods from two dimensions: detection accuracy and positioning error, providing a quantitative basis for technical optimization and scene adaptation.

3.1.1. Detection Accuracy

Accuracy, F1-score, and recall are commonly used evaluation indicators that reflect how well crop row features are identified. Sun et al. [101] used an adaptive disturbance observer sliding mode control method to address unknown disturbances affecting tractor path tracking, reducing the average lateral error by 20–31.7% and the heading error by 5–21%. Cui et al. [102] added a path search function to the traditional Stanley model; the average lateral error of straight-line tracking went from 5.2 cm to 6.0 cm at a vehicle speed of 1 m/s, 10.3% higher than with the traditional model, while the maximum error across the entire scene was reduced from 34 cm to 27 cm.
Afzaal et al. [103] proposed a crop row detection framework integrating deep learning and deep modeling ideas, achieving an F1-score of 0.8012, accuracy of 0.8512, and recall of 0.7584 on the validation set, and achieving a good balance between accuracy and recall. Gong et al. [104] improved the YOLOX Tiny network with an attention mechanism, achieving an average accuracy of 92.2%, which is 16.4% higher than the original YOLOX Tiny, and an angle error of 0.59°.
Li et al. [105] used a row-column attention network, achieving 95.75% accuracy on the tea dataset and vegetable dataset, which is about 8.5% higher than state-of-the-art (SOTA). The average absolute error of the lateral distance was 8.8 pixels, and only 23.81 ms was used on an image with a size of 1920 × 1080. Wei et al. [106] used the Crop BiSeNet V2 algorithm to obtain an accuracy of 0.9811, a detection speed of 65.54 ms/frame, and strong generalization ability on the corn seedling dataset. The detection speed of YOLOv11 for weed detection in the field is about 34 fps, and the mAP can reach up to 97.5%, making it more suitable for field deployment [107].
Diao et al. [108] proposed the ASPPF-YOLOv8s model, which obtained 90.2% mAP and 91% F1-score. The navigation line fitting accuracy was 94.35%, and the angular error was 0.63°. Zhu et al. [109] used the SN-YOLOXNano ECA model to achieve 97.86% classification accuracy, 98.52% recall, and >96% accuracy for foliar fertilizer spraying information. Liang et al. [110] used the DE YOLO model to detect rice impurities and obtained 97.55% mAP, which was 2.9% higher than YOLOX and had 48.89% fewer parameters than YOLOX. The improved YOLOv8n by Jiang et al. [111] can achieve an accuracy rate of 94.7% for strawberry detection, and can achieve a correct grouping rate greater than 94%. Memon et al. [112] established an inter-row weed and cotton field recognition system, with 89.4% and 85.0% recognition rates, respectively.
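For reference, the detection-accuracy indicators reported throughout this subsection are computed as in the following short sketch:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1-score from true positives (tp),
    false positives (fp), and false negatives (fn)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```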

3.1.2. Positioning Error

The magnitude of the positioning error affects the accuracy of crop row geometric parameter estimation and thus the effectiveness of navigation path planning. Zhang et al. [113] proposed an adaptive system for unmanned tracked harvesters in paddy fields, achieving a steady-state tracking deviation of 0.09 m on concrete pavement, a stable tracking deviation of 0.032 m in paddy fields, and a cutting platform utilization rate of 91.3%. The tractor–trailer dual-layer closed-loop control system proposed by Lu et al. [114], which accounts for parameter uncertainty, keeps the position error within 0.1 m and the heading error below 0.1°. Yang et al. [115] obtained canopy point clouds from 3D LiDAR with dual thresholding; their algorithm achieved a correct detection rate above 86% and an average processing time under 120 ms across four corn growth stages, with canopy localization accuracy unaffected by crop density. Kong et al. [116] improved the ENet network for identifying rice heading-stage navigation lines, with a segmentation IoU of 89.3% and an average deviation of less than 5 cm from the actual path.
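As an illustration of how such positioning errors can be computed from a fitted navigation line, the sketch below assumes a line of the form x = a·y + b in a vehicle-aligned frame; the coordinate conventions are illustrative, not those of any cited system:

```python
import numpy as np

def lateral_and_heading_error(a, b, x_ref, heading_ref):
    """Sketch of the positioning-error indicators: lateral offset of the
    vehicle reference point (x_ref, y=0) from a fitted navigation line
    x = a*y + b, and heading error against the line direction (rad in, deg out)."""
    lateral_error = abs(x_ref - b) / np.sqrt(1.0 + a * a)   # point-to-line distance at y = 0
    line_heading = np.arctan(a)                             # line angle w.r.t. the y axis
    heading_error = np.degrees(line_heading - heading_ref)
    return lateral_error, heading_error
```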

3.1.3. Summary of Accuracy Indicators

The evaluation of crop row detection methods mainly focuses on accuracy, which is related to the reliability of navigation operations, that is, the quality of detection accuracy and positioning error in testing. Detection performance is generally measured by F1-score or recall rate. Different optimization models have excellent performance for different application scenarios, such as optimizing the YOLO series model for large-scale and multi-category crop row detection. In some scenarios, it can achieve correct recognition of crops or weeds. Most models have an accuracy rate of over 95% for specific datasets, and also have a certain detection speed, which can be deployed in the field.
The positioning error affects the accuracy of geometric parameter estimation of crop rows and the effectiveness of navigation path planning. By optimizing the control system and algorithms, the positioning error can be reduced. Unmanned tracked harvesters can achieve minimal tracking deviation in paddy fields, and some algorithms can maintain small angle errors throughout the crop growth period without being affected by crop density. Specifically, the accuracy index can obtain detection results from feature point recognition and geometric position information, while considering the differences in accuracy, speed, and robustness of detection methods. It can distinguish the advantages and disadvantages of different methods, guide subsequent improvement, application, and other work, and establish a more reasonable and comprehensive performance evaluation system based on this to meet actual production and usage needs.

3.2. Efficiency Indicators

Efficiency indicators measure how readily crop row detection methods can be deployed in practice and directly affect how quickly agricultural robots respond to crop rows and how well methods match the available hardware. In complex field environments, crop row detection must balance accuracy against speed while keeping computational resource consumption in check. This section summarizes the performance of representative methods from the perspectives of real-time performance and computational cost.

3.2.1. Real-Time Performance

Real-time evaluation focuses on detection speed and responsiveness to environmental change, with frames per second (fps) and single-frame processing time as the two key indicators. Zhang et al. [117] proposed a lightweight TS-YOLO model that replaces the YOLOv4 convolutions with MobileNetV3 and depth-wise separable convolutions and adds deformable convolution, a coordinate attention module, and other refinements, reducing the model size to 11.78 M (only 18.30% of YOLOv4). Compared with YOLOv4 it gains 11.68 fps and reaches an accuracy of 85.35% under a range of scene lighting conditions.
Luo et al. [118] studied crop harvest boundary detection methods based on stereo vision, achieving over 98% accuracy for rice, rapeseed, and corn and enabling automatic turning of the combine harvester during harvesting at a processing speed of 49 ms/frame. He et al. [119] proposed the YOLO-Broccoli-Seg model, which improves YOLOv8n-Seg by adding a triple attention module to strengthen feature fusion; the mAP50 (Mask), mAP95 (Mask), mAP50 (Bounding Box, Bbox), and mAP95 (Bbox) are 0.973, 0.683, 0.973, and 0.748, respectively, an 8.7% increase in accuracy, providing real-time recognition support for automatic broccoli harvesting. Lac et al. [120] proposed a crop stem detection and tracking algorithm validated in corn and legume experiments, reporting F1-scores of 94.74% and 93.82%, respectively; it runs in real time on embedded computers with power consumption not exceeding 30 W, supporting precise mechanical weeding in vegetable fields.
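Frame rate and single-frame latency can be measured with a simple timing loop such as the following sketch, applicable to any detection callable:

```python
import time

def measure_fps(detect_fn, frames):
    """Average single-frame processing time (ms) and frames per second
    for an arbitrary detection callable over a list of frames."""
    start = time.perf_counter()
    for frame in frames:
        detect_fn(frame)
    elapsed = time.perf_counter() - start
    ms_per_frame = 1000.0 * elapsed / len(frames)
    return ms_per_frame, 1000.0 / ms_per_frame
```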

3.2.2. Computational Cost

Guo et al. [121] proposed CornNet, a navigation line extraction algorithm based on YOLOv8 that replaces the original Conv module with Depthwise Convolution (DWConv) and the C2f module with PP-LCNet, reducing the number of parameters and GFLOPs to achieve a lightweight model. Yang et al. [122] added a small-object detection layer (P2) and three proportional head structures to an improved YOLOv8-obb model and tested it on 2640 × 1978 images captured at a height of 7 m; compared with the baseline, it improved the accuracy on the smallest objects by 0.1–2% and the F1-score by 0.5–3%, while K-means clustering and linear fitting were used to extract row information and reduce the cost of quantifying the planting layout. Lin et al. [123] used an enhanced multitask YOLO algorithm based on C2f and anchor-free modules to segment passable areas and detect weeds in pineapple fields; compared with previous methods, accuracy increased by 4.27% and mIoU by 2.5%, and the navigation line extraction error was 5.472 cm, addressing the inability of ground robots to perceive the field accurately.
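Parameter count, the most commonly reported cost indicator above, can be read directly from a model definition; the sketch below assumes PyTorch and reports parameters in millions plus an FP32 size estimate (FLOPs counting needs a separate profiler, e.g., fvcore or thop, and is omitted here):

```python
def model_cost(model):
    """Trainable parameter count (millions) and a rough FP32 model size (MB)
    for any torch.nn.Module; FLOPs require a dedicated profiler."""
    params = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return {"params_M": params / 1e6, "size_MB_fp32": params * 4 / 1e6}
```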

3.2.3. Summary of Efficiency Indicators

For crop row detection methods, efficiency indicators examine whether real-time response can be achieved on acceptable hardware, balancing accuracy and speed while accounting for computational resource consumption; the main concerns are real-time performance and computational cost. Real-time performance covers two indicators: frame rate and single-frame processing time. TS-YOLO improves speed through model slimming and adapts to all-weather operation, while Seedling-YOLO and YOLO-like models with U-Net backbones maintain high detection accuracy while meeting the frame-rate requirements of seedling monitoring. Some models enable automatic steering of autonomous harvesters and support precise seeding.
Computational cost quantifies hardware requirements through parameter counts and computation volume. Models can be optimized with depthwise separable convolutions and other techniques that reduce parameter and computational complexity, lightening the model load while compensating for accuracy loss; some algorithms improve small-object detection accuracy through structural changes while also reducing the cost of quantifying the planting layout. Efficiency indicators therefore measure technical feasibility through real-time response and resource consumption, and lightweight models and optimization algorithms are the most effective way to balance accuracy and efficiency, easing selection across application scenarios and hardware platforms.

3.3. Robustness Indicators

Robustness is the key point to measure the stable operation of crop row detection technology, and it is also the focus of research, directly determining the practical application value of the technology. The evaluation of robustness based on accuracy and efficiency mainly depends on whether the method can have a certain degree of fault tolerance to changes in lighting, the impact of weeds, and crop cutting.

3.3.1. Environmental Adaptability

Environmental adaptability refers to a method's resistance to common field disturbances such as lighting fluctuations and weed density. Patidar et al. [124] used an object detection and tracking approach based on the ByteTrack Simple Online and Realtime Tracker (BTSORT) and YOLOv7 to estimate weed density in chili fields. The YOLOv7 recognition model achieved an accuracy of 0.92, a recall of 0.94, and a frame rate of 47.39 fps; BTSORT combined with YOLOv7 reached a multi-target tracking accuracy of 0.85, an MOTP of 0.81, and an overall classification accuracy of 0.87. Processing 1280 × 720 images is 1.38 times faster than 1920 × 1080, enabling dynamic estimation of weed density in chili fields.
Song et al. [125] used semantic segmentation to integrate RGB-D and HHG wheat field navigation algorithms, achieving 95% mIoU for HHG, <0.1° average navigation line angle deviation and 2.0° standard deviation, and 30 mm average distance deviation. This method can avoid interference caused by crop occlusion within a certain range. Costa et al. [126] found that mixing image sets from different cameras and different field images can improve the robustness of CNN crop row detection environments with a small amount of data. Vrochidou et al. [127] noted that the visual system is affected by lighting, and multispectral imaging eliminates the adverse effects of noise. Stereoscopic vision has better stability and is more suitable for navigating crops such as cotton and corn in large fields. Zhao et al. [128] used a combination of ExGR index, OTSU, and DBSCAN methods, which have advantages over traditional algorithms for characterizing crop rows in weeds and shadows. The DBSCAN clustering method is more suitable for characterizing crop rows in complex environments. Pang et al. [129] proposed a cornrow detection system that combines geometric descriptors with MaxArea Mask Scoring RCNN, achieving an estimation accuracy of 95.8% for seedling emergence rate.

3.3.2. Scene Fault Tolerance

Scene fault tolerance refers to adaptability to abnormal crop growth and extreme weather conditions. Liang et al. [130] proposed a cotton ridge navigation line detection method based on edge detection and OTSU, using the least squares method to fit narrow-row-gap navigation lines; during the cotton seedling stage it achieved a visual detection positioning accuracy of 99.2% and a processing speed of 6.63 ms per frame. Its recognition accuracy for corn and soybeans reached 98.1% and 98.4%, with a mean lateral drift of 2 cm and a mean heading drift of 0.57°, and it is robust to missing seedlings and wheel ruts. Zhang et al. [131] designed a corn missed-sowing detection and compensation system using microcontrollers and fiber optic sensors; the missed-sowing detection accuracy is 96%, the replanting error is 4%, and the replanting rate is 90%. The sowing qualification rate reaches 90% when the tractor travels at 3–8 km/h, meeting the requirements for missed-sowing detection and replanting.

3.3.3. Summary of Robustness Indicators

Among the performance evaluation indicators for crop row detection methods, robustness gauges whether a technology can work stably and directly determines whether it has practical application value. Robustness is judged mainly from two perspectives, environmental adaptability and scene fault tolerance, which examine how well a method tolerates lighting changes, weed interference, and missing or occluded crops. For environmental adaptability, optimized algorithms and multimodal techniques can enhance anti-interference ability: a YOLOv7-based tracking model can dynamically estimate weed density; algorithms that fuse semantic segmentation with multi-source data reduce the interference caused by crop occlusion; and imaging technologies such as multispectral imaging and stereo vision can alleviate the impact of lighting noise. These methods outperform traditional algorithms under weed and shadow interference. In terms of scene fault tolerance, specialized detection methods have also shown good robustness to abnormal crop growth and extreme conditions: the ridge-planted cotton navigation-line detection method tolerates missing seedlings and wheel ruts, and the corn missed-sowing detection and compensation system accurately detects missed sowing and replants quickly, suiting multiple crop types and driving speeds. In short, the robustness indicator quantifies a technology's adaptability to the environment and scene and its tolerance of abnormal situations, so as to evaluate whether the technology can truly be applied in the field; it is one of the key references for moving from the laboratory to the field.

3.4. Practical Indicators

The practicality indicator concerns the real-world feasibility of crop row detection technology and directly relates to its potential for large-scale application. Compared with accuracy, efficiency, and robustness, practicality emphasizes the controllability of hardware costs and the convenience of deployment, which are key considerations when technology transitions from the laboratory to the field.

3.4.1. Hardware Cost

Hardware is one of the factors that determine whether a technology is economical, and low-cost solutions are easier to put into practice. Cox et al. [132] proposed Visual Teach and Generalize (VTAG), which uses a low-cost uncalibrated monocular camera and a wheel odometer to navigate crops automatically in a greenhouse environment. After teaching the target line over only 25 m, navigation over more than 3.5 km was achieved, and experiments showed that its generalization gain can reach 140, providing a low-cost option for environments where GNSS signals are obstructed. Calera et al. [133] proposed an under-canopy agricultural rover navigation system that combines low-cost hardware with multiple image-based methods to achieve seamless crop navigation across different field conditions and locations without human intervention. Torres et al. [134] compared woody crop canopy parameters estimated with mobile terrestrial laser scanning (MTLS) LiDAR against UAV-DAP, finding R2 values of 0.82–0.94 and providing reliable data for selecting orchard cultivation techniques. Hong et al. [135] improved the Adaptive Monte Carlo Localization-Normal Distributions Transform (AMCL-NDT) localization algorithm and applied it with 2D LiDAR to agricultural robots; palm plantation simulations showed that the robot's absolute pose error was reduced by more than 53%, demonstrating strong cost-effectiveness and the suitability of 2D LiDAR.

3.4.2. Deployment Difficulty

Deployment difficulty affects the feasibility of putting a technology into practice; simplifying labeling and calibration steps lowers the threshold for adoption. The robust crop row detection algorithm proposed by de Silva et al. [136], built on a sugar beet dataset covering 11 field variations, segments crop rows with deep learning and uses low-cost cameras to detect field changes, outperforming the baseline. The InstaCropNet dual-branch labeling method based on camera histogram correction proposed by Guo et al. [137] effectively suppresses leaf interference and achieves labeling by simulating the strip-shaped structure at the center of each row, with an average detection angle deviation of ≤2° and a detection accuracy of up to 96.5%. Jayathunga et al. [138] used unlabeled UAV-DAP point cloud information to detect coniferous tree seedlings, with an overall accuracy of 95.2% and an F1-score of 96.6%, eliminating a significant amount of labeling work. Rana et al. [139] established the GobhiSet cabbage dataset, which provides automatic annotation options, reduces manual annotation time, and can be further used for model improvement.

3.4.3. Summary of Practical Indicators

Among the performance evaluation indicators for crop row detection methods, practicality is defined from the perspective of technical feasibility and indicates whether a technology can be scaled up. Compared with accuracy, efficiency, and robustness, it places more emphasis on hardware cost control, simplicity, and feasible installation and layout, guiding the transition from the test bench to the field. In terms of hardware cost, there is a growing number of low-investment, easily deployed solutions: an uncalibrated monocular camera with wheel odometry can run long distances without GNSS signals and generalizes well to different environments, while combining 2D LiDAR with optimized localization algorithms meets accuracy and performance requirements at high cost-effectiveness, making it suitable for economically disadvantaged regions. Concerning deployment difficulty, simplifying annotation and calibration reduces the difficulty of using machine vision systems. Some algorithms simulate the central row structure or use unlabeled point cloud data, greatly reducing the manual annotation workload, and can achieve positioning without specially calibrated equipment, which is conducive to wider adoption. Overall, when evaluating technology selection from the perspectives of economy, ease of operation, and practicality, accuracy, efficiency, and robustness should also be taken into account; the four types of indicators together form a comprehensive quantitative framework that ensures improved and optimized crop row detection technology can be used in actual production.

3.5. Summary of Performance Evaluation Indicators

The performance evaluation indicators quantify crop row detection methods from four aspects: accuracy, efficiency, robustness, and practicality. The accuracy indicator represents the ability to identify features and estimate geometric parameters, expressed through detection accuracy and positioning error. The efficiency indicator measures the balance between speed and cost in terms of real-time performance and computational cost. The robustness indicator characterizes environmental adaptability and scene fault tolerance, reflecting stability under harsh conditions. The practicality indicator considers feasibility in terms of hardware cost and installation difficulty. The four types of indicators complement one another and provide quantitative support for technical optimization and application.
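For reference, the accuracy-related indicators cited throughout this section are conventionally computed as follows (standard definitions, not tied to any single cited study), where $\theta_i$ and $\hat{\theta}_i$ denote the reference and detected navigation-line angles; the same error forms apply to lateral offsets:

```latex
\begin{align}
\text{Precision} &= \frac{TP}{TP + FP}, \qquad
\text{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}} \\
\text{MAE} &= \frac{1}{n}\sum_{i=1}^{n}\left|\theta_i - \hat{\theta}_i\right|, \qquad
\text{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\theta_i - \hat{\theta}_i\right)^2}
\end{align}
```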
Table 2 summarizes typical indicator values for the various methods/models; based on their main differences in accuracy, efficiency, and robustness, the technical indices can be measured and compared, providing a more intuitive basis for selecting crop row detection methods.
This section establishes an evaluation system for crop row detection methods along four dimensions: accuracy, efficiency, robustness, and practicality. Detection accuracy is usually measured by accuracy, F1-score, and similar metrics; the vast majority of models exceed 95% accuracy on specific datasets, and the highest mAP of an optimized YOLO model reaches 97.55%. Positioning error can be lowered by optimizing control algorithms: the crop-density-independent steady-state tracking error of unmanned tracked harvesters in rice fields reaches 0.032 m, and the angular error of certain algorithms can be as low as 0.59°. In terms of efficiency, real-time performance is measured by frame rate and per-frame processing time; the frame rate of the lightweight TS-YOLO model already meets the needs of all-weather work, while models such as Seedling-YOLO improve both accuracy and speed. Computational cost can be reduced by techniques such as depth-wise separable convolution, which minimize the number of parameters and computations while compensating for the loss of model accuracy as far as possible. Robustness focuses on environmental adaptability and scene fault tolerance and can be improved by combining multi-source data or multimodal methods: adding RGB-D data to semantic segmentation mitigates occlusion interference, and some methods tolerate scenarios such as missing seedlings and wheel ruts. Practicality covers hardware cost and deployment difficulty: a low-cost uncalibrated monocular camera combined with a wheel speed sensor, together with a simplified labeling process, can deliver high-precision positioning on a small budget and greatly lower the barrier to long-distance navigation. Overall, lightweight design, low cost, and strong environmental adaptability are the main directions of future development and are important factors in moving these technologies from the laboratory to real-world scenarios.
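To make the parameter savings from depth-wise separable convolution concrete, the following minimal PyTorch sketch compares parameter counts for a standard 3 × 3 convolution and its depth-wise separable equivalent; the channel sizes are arbitrary examples rather than values from any cited model.

```python
import torch
import torch.nn as nn

c_in, c_out, k = 64, 128, 3

# Standard 3x3 convolution: every output channel mixes all input channels.
standard = nn.Conv2d(c_in, c_out, kernel_size=k, padding=1, bias=False)

# Depth-wise separable convolution: per-channel spatial filter + 1x1 channel mixing.
depthwise_separable = nn.Sequential(
    nn.Conv2d(c_in, c_in, kernel_size=k, padding=1, groups=c_in, bias=False),
    nn.Conv2d(c_in, c_out, kernel_size=1, bias=False),
)

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(standard))             # 128*64*3*3 = 73,728 parameters
print(count(depthwise_separable))  # 64*3*3 + 128*64 = 8,768 parameters (~8.4x fewer)

x = torch.randn(1, c_in, 32, 32)
assert standard(x).shape == depthwise_separable(x).shape  # identical output shape
```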

4. Comparison of Adaptability in Farmland Scenes

4.1. Comparison of Methods for Open-Air Scenarios

Open-air fields, as a major agricultural production scenario, are strongly affected by the external natural environment, and detection difficulty varies with weed density and crop growth stage. In simple scenarios, crop rows are regular and interference is minimal, so detection should pursue high efficiency; in complex scenarios, weeds are dense and crop forms are varied, placing high demands on anti-interference performance.

4.1.1. Simple Scenarios in Open-Air Fields

In simple scenarios, crop rows are regular and interference is minimal, so lightweight methods have the advantage. Sun et al. [150] used GD-YOLOv10n-seg with PCA fitting to detect soybean and corn seedling rows; by integrating GhostModule and DynamicConv, the network size was reduced by 18.3%, the fitted centerline accuracy was 95.08%, the angle offset was 1.75°, and the processing speed was 61.47 fps, meeting the navigation requirements of compound planting. Gong et al. [151] modified YOLOv5s into YOLOv5-M3 by replacing the backbone with MobileNetv3 and adding CBAM, achieving 92.2% mAP for corn seedlings at a recognition speed of 39 fps while effectively denoising images and improving resistance to interference.
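The PCA-based centerline fitting mentioned above can be sketched as follows: given the center points of detected seedlings in one row, the first principal component gives the row direction. This is a generic illustration under that assumption, not the implementation of Sun et al. [150].

```python
import numpy as np

def fit_row_line_pca(points):
    """Fit a row centerline through seedling center points via PCA.

    points: (N, 2) array of (x, y) pixel coordinates of seedling centers.
    Returns a point on the line and the unit direction vector.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # Principal direction = eigenvector of the covariance with the largest eigenvalue.
    cov = np.cov((pts - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    direction = eigvecs[:, np.argmax(eigvals)]
    if direction[0] < 0:                      # fix the arbitrary eigenvector sign
        direction = -direction
    return centroid, direction

# Example: roughly collinear seedling centers with small lateral noise.
rng = np.random.default_rng(0)
xs = np.linspace(0, 400, 20)
ys = 0.5 * xs + 30 + rng.normal(0, 3, xs.size)
centroid, direction = fit_row_line_pca(np.column_stack([xs, ys]))
angle_deg = np.degrees(np.arctan2(direction[1], direction[0]))
print(f"row heading ≈ {angle_deg:.2f}°")      # close to arctan(0.5) ≈ 26.57°
```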
Zheng et al. [152] used vertical projection and the Hough transform to extract corn row features and developed an automatic row-oriented pesticide application system; the algorithm took 42 ms on average, with an optimal accuracy of 93.3% and a field deviation of 4.36 cm, saving 11.4–20.4% of pesticide compared with traditional application. In the SN-CNN model proposed by Zhang et al. [153], which incorporates the C2f_UIB module and SimAM attention, the parameter count is 2.37 M, mAP@0.5 is 94.6%, the RANSAC fitting RMSE is 5.7 pixels, and a full image can be processed on an embedded platform in 25 ms per frame, giving high real-time performance. Geng et al. [154] developed an automatic alignment system for corn harvesters that combines contact detection with adaptive fuzzy PID control; for stalk offsets within ±15 cm the alignment success rate exceeds 95.4%, significantly reducing ear loss. Hruska et al. [155] used a machine learning model based on near-infrared data for real-time weed detection in wide-row maize (Zea mays) fields; the customized model achieved a recognition accuracy of 94.5%, while the authors emphasized the practical limitations of the dataset.
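A minimal sketch of the vertical-projection-plus-Hough-transform idea used by Zheng et al. [152] is shown below; it assumes a binary vegetation mask with rows running roughly vertically, and the smoothing window, thresholds, and Hough parameters are illustrative choices only.

```python
import cv2
import numpy as np

def detect_rows_projection_hough(mask):
    """Locate row centers by vertical projection, then fit row lines with Hough.

    mask: binary (0/255) vegetation mask with crop rows running roughly top-to-bottom.
    """
    # 1) Vertical projection: the column-wise sum peaks at crop row centers.
    projection = (mask > 0).sum(axis=0).astype(float)
    smoothed = np.convolve(projection, np.ones(15) / 15, mode="same")
    peak_cols = np.where(smoothed > 0.5 * smoothed.max())[0]   # candidate row columns

    # 2) Hough transform on edges to recover the actual (possibly tilted) row lines.
    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=mask.shape[0] // 3, maxLineGap=20)
    return peak_cols, [] if lines is None else [l[0] for l in lines]
```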

4.1.2. Complex Scenarios in Open-Air Fields

Complex scenarios require strong anti-interference ability, and deep learning and multi-sensor fusion are currently among the most popular approaches. Yang et al. applied a logarithmic transformation to enhance corn ear features for convolutional neural network detection; by using a micro ROI to extract corn ear feature points, accuracy was improved by 5% and real-time performance by 62.3% [156]. Guo et al. [157] formulated crop row detection as Transformer-based curve approximation, outputting shape parameters end-to-end, which reduces processing steps and improves generality in complex environments. To address the low detection accuracy of tea buds in open-air tea gardens, Yu et al. proposed Tea CycleGAN and a data augmentation method, which raised the mAP of YOLOv7 detection to 83.54% and alleviated the influence of lighting and other factors on measurement accuracy [158].
Deng et al. [159] proposed the HAD-YOLO model based on greenhouse images of three weed species common in the field. The mean average precision (mAP) of weed detection was 94.2% on the greenhouse-collected dataset; using those weights for pre-training, detection on the field-collected dataset reached 96.2% mAP at 30.6 fps. HAD-YOLO can fully meet the requirements of precise weed identification in crop growth environments and provides a reference for automated weed control. Zuo et al. [160] introduced the LBDC-YOLO model based on improved YOLOv8, adopting GSConv's Slim-neck design and incorporating a triple attention module; in open-air testing, the average detection accuracy for broccoli reached 94.44%, adapting to complex field disturbances. A corn inter-row weeding robot using YOLOv5 and a seedling extraction method achieved a weed removal rate of 79.8% and a seedling damage rate of 7.3% while completing integrated operations [161]. An enhanced YOLOv5-based intelligent weeding machine achieved an average weeding rate of 96.87% and a seedling damage rate of 1.19%, improving adaptability to complex scenarios [162].

4.1.3. Challenges and Responses in Open-Air Scenarios

In open-air scenarios, extreme weather such as rainstorms and strong winds severely affects the sensors. Strong wind causes crops to lodge or sway, which increases the proportion of LiDAR returns scattered from the ground and makes the crop point cloud sparser, seriously degrading subsequent point cloud preprocessing and row-structure extraction. According to current research, some methods have adopted technical improvements to enhance anti-interference, such as waterproof LiDAR to limit rain damage and ToF depth-camera-based under-canopy navigation systems; under complex lighting conditions the latter achieved average horizontal and vertical position errors of 5.0 cm and 4.2 cm in corn and sorghum fields, demonstrating a certain resistance to harsh environments. In traditional vision, the HSV color model combined with OTSU thresholding is used for image segmentation; among deep learning methods, YOLO-series models that integrate attention mechanisms, such as the ST-YOLOv8s network, have improved corn row detection accuracy by 4.03–11.92% and reduced the angle error by 0.7–3.78°. All of the above alleviate, to some extent, the impact of weather interference on feature extraction, but so far few dedicated methods address sensor hardware damage and severely sparse point clouds caused by rainstorms and strong winds; most effort goes into algorithmic anti-interference. Outdoor crop row detection therefore requires both stronger sensor hardware protection and more robust algorithms: good performance can be ensured under ordinary interference, but long-term stable and reliable operation in harsh weather remains an open problem.

4.2. Comparison of Methods for Facility Agriculture Scenarios

Facility agriculture provides a relatively stable foundation for crop row detection thanks to controllable conditions such as light and temperature. However, heavy occlusion and vine entanglement during crop production place special requirements on sensing equipment.

4.2.1. Simple Scenarios in Facility Agriculture

When crop growth conditions are relatively simple, the main requirements for detection technology are higher accuracy and speed. Wang et al. [163] built an agricultural big data platform based on a three-dimensional "data-process-organization" collaborative framework that integrates multi-source sensor data. Applied to greenhouse vegetable crops to link cultivation, planting, management, harvesting, and other stages, it increased agricultural machinery efficiency by 30%, raised land utilization by 15%, and reduced costs by 20%.
Ulloa et al. [164] proposed a CNN-based vegetable detection and characterization algorithm, trained on data collected in an experimental field and integrated with data processing and driving operations on ROS. Field experiments showed a vegetable detection accuracy of 90.5% and a characterization error of ±3%, and the use of low-cost RGB cameras enables precise fertilization at the level of individual vegetable plants. Yan et al. [165] used the YOLOv5x model to identify tomato plug-tray seedlings, with an average detection accuracy of 92.84% and an average per-tray detection time of 13.475 s; at missed-sowing rates of 5–20%, the replanting success rate reached 91.7% and sowing productivity reached 42.4 trays/h, improving sowing accuracy and production efficiency. Wang et al. [166] fused a CNN and an LSTM into an integrated model for predicting cucumber downy mildew by introducing disease-related information and greenhouse indoor and outdoor environmental data; the model achieved an MAE of 0.069, an R2 of 0.9127, and an average error of 6.6478%, providing technical support for early warning of this airborne disease.
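As an illustration of the kind of CNN-LSTM fusion described for Wang et al. [166], the following PyTorch sketch encodes an image with a small CNN, encodes a sequence of environmental readings with an LSTM, and regresses a disease-severity score from the concatenated features; the architecture, layer sizes, and input dimensions are assumptions for illustration, not the authors' model.

```python
import torch
import torch.nn as nn

class CnnLstmFusion(nn.Module):
    """Toy fusion model: CNN encodes a canopy image, an LSTM encodes a sequence of
    greenhouse environment readings, and a shared head predicts a severity score."""

    def __init__(self, env_features=6, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),            # -> (B, 32)
        )
        self.lstm = nn.LSTM(env_features, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(32 + hidden, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, image, env_sequence):
        img_feat = self.cnn(image)                            # (B, 32)
        _, (h_n, _) = self.lstm(env_sequence)                 # h_n: (1, B, hidden)
        fused = torch.cat([img_feat, h_n[-1]], dim=1)
        return self.head(fused).squeeze(-1)                   # (B,) severity score

model = CnnLstmFusion()
score = model(torch.randn(2, 3, 64, 64), torch.randn(2, 24, 6))  # 24 hourly readings
print(score.shape)  # torch.Size([2])
```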

4.2.2. Complex Scenarios in Facility Agriculture

Due to factors such as extensive branch and leaf coverage and vine entanglement, detection algorithms for these complex scenarios must have stronger anti-interference ability and robustness. Wang et al. [167] proposed a visual navigation method that integrates a vegetation index with ridge segmentation: the vegetation index and ridge semantic segmentation are first used to obtain a plant segmentation map, and an improved PROSAC algorithm with distance filtering is then used to fit the navigation line. Experiments showed that this method outperforms traditional methods in resisting weed and missing-row interference; it runs at 10 fps, meets real-time navigation requirements, and can be extended to vegetable crops such as broccoli.
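PROSAC is a prioritized variant of RANSAC, so a plain RANSAC line fit over plant centroids conveys the core idea of the navigation-line fitting step; the sketch below is a simplified stand-in (with arbitrary iteration count and inlier tolerance), not the improved PROSAC implementation of Wang et al. [167].

```python
import numpy as np

def ransac_navigation_line(points, n_iters=200, inlier_tol=8.0, seed=None):
    """Robustly fit a navigation line to 2D plant points, ignoring weed outliers.

    points: (N, 2) array of (x, y) plant centroids from the segmentation map.
    Returns (point_on_line, unit_direction, inlier_mask).
    """
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        d = pts[j] - pts[i]
        norm = np.linalg.norm(d)
        if norm < 1e-6:
            continue
        d = d / norm
        # Perpendicular distance from every point to the candidate line.
        normal = np.array([-d[1], d[0]])
        dist = np.abs((pts - pts[i]) @ normal)
        inliers = dist < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    inlier_pts = pts[best_inliers]
    centroid = inlier_pts.mean(axis=0)
    # Refine the direction with a least-squares (SVD) fit on the inliers only.
    _, _, vt = np.linalg.svd(inlier_pts - centroid)
    return centroid, vt[0], best_inliers
```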
The ridge-cultivated strawberry dual-arm collaborative harvesting robot developed by Yu et al. [168] integrates a lightweight Mask R-CNN vision system with a CAN-bus control system. Greenhouse experiments showed a harvesting success rate of 49.30% after flower and fruit thinning (30.23% in the untrimmed state), with a single-arm harvesting time of 7 s per fruit and a dual-arm collaborative time of 4 s per fruit, although performance still degrades when fruit is severely occluded. Jin et al. [169] used an Intel RealSense D415 depth camera to identify the edges of leafy vegetable seedlings and reduced transplanting losses through an L-shaped seedling path. Calibration accuracy was 98.4%, with average X/Y coordinate deviations of 5 mm and 4 mm; compared with the control group, the machine vision group's injury rate of 8.59% was 11.11% lower than that of the manual group, with higher efficiency.

4.2.3. Challenges and Responses in Facility Agriculture Scenarios

In greenhouses, high humidity causes camera lens fogging or condensation, degrading imaging quality: images become dark and blurred, contrast drops, and texture details are hard to extract, which affects the accuracy and efficiency of crop row detection. Previous studies have therefore optimized image quality, for example through image dehazing algorithms, to indirectly improve detection stability. The CNN-based vegetable detection and characterization algorithm, using inexpensive RGB cameras to collect greenhouse images, achieved 90.5% detection accuracy and ±3% characterization error after data training and image preprocessing optimization, indicating that dehazing and other preprocessing can effectively improve image quality. The YOLOv5x model for identifying tomato plug-tray seedlings reached 92.84% detection accuracy with an average per-tray detection time of 13.475 s after image enhancement and algorithm optimization, achieving good results even when high humidity degrades image quality. The integrated CNN-LSTM cucumber downy mildew prediction model, by fusing greenhouse environmental information with disease information, reduced the average prediction error to 6.6478% with a goodness of fit (R2) of 0.9127, indicating that combining multi-source information with multiple algorithms can partly offset the adverse effects of high humidity and keep the system running normally. However, there is still very little research on the physical problem of condensation on camera lenses in high-humidity greenhouses beyond adding targeted dehazing on top of general image optimization. Although dehazing algorithms have improved camera performance and detection stability in humid conditions to some extent, a combined software-hardware protection scheme has not yet been established, and further work is needed to ensure stable long-term operation under high humidity.

4.3. Comparison of Methods for Orchard Scenarios

Orchards exhibit various forms depending on crop type and growth stage, the main differences lying in inter-row regularity, canopy density, and terrain stability. Regular trellised orchards require accurate inter-row positioning and efficient harvesting operations, whereas complex orchards are prone to poor robustness and discontinuous positioning because of canopy occlusion and large terrain undulations.

4.3.1. Simple Scenarios in Orchard Scenarios

In regular trellised scenarios the row direction is fixed, and detection methods focus on precise positioning and rapid operation. One approach combined deep learning with depth information to detect tree trunks, generating a dataset from 2453 trees; it achieved trunk detection with 81.6% mAP, an inference time of 60 ms, and a positioning error of 9 mm at a distance of 2.8 m. A DWA obstacle avoidance algorithm was also integrated, allowing the robot to adjust its position and pass between rows automatically without re-planning the previous path after traversal, mitigating problems caused by terrain, lighting, and GPS availability [170].
The GNSS inter-row weeding system designed by Zhang et al. [171] combines obstacle detection with tool coordinate conversion; at a forward speed of 460 mm/s the weeding coverage rate reaches 94.62% with a plant damage rate of 1.94%, the best overall result. Devanna et al. [172] proposed an RGB+D deep fusion framework in which a Mask Attention Network (MANet) segments grape clusters and deep clustering separates overlapping fruits; in Italian vineyard experiments the average counting error was about 12% and the weight estimation error about 30%, enabling high-resolution yield maps. Xu et al. [173] used a YOLOv4-SE model to locate grape picking points on trellises, achieving a picking recognition rate of 97%, a positioning accuracy of 93.5%, a Euclidean distance error of only 7.69 mm, and a picking speed of 6.18 s per cluster. Salas et al. [174] adjusted the spray volume according to canopy volume and a density factor using ultrasonic sensing combined with GNSS, selecting a medium-beam sensor with good results; the electric valve operates at pressures of 4.0–14.0 MPa, improving pesticide-use accuracy and suiting orchards with regular rows.

4.3.2. Complex Scenarios in Orchard Settings

Mixed fruit orchards with large terrain undulations and mutual occlusion between trees require improved anti-interference ability and positioning stability. Nakaguchi et al. [175] established a stereo-vision spray navigation system operating at a low speed of 1–2 km/h; under these conditions the navigation RMSE was 0.20–0.31 m with EfficientDet and 0.40–0.48 m with YOLOv7, turn estimation errors were low, and the system is not subject to GNSS signal interference.
Khan et al. [176] proposed an improved YOLOv8 algorithm for optimizing pesticide spraying in orchard environments, reaching 93.3% mAP and reducing pesticide usage by 40%, suitable for orchard canopy segmentation. Zhang et al. [177] pruned the convolutional kernels of a Fast-Unet model and added an ASPP module; the navigation path MIoU for peach, orange, and kiwi orchards reaches 0.956–0.987, inference speed reaches 48.8 fps, and the yaw angle difference is ≤0.4°. Xie et al. [178] used YOLOv5s to identify tree trunks, fitted the boundary line with a quadratic curve, and combined it with improved LQR control; driving at a constant initial speed of 0.5 m/s, the average lateral error was 0.059 m and the steady-state error 0.0102 m. Xu et al. [179] saved hardware cost with a low-cost stereo camera and IMU and built a multi-task perception network, achieving a detection rate of 69 fps (mAP: 96.7%) while maintaining detection speed; the Taoyuan test results were essentially the same as manual driving. Yang et al. [180] combined neural networks with pixel scanning to extract path information, with a segmentation accuracy of 92–96% and an average distance error of only 5.03 cm. Jia et al. [181] developed an orchard weeder that reduces overall weight by 8% and achieves 84.6% weeding coverage at 0.5 m/s, usable under varying canopy occlusion and small plant spacing. Syed et al. [182] added a Ghost module and SE block to the YOLOv8n model, improving obstacle classification accuracy to 95% with a 2% false positive rate and good generalization under low light. Cao et al. [183] integrated LiDAR and an IMU into an improved LeGO-LOAM and used the Rapidly exploring Random Tree Star (RRT*) algorithm to plan paths in 0.357–0.565 s, optimizing the motion trajectory by reducing the cumulative error of the relocalization module. Scalisi et al. [184] implemented a commercial platform for flower cluster detection with an RMSE of approximately 5, which can reduce fruit yield errors by about 5%; visualizing the data as heatmaps supports production and management decisions. Mao et al. [185] used the ERWON method to segment single rows of apple trees from unmanned aerial vehicle point clouds, obtaining a fruit detection accuracy of 0.971 and a recall of 0.984. Krkljes et al. [186] carried out a modular design of blueberry orchard robots that saves 50–60% of herbicide usage and meets the needs of various growth stages throughout the year.
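The lateral and heading errors reported for these navigation and control studies are typically measured against the fitted row or boundary line; the following generic sketch shows one common way to compute them from a robot pose and a fitted line (plain 2D geometry, not any cited controller).

```python
import numpy as np

def tracking_errors(robot_xy, robot_heading, line_point, line_dir):
    """Lateral and heading error of a robot pose relative to a fitted row line.

    robot_xy: (x, y) position; robot_heading: heading angle in radians.
    line_point, line_dir: a point on the row line and its direction vector.
    """
    line_dir = np.asarray(line_dir, float) / np.linalg.norm(line_dir)
    normal = np.array([-line_dir[1], line_dir[0]])            # left-pointing normal
    lateral_error = float((np.asarray(robot_xy, float) - line_point) @ normal)
    heading_error = robot_heading - np.arctan2(line_dir[1], line_dir[0])
    heading_error = (heading_error + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
    return lateral_error, heading_error

# Example: robot 0.06 m left of the row line, heading 2 degrees off the row direction.
lat, head = tracking_errors((0.0, 0.06), np.radians(2.0), (0.0, 0.0), (1.0, 0.0))
print(f"lateral error = {lat:.3f} m, heading error = {np.degrees(head):.1f} deg")
```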

4.3.3. Challenges and Responses in Orchard Scenarios

In orchard scenarios, crop row detection faces different difficulties depending on crop type, irregular row spacing across growth stages, varying canopy density, and unstable terrain. In complex orchards, small grape clusters and intertwined vines, dense apple canopies, and high canopy coverage lead to incomplete data collection by visual sensors and LiDAR, while uneven terrain degrades sensor positioning accuracy and causes deviations in the driving path. At present these problems are addressed mainly through multi-technology fusion and algorithm optimization. The RGB+D deep fusion framework, which combines grape clusters segmented by a Mask Attention Network with deep clustering to separate overlapping fruits, was tested in Italian vineyards and kept the counting error to about 12% and the weight estimation error to about 30% despite canopy occlusion. The improved YOLOv8 algorithm for orchard pesticide spraying optimization, with 93.3% mAP and 93.6% accuracy, reduced pesticide dosage by 40%, showing good adaptability to variations in canopy density. For strongly undulating terrain, the improved LeGO-LOAM integrating LiDAR and IMU with RRT* path planning (0.357–0.565 s planning time) optimized the motion trajectory by reducing the cumulative error of the relocalization module, thereby improving positioning stability. Combining a low-cost stereo camera with an IMU and a multi-task perception network achieved 69 fps and 96.7% mAP, matching manual driving and mitigating positioning inaccuracy caused by terrain and occlusion. However, there is still a lack of dedicated adaptation methods for mixed-crop orchards with low inter-row regularity, as most approaches target a single orchard type. Multi-sensor fusion, algorithm optimization, and path planning in orchard scenarios can address canopy occlusion and terrain undulation and ensure detection and operational accuracy, but their suitability for complex scenarios such as mixed-crop orchards still needs to be improved.

4.4. Comparison of Methods for Special Terrain Scenarios

Special terrains impose particular requirements on crop row detection: sloping farmland must address errors caused by terrain undulation, while wetland farmland must overcome environmental interference such as water-surface reflection and mud.

4.4.1. Slope Farmland Scenarios

Sloping land must overcome positioning difficulties caused by undulating terrain, mainly through autonomous navigation and precise operation methods. Kim et al. [187] developed Purdue University's Agricultural Navigation (P-AgNav) system, which uses 3D LiDAR range-view images to achieve precise intra-row and inter-row positioning in cornfields, under-canopy autonomous navigation, obstacle avoidance, and inter-row switching without GNSS assistance. Li et al. [188] proposed a navigation line extraction method that spans different growth stages; tests on sugarcane, corn, and rice datasets gave an MAE of 0.844–2.96° and an RMSE of 1.249–4.65°, indicating strong generality. The LiDAR-based navigation seeding system proposed by Charan et al. [189] has a standard deviation of only 14.14% for the grid quality index (ICRQ) at a forward speed of 5 km·h−1, meeting the requirements of grid planting of corn on sloping land.

4.4.2. Wetland Farmland Scenarios

Wetland farmland must overcome problems such as water-surface reflection and mud, with the focus on robustness and operational accuracy. The TP-ISN dual-path instance segmentation network proposed by Chen et al. [190] has a model size of 13 MB, an inference rate of 39.37 fps, and an average recognition angle error of 0.61°, making it suitable for reflective water-surface environments. The YOLOv5-AC proposed by Wang et al. [191] detects missed planting by rice transplanters with an accuracy of 95.8%, an F1-score of 93.39%, and an average training time of about 0.284 h. The drone-image missed-planting detection method proposed by Wu et al. [192] achieves a recall above 80% and an accuracy above 75% and can provide precise geographic coordinates of the missed spots. The vision-based trajectory tracking algorithm proposed by Fu et al. [193] achieves a tracking success rate above 96.25% for angle deviations of 3° to 90°, with a maximum lateral offset of 4.55–6.41 cm at speeds of 0.3–0.9 m/s. In addition, Guan et al. [194] proposed an intelligent harvesting method for water shield that uses unmanned helicopter vision sensors to collect information on the upright part of the target; it can search for and locate targets in wild aquatic wetlands with small systematic error, achieving a positioning accuracy of 4.80 mm and a harvesting success rate of 90.0%.

4.4.3. Other Scenarios

Wang et al. [195] proposed a method for identifying paddy rice seedling rows based on sub-region growing and outlier removal, combined with an improved PFL-YOLOv5 model to detect seedlings. Under different backgrounds, weed densities, and seedling row curvatures, the average recognition angle error was within 0.5°, which can effectively avoid seedling damage during mechanical weeding.
Across orchard and special terrain scenarios, regular scenes can be handled by lightweight algorithms that deliver efficient and accurate detection, whereas complex scenarios rely more on multi-sensor fusion and robust algorithms. Scenarios with terrain interference demand high positioning stability and strong resistance to environmental disturbance, which points to the appropriate choice of method in each scenario. The remaining problems and future development trends are discussed further below.

4.5. Summary of Adaptability Comparison in Farmland Scenarios

This section compares the adaptability of crop row detection methods across common farmland scenarios and finds a clear correlation between scene characteristics and technology selection. In open-air fields, simple scenarios are usually handled with efficient, lightweight solutions, using traditional vision or lightweight deep learning to achieve accurate detection, whereas complex scenarios require deep learning models combined with multi-sensor fusion and robust designs to resist interference. In facility agriculture, simple scenarios generally pursue cost-effective, precise, and efficient solutions, while complex scenarios require fusion algorithms to overcome heavy occlusion. In regular orchards, detection relies mainly on precise positioning, and the difficulty lies in improving resistance to canopy occlusion and terrain adaptability. Special terrains include sloping farmland, where autonomous navigation stability matters most, and wetland farmland, where environmental interference such as water-surface reflection must be considered.
Table 3 lists the core methods, representative performance parameters, and applicability characteristics required in the different scenarios; comparing how each method performs in a specific scenario provides a more direct reference for selecting crop row detection technology in practice.
This section analyzes the scene characteristics of four types of crop row scenarios: open-air fields, facility agriculture, orchards, and special terrain. The scene characteristics determine the suitable technical approach, and the differences between scenes are significant. Simple open-air scenes tend to be handled with lightweight methods, such as GD-YOLOv10n-seg, which reaches 61.47 fps through model compression to meet the speed requirements of compound planting navigation. More complex scenarios must rely on deep learning and multi-sensor fusion: the HAD-YOLO model can accurately detect weeds in the field, but because sensors can be damaged in adverse weather, which in turn degrades weed recognition accuracy, current solutions mainly rely on the anti-interference ability of algorithms, with little emphasis on hardware protection.
In facility agriculture, the indoor environment is relatively stable, and simple scenarios aim for precision and efficiency; YOLOv5x identifies tomato plug-tray seedlings with 92.84% accuracy. Complex scenes must handle branch and leaf occlusion by integrating vegetation index and ridge segmentation methods, which resist weeds, missing rows, and similar problems in ordinary conditions; however, because lens fogging under high humidity remains unsolved, image dehazing has to be applied afterwards, which inevitably sacrifices some accuracy. In orchards, regular trellised scenes are highly structured and allow precise positioning, with YOLOv4-SE grape picking-point positioning reaching 93.5% accuracy, while in complex scenes with large terrain undulations and heavy canopy occlusion the improved YOLOv8 spraying optimization reaches 93.6% accuracy but does not yet consider adaptation to mixed-crop orchards. On special terrain, slopes require autonomous navigation to maintain stable positioning, and wetlands suffer degraded image quality from water-surface reflection; the TP-ISN model keeps the angle error to 0.61°. Simple scenarios favor efficient, lightweight algorithms, while complex scenarios rely on the comprehensive application of multiple technologies; hardware protection and adaptation to diverse crop growth environments are key future directions.

5. Conclusions

Crop row detection is a core technology for the autonomous navigation and precise operation of agricultural robots, and the methods and technical solutions adopted directly determine the accuracy and stability of agricultural machinery operations. Given the limited environmental adaptability of single technologies and the immaturity of multi-sensor fusion mechanisms, this paper reviews the current state of crop row detection research and analyzes the application and scenario adaptability of agricultural robot navigation technology in detail. From a technical perspective, vision-based detection has become a research hotspot owing to its low cost and rich data. Traditional visual methods based on hand-crafted features, such as threshold segmentation and the Hough transform, can achieve fast and accurate detection in structured scenes with little training data, while deep learning-based methods improve detection accuracy in complex scenarios by automatically learning richer features with models such as CNNs and YOLO. LiDAR-based methods use point cloud preprocessing, row-structure extraction, and 3D feature calculation to obtain reliable geometric features, effectively mitigating the adverse effects of lighting interference on visual detection and compensating for the shortcomings of a single visual modality. Multi-sensor fusion methods overcome the limitations of individual sensors by exploiting visual-LiDAR texture-geometry complementarity, visual-GNSS/IMU short- and long-term fusion localization, and other techniques, making fusion a viable route to solving complex environmental problems. The paper establishes a quantitative evaluation system covering accuracy, efficiency, robustness, and practicality. Analyses of typical scenarios show that open-air fields, facility agriculture, orchards, and special terrain differ significantly in their requirements for detection methods: regular scenarios favor efficient, lightweight solutions, whereas complex scenarios favor fusion algorithms and robust models.
This review reveals three remaining issues in current research. The first is poor robustness in extreme environments and susceptibility to dynamic interference. The second is that data inconsistency, redundancy, and feature conflicts after multi-sensor fusion have not been resolved. The third is the high cost and labor intensity of deployment, which make the technology difficult to apply to small and medium-sized farmland.
Therefore, based on this review, future research is suggested along four directions: (1) strengthen multimodal fusion, focusing on deep feature fusion of visual, LiDAR, and infrared/tactile modalities, constructing fusion models based on dynamic weight allocation, and tailoring them to specific application environments; (2) pursue lightweight models and edge deployment, reducing the hardware requirements of embedded platforms through model compression and knowledge distillation while improving real-time performance; (3) enhance data-driven development and scenario generalization, generating multi-scenario data with digital twin technology to train models and transferring usable training data across different crops and growth stages; (4) design industry-oriented, low-cost, easy-to-deploy integrated solutions to move laboratory technology into the field. These efforts will improve the precision, intelligence, and large-scale application of crop row detection technology and provide strong technical support for intelligent agriculture.

Author Contributions

Conceptualization, Z.M. and X.W.; methodology, X.C., B.H. and J.L.; software, Z.M.; validation, Z.M. and X.W.; formal analysis, X.C., B.H. and J.L.; investigation, Z.M.; resources, B.H.; data curation, Z.M.; writing—original draft preparation, Z.M.; writing—review and editing, Z.M. and X.W.; visualization, X.W.; supervision, X.W.; project administration, X.W.; funding acquisition, X.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Key Technologies Research and Development Program (No. 2022YFD2002403), the Ministry of Industry and Information Technology’s Residual Film Recycling Machine Project (No. zk20230359), the Priority Academic Program Development of Jiangsu Higher Education Institutions (No. PAPD-2023), and Talent Development Fund of Shihezi University 2025—“Small Group” Aid—Xinjiang Team (No. CZ002562).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Shi, J.; Bai, Y.; Diao, Z.; Zhou, J.; Yao, X.; Zhang, B. Row Detection Based Navigation and Guidance for Agricultural Robots and Autonomous Vehicles in Row-Crop Fields: Methods and Applications. Agronomy 2023, 13, 1780. [Google Scholar] [CrossRef]
  2. Zhang, S.; Liu, Y.; Gong, K.; Tian, Y.; Du, Y.; Zhu, Z.; Zhai, Z. A Review of Vision-Based Crop Row Detection Method: Focusing on Field Ground Autonomous Navigation Operations. Comput. Electron. Agric. 2024, 222, 109086. [Google Scholar] [CrossRef]
  3. Yao, Z.; Zhao, C.; Zhang, T. Agricultural Machinery Automatic Navigation Technology. iScience 2024, 27, 108714. [Google Scholar] [CrossRef]
  4. Bai, Y.; Zhang, B.; Xu, N.; Zhou, J.; Shi, J.; Diao, Z. Vision-Based Navigation and Guidance for Agricultural Autonomous Vehicles and Robots: A Review. Comput. Electron. Agric. 2023, 205, 107584. [Google Scholar] [CrossRef]
  5. Wang, T.; Chen, B.; Zhang, Z.; Li, H.; Zhang, M. Applications of Machine Vision in Agricultural Robot Navigation: A Review. Comput. Electron. Agric. 2022, 198, 107085. [Google Scholar] [CrossRef]
  6. Bonacini, L.; Tronco, M.L.; Higuti, V.A.H.; Velasquez, A.E.B.; Gasparino, M.V.; Peres, H.E.N.; Becker, M. Selection of a Navigation Strategy According to Agricultural Scenarios and Sensor Data Integrity. Agronomy 2023, 13, 925. [Google Scholar] [CrossRef]
  7. Xie, K.; Zhang, Z.; Zhu, S. Enhanced Agricultural Vehicle Positioning through Ultra-Wideband-Assisted Global Navigation Satellite Systems and Bayesian Integration Techniques. Agriculture 2024, 14, 1396. [Google Scholar] [CrossRef]
  8. Wang, W.; Qin, J.; Huang, D.; Zhang, F.; Liu, Z.; Wang, Z.; Yang, F. Integrated Navigation Method for Orchard-Dosing Robot Based on LiDAR/IMU/GNSS. Agronomy 2024, 14, 2541. [Google Scholar] [CrossRef]
  9. Qu, J.; Qiu, Z.; Li, L.; Guo, K.; Li, D. Map Construction and Positioning Method for LiDAR SLAM-Based Navigation of an Agricultural Field Inspection Robot. Agronomy 2024, 14, 2365. [Google Scholar] [CrossRef]
  10. Wen, J.; Yao, L.; Zhou, J.; Yang, Z.; Xu, L.; Yao, L. Path Tracking Control of Agricultural Automatic Navigation Vehicles Based on an Improved Sparrow Search-Pure Pursuit Algorithm. Agriculture 2025, 15, 1215. [Google Scholar] [CrossRef]
  11. Su, Z.; Zou, W.; Zhai, C.; Tan, H.; Yang, S.; Qin, X. Design of an Autonomous Orchard Navigation System Based on Multi-Sensor Fusion. Agronomy 2024, 14, 2825. [Google Scholar] [CrossRef]
  12. Kaewkorn, S.; Ekpanyapong, M.; Thamma, U. High-accuracy position-aware robot for agricultural automation using low-cost IMU-coupled triple-laser-guided (TLG) system. IEEE Access 2021, 9, 54325–54337. [Google Scholar] [CrossRef]
  13. Cui, X.; Zhu, L.; Zhao, B.; Wang, R.; Han, Z.; Lu, K.; Cui, X. DoubleNet: A Method for Generating Navigation Lines of Unstructured Soil Roads in a Vineyard Based on CNN and Transformer. Agronomy 2025, 15, 544. [Google Scholar] [CrossRef]
  14. Chen, H.; Xie, H.; Sun, L.; Shang, T. Research on Tractor Optimal Obstacle Avoidance Path Planning for Improving Navigation Accuracy and Avoiding Land Waste. Agriculture 2023, 13, 934. [Google Scholar] [CrossRef]
  15. Jin, X.; Lin, C.; Ji, J.; Li, W.; Zhang, B.; Suo, H. An Inter-Ridge Navigation Path Extraction Method Based on Res2net50 Segmentation Model. Agriculture 2023, 13, 881. [Google Scholar] [CrossRef]
  16. Gao, P.; Fang, J.; He, J.; Ma, S.; Wen, G.; Li, Z. GRU–Transformer Hybrid Model for GNSS/INS Integration in Orchard Environments. Agriculture 2025, 15, 1135. [Google Scholar] [CrossRef]
  17. Yang, T.; Jin, C.; Ni, Y.; Liu, Z.; Chen, M. Path Planning and Control System Design of an Unmanned Weeding Robot. Agriculture 2023, 13, 2001. [Google Scholar] [CrossRef]
  18. Gai, J.; Guo, Z.; Raj, A.; Tang, L. Robust Crop Row Detection Using Discrete Fourier Transform (DFT) for Vision-Based In-Field Navigation. Comput. Electron. Agric. 2025, 229, 109666. [Google Scholar] [CrossRef]
  19. Zhang, B.; Zhao, D.; Chen, C.; Li, J.; Zhang, W.; Qi, L.; Wang, S. Extraction of Crop Row Navigation Lines for Soybean Seedlings Based on Calculation of Average Pixel Point Coordinates. Agronomy 2024, 14, 1749. [Google Scholar] [CrossRef]
  20. Ruangurai, P.; Dailey, M.N.; Ekpanyapong, M.; Soni, P. Optimal Vision-Based Guidance Row Locating for Autonomous Agricultural Machines. Precis. Agric. 2022, 23, 1205–1225. [Google Scholar] [CrossRef]
  21. Zhou, X.; Zhang, X.; Zhao, R.; Chen, Y.; Liu, X. Navigation Line Extraction Method for Broad-Leaved Plants in the Multi-Period Environments of the High-Ridge Cultivation Mode. Agriculture 2023, 13, 1496. [Google Scholar] [CrossRef]
  22. Gai, J.; Xiang, L.; Tang, L. Using a Depth Camera for Crop Row Detection and Mapping for Under-Canopy Navigation of Agricultural Robotic Vehicle. Comput. Electron. Agric. 2021, 188, 106301. [Google Scholar] [CrossRef]
  23. Yun, C.; Kim, H.J.; Jeon, C.W.; Gang, M.; Lee, W.S.; Han, J.G. Stereovision-Based Ridge-Furrow Detection and Tracking for Auto-Guided Cultivator. Comput. Electron. Agric. 2021, 191, 106490. [Google Scholar] [CrossRef]
  24. Zhang, X.; Chen, B.; Li, J.; Fang, X.; Zhang, C.; Peng, S.; Li, Y. Novel Method for the Visual Navigation Path Detection of Jujube Harvester Autopilot Based on Image Processing. Int. J. Agric. Biol. Eng. 2023, 16, 189–197. [Google Scholar] [CrossRef]
  25. Li, A.; Wang, C.; Ji, T.; Wang, Q.; Zhang, T. D3-YOLOv10: Improved YOLOv10-Based Lightweight Tomato Detection Algorithm under Facility Scenario. Agriculture 2024, 14, 2268. [Google Scholar] [CrossRef]
  26. Zhang, Z.; Lu, Y.; Peng, Y.; Yang, M.; Hu, Y. A Lightweight and High-Performance YOLOv5-Based Model for Tea Shoot Detection in Field Conditions. Agronomy 2025, 15, 1122. [Google Scholar] [CrossRef]
  27. Duan, Y.; Han, W.; Guo, P.; Wei, X. YOLOv8-GDCI: Research on the Phytophthora Blight Detection Method of Different Parts of Chili Based on Improved YOLOv8 Model. Agronomy 2024, 14, 2734. [Google Scholar] [CrossRef]
  28. Yang, Y.; Zhou, Y.; Yue, X.; Zhang, G.; Wen, X.; Ma, B.; Chen, L. Real-Time Detection of Crop Rows in Maize Fields Based on Autonomous Extraction of ROI. Expert Syst. Appl. 2023, 213, 118826. [Google Scholar] [CrossRef]
  29. Quan, L.; Guo, Z.; Huang, L.; Xue, Y.; Sun, D.; Chen, T.; Lou, Z. Efficient Extraction of Corn Rows in Diverse Scenarios: A Grid-Based Selection Method for Intelligent Classification. Comput. Electron. Agric. 2024, 218, 108759. [Google Scholar] [CrossRef]
  30. Li, D.; Li, B.; Kang, S.; Feng, H.; Long, S.; Wang, J. E2CropDet: An Efficient End-to-End Solution to Crop Row Detection. Expert Syst. Appl. 2023, 227, 120345. [Google Scholar] [CrossRef]
  31. Luo, Y.; Dai, J.; Shi, S.; Xu, Y.; Zou, W.; Zhang, H.; Li, Y. Deep Learning-Based Seedling Row Detection and Localization Using High-Resolution UAV Imagery for Rice Transplanter Operation Quality Evaluation. Remote Sens. 2025, 17, 607. [Google Scholar] [CrossRef]
  32. Gomez, D.; Selvaraj, M.G.; Casas, J.; Mathiyazhagan, K.; Rodriguez, M.; Assefa, T.; Mlaki, A.; Nyakunga, G.; Kato, F.; Mukankusi, C.; et al. Advancing common bean (Phaseolus vulgaris L.) disease detection with YOLO driven deep learning to enhance agricultural AI. Sci. Rep. 2024, 14, 15596. [Google Scholar] [CrossRef] [PubMed]
  33. Diao, Z.; Ma, S.; Zhang, D.; Zhang, J.; Guo, P.; He, Z.; Zhang, B. Algorithm for Corn Crop Row Recognition during Different Growth Stages Based on ST-YOLOv8s Network. Agronomy 2024, 14, 1466. [Google Scholar] [CrossRef]
  34. Liu, T.H.; Zheng, Y.; Lai, J.S.; Cheng, Y.F.; Chen, S.Y.; Mai, B.F.; Xue, Z. Extracting Visual Navigation Line between Pineapple Field Rows Based on an Enhanced YOLOv5. Comput. Electron. Agric. 2024, 217, 108574. [Google Scholar] [CrossRef]
  35. Li, G.; Le, F.; Si, S.; Cui, L.; Xue, X. Image Segmentation-Based Oilseed Rape Row Detection for Infield Navigation of Agri-Robot. Agronomy 2024, 14, 1886. [Google Scholar] [CrossRef]
  36. Zhou, X.; Chen, W.; Wei, X. Improved Field Obstacle Detection Algorithm Based on YOLOv8. Agriculture 2024, 14, 2263. [Google Scholar] [CrossRef]
  37. Shi, J.; Bai, Y.; Zhou, J.; Zhang, B. Multi-Crop Navigation Line Extraction Based on Improved YOLO-V8 and Threshold-DBSCAN under Complex Agricultural Environments. Agriculture 2024, 14, 45. [Google Scholar] [CrossRef]
  38. Liu, Y.; Guo, Y.; Wang, X.; Yang, Y.; Zhang, J.; An, D.; Bai, T. Crop Root Rows Detection Based on Crop Canopy Image. Agriculture 2024, 14, 969. [Google Scholar] [CrossRef]
  39. Wang, Q.; Qin, W.; Liu, M.; Zhao, J.; Zhu, Q.; Yin, Y. Semantic Segmentation Model-Based Boundary Line Recognition Method for Wheat Harvesting. Agriculture 2024, 14, 1846. [Google Scholar] [CrossRef]
  40. Osco, L.P.; de Arruda, M.D.S.; Gonçalves, D.N.; Dias, A.; Batistoti, J.; de Souza, M.; Gonçalves, W.N. A CNN Approach to Simultaneously Count Plants and Detect Plantation-Rows from UAV Imagery. ISPRS J. Photogramm. Remote Sens. 2021, 174, 1–17. [Google Scholar] [CrossRef]
  41. Lv, R.; Hu, J.; Zhang, T.; Chen, X.; Liu, W. Crop-Free-Ridge Navigation Line Recognition Based on the Lightweight Structure Improvement of YOLOv8. Agriculture 2025, 15, 942. [Google Scholar] [CrossRef]
  42. Zhang, T.; Zhou, J.; Liu, W.; Yue, R.; Yao, M.; Shi, J.; Hu, J. Seedling-YOLO: High-Efficiency Target Detection Algorithm for Field Broccoli Seedling Transplanting Quality Based on YOLOv7-Tiny. Agronomy 2024, 14, 931. [Google Scholar] [CrossRef]
  43. Wang, W.; Gong, Y.; Gu, J.; Yang, Q.; Pan, Z.; Zhang, X.; Zhou, M. YOLOv8-TEA: Recognition Method of Tender Shoots of Tea Based on Instance Segmentation Algorithm. Agronomy 2025, 15, 1318. [Google Scholar] [CrossRef]
  44. Ma, J.; Zhao, Y.; Fan, W.; Liu, J. An Improved YOLOv8 Model for Lotus Seedpod Instance Segmentation in the Lotus Pond Environment. Agronomy 2024, 14, 1325. [Google Scholar] [CrossRef]
  45. Wang, C.; Chen, X.; Jiao, Z.; Song, S.; Ma, Z. An Improved YOLOP Lane-Line Detection Utilizing Feature Shift Aggregation for Intelligent Agricultural Machinery. Agriculture 2025, 15, 1361. [Google Scholar] [CrossRef]
  46. Karim, M.R.; Reza, M.N.; Gong, H.; Haque, M.A.; Lee, K.H.; Sung, J.; Chung, S.O. Application of LiDAR Sensors for Crop and Working Environment Recognition in Agriculture: A Review. Remote Sens. 2024, 16, 4623. [Google Scholar] [CrossRef]
  47. Baltazar, A.R.; Dos Santos, F.N.; De Sousa, M.L.; Moreira, A.P.; Cunha, J.B. 2D LiDAR-Based System for Canopy Sensing in Smart Spraying Applications. IEEE Access 2023, 11, 43583–43591. [Google Scholar] [CrossRef]
  48. Li, Z.; Xie, D.; Liu, L.; Wang, H.; Chen, L. Inter-Row Information Recognition of Maize in the Middle and Late Stages via LiDAR Supplementary Vision. Front. Plant Sci. 2022, 13, 1024360. [Google Scholar] [CrossRef]
  49. Yan, P.; Feng, Y.; Han, Q.; Wu, H.; Hu, Z.; Kang, S. Revolutionizing Crop Phenotyping: Enhanced UAV LiDAR Flight Parameter Optimization for Wide-Narrow Row Cultivation. Remote Sens. Environ. 2025, 320, 114638. [Google Scholar] [CrossRef]
  50. Bhattarai, A.; Scarpin, G.J.; Jakhar, A.; Porter, W.; Hand, L.C.; Snider, J.L.; Bastos, L.M. Optimizing Unmanned Aerial Vehicle LiDAR Data Collection in Cotton Through Flight Settings and Data Processing. Remote Sens. 2025, 17, 1504. [Google Scholar] [CrossRef]
  51. Zou, R.; Zhang, Y.; Chen, J.; Li, J.; Dai, W.; Mu, S. Density Estimation Method of Mature Wheat Based on Point Cloud Segmentation and Clustering. Comput. Electron. Agric. 2023, 205, 107626. [Google Scholar] [CrossRef]
  52. Liu, L.; Ji, D.; Zeng, F.; Zhao, Z.; Wang, S. Precision Inter-Row Relative Positioning Method by Using 3D LiDAR in Planted Forests and Orchards. Agronomy 2024, 14, 1279. [Google Scholar] [CrossRef]
  53. Nehme, H.; Aubry, C.; Solatges, T.; Savatier, X.; Rossi, R.; Boutteau, R. Lidar-Based Structure Tracking for Agricultural Robots: Application to Autonomous Navigation in Vineyards. J. Intell. Robot. Syst. 2021, 103, 61. [Google Scholar] [CrossRef]
  54. Ban, C.; Wang, L.; Su, T.; Chi, R.; Fu, G. Fusion of Monocular Camera and 3D LiDAR Data for Navigation Line Extraction under Corn Canopy. Comput. Electron. Agric. 2025, 232, 110124. [Google Scholar] [CrossRef]
  55. Luo, S.; Wen, S.; Zhang, L.; Lan, Y.; Chen, X. Extraction of crop canopy features and decision-making for variable spraying based on unmanned aerial vehicle LiDAR data. Comput. Electron. Agric. 2024, 224, 109197. [Google Scholar] [CrossRef]
  56. Nazeri, B.; Crawford, M. Detection of Outliers in Lidar Data Acquired by Multiple Platforms over Sorghum and Maize. Remote Sens. 2021, 13, 4445. [Google Scholar] [CrossRef]
  57. Cruz Ulloa, C.; Krus, A.; Barrientos, A.; Del Cerro, J.; Valero, C. Robotic Fertilisation Using Localisation Systems Based on Point Clouds in Strip-Crop Fields. Agronomy 2021, 11, 11. [Google Scholar] [CrossRef]
  58. Nazeri, B.; Crawford, M.M.; Tuinstra, M.R. Estimating Leaf Area Index in Row Crops Using Wheel-Based and Airborne Discrete Return Light Detection and Ranging Data. Front. Plant Sci. 2021, 12, 740322. [Google Scholar] [CrossRef]
  59. Lin, Y.C.; Habib, A. Quality Control and Crop Characterization Framework for Multi-Temporal UAV LiDAR Data over Mechanized Agricultural Fields. Remote Sens. Environ. 2021, 256, 112299. [Google Scholar] [CrossRef]
  60. Karim, M.R.; Ahmed, S.; Reza, M.N.; Lee, K.H.; Sung, J.; Chung, S.O. Geometric Feature Characterization of Apple Trees from 3D LiDAR Point Cloud Data. J. Imaging 2024, 11, 5. [Google Scholar] [CrossRef]
  61. Escolà, A.; Peña, J.M.; López-Granados, F.; Rosell-Polo, J.R.; de Castro, A.I.; Gregorio, E.; Torres-Sánchez, J. Mobile Terrestrial Laser Scanner vs. UAV Photogrammetry to Estimate Woody Crop Canopy Parameters–Part 1: Methodology and Comparison in Vineyards. Comput. Electron. Agric. 2023, 212, 108109. [Google Scholar] [CrossRef]
  62. Xie, B.; Jin, Y.; Faheem, M. Research Progress of Autonomous Navigation Technology for Multi-Agricultural Scenes. Comput. Electron. Agric. 2023, 211, 107963. [Google Scholar] [CrossRef]
  63. Shi, M.; Feng, X.; Pan, S.; Song, X.; Jiang, L. A Collaborative Path Planning Method for Intelligent Agricultural Machinery Based on Unmanned Aerial Vehicles. Electronics 2023, 12, 3232. [Google Scholar] [CrossRef]
  64. He, J.; Dong, W.; Tan, Q.; Li, J.; Song, X.; Zhao, R. A Variable-Threshold Segmentation Method for Rice Row Detection Considering Robot Travelling Prior Information. Agriculture 2025, 15, 413. [Google Scholar] [CrossRef]
  65. Shi, Z.; Bai, Z.; Yi, K.; Qiu, B.; Dong, X.; Wang, Q.; Jiang, C.; Zhang, X.; Huang, X. Vision and 2D lidar fusion-based navigation line extraction for autonomous agricultural robots in dense pomegranate orchards. Sensors 2025, 25, 5432. [Google Scholar] [CrossRef]
  66. Song, P.; Li, Z.; Yang, M.; Shao, Y.; Pu, Z.; Yang, W.; Zhai, R. Dynamic Detection of Three-Dimensional Crop Phenotypes Based on a Consumer-Grade RGB-D Camera. Front. Plant Sci. 2023, 14, 1097725. [Google Scholar] [CrossRef]
  67. Li, Y.; Qi, B.; Bao, E.; Tang, Z.; Lian, Y.; Sun, M. Design and Analysis of a Sowing Depth Detection and Control Device for a Wheat Row Planter Based on Fuzzy PID and Multi-Sensor Fusion. Agronomy 2025, 15, 1490. [Google Scholar] [CrossRef]
  68. Guan, X.; Shi, L.; Yang, W.; Ge, H.; Wei, X.; Ding, Y. Multi-Feature Fusion Recognition and Localization Method for Unmanned Harvesting of Aquatic Vegetables. Agriculture 2024, 14, 971. [Google Scholar] [CrossRef]
  69. Hu, T.; Wang, W.; Gu, J.; Xia, Z.; Zhang, J.; Wang, B. Research on Apple Object Detection and Localization Method Based on Improved YOLOX and RGB-D Images. Agronomy 2023, 13, 1816. [Google Scholar] [CrossRef]
  70. Wu, S.; Chen, Z.; Bangura, K.; Jiang, J.; Ma, X.; Li, J.; Qi, L. A Navigation Method for Paddy Field Management Based on Seedlings Coordinate Information. Comput. Electron. Agric. 2023, 215, 108436. [Google Scholar] [CrossRef]
  71. Mwitta, C.; Rains, G.C.; Burlacu, A.; Mandal, S. The Integration of GPS and Visual Navigation for Autonomous Navigation of an Ackerman Steering Mobile Robot in Cotton Fields. Front. Robot. AI 2024, 11, 1359887. [Google Scholar] [CrossRef] [PubMed]
  72. Li, Z.; Xu, R.; Li, C.; Fu, L. Visual Navigation and Crop Mapping of a Phenotyping Robot MARS-PhenoBot in Simulation. Smart Agric. Technol. 2025, 11, 100910. [Google Scholar] [CrossRef]
  73. Li, C.; Wu, J.; Pan, X.; Dou, H.; Zhao, X.; Gao, Y.; Zhai, C. Design and Experiment of a Breakpoint Continuous Spraying System for Automatic-Guidance Boom Sprayers. Agriculture 2023, 13, 2203. [Google Scholar] [CrossRef]
  74. Chen, X.; Dang, P.; Chen, Y.; Qi, L. A Tactile Recognition Method for Rice Plant Lodging Based on Adaptive Decision Boundary. Comput. Electron. Agric. 2025, 230, 109890. [Google Scholar] [CrossRef]
  75. Chen, X.; Mao, Y.; Gong, Y.; Qi, L.; Jiang, Y.; Ma, X. Intra-Row Weed Density Evaluation in Rice Field Using Tactile Method. Comput. Electron. Agric. 2022, 193, 106699. [Google Scholar] [CrossRef]
  76. Chen, X.; Dang, P.; Zhang, E.; Chen, Y.; Tang, C.; Qi, L. Accurate Recognition of Rice Plants Based on Visual and Tactile Sensing. J. Sci. Food Agric. 2024, 104, 4268–4277. [Google Scholar] [CrossRef]
  77. Gronewold, A.M.; Mulford, P.; Ray, E.; Ray, L.E. Tactile Sensing & Visually-Impaired Navigation in Densely Planted Row Crops, for Precision Fertilization by Small UGVs. Comput. Electron. Agric. 2025, 231, 110003. [Google Scholar] [CrossRef]
  78. Li, J.; Zhang, M.; Zhang, G.; Ge, D.; Li, M. Real-Time Monitoring System of Seedling Amount in Seedling Box Based on Machine Vision. Agriculture 2023, 13, 371. [Google Scholar] [CrossRef]
  79. Khan, M.N.; Rahi, A.; Rajendran, V.P.; Al Hasan, M.; Anwar, S. Real-Time Crop Row Detection Using Computer Vision-Application in Agricultural Robots. Front. Artif. Intell. 2024, 7, 1435686. [Google Scholar] [CrossRef] [PubMed]
  80. Rocha, B.M.; da Fonseca, A.U.; Pedrini, H.; Soares, F. Automatic Detection and Evaluation of Sugarcane Planting Rows in Aerial Images. Inf. Process. Agric. 2023, 10, 400–415. [Google Scholar] [CrossRef]
  81. De Bortoli, L.; Marsi, S.; Marinello, F.; Gallina, P. Cost-Efficient Algorithm for Autonomous Cultivators: Implementing Template Matching with Field Digital Twins for Precision Agriculture. Comput. Electron. Agric. 2024, 227, 109509. [Google Scholar] [CrossRef]
  82. He, L.; Liao, K.; Li, Y.; Li, B.; Zhang, J.; Wang, Y.; Fu, X. Extraction of Tobacco Planting Information Based on UAV High-Resolution Remote Sensing Images. Remote Sens. 2024, 16, 359. [Google Scholar]
  83. Navone, A.; Martini, M.; Ambrosio, M.; Ostuni, A.; Angarano, S.; Chiaberge, M. GPS-Free Autonomous Navigation in Cluttered Tree Rows with Deep Semantic Segmentation. Robot. Auton. Syst. 2025, 183, 104854. [Google Scholar] [CrossRef]
  84. Katari, S.; Venkatesh, S.; Stewart, C.; Khanal, S. Integrating Automated Labeling Framework for Enhancing Deep Learning Models to Count Corn Plants Using UAS Imagery. Sensors 2024, 24, 6467. [Google Scholar] [CrossRef] [PubMed]
  85. De Silva, R.; Cielniak, G.; Gao, J. Vision Based Crop Row Navigation under Varying Field Conditions in Arable Fields. Comput. Electron. Agric. 2024, 217, 108581. [Google Scholar] [CrossRef]
  86. Kostić, M.M.; Grbović, Ž.; Waqar, R.; Ivošević, B.; Panić, M.; Scarfone, A.; Tagarakis, A.C. Corn Plant In-Row Distance Analysis Based on Unmanned Aerial Vehicle Imagery and Row-Unit Dynamics. Appl. Sci. 2024, 14, 10693. [Google Scholar]
  87. Ullah, M.; Islam, F.; Bais, A. Quantifying Consistency of Crop Establishment Using a Lightweight U-Net Deep Learning Architecture and Image Processing Techniques. Comput. Electron. Agric. 2024, 217, 108617. [Google Scholar] [CrossRef]
  88. Affonso, F.; Tommaselli, F.A.G.; Capezzuto, G.; Gasparino, M.V.; Chowdhary, G.; Becker, M. CROW: A Self-Supervised Crop Row Navigation Algorithm for Agricultural Fields. J. Intell. Robot. Syst. 2025, 111, 28. [Google Scholar] [CrossRef]
  89. Li, Q.; Zhu, H. Performance Evaluation of 2D LiDAR SLAM Algorithms in Simulated Orchard Environments. Comput. Electron. Agric. 2024, 221, 108994. [Google Scholar] [CrossRef]
  90. Fujinaga, T. Autonomous Navigation Method for Agricultural Robots in High-Bed Cultivation Environments. Comput. Electron. Agric. 2025, 231, 110001. [Google Scholar] [CrossRef]
  91. Pan, Y.; Hu, K.; Cao, H.; Kang, H.; Wang, X. A Novel Perception and Semantic Mapping Method for Robot Autonomy in Orchards. Comput. Electron. Agric. 2024, 219, 108769. [Google Scholar] [CrossRef]
  92. Zhou, Y.; Wang, X.; Wang, Z.; Ye, Y.; Zhu, F.; Yu, K.; Zhao, Y. Rolling 2D Lidar-Based Navigation Line Extraction Method for Modern Orchard Automation. Agronomy 2025, 15, 816. [Google Scholar] [CrossRef]
  93. Hong, Y.; Ma, R.; Li, C.; Shao, C.; Huang, J.; Zeng, Y.; Chen, Y. Three-Dimensional Localization and Mapping of Multiagricultural Scenes via Hierarchically-Coupled LiDAR-Inertial Odometry. Comput. Electron. Agric. 2024, 227, 109487. [Google Scholar] [CrossRef]
  94. Li, S.; Miao, Y.; Li, H.; Qiu, R.; Zhang, M. RTMR-LOAM: Real-Time Maize 3D Reconstruction Based on Lidar Odometry and Mapping. Comput. Electron. Agric. 2025, 230, 109820. [Google Scholar]
  95. Teng, H.; Wang, Y.; Chatziparaschis, D.; Karydis, K. Adaptive LiDAR Odometry and Mapping for Autonomous Agricultural Mobile Robots in Unmanned Farms. Comput. Electron. Agric. 2025, 232, 110023. [Google Scholar] [CrossRef]
  96. Ban, C.; Wang, L.; Chi, R.; Su, T.; Ma, Y. A Camera-LiDAR-IMU Fusion Method for Real-Time Extraction of Navigation Line between Maize Field Rows. Comput. Electron. Agric. 2024, 223, 109114. [Google Scholar]
  97. Zhu, X.; Zhao, X.; Liu, J.; Feng, W.; Fan, X. Autonomous Navigation and Obstacle Avoidance for Orchard Spraying Robots: A Sensor-Fusion Approach with ArduPilot, ROS, and EKF. Agronomy 2025, 15, 1373. [Google Scholar] [CrossRef]
  98. Li, Y.; Xiao, L.; Liu, Z.; Liu, M.; Fang, P.; Chen, X.; Yu, J.; Lin, J.; Cai, J. Recognition and Localization of Ratoon Rice Rolled Stubble Rows Based on Monocular Vision and Model Fusion. Front. Plant Sci. 2025, 16, 1533206. [Google Scholar] [CrossRef]
  99. Jiang, B.; Zhang, J.L.; Su, W.H.; Hu, R. A SPH-YOLOv5x-Based Automatic System for Intra-Row Weed Control in Lettuce. Agronomy 2023, 13, 2915. [Google Scholar] [CrossRef]
  100. Bah, M.D.; Hafiane, A.; Canals, R. Hierarchical Graph Representation for Unsupervised Crop Row Detection in Images. Expert Syst. Appl. 2023, 216, 119478. [Google Scholar] [CrossRef]
  101. Sun, J.; Wang, Z.; Ding, S.; Xia, J.; Xing, G. Adaptive Disturbance Observer-Based Fixed Time Nonsingular Terminal Sliding Mode Control for Path-Tracking of Unmanned Agricultural Tractors. Biosyst. Eng. 2024, 246, 96–109. [Google Scholar]
  102. Cui, B.; Cui, X.; Wei, X.; Zhu, Y.; Ma, Z.; Zhao, Y.; Liu, Y. Design and Testing of a Tractor Automatic Navigation System Based on Dynamic Path Search and a Fuzzy Stanley Model. Agriculture 2024, 14, 2136. [Google Scholar] [CrossRef]
  103. Afzaal, H.; Rude, D.; Farooque, A.A.; Randhawa, G.S.; Schumann, A.W.; Krouglicof, N. Improved Crop Row Detection by Employing Attention-Based Vision Transformers and Convolutional Neural Networks with Integrated Depth Modeling for Precise Spatial Accuracy. Smart Agric. Technol. 2025, 11, 100934. [Google Scholar] [CrossRef]
  104. Gong, H.; Zhuang, W.; Wang, X. Improving the Maize Crop Row Navigation Line Recognition Method of YOLOX. Front. Plant Sci. 2024, 15, 1338228. [Google Scholar] [CrossRef]
  105. Li, B.; Li, D.; Wei, Z.; Wang, J. Rethinking the Crop Row Detection Pipeline: An End-to-End Method for Crop Row Detection Based on Row-Column Attention. Comput. Electron. Agric. 2024, 225, 109264. [Google Scholar]
  106. Wei, J.; Zhang, M.; Wu, C.; Ma, Q.; Wang, W.; Wan, C. Accurate Crop Row Recognition of Maize at the Seedling Stage Using Lightweight Network. Int. J. Agric. Biol. Eng. 2024, 17, 189–198. [Google Scholar] [CrossRef]
  107. Gómez, A.; Moreno, H.; Andújar, D. Intelligent Inter-and Intra-Row Early Weed Detection in Commercial Maize Crops. Plants 2025, 14, 881. [Google Scholar] [CrossRef]
  108. Diao, Z.; Guo, P.; Zhang, B.; Zhang, D.; Yan, J.; He, Z.; Zhang, J. Navigation Line Extraction Algorithm for Corn Spraying Robot Based on Improved YOLOv8s Network. Comput. Electron. Agric. 2023, 212, 108049. [Google Scholar] [CrossRef]
  109. Zhu, C.; Hao, S.; Liu, C.; Wang, Y.; Jia, X.; Xu, J.; Wang, W. An Efficient Computer Vision-Based Dual-Face Target Precision Variable Spraying Robotic System for Foliar Fertilisers. Agronomy 2024, 14, 2770. [Google Scholar] [CrossRef]
  110. Liang, Z.; Xu, X.; Yang, D.; Liu, Y. The Development of a Lightweight DE-YOLO Model for Detecting Impurities and Broken Rice Grains. Agriculture 2025, 15, 848. [Google Scholar] [CrossRef]
  111. Jiang, L.; Wang, Y.; Wu, C.; Wu, H. Fruit Distribution Density Estimation in YOLO-Detected Strawberry Images: A Kernel Density and Nearest Neighbor Analysis Approach. Agriculture 2024, 14, 1848. [Google Scholar] [CrossRef]
  112. Memon, M.S.; Chen, S.; Shen, B.; Liang, R.; Tang, Z.; Wang, S.; Memon, N. Automatic Visual Recognition, Detection and Classification of Weeds in Cotton Fields Based on Machine Vision. Crop Prot. 2025, 187, 106966. [Google Scholar] [CrossRef]
  113. Zhang, S.; Wei, X.; Liu, C.; Ge, J.; Cui, X.; Wang, F.; Wang, A.; Chen, W. Adaptive Path Tracking and Control System for Unmanned Crawler Harvesters in Paddy Fields. Comput. Electron. Agric. 2025, 230, 109878. [Google Scholar] [CrossRef]
  114. Lu, E.; Xue, J.; Chen, T.; Jiang, S. Robust Trajectory Tracking Control of an Autonomous Tractor-Trailer Considering Model Parameter Uncertainties and Disturbances. Agriculture 2023, 13, 869. [Google Scholar] [CrossRef]
  115. Yang, Y.; Shen, X.; An, D.; Han, H.; Tang, W.; Wang, Y.; Chen, L. Crop Row Detection Algorithm Based on 3-D LiDAR: Suitable for Crop Row Detection in Different Periods. IEEE Trans. Instrum. Meas. 2024, 73, 1–13. [Google Scholar] [CrossRef]
  116. Kong, X.; Guo, Y.; Liang, Z.; Zhang, R.; Hong, Z.; Xue, W. A Method for Recognizing Inter-Row Navigation Lines of Rice Heading Stage Based on Improved ENet Network. Measurement 2025, 241, 115677. [Google Scholar] [CrossRef]
  117. Zhang, Z.; Lu, Y.; Zhao, Y.; Pan, Q.; Jin, K.; Xu, G.; Hu, Y. Ts-YOLO: An All-Day and Lightweight Tea Canopy Shoots Detection Model. Agronomy 2023, 13, 1411. [Google Scholar] [CrossRef]
  118. Luo, Y.; Wei, L.; Xu, L.; Zhang, Q.; Liu, J.; Cai, Q.; Zhang, W. Stereo-Vision-Based Multi-Crop Harvesting Edge Detection for Precise Automatic Steering of Combine Harvester. Biosyst. Eng. 2022, 215, 115–128. [Google Scholar] [CrossRef]
  119. He, Z.; Yuan, F.; Zhou, Y.; Cui, B.; He, Y.; Liu, Y. Stereo vision based broccoli recognition and attitude estimation method for field harvesting. Artif. Intell. Agric. 2025, 15, 526–536. [Google Scholar] [CrossRef]
  120. Lac, L.; Da Costa, J.P.; Donias, M.; Keresztes, B.; Bardet, A. Crop Stem Detection and Tracking for Precision Hoeing Using Deep Learning. Comput. Electron. Agric. 2022, 192, 106606. [Google Scholar] [CrossRef]
  121. Guo, P.; Diao, Z.; Zhao, C.; Li, J.; Zhang, R.; Yang, R.; Zhang, B. Navigation Line Extraction Algorithm for Corn Spraying Robot Based on YOLOv8s-CornNet. J. Field Robot. 2024, 41, 1887–1899. [Google Scholar] [CrossRef]
  122. Yang, K.; Sun, X.; Li, R.; He, Z.; Wang, X.; Wang, C.; Liu, H. A Method for Quantifying Mung Bean Field Planting Layouts Using UAV Images and an Improved YOLOv8-obb Model. Agronomy 2025, 15, 151. [Google Scholar] [CrossRef]
  123. Lin, Y.; Xia, S.; Wang, L.; Qiao, B.; Han, H.; Wang, L.; He, X.; Liu, Y. Multi-Task Deep Convolutional Neural Network for Weed Detection and Navigation Path Extraction. Comput. Electron. Agric. 2025, 229, 109776. [Google Scholar] [CrossRef]
  124. Patidar, P.; Soni, P. A Rapid Estimation of Intra-Row Weed Density Using an Integrated CRM, BTSORT and HSV Model across Entire Video Stream of Chilli Crop Canopies. Crop Prot. 2025, 189, 107039. [Google Scholar] [CrossRef]
  125. Song, Y.; Xu, F.; Yao, Q.; Liu, J.; Yang, S. Navigation Algorithm Based on Semantic Segmentation in Wheat Fields Using an RGB-D Camera. Inf. Process. Agric. 2023, 10, 475–490. [Google Scholar] [CrossRef]
  126. Costa, I.F.D.; Leite, A.C.; Caarls, W. Data Set Diversity in Crop Row Detection Based on CNN Models for Autonomous Robot Navigation. J. Field Robot. 2025, 42, 525–538. [Google Scholar] [CrossRef]
  127. Vrochidou, E.; Oustadakis, D.; Kefalas, A.; Papakostas, G.A. Computer Vision in Self-Steering Tractors. Machines 2022, 10, 129. [Google Scholar] [CrossRef]
  128. Zhao, R.; Yuan, X.; Yang, Z.; Zhang, L. Image-Based Crop Row Detection Utilizing the Hough Transform and DBSCAN Clustering Analysis. IET Image Process. 2024, 18, 1161–1177. [Google Scholar] [CrossRef]
  129. Pang, Y.; Shi, Y.; Gao, S.; Jiang, F.; Veeranampalayam-Sivakumar, A.N.; Thompson, L.; Luck, J.; Liu, C. Improved crop row detection with deep neural network for early-season maize stand count in UAV imagery. Comput. Electron. Agric. 2020, 178, 105766. [Google Scholar] [CrossRef]
  130. Liang, X.; Chen, B.; Wei, C.; Zhang, X. Inter-Row Navigation Line Detection for Cotton with Broken Rows. Plant Methods 2022, 18, 90. [Google Scholar] [CrossRef] [PubMed]
  131. Zhang, C.; Ge, X.; Zheng, Z.; Wu, X.; Wang, W.; Chen, L. A Plant Unit Relates to Missing Seeding Detection and Reseeding for Maize Precision Seeding. Agriculture 2022, 12, 1634. [Google Scholar] [CrossRef]
  132. Cox, J.; Tsagkopoulos, N.; Rozsypálek, Z.; Krajník, T.; Sklar, E.; Hanheide, M. Visual Teach and Generalise (VTAG)—Exploiting Perceptual Aliasing for Scalable Autonomous Robotic Navigation in Horticultural Environments. Comput. Electron. Agric. 2023, 212, 108054. [Google Scholar] [CrossRef]
  133. Calera, E.S.; Oliveira, G.C.D.; Araujo, G.L.; Filho, J.I.F.; Toschi, L.; Hernandes, A.C.; Becker, M. Under-Canopy Navigation for an Agricultural Rover Based on Image Data. J. Intell. Robot. Syst. 2023, 108, 29. [Google Scholar] [CrossRef]
  134. Torres-Sánchez, J.; Escolà, A.; De Castro, A.I.; López-Granados, F.; Rosell-Polo, J.R.; Sebé, F.; Peña, J.M. Mobile Terrestrial Laser Scanner vs. UAV Photogrammetry to Estimate Woody Crop Canopy Parameters–Part 2: Comparison for Different Crops and Training Systems. Comput. Electron. Agric. 2023, 212, 108083. [Google Scholar] [CrossRef]
  135. Lai Lap Hong, B.; Bin Mohd Izhar, M.A.; Ahmad, N.B. Improved Monte Carlo Localization for Agricultural Mobile Robots with the Normal Distributions Transform. Int. J. Adv. Comput. Sci. Appl. 2025, 16, 1043. [Google Scholar] [CrossRef]
  136. De Silva, R.; Cielniak, G.; Wang, G.; Gao, J. Deep Learning-Based Crop Row Detection for Infield Navigation of Agri-Robots. J. Field Robot. 2024, 41, 2299–2321. [Google Scholar] [CrossRef]
  137. Guo, Z.; Geng, Y.; Wang, C.; Xue, Y.; Sun, D.; Lou, Z.; Quan, L. InstaCropNet: An Efficient Unet-Based Architecture for Precise Crop Row Detection in Agricultural Applications. Artif. Intell. Agric. 2024, 12, 85–96. [Google Scholar] [CrossRef]
  138. Jayathunga, S.; Pearse, G.D.; Watt, M.S. Unsupervised Methodology for Large-Scale Tree Seedling Mapping in Diverse Forestry Settings Using UAV-Based RGB Imagery. Remote Sens. 2023, 15, 5276. [Google Scholar] [CrossRef]
  139. Rana, S.; Crimaldi, M.; Barretta, D.; Carillo, P.; Cirillo, V.; Maggio, A.; Gerbino, S. GobhiSet: Dataset of Raw, Manually, and Automatically Annotated RGB Images across Phenology of Brassica oleracea var. Botrytis. Data Brief 2024, 54, 110506. [Google Scholar] [CrossRef]
  140. Roggiolani, G.; Rückin, J.; Popović, M.; Behley, J.; Stachniss, C. Unsupervised Semantic Label Generation in Agricultural Fields. Front. Robot. AI 2025, 12, 1548143. [Google Scholar] [CrossRef]
  141. Lin, H.; Lu, Y.; Ding, R.; Gou, Y.; Yang, F. Detection of Wheat Seedling Lines in the Complex Environment via Deep Learning. Int. J. Agric. Biol. Eng. 2024, 17, 255–265. [Google Scholar] [CrossRef]
  142. Feng, A.; Vong, C.N.; Zhou, J.; Conway, L.S.; Zhou, J.; Vories, E.D.; Kitchen, N.R. Developing an Image Processing Pipeline to Improve the Position Accuracy of Single UAV Images. Comput. Electron. Agric. 2023, 206, 107650. [Google Scholar] [CrossRef]
  143. Sampurno, R.M.; Liu, Z.; Abeyrathna, R.R.D.; Ahamed, T. Intrarow Uncut Weed Detection Using You-Only-Look-Once Instance Segmentation for Orchard Plantations. Sensors 2024, 24, 893. [Google Scholar] [CrossRef]
  144. Meng, J.; Xian, W.; Li, F.; Li, Z.; Li, J. A Monocular Camera-Based Algorithm for Sugar Beet Crop Row Extraction. Eng. Agrícola 2024, 44, e20240034. [Google Scholar] [CrossRef]
  145. Gong, H.; Zhuang, W. An Improved Method for Extracting Inter-Row Navigation Lines in Nighttime Maize Crops Using YOLOv7-Tiny. IEEE Access 2024, 12, 27444–27455. [Google Scholar] [CrossRef]
  146. Shi, Y.; Xu, R.; Qi, Z. MSNet: A Novel Deep Learning Framework for Efficient Missing Seedling Detection in Maize Fields. Appl. Artif. Intell. 2025, 39, 2469372. [Google Scholar] [CrossRef]
  147. Xue, L.; Xing, M.; Lyu, H. Improved Early-Stage Maize Row Detection Using Unmanned Aerial Vehicle Imagery. ISPRS Int. J. Geo-Inf. 2024, 13, 376. [Google Scholar] [CrossRef]
  148. Saha, S.; Noguchi, N. Smart Vineyard Row Navigation: A Machine Vision Approach Leveraging YOLOv8. Comput. Electron. Agric. 2025, 229, 109839. [Google Scholar] [CrossRef]
  149. Yang, M.; Huang, C.; Li, Z.; Shao, Y.; Yuan, J.; Yang, W.; Song, P. Autonomous Navigation Method Based on RGB-D Camera for a Crop Phenotyping Robot. J. Field Robot. 2024, 41, 2663–2675. [Google Scholar] [CrossRef]
  150. Sun, T.; Le, F.; Cai, C.; Jin, Y.; Xue, X.; Cui, L. Soybean–Corn Seedling Crop Row Detection for Agricultural Autonomous Navigation Based on GD-YOLOv10n-Seg. Agriculture 2025, 15, 796. [Google Scholar] [CrossRef]
  151. Gong, H.; Wang, X.; Zhuang, W. Research on Real-Time Detection of Maize Seedling Navigation Line Based on Improved YOLOv5s Lightweighting Technology. Agriculture 2024, 14, 124. [Google Scholar] [CrossRef]
  152. Zheng, K.; Zhao, X.; Han, C.; He, Y.; Zhai, C.; Zhao, C. Design and Experiment of an Automatic Row-Oriented Spraying System Based on Machine Vision for Early-Stage Maize Corps. Agriculture 2023, 13, 691. [Google Scholar] [CrossRef]
  153. Zhang, T.; Zhou, J.; Liu, W.; Yue, R.; Shi, J.; Zhou, C.; Hu, J. SN-CNN: A Lightweight and Accurate Line Extraction Algorithm for Seedling Navigation in Ridge-Planted Vegetables. Agriculture 2024, 14, 1446. [Google Scholar] [CrossRef]
  154. Geng, A.; Hu, X.; Liu, J.; Mei, Z.; Zhang, Z.; Yu, W. Development and Testing of Automatic Row Alignment System for Corn Harvesters. Appl. Sci. 2022, 12, 6221. [Google Scholar] [CrossRef]
  155. Hruska, A.; Hamouz, P. Verification of a Machine Learning Model for Weed Detection in Maize (Zea mays) Using Infrared Imaging. Plant Prot. Sci. 2023, 59, 292–297. [Google Scholar] [CrossRef]
  156. Yang, Z.; Yang, Y.; Li, C.; Zhou, Y.; Zhang, X.; Yu, Y.; Liu, D. Tasseled Crop Rows Detection Based on Micro-Region of Interest and Logarithmic Transformation. Front. Plant Sci. 2022, 13, 916474. [Google Scholar] [CrossRef]
  157. Guo, Z.; Quan, L.; Sun, D.; Lou, Z.; Geng, Y.; Chen, T.; Wang, J. Efficient Crop Row Detection Using Transformer-Based Parameter Prediction. Biosyst. Eng. 2024, 246, 13–25. [Google Scholar] [CrossRef]
  158. Yu, T.; Chen, J.; Gui, Z.; Jia, J.; Li, Y.; Yu, C.; Wu, C. Multi-Scale Cross-Domain Augmentation of Tea Datasets via Enhanced Cycle Adversarial Networks. Agriculture 2025, 15, 1739. [Google Scholar] [CrossRef]
  159. Deng, L.; Miao, Z.; Zhao, X.; Yang, S.; Gao, Y.; Zhai, C.; Zhao, C. HAD-YOLO: An Accurate and Effective Weed Detection Model Based on Improved YOLOV5 Network. Agronomy 2025, 15, 57. [Google Scholar] [CrossRef]
  160. Zuo, Z.; Gao, S.; Peng, H.; Xue, Y.; Han, L.; Ma, G.; Mao, H. Lightweight detection of broccoli heads in complex field environments based on LBDC-YOLO. Agronomy 2024, 14, 2359. [Google Scholar] [CrossRef]
  161. Zhang, Y.W.; Liu, M.N.; Chen, D.; Xu, X.M.; Lu, J.; Lai, H.R.; Yin, Y.X. Development and Testing of Row-Controlled Weeding Intelligent Robot for Corn. J. Field Robot. 2025, 42, 850–866. [Google Scholar] [CrossRef]
  162. Xiang, M.; Gao, X.; Wang, G.; Qi, J.; Qu, M.; Ma, Z.; Song, K. An Application Oriented All-Round Intelligent Weeding Machine with Enhanced YOLOv5. Biosyst. Eng. 2024, 248, 269–282. [Google Scholar] [CrossRef]
  163. Wang, B.; Du, X.; Wang, Y.; Mao, H. Multi-Machine Collaboration Realization Conditions and Precise and Efficient Production Mode of Intelligent Agricultural Machinery. Int. J. Agric. Biol. Eng. 2024, 17, 27–36. [Google Scholar] [CrossRef]
  164. Ulloa, C.C.; Krus, A.; Barrientos, A.; del Cerro, J.; Valero, C. Robotic Fertilization in Strip Cropping Using a CNN Vegetables Detection-Characterization Method. Comput. Electron. Agric. 2022, 193, 106684. [Google Scholar] [CrossRef]
  165. Yan, Z.; Zhao, Y.; Luo, W.; Ding, X.; Li, K.; He, Z.; Cui, Y. Machine Vision-Based Tomato Plug Tray Missed Seeding Detection and Empty Cell Replanting. Comput. Electron. Agric. 2023, 208, 107800. [Google Scholar] [CrossRef]
  166. Wang, Y.; Li, T.; Chen, T.; Zhang, X.; Taha, M.F.; Yang, N.; Shi, Q. Cucumber Downy Mildew Disease Prediction Using a CNN-LSTM Approach. Agriculture 2024, 14, 1155. [Google Scholar] [CrossRef]
  167. Wang, S.; Su, D.; Jiang, Y.; Tan, Y.; Qiao, Y.; Yang, S.; Hu, N. Fusing Vegetation Index and Ridge Segmentation for Robust Vision Based Autonomous Navigation of Agricultural Robots in Vegetable Farms. Comput. Electron. Agric. 2023, 213, 108235. [Google Scholar] [CrossRef]
  168. Yang, Y.; Xie, H.; Zhang, K.; Wang, Y.; Li, Y.; Zhou, J.; Xu, L. Design, Development, Integration, and Field Evaluation of a Ridge-Planting Strawberry Harvesting Robot. Agriculture 2024, 14, 2126. [Google Scholar] [CrossRef]
  169. Jin, X.; Tang, L.; Li, R.; Zhao, B.; Ji, J.; Ma, Y. Edge Recognition and Reduced Transplantation Loss of Leafy Vegetable Seedlings with Intel RealsSense D415 Depth Camera. Comput. Electron. Agric. 2022, 198, 107030. [Google Scholar] [CrossRef]
  170. Huang, P.; Huang, P.; Wang, Z.; Wu, X.; Liu, J.; Zhu, L. Deep-Learning-Based Trunk Perception with Depth Estimation and DWA for Robust Navigation of Robotics in Orchards. Agronomy 2023, 13, 1084. [Google Scholar] [CrossRef]
  171. Zhang, H.; Meng, Z.; Wen, S.; Liu, G.; Hu, G.; Chen, J.; Zhang, S. Design and Experiment of Active Obstacle Avoidance Control System for Grapevine Interplant Weeding Based on GNSS. Smart Agric. Technol. 2025, 10, 100781. [Google Scholar] [CrossRef]
  172. Devanna, R.P.; Romeo, L.; Reina, G.; Milella, A. Yield Estimation in Precision Viticulture by Combining Deep Segmentation and Depth-Based Clustering. Comput. Electron. Agric. 2025, 232, 110025. [Google Scholar] [CrossRef]
  173. Xu, Z.; Liu, J.; Wang, J.; Cai, L.; Jin, Y.; Zhao, S. Realtime Picking Point Decision Algorithm of Trellis Grape for High-Speed Robotic Cut-and-Catch Harvesting. Agronomy 2023, 13, 1618. [Google Scholar] [CrossRef]
  174. Salas, B.; Salcedo, R.; Garcia-Ruiz, F.; Gil, E. Design, Implementation and Validation of a Sensor-Based Precise Airblast Sprayer to Improve Pesticide Applications in Orchards. Precis. Agric. 2024, 25, 865–888. [Google Scholar] [CrossRef]
  175. Nakaguchi, V.M.; Abeyrathna, R.R.D.; Liu, Z.; Noguchi, R.; Ahamed, T. Development of a Machine Stereo Vision-Based Autonomous Navigation System for Orchard Speed Sprayers. Comput. Electron. Agric. 2024, 227, 109669. [Google Scholar] [CrossRef]
  176. Khan, Z.; Liu, H.; Shen, Y.; Zeng, X. Deep learning improved YOLOv8 algorithm: Real-time precise instance segmentation of crown region orchard canopies in natural environment. Comput. Electron. Agric. 2024, 224, 109168. [Google Scholar] [CrossRef]
  177. Zhang, L.; Li, M.; Zhu, X.; Chen, Y.; Huang, J.; Wang, Z.; Fang, K. Navigation Path Recognition between Rows of Fruit Trees Based on Semantic Segmentation. Comput. Electron. Agric. 2024, 216, 108511. [Google Scholar] [CrossRef]
  178. Xie, X.; Li, Y.; Zhao, L.; Wang, S.; Han, X. Method for the Fruit Tree Recognition and Navigation in Complex Environment of an Agricultural Robot. Int. J. Agric. Biol. Eng. 2024, 17, 221–229. [Google Scholar] [CrossRef]
  179. Xu, S.; Rai, R. Vision-Based Autonomous Navigation Stack for Tractors Operating in Peach Orchards. Comput. Electron. Agric. 2024, 217, 108558. [Google Scholar] [CrossRef]
  180. Yang, Z.; Ouyang, L.; Zhang, Z.; Duan, J.; Yu, J.; Wang, H. Visual Navigation Path Extraction of Orchard Hard Pavement Based on Scanning Method and Neural Network. Comput. Electron. Agric. 2022, 197, 106964. [Google Scholar] [CrossRef]
  181. Jia, W.; Tai, K.; Dong, X.; Ou, M.; Wang, X. Design of and Experimentation on an Intelligent Intra-Row Obstacle Avoidance and Weeding Machine for Orchards. Agriculture 2025, 15, 947. [Google Scholar] [CrossRef]
  182. Syed, T.N.; Zhou, J.; Lakhiar, I.A.; Marinello, F.; Gemechu, T.T.; Rottok, L.T.; Jiang, Z. Enhancing Autonomous Orchard Navigation: A Real-Time Convolutional Neural Network-Based Obstacle Classification System for Distinguishing ‘Real’ and ‘Fake’ Obstacles in Agricultural Robotics. Agriculture 2025, 15, 827. [Google Scholar] [CrossRef]
  183. Cao, G.; Zhang, B.; Li, Y.; Wang, Z.; Diao, Z.; Zhu, Q.; Liang, Z. Environmental Mapping and Path Planning for Robots in Orchard Based on Traversability Analysis, Improved LeGO-LOAM and RRT* Algorithms. Comput. Electron. Agric. 2025, 230, 109889. [Google Scholar] [CrossRef]
  184. Scalisi, A.; McClymont, L.; Underwood, J.; Morton, P.; Scheding, S.; Goodwin, I. Reliability of a Commercial Platform for Estimating Flower Cluster and Fruit Number, Yield, Tree Geometry and Light Interception in Apple Trees under Different Rootstocks and Row Orientations. Comput. Electron. Agric. 2021, 191, 106519. [Google Scholar] [CrossRef]
  185. Mao, W.; Murengami, B.; Jiang, H.; Li, R.; He, L.; Fu, L. UAV-Based High-Throughput Phenotyping to Segment Individual Apple Tree Row Based on Geometrical Features of Poles and Colored Point Cloud. J. ASABE 2024, 67, 1231–1240. [Google Scholar] [CrossRef]
  186. Krklješ, D.; Kitić, G.; Panić, M.; Petes, C.; Filipović, V.; Stefanović, D.; Marko, O. Agrobot Gari, a Multimodal Robotic Solution for Blueberry Production Automation. Comput. Electron. Agric. 2025, 237, 110626. [Google Scholar] [CrossRef]
  187. Kim, K.; Deb, A.; Cappelleri, D.J. P-AgNav: Range View-Based Autonomous Navigation System for Cornfields. IEEE Robot. Autom. Lett. 2025, 10, 366–3373. [Google Scholar] [CrossRef]
  188. Li, H.; Lai, X.; Mo, Y.; He, D.; Wu, T. Pixel-Wise Navigation Line Extraction of Cross-Growth-Stage Seedlings in Complex Sugarcane Fields and Extension to Corn and Rice. Front. Plant Sci. 2025, 15, 1499896. [Google Scholar] [CrossRef]
  189. Pradhan, N.C.; Sahoo, P.K.; Kushwaha, D.K.; Mani, I.; Srivastava, A.; Sagar, A.; Makwana, Y. A Novel Approach for Development and Evaluation of LiDAR Navigated Electronic Maize Seeding System Using Check Row Quality Index. Sensors 2021, 21, 5934. [Google Scholar] [CrossRef] [PubMed]
  190. Chen, Z.; Cai, Y.; Liu, Y.; Liang, Z.; Chen, H.; Ma, R.; Qi, L. Towards End-to-End Rice Row Detection in Paddy Fields Exploiting Two-Pathway Instance Segmentation. Comput. Electron. Agric. 2025, 231, 109963. [Google Scholar] [CrossRef]
  191. Wang, Y.; Fu, Q.; Ma, Z.; Tian, X.; Ji, Z.; Yuan, W.; Su, Z. YOLOv5-AC: A Method of Uncrewed Rice Transplanter Working Quality Detection. Agronomy 2023, 13, 2279. [Google Scholar] [CrossRef]
  192. Wu, S.; Ma, X.; Jin, Y.; Yang, J.; Zhang, W.; Zhang, H.; Qi, L. A Novel Method for Detecting Missing Seedlings Based on UAV Images and Rice Transplanter Operation Information. Comput. Electron. Agric. 2025, 229, 109789. [Google Scholar] [CrossRef]
  193. Fu, D.; Chen, Z.; Yao, Z.; Liang, Z.; Cai, Y.; Liu, C.; Qi, L. Vision-Based Trajectory Generation and Tracking Algorithm for Maneuvering of a Paddy Field Robot. Comput. Electron. Agric. 2024, 226, 109368. [Google Scholar] [CrossRef]
  194. Guan, X.; Shi, L.; Ge, H.; Ding, Y.; Nie, S. Development, Design, and Improvement of an Intelligent Harvesting System for Aquatic Vegetable Brasenia schreberi. Agronomy 2025, 15, 1451. [Google Scholar] [CrossRef]
  195. Wang, S.; Yu, S.; Zhang, W.; Wang, X.; Li, J. The Seedling Line Extraction of Automatic Weeding Machinery in Paddy Field. Comput. Electron. Agric. 2023, 205, 107648. [Google Scholar] [CrossRef]
  196. Liu, Q.; Zhao, J. MA-Res U-Net: Design of Soybean Navigation System with Improved U-Net Model. Phyton 2024, 93, 2663. [Google Scholar] [CrossRef]
  197. Tsiakas, K.; Papadimitriou, A.; Pechlivani, E.M.; Giakoumis, D.; Frangakis, N.; Gasteratos, A.; Tzovaras, D. An Autonomous Navigation Framework for Holonomic Mobile Robots in Confined Agricultural Environments. Robotics 2023, 12, 146. [Google Scholar] [CrossRef]
  198. Huang, S.; Pan, K.; Wang, S.; Zhu, Y.; Zhang, Q.; Su, X.; Yu, H. Design and Test of an Automatic Navigation Fruit-Picking Platform. Agriculture 2023, 13, 882. [Google Scholar] [CrossRef]
  199. Ferro, M.V.; Sørensen, C.G.; Catania, P. Comparison of Different Computer Vision Methods for Vineyard Canopy Detection Using UAV Multispectral Images. Comput. Electron. Agric. 2024, 225, 109277. [Google Scholar] [CrossRef]
  200. Tan, Y.; Su, W.; Zhao, L.; Lai, Q.; Wang, C.; Jiang, J.; Li, P. Navigation Path Extraction for Inter-Row Robots in Panax notoginseng Shade House Based on Im-YOLOv5s. Front. Plant Sci. 2023, 14, 1246717. [Google Scholar] [CrossRef]
  201. Gao, X.; Wang, G.; Qi, J.; Wang, Q.; Xiang, M.; Song, K.; Zhou, Z. Improved YOLO v7 for Sustainable Agriculture Significantly Improves Precision Rate for Chinese Cabbage (Brassica pekinensis Rupr.) Seedling Belt (CCSB) Detection. Sustainability 2024, 16, 4759. [Google Scholar] [CrossRef]
  202. Di Gennaro, S.F.; Vannini, G.L.; Berton, A.; Dainelli, R.; Toscano, P.; Matese, A. Missing Plant Detection in Vineyards Using UAV Angled RGB Imagery Acquired in Dormant Period. Drones 2023, 7, 349. [Google Scholar] [CrossRef]
  203. Lu, Z.; Han, B.; Dong, L.; Zhang, J. COTTON-YOLO: Enhancing Cotton Boll Detection and Counting in Complex Environmental Conditions Using an Advanced YOLO Model. Appl. Sci. 2024, 14, 6650. [Google Scholar] [CrossRef]
Figure 1. Crop row navigation for soybean seedlings [19].
Figure 2. Traditional visual methods: (a) obstacle detection algorithm based on YOLO [36]; (b) relationship between the crop row and the motion wheel of the farm machine [37]; (c) crop root rows detection based on crop canopy image [38]; (d) crop row detection at different growth stages [39].
Figure 3. Two-dimensional LiDAR-based system for canopy sensing [47]. The overlaid lines indicate the detected positions of the tree rows.
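To make the idea in Figure 3 concrete, the snippet below is a minimal, illustrative sketch (not the system of [47]) of how straight row lines can be fitted to a single 2D LiDAR scan: the returns are split into a left and a right side of the robot and each side is fitted with a first-order least-squares line, assuming the scan has already been converted to Cartesian coordinates in the robot frame. In practice, the midline between the two fitted lines is often taken as the navigation reference.

```python
# Minimal sketch (illustrative assumption, not a cited implementation):
# fit one straight tree-row line per side of the robot from a 2D LiDAR scan.
import numpy as np

def fit_row_lines(points, y_split=0.0, max_range=8.0):
    """Fit left/right row lines y = m*x + b by least squares.

    points : (N, 2) array of [x, y] LiDAR returns in the robot frame,
             x forward (m), y lateral (m).
    Returns a list [(m_left, b_left), (m_right, b_right)]; an entry is None
    if too few points were seen on that side.
    """
    pts = points[np.linalg.norm(points, axis=1) < max_range]  # drop far clutter
    sides = (pts[pts[:, 1] > y_split], pts[pts[:, 1] <= y_split])
    lines = []
    for side in sides:
        if len(side) < 2:
            lines.append(None)                                # row not visible
            continue
        m, b = np.polyfit(side[:, 0], side[:, 1], deg=1)      # first-order fit
        lines.append((m, b))
    return lines
```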
Table 1. Comparison of different methods for crop row detection technology.

Technical Type | Method | Application Scenario | Characteristic | Ref.
Traditional visual | Machine vision | Monitoring of rice seedling rows | Adapts to seedling row gaps and missing seedlings | [78]
Traditional visual | Projection transformation | Agricultural robot navigation | Good robustness in scenes with heavy weed cover | [79]
Traditional visual | KNN-RGB gradient filtering | Sugarcane planting row detection | Adapts to crops at the 40-day and 80-day growth stages | [80]
Traditional visual | Template matching + digital twin | Measurement of row deviation | Adapts to scenarios with missing seedlings and dense weeds | [81]
Deep learning | YOLOv8s-EFF + geometric algorithm | Tobacco row spacing extraction | Adapts to irregular plots and dense planting | [82]
Deep learning | Deep semantic segmentation | Navigation in high-density-canopy orchards | Resists canopy occlusion; enables path planning without GPS | [83]
Deep learning | Automated annotation + VGG16 | Corn row plant counting | Adapts to different land parcels | [84]
Deep learning | Skeleton segmentation | Variable field conditions | Resists interference from dense weeds and discontinuous crop rows | [85]
Deep learning | Mask R-CNN | UAV-based corn row plant recognition | Detects seeding errors such as missed and repeated sowing | [86]
Deep learning | Lightweight U-Net segmentation | Crop row consistency assessment | Adapts to irregular plots and dense planting | [87]
LiDAR-based technology | Self-supervised deep learning | Canopy navigation | Copes with changes in brightness | [88]
LiDAR-based technology | 2D LiDAR SLAM | Orchard environment positioning | Adapts to different terrain roughness and orchard sizes | [89]
LiDAR-based technology | LiDAR high-bed cultivation navigation | High-bed strawberry cultivation environments | Operates without relying on path planning | [90]
LiDAR-based technology | LiDAR-based SLAM semantic mapping | Autonomous orchard robot navigation | Integrates terrain analysis; supports phenotype monitoring and harvesting | [91]
LiDAR-based technology | Rolling 2D LiDAR navigation line extraction | Environments with thin trunks and dense occluding foliage | Overcomes conventional LiDAR's narrow vertical FOV and sparse point clouds | [92]
LiDAR-based technology | Hierarchically coupled LiDAR-inertial odometry | Multiple agricultural scenarios | Adapts to different LiDARs and to dense or open environments | [93]
LiDAR-based technology | RTMR-LOAM | 3D reconstruction of maize crops | Measures morphological parameters of high-density crops | [94]
LiDAR-based technology | Adaptive LiDAR odometry and mapping | Autonomous navigation in unmanned farms | Resists motion distortion and dynamic-object interference | [95]
Multi-sensor fusion | Camera-LiDAR-IMU fusion | Inter-row navigation line extraction | High precision; copes with interference from complex farmland environments | [96]
Multi-sensor fusion | LiDAR and multi-sensor fusion (EKF) | Orchard spraying robot navigation | Supports obstacle avoidance and multitasking | [97]
Multi-sensor fusion | Fusion of instance segmentation and depth prediction | Positioning of rolled stubble rows | Adapts to different monocular cameras, reducing cross-device errors | [98]
Multi-sensor fusion | Fusion of fluorescence imaging and computer vision | Soybean seedling-stage positioning | Stable fluorescence signal, unaffected by early-growth interference | [99]
Others | Unsupervised graph representation | Agricultural scenarios lacking labeled data | Handles weed clusters and other inconsistent structures | [100]
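As a concrete illustration of the "traditional visual" category in Table 1, the following minimal sketch combines an excess-green (ExG) vegetation index, Otsu thresholding, and a probabilistic Hough transform. It assumes an OpenCV/NumPy environment and illustrative parameter values; it is not the implementation of any specific method cited above.

```python
# Minimal sketch of a classical crop-row detection pipeline:
# excess-green index -> Otsu threshold -> probabilistic Hough transform.
import cv2
import numpy as np

def detect_rows(bgr_image):
    # Compute the excess-green index ExG = 2G - R - B on normalized channels.
    b, g, r = cv2.split(bgr_image.astype(np.float32) / 255.0)
    exg = 2.0 * g - r - b
    exg8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Separate vegetation from soil with Otsu thresholding, then denoise.
    _, mask = cv2.threshold(exg8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # Extract candidate row lines; thresholds here are illustrative only.
    lines = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=25)
    return [] if lines is None else lines.reshape(-1, 4)  # [x1, y1, x2, y2]
```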
Table 2. Comparison of performance evaluation indicators for crop row detection.

Indicator Category | Specific Indicator | Method | Performance Parameters | Ref.
Precision | IoU of crops | Deep learning + row-structure constraints | Crop IoU = 88.6% | [140]
Precision | Angle and distance errors | Improved YOLOv3 | Angle error = 0.75°, distance error = 10.84 pix | [141]
Precision | Position error | UAV image processing pipeline | Cotton = 0.32 ± 0.21 m, maize = 0.57 ± 0.28 m | [142]
Precision | Segmentation accuracy | YOLOv8n-seg | Superior to YOLOv5n-seg | [143]
Efficiency | Processing time | Sugar beet row extraction algorithm | Single frame = 11.751 ms | [144]
Efficiency | Detection speed | Improved YOLOv7-Tiny | Detection speed = 32.4 fps | [145]
Efficiency | Inference speed | SeedNet/PeakNet | SeedNet = 105 fps, PeakNet = 2295 fps | [146]
Robustness | Angular deviation | ROI-based method | Average angle deviation = 0.456–0.789° | [147]
Robustness | Lateral error | YOLOv8m-vine-classes | Lateral RMSE ≈ 5 cm | [148]
Robustness | Walking deviation | PP-LiteSeg semantic segmentation | Average = 1.33 cm, max = 3.82 cm | [149]
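The accuracy indicators in Table 2 are typically obtained by comparing a detected row line against a manually annotated reference line. The sketch below shows one simple, assumed formulation of the angle error and the lateral deviation; the cited studies may define these quantities differently.

```python
# Minimal sketch (illustrative assumption, not from any cited paper) of two
# common crop-row accuracy indicators for a detected vs. a reference line.
import numpy as np

def angle_error_deg(line_det, line_ref):
    """Absolute orientation difference in degrees.
    Each line is ((x1, y1), (x2, y2)) in image or ground coordinates."""
    def angle(line):
        (x1, y1), (x2, y2) = line
        return np.arctan2(y2 - y1, x2 - x1)
    diff = np.abs(angle(line_det) - angle(line_ref))
    diff = min(diff, np.pi - diff)        # row orientation has no direction
    return np.degrees(diff)

def lateral_deviation(line_det, line_ref, y_eval):
    """Horizontal offset between the two lines at image row y = y_eval.
    Assumes neither line is horizontal (y2 != y1)."""
    def x_at(line, y):
        (x1, y1), (x2, y2) = line
        t = (y - y1) / (y2 - y1)
        return x1 + t * (x2 - x1)
    return abs(x_at(line_det, y_eval) - x_at(line_ref, y_eval))
```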
Table 3. Comparison of adaptability of crop row detection methods in different scenarios.

Scenario | Purpose | Method | Performance Parameters | Ref.
Soybean field | Light and shadow changes, broken rows, dense weeds | MA-Res U-Net | Deviation 3°, mIoU higher than traditional U-Net | [196]
Greenhouse | Crop inspection in a closed environment | Stereo camera + LiDAR + semantic segmentation | Autonomous inspection, suited to confined spaces | [197]
Dwarfing orchard | Navigation of a high-density planting and harvesting platform | BeiDou navigation + Stanley algorithm | Maximum straight-line lateral deviation 101.5 mm | [198]
Vineyard | Canopy parameter monitoring | Mask R-CNN / U-Net semantic segmentation | OA, F1, and IoU higher than OBIA | [199]
Panax notoginseng shade house | Recognition under sunshade-net shadows | Im-YOLOv5s + least-squares fitting | Maximum deviation 1.64°, mAP 94.9% | [200]
Vegetable field | Image recognition of Chinese cabbage at the seedling stage | Improved YOLOv7 | Fitting accuracy = 94.2%, identification rate = 91.3% | [201]
Vineyard | Detection of missing plants during the dormant period | Angled RGB imagery + point cloud spatial analysis | Overall accuracy = 92.72% | [202]
Cotton field | Cotton boll detection and counting | COTTON-YOLO (YOLOv8n) | AP50 improved by 10.3% | [203]
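Once a navigation line has been extracted in any of the scenarios in Table 3, a path-tracking law converts the detection result into a steering command; the dwarfing-orchard platform in [198], for example, uses the Stanley algorithm. The sketch below is a generic, illustrative Stanley steering law with assumed gain and steering-limit values, not the controller parameters of that study.

```python
# Minimal sketch of the Stanley steering law for row following.
# k and max_steer are illustrative assumptions.
import math

def stanley_steering(cross_track_err, heading_err, speed, k=0.5, max_steer=0.52):
    """Return a steering angle (rad) from lateral and heading errors.

    cross_track_err : signed lateral offset from the navigation line (m)
    heading_err     : heading difference to the line direction (rad)
    speed           : forward speed (m/s); a small floor avoids division issues
    """
    steer = heading_err + math.atan2(k * cross_track_err, max(speed, 1e-3))
    return max(-max_steer, min(max_steer, steer))  # clamp to steering limits
```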