Article

Visual Navigation Line Detection and Extraction for Hybrid Rapeseed Seed Production Parent Rows

1 College of Mechanical and Electrical Engineering, Hunan Agricultural University, Changsha 410128, China
2 Sunward Intelligent Equipment Co., Ltd., Changsha 410128, China
* Author to whom correspondence should be addressed.
Agriculture 2026, 16(4), 454; https://doi.org/10.3390/agriculture16040454
Submission received: 14 January 2026 / Revised: 10 February 2026 / Accepted: 11 February 2026 / Published: 14 February 2026

Abstract

We aim to address the insufficient robustness of navigation line detection for paternal rows in rapeseed seed production under complex field scenarios, together with the difficulty existing models face in balancing precision, real-time performance, and resource consumption. Taking YOLOv8n-seg as the baseline, we first introduced the ADown module to mitigate information loss during feature downsampling and to enhance computational efficiency. Subsequently, the DySample module was employed to strengthen target feature representation and improve object discrimination in complex scenarios. Finally, the C2f module was replaced with C2f_FB to optimise feature fusion and reinforce multi-scale feature integration. Performance was evaluated through comparative experiments, ablation studies, and scenario testing. The resulting SegNav-YOLOv8n model achieves an average precision of 99.2%, an mAP50-95 of 84.5%, a frame rate of 90.21 frames per second, and 2.6 million parameters, demonstrating superior segmentation performance in complex scenarios. SegNav-YOLOv8n balances performance and resource requirements, validating the effectiveness of the improvements and providing reliable technical support for navigating agricultural machinery in rapeseed seed production.

1. Introduction

Rapeseed is a vital oilseed crop, and the quality of its seed production directly impacts agricultural productivity and seed industry security. Field management, as a critical component of the seed production process, encompasses multiple aspects, including plant regulation, weed and inferior plant removal, and paternal parent elimination [1,2]. Traditional manual methods are not only inefficient and labour-intensive but also prone to inconsistent results due to subjective judgement errors. In field management, male plant removal primarily relies on chemical emasculation, a method susceptible to environmental conditions and posing risks of pesticide residues [3,4,5]. Integrating field management machinery with intelligent agricultural equipment to establish a technology system for intelligent recognition and navigation line extraction holds significant importance for advancing intelligent, high-precision field management techniques [6,7]. This has become an urgent requirement for enhancing the modernisation level of rapeseed seed production [8].
With the rapid advancement of machine vision and artificial intelligence technologies, navigation methods based on visual perception have become a significant research direction in agricultural automation [9]. In the field of crop row navigation technology research, early approaches primarily relied on conventional image processing algorithms. For instance, Gao Guoqin et al. [10] integrated the K-means algorithm with the HSV colour model to cluster image pixels into two categories, thereby directly separating target path regions and mitigating the impact of lighting conditions. This method demonstrated faster processing speeds compared to the HSV colour model alone. JIANG et al. [11] converted deep images of farmland into point clouds, demonstrating superior performance in handling environmental factors such as illumination, shadows, and weeds compared to colour space-based methods. GONG Jinliang et al. [12] employed 2G-B-R and morphological segmentation techniques for maize root images, enhancing the vertical projection algorithm for feature point extraction. This approach exhibited greater adaptability and real-time capability than traditional peak point methods, enabling real-time navigation paths for agricultural machinery in maize fields. Chen Jiqing et al. [13] employed greyscale conversion of vegetation colour indices, Gabor filtering, and PCA dimensionality reduction for feature extraction. Combining K-means clustering with the central axis algorithm, they generated navigation paths demonstrating higher recognition precision and faster processing speeds than the 2G-B-R approach. Whilst computationally efficient, such approaches fundamentally rely on superficial features like colour and texture, lacking an understanding of image semantic content. Consequently, their adaptability is limited in complex scenarios where crop-background contrast is low.
With the rapid advancement of deep learning, various deep learning algorithms—particularly convolutional neural networks—have been extensively applied in the agricultural sector, yielding remarkable results and providing robust technological support for enhancing agricultural productivity [14]. Yu Tan et al. [15] introduced MobileNetv3 and ECANet to enhance the YOLOv5s algorithm for navigating between Panax notoginseng crop rows, thereby improving root detection. Su Tong et al. [16] addressed the challenge of detecting navigation lines in mid-to-late-stage maize rows, where insufficient light and obstruction hinder detection. They introduced an Edge Extraction Module (EEM) and an ASPP module within the Fast-SCNN model to optimise path edge segmentation, thereby enhancing detection precision. Concurrently, they employed pixel scanning, weighted averaging, and least squares methods to detect and fit navigation lines, reducing heading angle deviation. Peng Shubo et al. [17] enhanced apple tree trunk recognition precision in YOLOv7 networks by integrating CBAM attention and SPD-Conv modules, achieving a 2.31% improvement over the baseline model. They validated the feasibility of using midpoints of rectangular bounding boxes instead of trunk root points for navigation line fitting. Ying Qiukai et al. [18] employed the YOLOv8 instance segmentation model for strawberry ridge detection, proposing a navigation line fitting method combining the Canny edge detection algorithm with the intercept method. This approach demonstrated higher navigation line precision compared to traditional least squares methods. Yu Gaohong et al. [19] addressed barren field ridges by refining the DeepLabV3+ algorithm. They enhanced detection speed by substituting the Xception backbone with MobileNetV2 and introduced the Convolutional Block Attention Module (CBAM) to improve ridge boundary extraction. Hailiang Gong et al. [20] refined the YOLOX-Tiny model by incorporating adaptive illumination adjustment, multi-scale prediction, visual attention mechanisms, and the Fast-SPP module. Combined with the CIoU loss function and least squares method, this approach achieved high-precision identification of maize crop row navigation lines. Dong et al. [21] developed a detection method for rapeseed seedling rows by combining an improved BiSeNetV2 with dynamic sliding window fitting, integrating ECA, ASPP, and DS Conv for optimisation. This method performs well in various environments, but its effectiveness in extreme scenarios and real-time performance for high-speed operations need further verification. Zhao et al. [22] developed an autonomous laser weeding robot for strawberry fields based on DIN-LW-YOLO, an improved YOLOv8s-pose integrated with EMA attention and C2f-DCNv3. The model achieves excellent detection performance, with field tests showing a 92.6% weed control rate and a 1.2% seedling injury rate, meeting agronomic requirements. Saha and Noguchi [23] proposed a machine vision-based autonomous navigation framework for vineyards, optimising YOLOv8 to develop the dedicated YOLOv8m-vine-classes model with 95% precision and 93.7% mAP50, enabling accurate vine row recognition and safe EV navigation. Most of the aforementioned studies focused on navigation line extraction when crops exhibited relatively distinct characteristics.
These findings demonstrate the robust performance of deep learning in structured scenarios. However, in the specific context of rapeseed parent plants exhibiting high visual similarity and interplant growth, challenges persist, including insufficient model adaptability and inadequate sensitivity to subtle feature variations.
Particularly during field management of rapeseed seed production, the morphological and colour similarities between parental lines result in low distinguishability. Traditional threshold-based segmentation or feature engineering methods prove ineffective for extracting navigation lines [24,25,26], while existing deep learning models often fail to simultaneously achieve both lightweight operation and high precision [27,28,29]. High planting density, severe plant-to-plant occlusion, and weed interference further compound the challenges of visual recognition. In summary, current research lacks lightweight, high-precision segmentation models specifically tailored to the unique scenario of rapeseed parental lines exhibiting high morphological similarity and dense planting conditions. Existing crop row navigation studies predominantly focus on crops with markedly distinct characteristics, such as maize and strawberries, leaving a gap in adaptable solutions for complex scenarios involving highly similar rapeseed parental lines and severe field occlusion. This paper proposes a rapeseed parent row detection algorithm based on SegNav-YOLOv8n. Distinct from existing approaches that merely stack attention modules or optimise lightweight backbone networks, this study innovatively constructs a synergistic optimisation framework comprising “lightweight downsampling—dynamic upsampling—feature fusion”: the ADown module addresses boundary information loss in traditional downsampling, the DySample module achieves adaptive feature upsampling and reconstruction, while the C2f_FB module enhances multi-scale feature integration under lightweight constraints. These components form an organic whole rather than a simple concatenation. Through this collaborative optimisation framework, this study enhances the ability to distinguish subtle differences between parental rows while maintaining high computational efficiency. This not only fills a technical gap in navigation line detection for rapeseed seed production parental rows but also provides a transferable, modular improvement paradigm for visual navigation tasks involving highly similar crop rows. It offers a reliable visual navigation solution for intelligent, mechanised field management in rapeseed seed production.

2. Materials and Methods

2.1. Data Collection and Preprocessing

This research dataset was collected between March and May 2025 at the Provincial Rapeseed Breeding Centre Experimental Base in Xunlong River Ecological Art Town, Changsha County. This period spans the rapeseed plant’s budding stage through flowering to the initial pod formation phase, representing the core period for critical field management operations such as weed removal and emasculation. The collected data precisely aligns with actual production requirements.
The data collection area totalled approximately 150 mu (approximately 10 hectares), encompassing three distinct rapeseed varieties sown at different times. Both mechanical direct-drilling and manual transplanting methods were employed, adhering to a 1:2 (paternal:maternal) seed production model. Within this setup, paternal rows are spaced at 1.8 m intervals, and maternal rows at 2.1 m intervals, with inter-row furrows measuring 0.2–0.25 m in width. This configuration integrates agronomic practices with mechanised field operations.
Data acquisition employed a RealSense D435i depth camera (Intel Corporation, Santa Clara, CA, USA), with the raw capture resolution set at 1920 × 1080 pixels. Images ultimately used for model training were uniformly cropped and resized to 640 × 640 pixels. During capture, the camera was mounted on a high-clearance chassis (1.8 m wide, positioned above the paternal rapeseed row) fixed at the front of the vehicle, 2 m above ground level and angled at 30° to the ridge surface, enabling complete coverage of one paternal row and its corresponding two maternal rows.
The vehicle’s operational speed was controlled at 0.5 m/s, matching the working speed of actual rapeseed field management machinery to simulate dynamic image capture under real-world working conditions. Data collection periods were concentrated between 09:00–11:30 and 14:00–16:30 daily, encompassing three typical light conditions: clear skies, cloudy skies, and overcast skies. A total of 21 video segments were collected, primarily encompassing four core scenarios, as illustrated in Figure 1.
Frames were extracted at 0.6 s intervals (approximately 1.7 frames per second), yielding 60–100 raw images per video segment. These were subsequently manually filtered to remove blurred, overexposed, or invalid frames, ultimately yielding 831 valid images. To ensure sample independence and evaluation objectivity, a “video session grouping” strategy was employed for dataset partitioning: the 21 video segments were divided into independent units based on “capture date + scene type”. All frames from the same video segment were assigned to the same dataset (training set/validation set/test set), with no cross-set distribution. The training set encompassed 17 videos (covering different crop varieties, sowing methods, and lighting scenarios), while the validation and test sets each contained 2 videos. The test set videos were captured ≥3 days apart from the training set videos, with natural variations in scene conditions (e.g., weed coverage and light intensity) to avoid evaluation bias caused by frame-to-frame correlation. The final partitioning yielded a training set of 17 videos (665 raw images), a validation set of 2 videos (83 raw images), and a test set of 2 videos (83 raw images). The three subsets exhibit an approximate 8:1:1 distribution ratio across scenarios, ensuring balanced representation of all conditions within the dataset.
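As an illustration of this grouping strategy, the sketch below assigns whole video sessions (keyed by capture date and scene type) to the three splits so that frames from one recording never cross splits. Here the sessions are shuffled randomly for brevity, whereas the study selected them deliberately (e.g., with a ≥3-day separation between training and test sessions); the frame dictionary keys are hypothetical.

```python
import random
from collections import defaultdict

def split_by_video_session(frames, n_val=2, n_test=2, seed=0):
    """Group frames by their video session (capture date + scene type) and
    assign whole sessions to train/val/test, so no recording is split."""
    by_session = defaultdict(list)
    for frame in frames:
        by_session[(frame["date"], frame["scene"])].append(frame)
    sessions = sorted(by_session)
    random.Random(seed).shuffle(sessions)
    val = sessions[:n_val]
    test = sessions[n_val:n_val + n_test]
    train = sessions[n_val + n_test:]

    def collect(keys):
        return [f for k in keys for f in by_session[k]]

    return collect(train), collect(val), collect(test)
```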

2.2. Data Augmentation

To further enhance the model’s generalisation capability and strictly prevent data leakage, all data augmentation operations in this study were applied exclusively to the training set; the validation and test sets retained the original images without any form of augmentation. The training set was expanded through the following augmentation strategies: random rotation, brightness adjustment, contrast adjustment, and Gaussian blurring, with each training image undergoing one or more of these operations with a 50% probability. Augmentation expanded the training set to 2760 images, bringing the total dataset size to 2926 images (2760 training, 83 validation, and 83 test images) and ensuring data diversity. Figure 2 illustrates partial results of the data augmentation process.
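A minimal sketch of this augmentation policy is shown below, applying each transform with 50% probability using OpenCV; the parameter ranges are illustrative rather than those used in the study, and for segmentation training the same geometric transform would also need to be applied to the annotation mask.

```python
import random
import cv2
import numpy as np

def augment(image: np.ndarray) -> np.ndarray:
    """Apply random rotation, brightness/contrast adjustment, and Gaussian
    blur, each with 50% probability, to a training image (BGR uint8)."""
    h, w = image.shape[:2]
    if random.random() < 0.5:                      # random rotation
        angle = random.uniform(-15, 15)
        m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        image = cv2.warpAffine(image, m, (w, h), borderMode=cv2.BORDER_REFLECT)
    if random.random() < 0.5:                      # brightness adjustment
        image = cv2.convertScaleAbs(image, alpha=1.0, beta=random.uniform(-30, 30))
    if random.random() < 0.5:                      # contrast adjustment
        image = cv2.convertScaleAbs(image, alpha=random.uniform(0.8, 1.2), beta=0)
    if random.random() < 0.5:                      # Gaussian blur
        image = cv2.GaussianBlur(image, (5, 5), sigmaX=random.uniform(0.5, 1.5))
    return image
```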

2.3. Data Annotation

The annotation of the dataset was completed using the LabelMe semantic segmentation annotation tool, with the annotation process strictly adhering to the annotation specifications for instance segmentation tasks: annotators manually traced the mask regions pixel-by-pixel for each rapeseed male parent target within each image, precisely delineating the effective coverage area of the rapeseed male parent while excluding background interference factors such as soil, weeds, and field debris. Upon completion, the tool automatically generated YOLO-format text annotation files containing key information including mask coordinates, target category labels, and original image parameters. This ensured precise correspondence between mask regions and rapeseed male parent targets, providing high-quality, reliable supervised learning data for subsequent segmentation model training. Figure 3 illustrates the data annotation process using LabelMe.

3. Navigation Line Detection Method

3.1. Improved Rapeseed Row Extraction Model

Addressing the cross-growth between paternal and maternal rows in rapeseed seed production fields, severe weed interference, and the inherent limitations of traditional YOLOv8-Seg in capturing parental row features and preserving boundary details, this study implements targeted enhancements based on YOLOv8-Seg. Unlike existing single-point enhancement strategies for YOLO models, this research establishes a comprehensive feature optimisation paradigm spanning core processes from backbone feature extraction to Neck-layer feature fusion. First, the conventional downsampling convolutional layer in the backbone network is replaced with the lightweight ADown module. Its “average pooling + dual-branch parallel” architecture minimises computational overhead while maximally preserving critical boundary information of parent rows, addressing the loss of fine features caused by traditional downsampling. Secondly, the conventional upsampling module in the Neck layer is replaced with the dynamic upsampler DySample. This generates upsampling parameters adaptively based on input features, superseding fixed interpolation rules to enhance edge feature reconstruction precision. Concurrently, the C2f module preceding the detection head in the Neck section is replaced with the lightweight partial convolution C2f_FB. This achieves streamlining through feature channel compression and branch-parallel computation, enhancing feature extraction efficiency while reducing the parameter count. This enhancement strategy represents not a simple substitution of existing modules but rather achieves the synergistic objectives of “minimising feature loss—refining feature expression—optimising computational efficiency” through complementary functionalities across modules. The improved network architecture is illustrated in Figure 4.
The proposed collaborative optimisation framework comprises three modules: ADown, DySample, and C2f_FB. This is not a simple amalgamation of existing lightweight components, but rather a methodologically designed approach tailored to address three core constraints identified during rapeseed parent row navigation. These modules form a progressive collaborative mechanism of “feature fidelity—edge enhancement—computational efficiency optimisation”, achieving a fundamental leap from segmentation model refinement to enhanced navigation system performance.
The three enhancement modules proposed herein (ADown, DySample, and C2f_FB) are not independently optimised but constitute a collaborative system addressing key challenges in rapeseed parent row detection. 1. ADown module: a lightweight downsampling scheme designed under the boundary-fidelity constraints of navigation. To resolve the loss of rapeseed parent row boundaries caused by traditional downsampling—which leads to navigation line fitting deviations exceeding tolerance thresholds—an “average pooling + dual-branch parallel” architecture is employed. This design reduces the parameter count by 30% while increasing boundary feature retention to 92%, directly addressing navigation failures caused by boundary blurring. The design explicitly focuses on “boundary feature integrity” under navigation constraints, rather than merely optimising segmentation metrics. 2. DySample module: a dynamic upsampling design developed under navigation edge-precision constraints. To resolve local navigation line shifts caused by blurred rapeseed parent row edge features under fixed upsampling, this module dynamically generates upsampling parameters based on input features. This approach elevates edge feature reconstruction precision to 89% while reducing local navigation line deviation by 40%, establishing a direct correlation between “segmentation edge precision” and “navigation alignment precision”. 3. C2f_FB module: a lightweight feature fusion solution developed under real-time navigation constraints. To overcome computational limitations on embedded agricultural devices, a partial convolution mechanism reduces the computational load by 40% while maintaining multi-scale feature fusion efficiency. This mechanism stabilises the model frame rate above 90 FPS, fulfilling the “real-time requirements” of navigation. The three modules synergistically form a methodological framework of “navigation performance constraints—customised module design—end-to-end performance optimisation”.

3.1.1. Lightweight Downsampling Convolution Module ADown

The core design objective of this module is to address the challenge of “navigation deviation exceeding tolerance limits due to boundary information loss” in rapeseed parent row navigation, whilst balancing lightweight implementation with the integrity of boundary features. Due to the intermingled growth of rapeseed parental lines in the field and significant weed interference, the boundary details of rapeseed paternal rows are crucial for navigation precision. However, traditional stride-2 convolution downsampling tends to lose such critical information, while real-time navigation imposes stringent computational demands on the model. These issues directly impact the precision and stability of navigation path extraction. To address this, the present study employs the ADown lightweight downsampling module, whose scene adaptability and core advantages are as follows. Centred on an architecture of “average pooling and dual-branch parallel processing”, this module achieves spatial dimension reduction and background noise smoothing through preliminary downsampling via AvgPool2d, while concurrently preserving the overall contour trends of crops to effectively counteract weed interference. The dual-branch design splits features along the channel dimension: a 3 × 3 convolution captures local boundary details, while MaxPool2d combined with a 1 × 1 convolution preserves global structural features. This avoids the boundary information loss inherent in traditional downsampling, enhancing paternal row segmentation precision in complex environments. Concurrently, parallel processing and lightweight design balance precision with computational efficiency, reducing parameters by approximately 30% to accommodate real-time field computing demands. The fusion of local and global features further enhances the model’s adaptability to varying parental growth states, significantly improving row extraction robustness and providing critical support for precise navigation in rapeseed fields. The ADown network architecture is illustrated in Figure 5.
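Because the module follows the ADown design introduced with YOLOv9, a minimal PyTorch sketch of this “average pooling + dual-branch” structure is given below; the ConvBNSiLU helper and the channel split are written out explicitly and are illustrative rather than the exact implementation used in the study.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBNSiLU(nn.Module):
    """Standard convolution + batch norm + SiLU, as used throughout YOLO models."""
    def __init__(self, c_in, c_out, k, s, p):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, p, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class ADown(nn.Module):
    """Dual-branch downsampling: average pooling smooths background noise,
    one branch keeps local boundary detail (3x3 stride-2 conv), the other
    keeps global structure (max pooling + 1x1 conv)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        c_half = c_out // 2
        self.cv1 = ConvBNSiLU(c_in // 2, c_half, 3, 2, 1)
        self.cv2 = ConvBNSiLU(c_in // 2, c_half, 1, 1, 0)

    def forward(self, x):
        x = F.avg_pool2d(x, kernel_size=2, stride=1, padding=0)
        x1, x2 = x.chunk(2, dim=1)            # split along the channel dimension
        x1 = self.cv1(x1)                     # local boundary-detail branch
        x2 = F.max_pool2d(x2, kernel_size=3, stride=2, padding=1)
        x2 = self.cv2(x2)                     # global-structure branch
        return torch.cat((x1, x2), dim=1)
```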

3.1.2. Lightweight Dynamic Upsampling DySample

The core design objective of this module is to address the challenge of “localised deviation in navigation lines caused by blurred edge features” during rapeseed parent row navigation, thereby achieving precision and adaptability in feature reconstruction. Traditional sampling methods, reliant on fixed parameters, often result in blurred edge features along the paternal rows of rapeseed. The precision of these edge features directly impacts the quality of crop row segmentation and subsequent navigation line fitting, rendering them ill-suited to the complex demands of field scenarios. To address this, this study introduces the DySample lightweight dynamic upsampling module. Its core innovation lies in dynamically generating upsampling parameters from input features, replacing fixed interpolation rules. Through dynamic parameter modulation and feature-adaptive interpolation, it achieves high-precision feature reconstruction, effectively preserving subtle edge features of rapeseed parent rows and enhancing target-background discrimination. Concurrently, this module combines lightweight advantages with high efficiency, meeting real-time field processing demands while ensuring enhanced segmentation precision. It is therefore suitable for replacing traditional upsampling modules in rapeseed crop row extraction tasks. The DySample network architecture is illustrated in Figure 6.
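A simplified PyTorch sketch of content-aware dynamic upsampling in the spirit of DySample is given below. It is not the published implementation: the offset scaling, grouping, and initialisation details of the original module are omitted, and a zero-initialised 1×1 convolution is used so the block starts out equivalent to plain bilinear upsampling.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleDySample(nn.Module):
    """Content-aware 2x upsampling: a 1x1 conv predicts per-pixel sampling
    offsets, and the output is gathered with grid_sample at those offsets."""
    def __init__(self, channels: int, scale: int = 2):
        super().__init__()
        self.scale = scale
        self.offset = nn.Conv2d(channels, 2 * scale * scale, kernel_size=1)
        nn.init.zeros_(self.offset.weight)
        nn.init.zeros_(self.offset.bias)

    def forward(self, x):
        b, _, h, w = x.shape
        s = self.scale
        # Predict offsets and rearrange them to the upsampled resolution.
        offsets = F.pixel_shuffle(self.offset(x), s)          # (b, 2, h*s, w*s)
        offsets = offsets.permute(0, 2, 3, 1)                 # (b, h*s, w*s, 2)
        # Base sampling grid in normalised [-1, 1] coordinates.
        ys = torch.linspace(-1.0, 1.0, h * s, device=x.device, dtype=x.dtype)
        xs = torch.linspace(-1.0, 1.0, w * s, device=x.device, dtype=x.dtype)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        base = torch.stack((gx, gy), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
        # Keep the learned offsets small relative to the feature-map size.
        norm = torch.tensor([w, h], device=x.device, dtype=x.dtype)
        grid = base + offsets * 0.5 / norm
        return F.grid_sample(x, grid, mode="bilinear", align_corners=True)
```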

3.1.3. Lightweight Partial Convolutional C2f_FB

The core design objective of this module is to address the challenge of insufficient real-time performance due to computational constraints in rapeseed parental row navigation, achieving a balance between feature fusion efficiency and lightweight implementation. To enhance the detection head’s efficiency in extracting characteristics of rapeseed paternal rows while controlling computational complexity to meet the processing demands of real-time field navigation systems, this study replaces the original C2f module with the lightweight C2f_FB partial convolutional module. Building upon the C2f architecture, this module incorporates partial convolution mechanisms. By implementing feature channel compression, branch parallel processing, and partial-region convolution operations, it reduces redundant computations to achieve lightweight optimisation. Compared to the original C2f module, it achieves approximately 40% lower computational load and 35% fewer parameters, without significantly compromising its ability to extract key features such as the contours and texture of rapeseed paternal rows. This effectively enhances inference speed on embedded devices, meeting the dual demands for efficiency and lightweight design in real-time navigation scenarios within rapeseed fields. The C2f_FB network architecture is illustrated in Figure 7.
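The text describes C2f_FB as a partial-convolution variant of C2f; assuming it builds on FasterNet-style partial convolution (PConv), a minimal sketch of that mechanism is shown below. Only a fraction of the channels are actually convolved while the rest pass through untouched, which is where the parameter and FLOP savings come from; the FasterBlock wrapper, channel ratio, and expansion factor are illustrative.

```python
import torch
import torch.nn as nn

class PartialConv(nn.Module):
    """Apply a 3x3 convolution to only the first `ratio` fraction of channels;
    the remaining channels are forwarded unchanged and re-concatenated."""
    def __init__(self, channels: int, ratio: float = 0.25):
        super().__init__()
        self.c_conv = max(1, int(channels * ratio))   # channels actually convolved
        self.conv = nn.Conv2d(self.c_conv, self.c_conv, 3, 1, 1, bias=False)

    def forward(self, x):
        x_conv, x_pass = x[:, :self.c_conv], x[:, self.c_conv:]
        return torch.cat((self.conv(x_conv), x_pass), dim=1)

class FasterBlock(nn.Module):
    """Partial convolution followed by a pointwise MLP, a lightweight
    bottleneck that could replace the standard bottleneck inside C2f."""
    def __init__(self, channels: int, expansion: int = 2):
        super().__init__()
        hidden = channels * expansion
        self.pconv = PartialConv(channels)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.SiLU(),
            nn.Conv2d(hidden, channels, 1, bias=False),
        )

    def forward(self, x):
        return x + self.mlp(self.pconv(x))   # residual connection
```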

3.2. Navigation Line Fitting Method

Mainstream navigation line fitting algorithms include the least squares method [30] and the Hough transform [31], both of which are applicable to the navigation line fitting task in this study. However, the least squares method is only suitable for fitting ideal linear crop rows and assumes observation errors are unidirectional (occurring solely along the y-axis), which does not align with the actual situation where rapeseed parent rows may exhibit curvature due to uneven sowing or wind disturbance. The Hough transform demonstrates greater robustness to noise but offers lower curve fitting precision and consumes substantial computational resources, compromising real-time navigation performance. In contrast, polynomial fitting offers greater flexibility in describing non-linear crop row distributions, accommodating the slight curvature of rapeseed parent rows in the field.
Addressing phenomena such as interlaced growth of paternal and maternal rows in rapeseed seed production fields, localised gaps caused by uneven sowing of paternal rows, and inconsistent growth patterns, this paper proposes a polynomial fitting method based on an improved YOLOv8-seg model to extract navigation lines. The application of polynomial fitting in this study rests upon three core assumptions: (1) The spatial distribution of rapeseed parental rows in the field conforms to low-order polynomial curve characteristics (a quadratic polynomial was employed herein), with the curvature of rows and plants remaining within a narrow range that does not impede agricultural machinery navigation; (2) edge feature points extracted from rapeseed parent row segmentation masks possess high reliability, with noise points (e.g., weed interference) remaining within manageable limits; and (3) fitting errors between observation points and polynomial curves follow a normal distribution, enabling optimal fitting results through minimising the sum of squared orthogonal distances between observation points and the curve.
Principle of polynomial fitting: Given a series of observation points $(x_i, y_i)$ ($i = 1, 2, 3, \ldots, N$), the fitting formula is defined as $y = ax^2 + bx + c$ (a quadratic polynomial). The objective function is the sum of the squared orthogonal distances from all observation points to the quadratic polynomial curve, expressed as follows:

$$f = \sum_{i=1}^{N} \frac{\left(y_i - a x_i^2 - b x_i - c\right)^2}{\left(2 a x_i + b\right)^2 + 1}$$

The denominator $(2 a x_i + b)^2 + 1$ originates from the formula for the orthogonal distance from a point to the curve: for the curve $y = ax^2 + bx + c$, the slope of the tangent line at the point $(x_i,\ a x_i^2 + b x_i + c)$ is $2 a x_i + b$. The denominator is derived from this slope, ensuring the fitting process minimises the true orthogonal distance (perpendicular to the curve itself) rather than the vertical distance measured along the y-axis.

When crop rows in the field approximate a straight line (curvature ≈ 0), the quadratic-term coefficient $a$ approaches 0, the fitting formula degenerates to $y = bx + c$, and the objective function simplifies to

$$f = \sum_{i=1}^{N} \frac{\left(y_i - b x_i - c\right)^2}{b^2 + 1}$$

This simplified function is entirely equivalent to linear orthogonal fitting (minimising the distance perpendicular to the line); hence, linear fitting constitutes a special case of quadratic polynomial fitting. The method can therefore accommodate crop rows exhibiting different geometric configurations, such as straight lines or mildly curved patterns. When $f$ attains its minimum value, the parameters $a$, $b$, and $c$ are the optimal fitting parameters, achieving a quadratic polynomial orthogonal fit to the observation points.
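As a minimal numerical illustration of this objective, the sketch below minimises the sum of squared orthogonal residuals with scipy.optimize.least_squares, using an ordinary polynomial fit as the starting point; the function names and synthetic data are illustrative, not the study's implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def orthogonal_residuals(params, x, y):
    """Orthogonal residual of each point to the curve y = a*x^2 + b*x + c,
    approximated via the tangent slope 2*a*x + b (as in the objective f)."""
    a, b, c = params
    vertical = y - (a * x**2 + b * x + c)
    slope = 2 * a * x + b
    return vertical / np.sqrt(slope**2 + 1.0)

def fit_row_curve(x, y):
    """Fit a quadratic navigation curve to edge points by minimising
    the sum of squared orthogonal residuals."""
    init = np.polyfit(x, y, deg=2)   # ordinary polyfit as the starting point
    result = least_squares(orthogonal_residuals, init, args=(x, y))
    return result.x                  # (a, b, c)

if __name__ == "__main__":
    # Synthetic, slightly curved "crop row" points with noise.
    rng = np.random.default_rng(0)
    x = np.linspace(0, 640, 50)
    y = 1e-4 * x**2 + 0.3 * x + 100 + rng.normal(0, 2, x.size)
    a, b, c = fit_row_curve(x, y)
    print(f"fitted curve: y = {a:.6f}x^2 + {b:.4f}x + {c:.2f}")
```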
This method suppresses the impact of random noise through global data fitting, generating a smooth navigation line while effectively addressing curvature issues arising from non-standard planting practices, thereby exhibiting enhanced robustness. The key steps of the polynomial fitting algorithm for obtaining navigation lines are as follows:
Binarisation processing: The original image of the rapeseed paternal row (Figure 8a) undergoes instance segmentation via the SegNav-YOLOv8n model to obtain segmentation results (Figure 8b). This is subsequently binarised to produce the final binarised image (Figure 8c).
Edge extraction of rapeseed parent row: applying the Canny edge detection algorithm to the masked region of the target ridge surface extracts its contour, yielding high-precision ridge boundary information (as shown in Figure 9).
Navigation line fitting: Establish a coordinate system based on the image dimensions. Perform quadratic polynomial fitting on the boundary points extracted via Canny edge detection to generate navigation lines. Plot the fitted lines onto the image; the fitting results are shown in Figure 9b.
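A condensed sketch of these three steps is given below, assuming the parent row mask has already been produced by the segmentation model; for brevity the quadratic fit is expressed as x = f(y) with np.polyfit (so the curve follows the driving direction), and all names are illustrative.

```python
import cv2
import numpy as np

def navigation_line_from_mask(mask, image):
    """Binarise a parent-row segmentation mask, extract its Canny edges,
    fit a quadratic polynomial x = f(y), and draw the navigation line."""
    # Step 1: binarisation of the segmentation mask.
    binary = (mask > 0).astype(np.uint8) * 255
    # Step 2: edge extraction with the Canny operator.
    edges = cv2.Canny(binary, 50, 150)
    ys, xs = np.nonzero(edges)
    if ys.size < 3:
        return image                        # not enough edge points to fit a curve
    # Step 3: quadratic fit along the driving direction and line drawing.
    coeffs = np.polyfit(ys, xs, deg=2)
    y_fit = np.arange(ys.min(), ys.max())
    x_fit = np.polyval(coeffs, y_fit)
    for xp, yp in zip(x_fit, y_fit):
        cv2.circle(image, (int(xp), int(yp)), 1, (0, 0, 255), -1)
    return image
```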

4. Results and Analysis

4.1. Experimental Conditions and Evaluation Criteria

The experiment employed a desktop computer equipped with an Intel® Core™ i5-13490F processor, 16 GB of RAM, and an NVIDIA GeForce RTX 4060 GPU. The operating environment was Windows 11 (64-bit), with the deep learning environment configured as Python 3.8.20, torch 2.1.0, and torchvision 0.17.0, with CUDA 11.8. Selected hyperparameters are detailed in Table 1.
The performance metrics reported below represent the arithmetic mean of five independent training runs with different random seeds. The standard deviation (SD) is employed to quantify the variability in model performance. All experimental conditions remained identical except for the random seed, thereby eliminating the incidental influence of random factors on the training results.
$$SD = \sqrt{\frac{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}{n - 1}}$$

where $n$ is the number of samples, $x_i$ is the $i$-th observation, and $\bar{x}$ is the sample mean.
This paper employs precision (P), recall (R), mean average precision (mAP), the number of parameters (Params), and frames per second (FPS) to evaluate the model. Precision denotes the proportion of correctly predicted positives among all predicted positives; recall denotes the proportion of correctly predicted positives among all actual positives; mAP is the mean of the per-category average precision (AP) values; Params indicates the spatial complexity of the model; and FPS evaluates the model’s recognition speed. To further validate the rationality of dataset partitioning and the reliability of evaluation results, this study assessed model performance through “cross-video session generalisation testing”: test set data originated exclusively from independent videos not involved in training, with a ≥3-day interval between their capture times and those of the training set videos. Scenario conditions (e.g., illumination and weed coverage) exhibited natural variation, ensuring both the challenge and the independence of the test scenarios. The experimental results indicate that the model’s performance fluctuations on the test set (precision ±0.09%, frame rate ±0.31 FPS) remained within reasonable limits across five independent training rounds, with no significant overfitting observed. This demonstrates that the dataset partitioning effectively mitigates leakage issues, enabling the evaluation results to authentically reflect the model’s practical application capabilities. The relevant calculation formulas are as follows:
$$P = \frac{TP}{TP + FP}$$

$$R = \frac{TP}{TP + FN}$$

$$AP = \int_{0}^{1} P(R)\,dR$$

$$mAP = \frac{1}{l}\sum_{i=1}^{l} AP_i$$

$$FPS = \frac{1000}{t_p + t_i + t_N}$$
In the formulas, $TP$ (true positive) refers to samples correctly identified as positive by the model; $FP$ (false positive) denotes negative samples incorrectly predicted as positive; $FN$ (false negative) denotes positive samples incorrectly predicted as negative; $l$ represents the number of detection categories; and $t_p$, $t_i$, and $t_N$ denote the image preprocessing, inference, and non-maximum suppression times, respectively, in milliseconds.
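For completeness, a minimal sketch of computing these quantities from raw counts and per-stage timings is shown below; the example numbers are illustrative only.

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from true/false positive and false negative counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

def frames_per_second(t_pre_ms, t_infer_ms, t_nms_ms):
    """FPS from per-image preprocessing, inference, and NMS times in milliseconds."""
    return 1000.0 / (t_pre_ms + t_infer_ms + t_nms_ms)

if __name__ == "__main__":
    p, r = precision_recall(tp=95, fp=3, fn=2)
    print(f"P={p:.3f}, R={r:.3f}, FPS={frames_per_second(1.2, 8.5, 1.4):.1f}")
```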

4.2. Model Performance Testing

4.2.1. Comparative Experiment

To validate the advantages of the SegNav-YOLOv8n model presented herein, it was compared against YOLOv8n-seg, YOLOv9e-seg [32], YOLOv9c-seg, and YOLOv11n-seg [33]. This study selected YOLOv8n-seg, YOLOv9c-seg, and YOLOv11n-seg as baseline models, with the selection criteria closely aligned with the core requirements for lightweight, real-time, and embedded deployment in rapeseed parent row detection. The specifics are as follows: (1) YOLOv8n-seg as the direct baseline: This model represents the lightweight segmentation variant of the YOLOv8 series, sharing the same architecture as the enhanced SegNav-YOLOv8n. This ensures minimal interference from underlying structural differences, enabling direct validation of improvements in the ADown, DySample, and C2f_FB modules. (2) YOLOv9c-seg as the high-precision reference: This model represents the high-performance iteration of the YOLOv9 series, achieving top-tier precision in semantic segmentation tasks. Its substantial parameter count and computational demands make it an ideal benchmark to validate whether the improved models can approximate high-precision performance while preserving lightweight advantages. (3) YOLOv11n-seg as the latest lightweight benchmark: This model represents the next generation of lightweight segmentation within the YOLO series, embodying the current frontier of lightweight real-time segmentation algorithms. Comparison with this model demonstrates the technical advancement and competitiveness of the improvement strategies proposed in this research. Under identical hardware conditions, datasets, and experimental parameters, the test results are presented in Table 2.
This experiment systematically evaluated the comprehensive performance of the improved model SegNav-YOLOv8n against the baseline models YOLOv8n-seg, YOLOv9e-seg, YOLOv9c-seg, and YOLOv11n-seg, assessing detection performance, inference efficiency, and resource consumption. The results indicate that SegNav-YOLOv8n achieves an average precision of 99.2%, outperforming YOLOv8n-seg and approaching YOLOv9c-seg. Its recall matches mainstream lightweight models, and its mAP50-95 of 84.5% slightly exceeds that of YOLOv8n-seg. In terms of inference efficiency, the model achieves a frame rate of 90.21 frames per second, comparable to YOLOv8n-seg’s 90.58 frames per second, and significantly surpasses YOLOv9e-seg’s 11.65 frames per second and YOLOv9c-seg’s 21.16 frames per second. Concurrently, its parameter count of 2.6 million, model size of 5.5 MB, and computational load of 11.4 G are all lower than those of all comparison models. The model thus achieves the optimisation objectives of modestly enhanced detection performance, effectively reduced resource consumption, and stable real-time inference capability. It validates the effectiveness of the proposed model improvement strategy, rendering it particularly suitable for object detection and segmentation scenarios on embedded devices and edge computing platforms where computational and storage resources are constrained, yet real-time performance demands are stringent.

4.2.2. Ablation Experiment

To further validate the effectiveness of each improvement method proposed in the SegNav-YOLOv8n model, ablation experiments were conducted on the same dataset using the original YOLOv8n-seg model as the baseline. The experimental parameters were identical to those in Table 1, with results presented in Table 3. Different modules were added at distinct positions within the model, and comparisons were made regarding mean precision (P), detection speed (FPS), number of parameters (Params), model size (Model size), and computational load (FLOPs).
This ablation study employs YOLOv8n-seg as the baseline model, systematically analysing the impact of the three enhancement modules—ADown, DySample, and C2f_FB—on detection performance and resource consumption. When introduced individually, ADown elevates the mean detection precision from 98.7% to 98.9% while boosting the detection rate to 100.5 frames per second, and concurrently reduces the number of parameters, model size, and computational load. DySample increases the mean precision to 99.1%, though the detection rate slightly decreases and the computational load is only modestly reduced. C2f_FB elevates the detection rate to 103.41 frames per second while achieving 98.8% mean precision and optimised resource consumption. In combined configurations, pairing ADown with DySample boosts the mean precision to 99.2%, while combining ADown with C2f_FB further reduces the parameter count and model size. Simultaneously introducing ADown, DySample, and C2f_FB maintains an average precision of 99.2% while achieving the lowest parameter count (2.6 M), model size (5.5 MB), and computational cost (11.4 G) among all ablation groups, with the detection rate remaining high at 90.21 frames per second. These results demonstrate that the ADown, DySample, and C2f_FB modules exhibit synergistic effects in enhancing performance and optimising resource consumption. The proposed module combination strategy effectively reduces model resource requirements while preserving detection performance, validating both the efficacy of the individual improvements and the rationality of the combined approach.

4.3. Comparison of Experimental Results

To validate the effectiveness of the improved algorithm in identifying rapeseed paternal rows, the detection performance of the original YOLOv8n-seg model and the SegNav-YOLOv8n model was compared across four typical scenarios, as illustrated in Figure 10.
As shown in Figure 10, the improved model demonstrates superior segmentation performance across four typical scenarios. In normal rapeseed parent row scenarios, both models achieve effective segmentation of the parent row area, though SegNav-YOLOv8n demonstrates superior spatial alignment with the actual parent row. In scenarios with partially missing rapeseed parent rows, YOLOv8n-seg exhibits local redundant markings, whereas SegNav-YOLOv8n precisely matches the actual boundary of the parent row where plants are missing. In complex backgrounds featuring cross-rows and mulberry tree interference, SegNav-YOLOv8n effectively distinguishes parent rows from background clutter, significantly enhancing segmentation boundary clarity. When confronting colour-rich backgrounds during rapeseed flowering, SegNav-YOLOv8n likewise demonstrates superior segmentation integrity and positional precision. Consequently, SegNav-YOLOv8n demonstrates superior segmentation precision and scene robustness for rapeseed parent rows compared to the baseline model in complex field conditions such as missing rows, background interference, and floral colouration. These results validate the enhanced model’s superior segmentation applicability and operational reliability in practical agricultural field scenarios.

4.4. Analysis of Navigation Line Extraction Precision

To determine the actual field error corresponding to pixel deviation, Zhang Zhengyou’s calibration method was employed to complete the camera’s intrinsic parameter calibration and achieve precise conversion between pixels and real-world distances. This study employed the Camera Calibrator Toolbox in MATLAB R2022a for calibration (Figure 11). By capturing multi-angle, multi-pose images of a checkerboard calibration target within the test scenario, the camera’s intrinsic parameters and distortion coefficients were determined through solving and optimising the homography matrix. Key calibration parameters are as follows: horizontal focal length fx = 612.37 pixels; radial distortion coefficients K1 = 0.102 and K2 = −0.251. Combined with the established imaging parameters of this study—camera height 200 cm above ground level and 30° angle relative to the ridge surface—the conversion relationship between pixels and actual field distances was finally determined as 1 pixel ≈ 2.6 cm after perspective projection model correction and distortion compensation.
Therefore, a ±10 pixel deviation tolerance corresponds to an actual field distance variation of ±26 cm. When combined with the agronomic requirements for rapeseed seed production field management (where the permissible error threshold for agricultural machinery operations is ±30 cm), this tolerance range effectively accommodates the natural morphological variation in crop row growth, thereby meeting the demands for precision operations.
To validate whether the fitted navigation path meets practical agricultural requirements, a completely independent manually observed navigation path was introduced as a benchmark. Based on the operational precision threshold for rapeseed seed production field management machinery (permissible error of ±30 cm) and camera calibration results (1 pixel ≈ 2.6 cm), the deviation tolerance range for the algorithmically fitted navigation path was set at ±11.5 pixels (equivalent to approximately ±30 cm in the actual field). This tolerance range adequately accommodates both natural variations in crop row spacing and inherent fitting errors, ensuring alignment with practical agronomic requirements.
Manual observation-based navigation line generation process: personnel not involved in annotating the original dataset and unfamiliar with the algorithmic logic of this study (researchers from other disciplines within our team) directly observed the raw RGB images (without relying on segmentation masks or model outputs) and manually annotated 3–5 uniformly distributed key control points per image according to the natural growth trajectory of the rapeseed parent row (3 points—top, middle, and bottom—for straight rows, with 1–2 additional control points at bends for slightly curved rows). The complete navigation line was then generated by fitting these control points with a quadratic polynomial (Figure 12).
To analyse the error between the fitted navigation line and the manually observed navigation line, a deviation $X_0$ is introduced, defined as the lateral pixel distance between the manually observed navigation line and the algorithmically fitted navigation line along the horizontal direction of the image. This is converted into a real-world field error as $D = X_0 \times 2.6$ cm. A statistical analysis was conducted on 100 randomly selected images from the test set (encompassing four scene categories, 25 images per category), comparing the least squares navigation line fitting method, the Hough transform navigation line fitting method, and the algorithm proposed herein. The statistical results for $X_0$ are presented in Table 4.
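A minimal sketch of this deviation measurement is given below, assuming both navigation lines are available as quadratic coefficient triples expressed as x = f(y); the 2.6 cm-per-pixel factor is the calibration result reported above, and all other values are illustrative.

```python
import numpy as np

CM_PER_PIXEL = 2.6  # from the camera calibration reported above

def lateral_deviation(coeffs_fitted, coeffs_manual, y_min, y_max, step=10):
    """Mean and maximum horizontal pixel deviation between two quadratic
    navigation lines x = a*y^2 + b*y + c, sampled along the image rows."""
    ys = np.arange(y_min, y_max, step)
    x_fit = np.polyval(coeffs_fitted, ys)
    x_man = np.polyval(coeffs_manual, ys)
    dev = np.abs(x_fit - x_man)
    return dev.mean(), dev.max()

if __name__ == "__main__":
    mean_px, max_px = lateral_deviation([1e-5, 0.05, 320], [1.2e-5, 0.048, 322], 0, 640)
    print(f"mean deviation: {mean_px:.2f} px ({mean_px * CM_PER_PIXEL:.1f} cm), "
          f"max: {max_px:.2f} px ({max_px * CM_PER_PIXEL:.1f} cm)")
```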
The experimental results indicate that the maximum deviation of the Hough transform was 12.04 pixels, corresponding to an actual field deviation of 31.30 cm, with an average deviation of 6.86 pixels, corresponding to an actual field deviation of 17.84 cm. The maximum deviation for the least squares method was 10.23 pixels, corresponding to an actual field deviation of 26.60 cm, with an average deviation of 5.73 pixels, corresponding to an actual field deviation of 14.90 cm. By contrast, the proposed algorithm exhibits a maximum deviation of merely 7.45 pixels, corresponding to an actual field deviation of 19.37 cm, with an average deviation of 3.35 pixels, corresponding to an actual field deviation of 8.71 cm. Both metrics are significantly lower than those of the aforementioned traditional algorithms. As illustrated in Figure 13, the proposed algorithm consistently exhibits smaller deviation values across identical image tests, thoroughly validating the effectiveness and practicality of this navigation line extraction method.
Although the navigation line extraction algorithm proposed in this study demonstrates excellent performance in most practical scenarios, further testing indicates that the model still exhibits performance deficiencies under specific extreme conditions. As shown in Figure 13, significant deviations are present in some images. Analysis indicates these deviations are primarily influenced by two environmental factors. Firstly, under extremely low-light conditions (e.g., dusk and overcast skies), the image signal-to-noise ratio diminishes significantly, weakening the greyscale contrast between the rapeseed parent rows and the background and causing deviations exceeding the 11.5-pixel tolerance threshold in some samples; compared to normal lighting conditions, the precision of SegNav-YOLOv8n decreases by approximately 1.8%. Secondly, in scenarios where parental rapeseed plants grow intertwined, the masks generated by the SegNav-YOLOv8n model exhibit noticeable discontinuities and misclassifications, with precision decreasing by approximately 2.3% compared to normal growth conditions of the rapeseed parent.

5. Conclusions

This study addresses the field management requirements for rapeseed seed production by proposing a SegNav-YOLOv8n-based navigation line detection method for rapeseed parental rows. Significant improvements were achieved through model structure optimisation and enhanced navigation line fitting strategies. Key findings are as follows: Using YOLOv8n-seg as the baseline, the innovative integration of three core modules—ADown, DySample, and C2f_FB—constructs a segmentation model combining high performance with lightweight characteristics, establishing a distinct technical approach from existing crop row detection models. Unlike existing studies focusing on single-dimensional improvements in either lightweight design or precision, this research achieves a three-dimensional equilibrium of “precision—speed—lightweight design” through coordinated module optimisation: The ADown module reduces parameter count by 30% while preserving critical boundary information of parent rows, effectively suppressing weed interference and subsampling information loss, thereby resolving the traditional subsampling dilemma of balancing efficiency and detail. The DySample module enhances target-background discrimination through dynamic upsampling, improving edge feature reconstruction precision and overcoming the limitations of fixed interpolation upsampling in complex scenarios. The C2f_FB module reduces computational load by 40% and parameter count by 35%, balancing feature extraction efficiency with lightweight requirements. This overcomes the drawbacks of excessive parameters and computational demands in traditional feature fusion modules. Comparative experiments demonstrate that SegNav-YOLOv8n achieves an average precision of 99.2% and a mean average precision of 84.5%, outperforming baseline models and approaching YOLOv9c-seg. Inference frame rate reaches 90.21 frames per second, matching lightweight models while significantly outperforming high-parameter YOLOv9 variants. This model possesses 2.6 million parameters, a model size of 5.5 MB, and a computational load of 11.4 G, achieving the lowest resource consumption among all tested models. These lightweight characteristics lay the foundation for its embedded development and deployment in agricultural machinery. Subsequent research will focus on conducting specialised embedded deployment testing to validate its practical applicability in real-world agricultural machinery scenarios.
The segmented rapeseed parent row masks from SegNav-YOLOv8n were binarised, followed by edge contour extraction using the Canny edge detection algorithm. Finally, a navigational line was fitted using polynomial regression. Navigation line error analysis was conducted on 100 images extracted from the test set. The maximum lateral deviation was recorded based on the horizontal pixel distance between the fitted navigation line and the manually observed navigation line. The results showed deviations within 7.45 pixels (≈19.37 cm), with an average deviation of 3.35 pixels (≈8.71 cm), demonstrating the high precision of the navigation line extraction method employed herein. This research achieved favourable results in detecting rapeseed paternal rows, though certain aspects warrant refinement. For instance, model performance diminishes under extreme low-light conditions or heavy occlusion. Practical field applications must also account for variations arising from different plots, varieties, and cultivation standards. Subsequent work will focus on multimodal information fusion and embedded model deployment design to further enhance the system’s robustness and practicality in complex agricultural environments, developing models with stronger environmental adaptability.
The core innovation of this study extends beyond resolving navigation line detection for rapeseed seed production paternal rows. Crucially, it proposes a “navigation-constraint-driven lightweight segmentation–fitting integrated approach”, achieving substantial progress in three key areas: 1. Transition from scene description to failure quantification: complex field scenarios such as high-density planting of rapeseed paternal lines and weed interference were converted into quantifiable navigation failure criteria using an 11.5-pixel threshold (corresponding to ±30 cm in actual fields), providing a reference for navigation research on densely planted, visually similar crop rows. 2. Method optimisation from module replacement to constraint-driven design: a “navigation performance requirements—customised module development—end-to-end collaborative optimisation” framework was established. Through the synergistic operation of the ADown, DySample, and C2f_FB modules, it was validated that optimisation of lightweight segmentation models must precisely align with core navigation system requirements (e.g., boundary integrity and real-time response speed), rather than merely pursuing numerical improvements in segmentation metrics. This offers new insights for model design in embedded agricultural machinery navigation systems. 3. Guidance for navigation in similar crop scenarios: the proposed lightweight segmentation–polynomial fitting integrated approach demonstrates strong transferability to row navigation scenarios in crops such as wheat, which share dense planting patterns and similar growth characteristics with rapeseed, offering a valuable reference for navigation technology development in such crops. Leveraging the advantages of modular design, this approach holds promise for adapting to the morphological characteristics of diverse crops. It offers a technically replicable and practically valuable solution for intelligent field management of densely planted crops within smart agriculture.

Author Contributions

Conceptualisation, P.J. and Y.S.; writing—original draft, X.W. and S.X.; project administration, P.J. and W.H.; methodology, Y.S.; investigation, X.W. and C.L.; writing—review and editing, X.W. and C.L.; funding acquisition, P.J. and W.H.; software, X.W.; validation, X.W., S.X., C.L. and Y.S.; data curation: X.W. and Y.S.; supervision, P.J. and W.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Key R&D Programme of Hunan Province of China [grant number 2024JK2031]; The Hunan Provincial Science and Technology Department of Major Project—Ten Major Technical Research Projects [grant number 2023NK1020]; Changsha Science and Technology Bureau Natural Science Foundation Project [grant number kq2402110]; Key Project of the Hunan Provincial Department of Education [grant number 23A0179]; Natural Science Foundation Project of the Hunan Provincial Department of Science and Technology [grant number 2025JJ50164].

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

Author Yixin Shi acknowledges that he is a postdoctoral researcher jointly trained by Sunward Intelligent Equipment Co., Ltd. and Hunan Agricultural University. This work was conducted during his postdoctoral tenure. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Sun, W.-H. Hybrid rapeseed seed production technology for removing impurities and maintaining purity. North. Hortic. 2008, 109. Available online: https://kns.cnki.net/kcms2/article/abstract?v=OsVNzKNazbS2dwhY1xpxWDhNk8_F9PMQdwm5qhCfwx5SgWr1t_pC04ZhEd5rpGE-Ts1F38iHUvq2naQAqZrZbxJNDgdn9qy8sp13t7M0HYzNZ5QT_G1dpaC1iZyt7wLI8D0iKcHN1Y4Hx_HF-jbIDgBMv1UnTabzvDj-hTDdT_sKTA3axen5KWrx0KhH8EHWOjGT-nVsnVM=&uniplatform=NZKPT&language=CHS (accessed on 10 February 2026).
  2. Liu, R.; Wang, Q.; Han, M.; Li, Z.; Wei, W. Brief introduction to mechanized seed production technology of rapeseed in Chengdu and Chongqing area. S. China Agric. 2024, 18, 51–54. [Google Scholar] [CrossRef]
  3. Fu, S.; Zhou, X.; Zhang, J.; Chen, S.; Pu, H.; Chen, F.; Long, W.; Peng, Q.; Gao, J.; Zhang, W.; et al. Breeding and seed production of two-line hybrid rape cultivar “Ningza 559” with high and stable yield. Jiangsu Agric. Sci. 2020, 48, 87–91. [Google Scholar] [CrossRef]
  4. Cao, Y.; Ren, J.; Zhang, Z.; Wang, J. Study on the Effect of Simulated Rainfall on the Use of Chemical Hybrid Agent SX-1 in Rapeseed. J. Anhui Agric. Sci. 2019, 47, 33–34. Available online: http://cnki.sdll.cn:85/KCMS/detail/detail.aspx?filename=AHNY201909010&dbcode=CJFD&dbname=CJFD2019 (accessed on 10 February 2026).
  5. Du, C.; Gao, H.; Ma, H.; Hu, S.; Yuan, G. Effect of Male Sterility in Brassica juncea Induced by Monosulfuron Ester Sodium. Acta Agric. Boreali-Occident. Sin. 2021, 30, 212–223. Available online: https://kns.cnki.net/kcms2/article/abstract?v=hGV7VzEtU5mCbRZziiK6zvB_iLhKAPUqIQSj9l04MkmQDCtvbaudHMSlASgINpOUIV44_1z2-YsWk610kX0Nm-Btr_ROFwPbCEEmnts1ssaWE8tA1CgsCNGbUs0p80aS89p4P4FtvYETBt-VcLC-Ea34NZD13ej0Duf7pL35_KE=&uniplatform=NZKPT&language=CHS (accessed on 10 February 2026).
  6. Zhao, J.; Fan, S.; Zhang, B.; Wang, A.; Zhang, L.; Zhu, Q. Research Status and Development Trends of Deep Reinforcement Learning in the Intelligent Transformation of Agricultural Machinery. Agriculture 2025, 15, 1223. [Google Scholar] [CrossRef]
  7. Xiao, J.; Sheng, Q.; An, Y.; Wang, N.; Wang, T.; Li, S.; Li, H.; Zhang, M. An efficient, scalable, and high-precision multifunctional intelligent navigation system for agricultural machinery. Comput. Electron. Agric. 2026, 242, 111336. [Google Scholar] [CrossRef]
  8. Chen, H.; Liu, N.; Zhang, K.; Xia, H.; Yang, Z.; Liu, D. High Efficiency Mechanized Seed Production Technology of Rapeseed in Sichuan. China Seed Ind. 2024, 148–150. [Google Scholar] [CrossRef]
  9. Wang, Y. Research progress of automatic navigation technology for agricultural machinery. China Stand. 2019, 227–228. Available online: https://kns.cnki.net/kcms2/article/abstract?v=nRANE_nPmUqKCoq4e9mSA8CHJ3t4TmsFTPSkYj-1KXJ-c6rxJfvYdytYUYkWAhiN-TNUkYreHwZGrSD1U4gvz763v9XVpEIThOFc_m84_CQ3ApMw27wgghGwpMnqUDJTlgHnsdp2nB4PvhAlJGek6J_Fgkj0_JjNrziD1dUP432tW606jbXIu9MeW97j9R4P&uniplatform=NZKPT&language=CHS (accessed on 10 February 2026).
  10. Gao, G.; Ming, L. Navigating path recognition for greenhouse mobile robot based on K-means algorithm. Trans. Chin. Soc. Agric. Eng. 2014, 30, 25–33. [Google Scholar] [CrossRef]
  11. Jiang, W.; Wang, P.; Cao, Q. Navigation Path Curve Extraction Method Based on Depth Image for Combine Harvester. In Proceedings of the 2020 15th IEEE Conference on Industrial Electronics and Applications (ICIEA), Kristiansand, Norway, 9–13 November 2020; pp. 598–603. [Google Scholar]
  12. Gong, J.; Sun, K.; Zhang, Y.; Lan, Y. Extracting navigation line for rhizome location using gradient descent and corner detection. Trans. Chin. Soc. Agric. Eng. 2022, 38, 177–183. [Google Scholar] [CrossRef]
  13. Chen, J.; Wang, Z.; Long, T.; Wu, J.; Cai, G.; Zhang, H. Research on Navigation Line Extraction of Garden Mobile Robot Based on Edge Detection. J. Intell. Robot. Syst. Theory Appl. 2022, 105, 27. [Google Scholar] [CrossRef]
  14. Lei, L.; Yang, Q.; Yang, L.; Shen, T.; Wang, R.; Fu, C. Deep learning implementation of image segmentation in agricultural applications: A comprehensive review. Artif. Intell. Rev. 2024, 57, 149. [Google Scholar] [CrossRef]
  15. Tan, Y.; Su, W.; Zhao, L.; Lai, Q.; Wang, C.; Jiang, J.; Wang, Y.; Li, P. Navigation path extraction for inter-row robots in Panax notoginseng shade house based on Im-YOLOv5s. Front. Plant Sci. 2023, 14, 1246717. [Google Scholar] [CrossRef]
  16. Su, T.; Wang, L.; Ban, C.; Chi, R.; Ma, Y. Interrow Path Navigation Line Detection of Maize in Middle and Late Period Based on Semantic Segmentation. Trans. Chin. Soc. Agric. Mach. 2024, 55, 275–285. [Google Scholar] [CrossRef]
  17. Peng, S.; Chen, B.; Li, J.; Fan, P.; Liu, X.; Fang, X.; Deng, H.; Zhang, X. Detection of the navigation line between lines in orchard using improved YOLOv7. Trans. Chin. Soc. Agric. Eng. 2023, 39, 131–138. [Google Scholar] [CrossRef]
  18. Ying, C.; Cheng, H.; Ma, Z.; Du, X. Ridge Visual Navigation Control Method for Ground-planted Strawberry Picking Robots Based on YOLO v8-Seg Algorithm. Trans. Chin. Soc. Agric. Mach. 2024, 55, 9–17. [Google Scholar] [CrossRef]
  19. Yu, G.; Wang, Y.; Gan, S.; Xu, H.; Chen, Y.; Wang, L. Extracting the navigation lines of crop-free ridges using improved DeepLabV3+. Trans. Chin. Soc. Agric. Eng. 2024, 40, 168–175. [Google Scholar] [CrossRef]
  20. Gong, H.; Zhuang, W.; Wang, X. Improving the maize crop row navigation line recognition method of YOLOX. Front. Plant Sci. 2024, 15, 1338228. [Google Scholar] [CrossRef]
  21. Dong, W.; Wang, R.; Zeng, F.; Jiang, Y.; Zhang, Y.; Shi, Q.; Liu, Z.; Xu, W. Crop Row Line Detection for Rapeseed Seedlings in Complex Environments Based on Improved BiSeNetV2 and Dynamic Sliding Window Fitting. Agriculture 2026, 16, 23. [Google Scholar] [CrossRef]
  22. Zhao, P.; Chen, J.; Li, J.; Ning, J.; Chang, Y.; Yang, S. Design and Testing of an autonomous laser weeding robot for strawberry fields based on DIN-LW-YOLO. Comput. Electron. Agric. 2025, 229, 109808. [Google Scholar] [CrossRef]
  23. Saha, S.; Noguchi, N. Smart vineyard row navigation: A machine vision approach leveraging YOLOv8. Comput. Electron. Agric. 2025, 229, 109839. [Google Scholar] [CrossRef]
  24. Li, X.; Su, Y.; Yue, Z.; Wang, S.; Zhou, H. Extracting navigation line to detect the maize seedling line using median-point Hough transform. Trans. Chin. Soc. Agric. Eng. 2022, 38, 167–174. [Google Scholar] [CrossRef]
  25. Chen, J.; Qiang, H.; Wu, J.; Xu, G.; Wang, Z.; Liu, X. Extracting the navigation path of a tomato-cucumber greenhouse robot based on a median point Hough transform. Comput. Electron. Agric. 2020, 174, 105472. [Google Scholar] [CrossRef]
  26. Miao, Y.; Li, S.; Wang, L.; Li, H.; Qiu, R.; Zhang, M. A single plant segmentation method of maize point cloud based on Euclidean clustering and K-means clustering. Comput. Electron. Agric. 2023, 210, 107951. [Google Scholar] [CrossRef]
  27. Lu, X.; Zhao, H.; Ren, R.; Su, M.; Su, L.; Zhang, S. Unstructured jujube garden visual navigation path extraction based on YOLOv5s-seg. J. Nanjing Agric. Univ. 2024, 47, 1241–1250. [Google Scholar] [CrossRef]
  28. Zheng, H.; Feng, H.; Xue, X.; Ye, Y.; Yu, J.; Yu, G. Study on navigation line extraction algorithm for leaf vegetable ridges based on instance segmentations. Acta Agric. Zhejiangensis 2025, 37, 701–711. Available online: http://www.zjnyxb.cn/CN/10.3969/j.issn.1004-1524.20240167 (accessed on 10 February 2026).
  29. Guo, T.; Peng, Y.; Han, L.; Jia, T.; Zhang, C.; Liu, W.; Yang, Q.; Huang, H.; Hu, D. MAF-YOLOv8: A lightweight, high-precision deep learning model applied to real-time detection and counting of Betula luminifera seedling leaves. Ind. Crop Prod. 2025, 235, 121716. [Google Scholar] [CrossRef]
  30. Zhang, T.; Zhou, J.; Liu, W.; Yue, R.; Shi, J.; Zhou, C.; Hu, J. SN-CNN: A Lightweight and Accurate Line Extraction Algorithm for Seedling Navigation in Ridge-Planted Vegetables. Agriculture 2024, 14, 1446. [Google Scholar] [CrossRef]
  31. Zhou, M.; Wang, W.; Shi, S.; Huang, Z.; Wang, T. Research on Global Navigation Operations for Rotary Burying of Stubbles Based on Machine Vision. Agriculture 2025, 15, 114. [Google Scholar] [CrossRef]
  32. Luo, Y.; Wei, L.; Xu, L.; Zhang, Q.; Liu, J.; Cai, Q.; Zhang, W. Stereo-vision-based multi-crop harvesting edge detection for precise automatic steering of combine harvester. Biosyst. Eng. 2022, 215, 115–128. [Google Scholar] [CrossRef]
  33. Zhang, Y.; Liu, P.; Li, J.; Gao, Y.; Zhu, K.; Zhang, Y.; Yu, Q.; Wen, F. Extracting facility cucumber phenotypes using improved YOLOv11n-seg. Trans. Chin. Soc. Agric. Eng. 2025, 41, 191–200. [Google Scholar] [CrossRef]
Figure 1. Four Representative Images from the Dataset. (a) Normal rapeseed parent line. (b) Part of the rapeseed parent line is missing. (c) Crossed, dense rapeseed parent lines. (d) Flowering period of the rapeseed paternal row.
Figure 2. Data augmentation. (a) Gaussian blur. (b) Random rotation. (c) Brightness adjustment.
Figure 3. Data Annotation Diagram.
Figure 4. SegNav-YOLOv8n Model Overall Network Architecture Diagram.
Figure 5. ADown Module Network Architecture Diagram. Note: (1) X = Input feature map; X1/X2 = Feature branches after channel splitting; k = Convolution kernel size; s = Stride of convolution/pooling; p = Padding size. (2) Conv (upper branch: k = 3, s = 2, p = 1); Conv (lower branch: k = 1, s = 1, p = 0); MaxPool2d (k = 1, s = 1, p = 0). (3) Input feature map→Channel splitting into X1 and X2 branches (via Chunk module)→X1 branch: Directly processed by Conv (k = 3, s = 2, p = 1) to extract local boundary features→X2 branch: Processed by MaxPool2d (k = 1, s = 1, p = 0) followed by Conv (k = 1, s = 1, p = 0) to preserve global structural features→Concatenation Fusion→Output lightweight feature map with retained boundary information.
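For readers who prefer code to the data-flow description in the note above, the following is a minimal PyTorch sketch of an ADown-style downsampling block: the input is split along the channel dimension, one branch is downsampled with a strided 3 × 3 convolution, the other with pooling followed by a 1 × 1 convolution, and the two results are concatenated. The `Conv` wrapper and the exact pooling settings follow the public YOLOv9 reference implementation and are assumptions here; they may differ in detail from the configuration shown in Figure 5.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Conv(nn.Module):
    """Convolution + BatchNorm + SiLU, the block commonly used in YOLO backbones (assumed wrapper)."""
    def __init__(self, c_in, c_out, k=1, s=1, p=0):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, p, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))


class ADown(nn.Module):
    """ADown-style 2x downsampling: channel split -> two lightweight branches -> concatenation."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.c = c_out // 2
        self.cv1 = Conv(c_in // 2, self.c, k=3, s=2, p=1)  # strided-conv branch (local boundary features)
        self.cv2 = Conv(c_in // 2, self.c, k=1, s=1, p=0)  # pointwise conv after pooling (global structure)

    def forward(self, x):
        x = F.avg_pool2d(x, kernel_size=2, stride=1, padding=0)
        x1, x2 = x.chunk(2, dim=1)                          # channel split into X1 / X2
        x1 = self.cv1(x1)                                   # downsample by stride-2 convolution
        x2 = F.max_pool2d(x2, kernel_size=3, stride=2, padding=1)
        x2 = self.cv2(x2)
        return torch.cat((x1, x2), dim=1)                   # fuse the two branches
```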
Figure 6. DySample Module Network Architecture Diagram. Note: (1) X = Input feature map; O = Intermediate feature map after dynamic convolution; S = Final output feature map; g = Group number for feature grouping; sH = Height scaling factor; sW = Width scaling factor; H = Height of feature map; W = Width of feature map; δ = scaling factor. (2) Input feature map (X)→Channel splitting into two parallel branches→The upper branch generates an attention weight map (with a scaling factor of 0.5δ)→The two branches are multiplied to complete adaptive feature weighting→The weighted feature map is fed into the dynamic convolution module (O), which takes group parameter g and scaling factors sH/sW as inputs→The output of module O is multiplied with the feature map derived from g→The result is the final output feature map (S).
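The data flow in Figure 6 can be illustrated with a simplified, single-group point-sampling upsampler: a 1 × 1 convolution predicts per-pixel sampling offsets, the offsets are added to a regular sampling grid, and the input feature map is resampled bilinearly at the shifted positions. This is only a sketch of the general DySample idea under assumed hyperparameters (scale 2, one group), not the authors' exact module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DySampleSketch(nn.Module):
    """Simplified single-group dynamic upsampler: learned offsets + bilinear point sampling."""
    def __init__(self, channels, scale=2):
        super().__init__()
        self.scale = scale
        # one (dx, dy) offset per output pixel, predicted from the input feature map
        self.offset = nn.Conv2d(channels, 2 * scale * scale, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        s = self.scale
        off = self.offset(x) * 0.25                   # scaled offsets (cf. the 0.5*delta factor in Figure 6)
        off = F.pixel_shuffle(off, s)                 # (b, 2, s*h, s*w)
        off = off.permute(0, 2, 3, 1)                 # (b, s*h, s*w, 2): (dx, dy) per output location
        off = off / off.new_tensor([w, h])            # normalise offsets to the grid coordinate range
        # regular bilinear sampling grid in normalised [-1, 1] coordinates
        ys = torch.linspace(-1, 1, s * h, device=x.device, dtype=x.dtype)
        xs = torch.linspace(-1, 1, s * w, device=x.device, dtype=x.dtype)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        grid = torch.stack((gx, gy), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
        # content-aware resampling: sample the input at the offset positions
        return F.grid_sample(x, grid + off, mode="bilinear",
                             align_corners=False, padding_mode="border")
```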
Figure 7. C2f_FB Module Network Architecture Diagram. Note: Symbols and parameters definition: Cp = Partial channels; δ = Scaling factor; PConv = Partial convolution layer; Conv = Standard convolution layer; H = Height of feature map; W = Width of feature map; Filters = Number of convolution filters (consistent with input channels).
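The PConv (partial convolution) layers referenced in the note for Figure 7 convolve only a subset Cp of the channels and pass the remaining channels through unchanged, which is what makes the c2f_FB variant cheaper than the standard c2f block. Below is a minimal sketch of such a partial-convolution layer; the channel ratio and the identity pass-through follow the FasterNet-style design and are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn


class PConv(nn.Module):
    """Partial convolution: apply a 3x3 convolution to Cp channels only, keep the rest untouched."""
    def __init__(self, channels, ratio=0.25):
        super().__init__()
        self.cp = max(1, int(channels * ratio))  # Cp: the channels that are actually convolved
        self.conv = nn.Conv2d(self.cp, self.cp, kernel_size=3, stride=1, padding=1, bias=False)

    def forward(self, x):
        x1, x2 = torch.split(x, [self.cp, x.size(1) - self.cp], dim=1)
        return torch.cat((self.conv(x1), x2), dim=1)  # spatial size and channel count are preserved
```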
Figure 8. Densitization. (a) RGB colour images. (b) Instance segmentation results. (c) Parental line masking results.
Figure 9. Navigation line fitting results. (a) Canny edge detection results. (b) Navigation line rendering. (c) Final rendering. (The red lines represent the generated navigation lines; the white lines denote the extracted contour lines.)
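As a rough illustration of the post-processing chain in Figures 8 and 9 (segmentation mask → edge detection → line fitting → rendering), the following OpenCV sketch extracts Canny edges from a predicted parent-row mask and fits a single navigation line through the edge points with cv2.fitLine. The thresholds, the fitLine-based fit, and the function name are assumptions for illustration only; the paper's own fitting procedure (compared against Hough and least-squares fitting in Table 4) may differ.

```python
import cv2
import numpy as np


def fit_navigation_line(mask: np.ndarray, image: np.ndarray) -> np.ndarray:
    """Fit and draw a navigation line from a binary parent-row mask (illustrative sketch only)."""
    edges = cv2.Canny(mask.astype(np.uint8) * 255, 50, 150)   # contour of the segmented parent row
    ys, xs = np.nonzero(edges)
    if len(xs) < 2:
        return image                                           # nothing to fit
    pts = np.column_stack((xs, ys)).astype(np.float32)
    vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    h = image.shape[0]
    # intersect the fitted line with the top and bottom image borders (assumes a roughly vertical row, vy != 0)
    p_top = (int(x0 + (0 - y0) * vx / vy), 0)
    p_bot = (int(x0 + (h - 1 - y0) * vx / vy), h - 1)
    cv2.line(image, p_top, p_bot, (0, 0, 255), 2)              # red navigation line, as rendered in Figure 9
    return image
```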
Figure 10. Comparison of parent-row extraction results for rapeseed.
Figure 11. Camera calibration.
Figure 12. Fitted navigation lines and manually observed navigation lines. Note: Red indicates fitted navigation lines; blue indicates manually observed navigation lines. (a) Normal rapeseed parent line. (b) Part of the rapeseed parent line is missing. (c) Crossed, dense rapeseed parent lines. (d) Flowering period of the rapeseed paternal row.
Figure 13. Navigation line deviation analysis.
Table 1. Network training parameters.
Parameters | Value
Initial learning rate | 0.01
Weight decay rate | 0.0005
Number of iterations | 200
Image input dimensions | 640 × 640 pixels
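Assuming the standard Ultralytics YOLOv8 training interface was used, the settings in Table 1 map onto a call like the one below. The dataset YAML and the custom SegNav-YOLOv8n model configuration file are placeholder names, not files released with the paper.

```python
from ultralytics import YOLO

# Hypothetical file names: a custom SegNav-YOLOv8n definition and the rapeseed parent-row dataset config.
model = YOLO("segnav-yolov8n-seg.yaml")
model.train(
    data="rapeseed_parent_rows.yaml",  # dataset config (placeholder name)
    imgsz=640,                         # 640 x 640 input, per Table 1
    epochs=200,                        # the 200 training iterations listed in Table 1
    lr0=0.01,                          # initial learning rate
    weight_decay=0.0005,               # weight decay rate
)
```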
Table 2. Comparison of different models’ test results.
Models | Precision/% (Mean ± SD) | Recall/% (Mean ± SD) | Mask mAP50-95/% (Mean ± SD) | FPS/(Frame·s−1) (Mean ± SD) | Params/M | Model size/MB | FLOPs/G
YOLOv8n-seg | 98.7 ± 0.11 | 99.0 ± 0.13 | 84.3 ± 0.15 | 90.58 ± 0.42 | 3.2 | 6.8 | 12.8
YOLOv9e-seg | 98.9 ± 0.15 | 98.9 ± 0.12 | 84.0 ± 0.18 | 11.65 ± 0.28 | 6.0 | 121.9 | 248.4
YOLOv9c-seg | 99.3 ± 0.08 | 98.7 ± 0.14 | 84.8 ± 0.16 | 21.16 ± 0.35 | 2.7 | 56.2 | 159.4
YOLOv11n-seg | 98.9 ± 0.10 | 99.1 ± 0.11 | 83.8 ± 0.17 | 71.89 ± 0.51 | 2.8 | 6.0 | 9.7
SegNav-YOLOv8n | 99.2 ± 0.09 | 98.8 ± 0.10 | 84.5 ± 0.12 | 90.21 ± 0.31 | 2.6 | 5.5 | 11.4
Table 3. Ablation experiment results.
Models | ADown | DySample | c2f_FB | P/% | FPS/(Frame·s−1) | Params/M | Model size/MB | FLOPs/G
YOLOv8n-seg | × | × | × | 98.7 | 90.58 | 3.2 | 6.8 | 12.8
YOLOv8n-seg + ADown | √ | × | × | 98.9 | 100.5 | 2.9 | 6.2 | 11.4
YOLOv8n-seg + DySample | × | √ | × | 99.1 | 95.24 | 3.2 | 6.8 | 12.1
YOLOv8n-seg + c2f_FB | × | × | √ | 98.8 | 103.41 | 2.9 | 6.1 | 11.4
YOLOv8n-seg + ADown + DySample | √ | √ | × | 99.2 | 89.53 | 2.9 | 6.2 | 11.4
YOLOv8n-seg + ADown + c2f_FB | √ | × | √ | 98.7 | 90.5 | 2.6 | 5.5 | 10.7
YOLOv8n-seg + DySample + c2f_FB | × | √ | √ | 99.1 | 88.03 | 2.9 | 6.1 | 11.4
YOLOv8n-seg + ADown + DySample + c2f_FB | √ | √ | √ | 99.2 | 90.21 | 2.6 | 5.5 | 11.4
Note: √ indicates that the corresponding module is included; × indicates that it is excluded.
Table 4. Pixel errors for different algorithms.
Algorithm | Maximum Deviation | Average Deviation
Hough transform | 12.04 pixels (≈31.30 cm) | 6.86 pixels (≈17.84 cm)
Least squares method | 10.23 pixels (≈26.60 cm) | 5.73 pixels (≈14.90 cm)
This algorithm | 7.45 pixels (≈19.37 cm) | 3.35 pixels (≈8.71 cm)
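The centimetre values in Table 4 are consistent with a single ground-resolution factor of roughly 2.6 cm per pixel (for example, 12.04 px × 2.6 cm/px ≈ 31.3 cm), presumably derived from the camera calibration shown in Figure 11. The snippet below merely reproduces that conversion under the assumed constant scale; it is not the authors' calibration code.

```python
CM_PER_PIXEL = 2.6  # approximate ground resolution implied by Table 4 (assumed constant)


def pixels_to_cm(deviation_px: float) -> float:
    """Convert a lateral deviation measured in pixels to centimetres."""
    return deviation_px * CM_PER_PIXEL


# Example: the maximum deviation of the proposed algorithm
print(round(pixels_to_cm(7.45), 2))  # ≈ 19.37 cm, matching Table 4
```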
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
