Article

Enhanced Multi-Threshold Otsu Algorithm for Corn Seedling Band Centerline Extraction in Straw Row Grouping

1 College of Information and Technology & Smart Agriculture Research Institute, Jilin Agricultural University, Changchun 130118, China
2 Jilin Provincial Academy of Forestry Sciences, Changchun 130033, China
3 College of Engineering and Technology, Jilin Agricultural University, Changchun 130118, China
* Author to whom correspondence should be addressed.
Agronomy 2025, 15(7), 1575; https://doi.org/10.3390/agronomy15071575
Submission received: 30 May 2025 / Revised: 22 June 2025 / Accepted: 25 June 2025 / Published: 27 June 2025

Abstract

Straw row grouping is vital in conservation tillage for precision seeding, and accurate centerline extraction of the seedling bands enhances agricultural spraying efficiency. However, the traditional single-threshold Otsu segmentation struggles with adaptability and accuracy under complex field conditions. To overcome these issues, this study proposes an adaptive multi-threshold Otsu algorithm optimized by a Simulated Annealing-Enhanced Differential Evolution–Whale Optimization Algorithm (SADE-WOA). The method avoids premature convergence and improves population diversity by embedding the crossover mechanism of Differential Evolution (DE) into the Whale Optimization Algorithm (WOA) and introducing a vector disturbance strategy. It adaptively selects thresholds based on straw-covered image features. Combined with least-squares fitting, it suppresses noise and improves centerline continuity. The experimental results show that SADE-WOA accurately separates soil regions while preserving straw texture, achieving higher between-class variance and significantly faster convergence than the other tested algorithms. It runs at just one-tenth of the time of the Grey Wolf Optimizer and one-ninth of that of DE and requires only one-sixth to one-seventh of the time needed by DE-GWO. During centerline fitting, the mean yaw angle error (MEA) ranged from 0.34° to 0.67°, remaining well within the 5° tolerance required for agricultural navigation. The root-mean-square error (RMSE) fell between 0.37° and 0.73°, while the mean relative error (MRE) stayed below 0.2%, effectively reducing the influence of noise and improving both accuracy and robustness.

1. Introduction

Conservation tillage has become a cornerstone of sustainable agriculture, offering significant benefits such as improved soil health, enhanced water retention, and increased carbon sequestration. By minimizing soil disturbance and preserving crop residues on the surface, conservation tillage helps mitigate soil erosion, retain nutrients, and address the growing challenges of climate change. In the black soil regions of Northeast China, the straw row grouping pattern—a tillage practice that alternates straw-covered strips with clean seedbeds—has emerged as a key technique for preserving soil fertility and improving crop quality [1,2].
Wang et al. [3] mentioned this pattern to integrate mechanically aligned straw placement with a wide-narrow row no-tillage system, which effectively reduced soil compaction and enhanced seedbed conditions. However, Ahmad et al. [4] pointed out that an uneven straw distribution can significantly reduce the effectiveness of chemical applications. Moreover, Jin et al. [5] emphasized that accurate crop row detection is critical for improving the operational precision of agricultural sprayers. These findings collectively highlight the necessity of precisely detecting corn seedling band centerlines to support precision agricultural machinery operations. In this study, complex field conditions refer to a combination of environmental and visual challenges that significantly affect image acquisition and segmentation quality in straw row-grouped maize fields. These include variable lighting conditions, such as overexposure, shadows, and glare caused by weather and time-of-day differences; low contrast between soil and straw, especially in areas where straw is partially buried, degraded, or affected by soil moisture; irregular and non-uniform straw distribution, resulting from mechanical placement inconsistencies and wind displacement; visual interference from background noise, such as residual crop debris or stones; and topographical variability, including field undulations and slope-induced image distortion.
These factors introduce noise and ambiguity into the image data, making it difficult for traditional thresholding or segmentation algorithms to maintain high accuracy and robustness. Therefore, any segmentation method intended for use in such environments must demonstrate adaptability to these real-world complexities.
Agricultural robots are machines designed for agricultural production that can be controlled by various software programs. They are capable of adapting to changes in tasks and environments and are equipped with artificial intelligence that enables fully autonomous operation. Cheng et al. [6] concluded from an extensive literature review that the development of agricultural robots can be divided into two stages. Before the end of the 20th century, agricultural automation equipment was predominantly based on electromechanical control technologies. With advancements in sensor and computer technologies, intelligent agricultural equipment began to emerge and has gradually become widespread in the 21st century. Agricultural robots now encompass a wide variety of types. For instance, fertilizing robots can apply fertilizer precisely based on soil conditions, thereby reducing costs and improving water quality [7]. Field weeding robots integrate computer systems, GPS, and tractor technologies to apply herbicides accurately [8]. Citrus-picking robots determine fruit maturity based on visual features and can rapidly harvest and sort fruit [9]. Mushroom-picking robots use cameras and other sensors to identify and harvest mushrooms with high speed and efficiency [10]. The development of agricultural spraying machinery has undergone significant transformation. From the mid-19th to the early 20th century, methods evolved from manual sprinkling using brooms and powder blowers to the emergence of motorized plant protection machinery [11]. In the 1930s, China began developing “Gongnong” brand pesticide application equipment, such as the Gongnong-7 compression sprayer. After the founding of the People’s Republic of China, a variety of backpack-type manual, electric, and motorized sprayers became widely used, playing a central role in plant protection operations.
In the 1950s and 1960s, industrialized countries entered the era of “machines carrying humans”, marked by the rise of large self-propelled boom sprayers and orchard air-blast mist sprayers. China subsequently imported and independently developed various large-scale boom sprayers [12]. In recent years, low-altitude, low-volume pesticide application using unmanned aerial vehicles (UAVs) has seen comprehensive development. Although China entered the field of plant protection UAVs relatively late, this field has experienced rapid growth, with market deployment reaching a significant scale [13]. With the advancement of digital information technologies, global agriculture is transitioning toward smart farming, leading to the emergence of spraying robots. China’s domestically developed orchard autonomous navigation spraying robots now enable unmanned and precise pesticide application. Li et al. [14] designed and tested an automatic contour-following orchard sprayer featuring adjustable air volume and flow rate, which significantly enhances spray uniformity and operational accuracy in orchard environments. Zhu et al. [15] proposed a maize pest and disease detection and precision variable-spraying system based on visual recognition, which enables targeted pesticide application and reduces chemical usage. These developments play a critical role in addressing low pesticide utilization rates and environmental pollution, while supporting food security and the sustainable development of agriculture.
The complex textures introduced by scattered straw complicate visual recognition tasks, hindering the performance of traditional image segmentation techniques. Conventional approaches, such as single-threshold Otsu segmentation, suffer from limited adaptability to changing field conditions, poor accuracy when segmenting overlapping regions, and insufficient real-time performance. These shortcomings lead to high error rates in seedling band centerline extraction. While deep learning methods have been increasingly adopted for navigation line detection in precision agriculture, offering high adaptability and accuracy, they typically require extensive annotated datasets and significant computational resources. For example, Zhou et al. [16] proposed a deep learning-based method to extract visual navigation lines in orchards using a U-Net architecture, achieving robust performance in structured environments, but requiring substantial training data. Liu et al. [17] developed a maize row detection method at the seedling stage using a multi-scale ERFNet model, which improved detection accuracy under varying field textures; however, it involved heavy computational costs unsuitable for lightweight deployment. Likewise, Diao et al. [18] introduced an improved U-Net-based algorithm to enhance crop row recognition accuracy, especially in complex backgrounds, though their method depends on extensive pixel-level annotation. Shi et al. [19] reviewed various row detection and navigation techniques for agricultural robots and autonomous vehicles, emphasizing the effectiveness of deep models but also highlighting issues such as generalization and real-time constraints. Furthermore, Diao et al. [20] proposed an improved YOLOv8s-based navigation line extraction algorithm tailored for spraying robots, which showed excellent real-time detection capability but still necessitates a well-trained backbone and GPU acceleration. 
This limits their practical deployment, especially in resource-constrained agricultural environments.
To address these challenges, our research group has focused on threshold-based segmentation methods that require minimal data annotation and offer greater computational efficiency. Specifically, we developed a DE-GWO-based segmentation algorithm to optimize the Otsu thresholding method [21]. The DE-GWO algorithm combines the global search capabilities of Differential Evolution (DE) [22] with the local exploitation strengths of the Grey Wolf Optimizer (GWO) [23], improving the segmentation precision of overlapping targets under straw-covered conditions. This hybrid approach enhances between-class variance optimization and yields results that are closer to the global optimum.
To further improve segmentation under complex field environments, we employed a modified semantic segmentation model, m-DeepLabV3+ [24], to detect straw coverage with high accuracy, achieving a prediction rate of 93.96%. In related work, other researchers have also applied deep learning to this problem. For instance, a study utilizing UAV-acquired imagery and an improved U-Net architecture reported an accuracy of 93.87% in corn stover segmentation [25]. Another methodology integrated the Sauvola and Otsu algorithms to enhance stover extraction by leveraging chromatic aberration features [26]. While these methods have improved straw segmentation accuracy, they typically focus only on stover detection, neglecting the accurate distinction between soil and seedling bands—an essential step in identifying corn seedling rows.
In addition, achieving high real-time performance remains a persistent challenge. To address this issue, we propose an improved optimization algorithm, SADE-WOA, aimed at enhancing segmentation accuracy and computational efficiency. Compared to the DE-GWO segmentation algorithm previously developed by our research group [21], SADE-WOA incorporates a simulated annealing mechanism to strengthen the local search and help escape local optima. By combining the collaborative search strategies of differential evolution and whale optimization, SADE-WOA achieves a higher maximum between-class variance with the Otsu thresholding method during corn seedling image segmentation experiments, while the computation time is only one-sixth to one-seventh of that of DE-GWO. The SADE-WOA integrates three key innovations: (1) a quasi-oppositional initialization strategy to enhance population diversity and search space coverage; (2) a crossover mechanism from DE to improve global exploration while preserving WOA’s local search capability; (3) a simulated annealing-based perturbation mechanism to refine solutions during the later stages of convergence. These improvements collectively boost the algorithm’s real-time performance and segmentation accuracy in complex agricultural scenarios.
Following the segmentation of straw-covered regions, centerline detection is carried out. This involves morphological denoising, connected component analysis, and edge detection using the Canny operator, followed by centerline fitting via the least-squares method. This complete pipeline supports the accurate extraction of seedling band centerlines and lays a foundation for the intelligent adaptation of agricultural machinery.

2. Materials and Methods

2.1. Image Acquisition

This study focuses on cornfields without seedlings under the straw row grouping pattern. The experiments were conducted at the research base of the Agricultural Machinery Research Institute in Changchun, Jilin Province, China (125.3893342° E, 43.8168784° N), where data were collected during the experimental period. An overview of the research base is shown in Figure 1a. Figure 1b illustrates the schematic diagram of the straw row grouping pattern, one of the no-tillage planting methods characterized by the alternating arrangement of a 70 cm straw-covered row and a 60 cm planting row.
As shown in Figure 2, the image acquisition system integrates a Kubota M704 tractor and a DJI Osmo Action camera to achieve high-precision field image monitoring. The Kubota M704 is equipped with a 74-horsepower diesel engine and a robust hydraulic system, providing stable traction for the sensor array. The DJI Osmo Action camera was mounted on a custom-designed telescopic pole at a height of 3 m, ensuring a clear field of view for image capture. The camera was tilted at a 20° pitch angle to optimize perspective correction and minimize the geometric distortion typically caused by wide-angle imaging [27]. During data collection, images were captured in RGB format at a resolution of 4000 × 2250 pixels and stored in both JPEG and RAW formats. The DJI Osmo Action features electronic image stabilization, which makes it well-suited for mitigating various disturbances commonly encountered during field operations. A stepper motor enables automated lateral translation of the spray nozzle, facilitating precise movement during pesticide application. The system also includes adjustable spray nozzles and a backpack-mounted chemical tank, enabling simultaneous data acquisition and precision spraying during field trials.

2.2. Corn Seedling Band Centerline Detection Algorithm

The adaptive multi-threshold Otsu-based centerline extraction algorithm for corn seedling bands under the straw row grouping pattern proposed in this study consists of four main modules: image preprocessing, image segmentation based on SADE-WOA, morphological postprocessing, and centerline detection. The overall workflow for corn seedling band centerline detection under adaptive multi-threshold segmentation is illustrated in Figure 3.

2.3. Grayscale Preprocessing of Corn Seedling Band Images Based on the YUV Luminance Component

Under complex lighting conditions in agricultural fields, grayscale conversion is a crucial preprocessing step for subsequent image analysis. The YUV luminance-based grayscale method [28] effectively extracts brightness information from an image and enhances the contrast between straw-covered regions and the background. YUV luminance grayscale conversion is achieved by constructing a grayscale image solely using the Y component, which aligns closely with human visual perception of brightness. The original RGB image is first converted to the YUV color space using the following transformation formula, from which the Y component is extracted:
$Y = 0.299\,R + 0.587\,G + 0.114\,B$
In this application scenario, YUV luminance-based grayscale conversion reduces the computational complexity while enhancing the brightness contrast between straw and background, resulting in greater robustness under varying lighting conditions [29]. A grayscale image obtained using the YUV luminance component is illustrated in Figure 4b.
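The Y-component conversion can be sketched in NumPy as follows (an illustrative helper of ours, not the authors' implementation, assuming 8-bit RGB input):

```python
import numpy as np

def yuv_luminance_gray(rgb):
    """Convert an H x W x 3 RGB image to grayscale using the YUV Y component,
    implementing Y = 0.299 R + 0.587 G + 0.114 B."""
    rgb = np.asarray(rgb, dtype=np.float64)
    weights = np.array([0.299, 0.587, 0.114])
    y = rgb @ weights  # weighted sum over the channel (last) axis
    return np.clip(y, 0, 255).astype(np.uint8)
```

Because only the Y channel is needed, the full RGB-to-YUV conversion can be skipped, which is what makes this preprocessing step computationally cheap.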

2.4. Otsu Multi-Threshold Segmentation Based on the Improved Simulated Annealing Differential Evolution–Whale Optimization Algorithm (SADE-WOA)

The accurate extraction of corn seedling band centerline critically depends on effectively identifying the boundaries between straw and seedling regions, making threshold selection a key factor [30]. Considering the complexity of field images under the straw row grouping pattern, this study introduces an improved SADE-WOA. At its core, SADE-WOA enhances WOA by integrating a crossover mechanism from DE after each position update, which introduces vector perturbations to increase population diversity and avoid premature convergence. The between-class variance used in Otsu’s thresholding serves as the objective function for optimization. A partial workflow of the SADE-WOA is shown in Figure 5. A hybrid optimization strategy was adopted to enhance the performance of Otsu’s multi-threshold segmentation. Initially, high-quality initial populations are generated using histogram peak sampling combined with opposition-based learning. The WOA performs a global search with adaptive parameter tuning, while DE-based crossover operations introduce additional diversity. Subsequently, a periodic enhanced simulated annealing mechanism is introduced to refine the local search using a dynamically adjusted step-size neighborhood strategy [31]. An early stopping criterion is also applied to balance efficiency and accuracy. Finally, with histogram precomputation for acceleration, the algorithm achieves efficient and stable optimization of Otsu thresholds.
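To make the objective concrete, the multi-threshold between-class variance that the optimizer maximizes can be sketched as follows (a generic NumPy illustration of the Otsu fitness computed from a precomputed histogram, not the authors' SADE-WOA implementation; the function name is ours):

```python
import numpy as np

def multi_otsu_objective(hist, thresholds):
    """Multi-class between-class variance sum_k w_k * (mu_k - mu)^2 for a
    candidate threshold vector; this is the fitness a population-based
    optimizer (WOA, DE, SADE-WOA, ...) would maximize."""
    p = hist / hist.sum()
    levels = np.arange(len(p))
    mu = (p * levels).sum()  # global mean gray level
    # class boundaries: [0, t1+1), [t1+1, t2+1), ..., [tK+1, L)
    bounds = [0, *sorted(int(t) + 1 for t in thresholds), len(p)]
    sigma_b2 = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()  # class probability w_k
        if w > 0:
            mu_k = (p[lo:hi] * levels[lo:hi]).sum() / w  # class mean mu_k
            sigma_b2 += w * (mu_k - mu) ** 2
    return sigma_b2
```

Precomputing the histogram once, as noted above, means each fitness evaluation touches only L bins rather than the full image.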

2.4.1. Principle of the Otsu Thresholding Algorithm

Otsu image segmentation, also known as the maximum between-class variance method [32], determines the optimal threshold for image segmentation by maximizing the between-class variance between the background and the target. Let the image have $L$ gray levels (0 to $L-1$), where the number of pixels at gray level $i$ is $n_i$, and the total number of pixels is $N = \sum_{i=0}^{L-1} n_i$. The probability of gray level $i$ is:
$p_i = \frac{n_i}{N}$
Assuming that the segmentation threshold is $t$, the pixels are divided into a background class $C_0$ (levels 0 to $t$) and a foreground class $C_1$ (levels $t+1$ to $L-1$). The probabilities of each class are given by:
$\omega_0(t) = \sum_{i=0}^{t} p_i$
$\omega_1(t) = 1 - \omega_0(t)$
Based on this, the mean gray levels of the two classes are calculated as:
$\mu_0(t) = \frac{1}{\omega_0(t)} \sum_{i=0}^{t} i\,p_i$
$\mu_1(t) = \frac{1}{\omega_1(t)} \sum_{i=t+1}^{L-1} i\,p_i$
The total mean of the image is:
$\mu = \omega_0(t)\,\mu_0(t) + \omega_1(t)\,\mu_1(t)$
To measure the separability between the two classes, the between-class variance is defined as:
$\sigma_B^2(t) = \omega_0(t)\,\omega_1(t)\,\left[\mu_0(t) - \mu_1(t)\right]^2$
The formula quantifies the distinction between the background and the foreground by weighting the squared difference between their class means [33]. Finally, by traversing $t \in [0, L-1]$, the optimal threshold $t^* = \arg\max_t \sigma_B^2(t)$ is found, which maximizes the separability between the classes after segmentation. This is equivalent to minimizing the within-class variance, achieving optimal binary segmentation of the image. The image after Otsu thresholding is shown in Figure 6.
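As a minimal sketch of the single-threshold case, the exhaustive search over $t$ can be vectorized with cumulative sums (illustrative code of ours, using the equivalent identity $\sigma_B^2(t) = (\mu\,\omega_0(t) - \mu_t)^2 / (\omega_0(t)\,\omega_1(t))$, where $\mu_t$ is the cumulative first moment up to $t$):

```python
import numpy as np

def otsu_threshold(hist):
    """Single-threshold Otsu: return the t that maximizes the between-class
    variance, computed from a per-gray-level histogram of pixel counts."""
    p = hist / hist.sum()                     # gray-level probabilities p_i
    omega0 = np.cumsum(p)                     # class probability w0(t)
    mu_t = np.cumsum(p * np.arange(len(p)))   # cumulative mean up to t
    mu = mu_t[-1]                             # global mean
    # sigma_B^2(t) = (mu*w0 - mu_t)^2 / (w0 * (1 - w0)); ends give 0/0 -> 0
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu * omega0 - mu_t) ** 2 / (omega0 * (1 - omega0))
    return int(np.argmax(np.nan_to_num(sigma_b2)))
```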

2.4.2. Principle of the Differential Evolution (DE) Algorithm

The DE algorithm was proposed by Storn and Price in 1997 as an efficient population-based stochastic search algorithm for global optimization. Its basic principle is to generate new individuals through differential operations and replace old individuals with these new ones, progressively approaching the optimal solution. The core operations of the algorithm include mutation, crossover, and selection. The mutation operation generates a mutation vector by performing differential operations on two or more individuals. The crossover operation combines the mutation vector with the current individual to generate candidate solutions. The formula for the crossover operation is:
$$u_{i,j} = \begin{cases} v_{i,j}, & \text{if } \mathrm{rand}_j \le CR \ \text{or}\ j = j_{\mathrm{rand}} \\ x_{i,j}, & \text{if } \mathrm{rand}_j > CR \ \text{and}\ j \ne j_{\mathrm{rand}} \end{cases}$$
where $u_{i,j}$ is the $j$-th component of the candidate solution, $v_{i,j}$ is the $j$-th component of the mutation vector, $x_{i,j}$ is the $j$-th component of the current individual, $\mathrm{rand}_j$ is a random number in the range [0, 1], $CR$ is the crossover probability, and $j_{\mathrm{rand}}$ is the randomly selected crossover point. The selection operation then decides whether to accept the candidate solution based on the fitness function. If the candidate solution is better than the current one, it replaces it. This process is iterated continuously, and the algorithm eventually converges to the optimal solution.
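The binomial crossover described above can be sketched as follows (an illustrative helper of ours; the fixed RNG seed is only for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed only for reproducibility

def de_crossover(x, v, CR=0.9):
    """DE binomial crossover: take component j from the mutant v when
    rand_j <= CR or j == j_rand; otherwise keep the parent component x_j."""
    D = len(x)
    j_rand = rng.integers(D)       # guarantees at least one mutant component
    mask = rng.random(D) <= CR     # rand_j <= CR
    mask[j_rand] = True            # force j = j_rand to come from the mutant
    return np.where(mask, v, x)
```

Forcing the `j_rand` component ensures the candidate always differs from the parent in at least one dimension, so the search cannot stall on identical copies.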

2.4.3. Principle of the Whale Optimization Algorithm (WOA)

Mirjalili and Lewis (2016) [34] proposed the population-based Whale Optimization Algorithm (WOA). It is inspired by the hunting behavior of humpback whales, particularly their “bubble-net feeding” strategy. The algorithm primarily includes three behaviors: encircling prey, spiral updating of positions, and searching for prey.
1. Encircling prey: In this behavior, the whale considers the current best solution as the prey and adjusts its position to move closer to this solution. The update formula is as follows:
$D = |C \cdot X^* - X|$
$X(t+1) = X^* - A \cdot D$
where $X^*$ is the current best solution (prey), $X$ is the whale's position, and $A$ and $C$ are coefficient vectors, defined as:
$A = 2a \cdot r_1 - a$
$C = 2 r_2$
where $a$ decreases linearly from 2 to 0 over the course of the iterations, and $r_1$, $r_2$ are random numbers in the range [0, 1].
2. Spiral position update: This simulates the bubble-net feeding behavior of whales, where a whale approaches its prey along a spiral-shaped path. The position update is formulated as follows:
$X(t+1) = D' \cdot e^{bl} \cos(2\pi l) + X^*$
where $D' = |X^* - X|$ represents the distance between the whale and the prey, $b$ is the spiral coefficient, and $l$ is a random number in the range [−1, 1].
3. Search for prey: When $|A| > 1$, the whale moves away from the current best solution to expand the search space. The position update formula is:
$D = |C \cdot X_{\mathrm{rand}} - X|$
$X(t+1) = X_{\mathrm{rand}} - A \cdot D$
where $X_{\mathrm{rand}}$ is a randomly selected whale position from the current population.
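The three behaviors can be combined into a single position-update step, sketched below (an illustrative helper of ours, not the paper's implementation; treating the $|A| > 1$ test as a check on the norm of the coefficient vector is our assumption, and the 0.5 probability of choosing the spiral update follows the standard WOA formulation):

```python
import numpy as np

rng = np.random.default_rng(1)  # fixed seed only for reproducibility

def woa_step(X, X_best, X_rand, a, b=1.0):
    """One WOA position update for a single whale (a vector of thresholds).

    With probability 0.5 use encircling (or random search when |A| > 1);
    otherwise use the spiral bubble-net update."""
    r1, r2 = rng.random(X.size), rng.random(X.size)
    A = 2 * a * r1 - a                 # A = 2a*r1 - a
    C = 2 * r2                         # C = 2*r2
    if rng.random() < 0.5:
        if np.linalg.norm(A) > 1:      # exploration: move relative to a random whale
            D = np.abs(C * X_rand - X)
            return X_rand - A * D
        D = np.abs(C * X_best - X)     # exploitation: encircle the best solution
        return X_best - A * D
    l = rng.uniform(-1, 1)             # spiral update toward the best solution
    D_prime = np.abs(X_best - X)
    return D_prime * np.exp(b * l) * np.cos(2 * np.pi * l) + X_best
```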

2.4.4. Morphological Postprocessing

After initialization and application of the improved Otsu algorithm, a binary image is obtained. However, direct edge extraction from this binary image may still be inadequate. Therefore, it is necessary to perform morphological post-processing operations on the binary image [35].
1. Traverse all connected regions in the binary image and set those with an area smaller than the threshold min_size to background (i.e., 0). The result of this step is shown in Figure 7a. Assuming that $I$ is the input binary image, $S_i$ is the $i$-th connected region, and $A(S_i)$ is the area of this region, with $T$ as the predefined minimum area threshold, the output image $O$ is defined as:
$$O(x, y) = \begin{cases} 0, & \text{if } (x, y) \in S_i \ \text{and}\ A(S_i) < T \\ I(x, y), & \text{otherwise} \end{cases}$$
2. Apply the morphological closing operation to fill the internal voids within the straw regions: $I \bullet B = (I \oplus B) \ominus B$, where $\oplus$ denotes the dilation operation and $\ominus$ denotes the erosion operation. Subsequently, a connected component analysis is performed on the binary image after closing. This process involves traversing all pixels in the binary image and labeling connected foreground regions as the same connected component, assigning a unique label to each one. The connected regions that meet specified criteria are then identified, and their bounding boxes are extracted. The result of this step is shown in Figure 7b.
3. Apply the hole-filling operation to detect and fill background regions that are completely enclosed by foreground pixels in the binary image. Since the foreground pixels in the original binary image are black, an inversion is required before the filling operation to make the foreground white. This inversion ensures the correct execution of the hole-filling process. The result after processing is shown in Figure 7c.
4. First, perform a morphological opening operation using a 3 × 3 elliptical structuring element to smooth the edges of the image and remove small noise. Next, apply an additional opening operation with a 50 × 50 square structuring element to further eliminate larger noise regions while preserving the integrity of the main target areas. The results of these steps are shown in Figure 7d and Figure 7e, respectively.
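The first step of this pipeline (removing connected regions smaller than the area threshold) can be sketched in pure Python as follows (an illustrative 4-connected BFS of ours, not the authors' implementation):

```python
from collections import deque

def remove_small_regions(img, min_size):
    """Set 4-connected foreground (1) regions with area < min_size to 0.

    img: list of lists of 0/1 values; returns a new grid, leaving the
    input unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    seen = [[False] * w for _ in range(h)]
    for sy in range(h):
        for sx in range(w):
            if out[sy][sx] == 1 and not seen[sy][sx]:
                # BFS to collect one connected region S_i
                region, q = [], deque([(sy, sx)])
                seen[sy][sx] = True
                while q:
                    y, x = q.popleft()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and out[ny][nx] == 1 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(region) < min_size:   # area A(S_i) < T -> background
                    for y, x in region:
                        out[y][x] = 0
    return out
```

Production code would normally use a library routine (e.g., a connected-components-with-stats call), but the logic is exactly this area test applied per region.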

2.4.5. Watershed Algorithm Based on Distance Transform

To further segment the target regions, the distance transform method [36] was applied. This method calculates the Euclidean distance from each foreground pixel in the image to the nearest background pixel. The formula is:
$D(x, y) = \min_{(x', y') \in B} \sqrt{(x - x')^2 + (y - y')^2}$
where $B$ represents the set of all background pixels. After computing the distance transform, a thresholding method was used to extract the seed points. This can be expressed as:
$$S_f(x, y) = \begin{cases} 1, & \text{if } D(x, y) > \lambda \cdot \max(D) \\ 0, & \text{otherwise} \end{cases}$$
Here, λ was set to 0.4. This ensured that the segmented foreground regions had a high level of confidence, as only the regions with relatively large distance values—typically corresponding to the centers of the objects—were selected as seed points for the watershed algorithm.
To further complete image segmentation, we applied the watershed algorithm. This algorithm treats an image as a topographic surface, where lower grayscale values represent valleys, and higher values represent ridges. The fundamental principle of the watershed transform is to simulate water filling from valleys, eventually meeting at the ridges to form boundaries. We used connected components to label the foreground regions and initialized a marker matrix as follows:
$$M(x, y) = \begin{cases} k + 1, & \text{if } (x, y) \in S_f \text{ (the } k\text{-th connected component)} \\ 1, & \text{if } (x, y) \text{ is background} \\ 0, & \text{if } (x, y) \text{ is an unknown region} \end{cases}$$
Then, this marker matrix was used as the input to the watershed algorithm to segment the different target regions. Pixels located on the watershed boundaries were finally set to 0.
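The distance transform and seed extraction can be sketched as follows (a brute-force NumPy illustration of ours; practical implementations use fast distance-transform algorithms rather than this $O(|\text{fg}| \cdot |\text{bg}|)$ scan):

```python
import numpy as np

def distance_transform(binary):
    """Brute-force Euclidean distance transform: for each foreground pixel,
    the distance to the nearest background pixel (didactic, not fast)."""
    fg = np.argwhere(binary == 1)
    bg = np.argwhere(binary == 0)
    D = np.zeros(binary.shape, dtype=float)
    for y, x in fg:
        D[y, x] = np.sqrt(((bg - (y, x)) ** 2).sum(axis=1)).min()
    return D

def seed_points(D, lam=0.4):
    """Keep pixels with D > lam * max(D) as high-confidence watershed seeds
    (the text sets lam = 0.4)."""
    return (D > lam * D.max()).astype(int)
```

The surviving seeds sit near region centers, which is why they make reliable markers for the subsequent watershed flooding.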

2.4.6. Connected Domain Analysis and Edge Detection

After applying the watershed algorithm based on distance transformation, connected domain analysis was conducted to filter out large target regions. Let the number of connected domains be $N$. For each connected domain $C_i$ ($i = 1, 2, \ldots, N$), its area $A_i$ is calculated as follows:
$A_i = \sum_{(x, y) \in C_i} 1$
After multiple tests, a minimum area threshold $T_{\min}$ (200,000 pixels, adjustable in real time) was set, and only the connected domains satisfying $A_i \ge T_{\min}$ were retained.
Subsequently, edge detection was performed on the selected connected regions. The Canny edge detection algorithm was employed, which involves the following steps:
1. Calculate the image gradients $G_x$ and $G_y$ by convolving the image $I$ with the Sobel kernels $S_x$ and $S_y$:
$G_x = I * S_x, \quad G_y = I * S_y$
2. Compute the gradient magnitude:
$G = \sqrt{G_x^2 + G_y^2}$
3. Apply double thresholding and edge linking (hysteresis) to remove non-significant edges.
Using the above algorithm, edge extraction was performed separately on the left and right halves of the target region. The coordinates of the extracted edge points were then obtained for subsequent centerline computation.
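The first two Canny steps can be sketched with an explicit Sobel convolution (illustrative code of ours, computing only the valid interior region; the sign convention does not affect the magnitude):

```python
import numpy as np

SX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # Sobel S_x
SY = SX.T                                                          # Sobel S_y

def gradient_magnitude(img):
    """Compute G_x = I * S_x, G_y = I * S_y and G = sqrt(G_x^2 + G_y^2)
    over the valid (h-2) x (w-2) interior of the image."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    Gx = np.zeros((h - 2, w - 2))
    Gy = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            patch = img[y:y + 3, x:x + 3]
            Gx[y, x] = (patch * SX).sum()
            Gy[y, x] = (patch * SY).sum()
    return np.sqrt(Gx ** 2 + Gy ** 2)
```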

2.4.7. Least-Squares Line Fitting

To compute the centerline, we analyzed the detected left and right edge points and applied the least-squares method [37] for linear regression. This technique minimizes the sum of squared vertical distances (residuals) between the observed values and the fitted line, ensuring unbiased and efficient parameter estimation. Given the set of centerline points $P = \{(x_i, y_i)\}$, our goal was to determine the linear equation:
y = ax + b
The parameters a and b are determined by minimizing the following error function:
$E(a, b) = \sum_{i=1}^{N} (y_i - a x_i - b)^2$
By taking the partial derivatives of the error function with respect to $a$ and $b$, and setting them to zero, we obtain:
$\frac{\partial E}{\partial a} = -2 \sum_{i=1}^{N} x_i (y_i - a x_i - b) = 0$
$\frac{\partial E}{\partial b} = -2 \sum_{i=1}^{N} (y_i - a x_i - b) = 0$
Solving these equations yields the slope $a$ and the intercept $b$ of the fitted line as:
$a = \frac{N \sum x_i y_i - \sum x_i \sum y_i}{N \sum x_i^2 - \left(\sum x_i\right)^2}$
$b = \frac{\sum_{i=1}^{N} y_i - a \sum_{i=1}^{N} x_i}{N}$
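The closed-form solution can be sketched as follows (an illustrative helper of ours, directly implementing the normal equations for $a$ and $b$):

```python
import numpy as np

def fit_line(points):
    """Closed-form least-squares fit of y = a*x + b:
    a = (N*Sxy - Sx*Sy) / (N*Sxx - Sx^2), b = (Sy - a*Sx) / N."""
    x = np.array([p[0] for p in points], dtype=float)
    y = np.array([p[1] for p in points], dtype=float)
    N = len(x)
    a = (N * (x * y).sum() - x.sum() * y.sum()) / (N * (x * x).sum() - x.sum() ** 2)
    b = (y.sum() - a * x.sum()) / N
    return a, b
```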

3. Results

To evaluate the performance of the proposed algorithm, adaptive testing, real-time testing, and comparative performance analysis were conducted. The experimental computer was configured with Windows 10, an Intel Core i5 CPU, an NVIDIA GeForce MX150 GPU, 8 GB of RAM, and Visual Studio Code 1.101.2 as the image processing platform.

3.1. Otsu Multi-Threshold Segmentation Results Based on the Improved Simulated Annealing Differential Evolution–Whale Optimization Algorithm (SADE-WOA)

Threshold-based segmentation methods are widely used in image processing and computer vision tasks. To improve the accuracy and efficiency of segmentation results, this study proposes an Otsu thresholding method based on the improved SADE-WOA. Comparative experiments were conducted against several existing optimization algorithms. Specifically, we compared the proposed method with the traditional GWO, DE, DE-GWO, and the improved Whale Optimization Algorithm (WOA+) on the task of Otsu multi-threshold segmentation. For all algorithms, the initial population size was set to 30, the maximum number of iterations was 200, and each algorithm was independently run 10 times with the number of segmentation thresholds set to 2, 3, and 4.
Multi-threshold segmentation experiments were conducted using the proposed algorithm and four other algorithms on three original images of maize fields, applying two-, three-, and four-level thresholding, as shown in Figure 8. Figure 8a–c display the original images. In Figure 8d–f, two-threshold segmentation effectively distinguishes between soil and straw, resulting in a simplified image structure and reduced computational complexity, which is suitable for fast processing tasks. However, due to the limited number of thresholds, this method struggled to capture subtle variations within the field, leading to blurred boundaries between straw and soil in some areas. Figure 8g–i present the results of three-threshold segmentation. By introducing an additional threshold, the image was further divided into more distinct agricultural regions, such as dark soil, light soil, and straw. This enhanced the clarity of the straw regions and made the structure between rows more prominent. Nonetheless, this approach could still cause over-segmentation or detail loss in certain areas: for instance, in Figure 8i, some straw boundaries remained imprecise. Figure 8j–l show the results of four-threshold segmentation. With an increased number of thresholds, the hierarchical segmentation of the image was significantly improved, making the field structure more intuitive and the boundaries between straw and soil more distinct. This level of segmentation achieved the best balance among various regions. However, the computational cost is higher compared to that of the two- and three-threshold methods, and in some cases, small noisy regions may emerge—for example, localized background noise caused slight inaccuracies in Figure 8l. In summary, the results of multi-threshold segmentation demonstrated that for complex images of maize fields during the seedling-absent stage under straw return patterns, the proposed SADE-WOA can effectively perform multi-threshold segmentation. 
Among the tested configurations, four-threshold segmentation performed best, enabling the precise delineation of soil areas while preserving straw texture information.
Table 1 presents the optimal thresholds and corresponding maximum between-class variance values obtained by the five different algorithms when segmenting cornfield images. As shown, when the number of thresholds was relatively small, the optimal thresholds selected by the various algorithms tended to be similar, making it difficult to distinguish segmentation accuracy clearly. However, as the number of thresholds increased, the number of segmented regions also rose, leading to a corresponding increase in the complexity of the image information to be processed. Under these circumstances, the demand for segmentation accuracy grew exponentially. In this study, the criterion for threshold selection was the maximization of between-class variance, which essentially determines the optimal threshold by maximizing the statistical separability between foreground and background. A higher between-class variance indicates a better threshold selection and thus greater segmentation accuracy. By comparing the threshold values and between-class variances across different numbers of thresholds and test image scenarios, it was evident that in multi-threshold segmentation—particularly when the number of thresholds was ≥3—the SADE-WOA effectively enhanced the between-class variance through optimized threshold combinations. For instance, under the three-threshold condition in Figure 8a, the between-class variance exceeded that of GWO by 0.9972, and under the four-threshold condition in the same scenario, it exceeded that of GWO by 3.7195. This demonstrated an improved discriminability between image classes. In contrast, traditional algorithms such as GWO and DE showed limited improvement in between-class variance as the threshold number increased and were more affected by image complexity. In summary, these results validate the effectiveness of the SADE-WOA in image threshold segmentation tasks, particularly in addressing the precision demands of complex agricultural imagery. 
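The between-class variance criterion that all five optimizers maximize can be expressed compactly. The following sketch (the function name and 256-bin histogram convention are ours, not the paper's) scores a candidate threshold set against a grayscale histogram; the optimizer then searches for the threshold tuple that maximizes this value:

```python
import numpy as np

def between_class_variance(hist, thresholds):
    """Otsu between-class variance of a gray-level histogram split at `thresholds`.

    hist: 1-D array of 256 bin counts; thresholds: gray levels in ascending order.
    """
    p = hist / hist.sum()                       # normalized histogram
    levels = np.arange(len(hist))
    edges = [0, *sorted(thresholds), len(hist)] # class boundaries [lo, hi)
    mu_total = (p * levels).sum()               # global mean gray level
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                      # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w  # class mean
            var += w * (mu - mu_total) ** 2
    return var
```

With k thresholds the search space is k-dimensional, which is why exhaustive evaluation becomes impractical and metaheuristics such as SADE-WOA are used to locate the maximum.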
The results also confirm the effectiveness of integrating the differential evolution crossover mechanism into the whale optimization algorithm’s position updating process.
The convergence curves obtained from applying two-, three-, and four-level thresholding to the three groups of images (Figure 8a–c) are shown in Figure 9. The insets within each plot display a magnified view of the convergence behavior during the final 50 iterations for detailed observation. From the results, it is evident that for two-level thresholding (Figure 9a–c), the curve rapidly ascends to a high between-class variance value, indicating that the proposed algorithm efficiently locates the optimal segmentation thresholds due to its improved design. As the number of thresholds increases, such as in the four-level segmentation scenarios (Figure 9g–i), SADE-WOA consistently maintains stable convergence speed and accuracy, outperforming other algorithms in both precision and robustness. These results collectively demonstrate that SADE-WOA exhibits strong robustness and fast convergence when applied to multi-objective adaptive threshold optimization problems in complex field image segmentation tasks.
Table 2 summarizes the average runtime of the five algorithms during the multi-threshold segmentation process. The results indicate that the SADE-WOA achieved the shortest execution time in eight out of nine test cases. For example, in Figure 8a, when the number of thresholds was two, SADE-WOA (0.0640 s) significantly outperformed GWO (0.6785 s) and DE (0.4969 s); when the threshold count increased to four, SADE-WOA (0.0922 s) still exhibited the fastest performance among all algorithms. Similarly, in Figure 8b, for two thresholds, SADE-WOA (0.0491 s) was faster than GWO (0.8896 s) and DE (0.3786 s). In Figure 8c, for two thresholds, SADE-WOA (0.0550 s) again surpassed GWO (0.5728 s) and DE (0.4319 s). Overall, the runtime of SADE-WOA was approximately one-tenth of that of GWO, one-ninth of that of DE, one-sixth to one-seventh of that of DE-GWO, and one-quarter of that of the improved WOA (WOA+), which demonstrates its outstanding computational efficiency. Moreover, SADE-WOA achieved the lowest standard deviation in runtime, indicating superior stability. These findings confirmed that the proposed method enables fast and stable performance under complex farmland conditions, fully satisfying the requirements for real-time processing.
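The runtime comparison in Table 2 amounts to timing each optimizer over repeated independent runs and reporting the mean and standard deviation. A minimal harness of this kind (the `optimizer` callable and its signature are illustrative assumptions, not the paper's interface) could look like:

```python
import time
import statistics

def benchmark(optimizer, hist, n_thresholds, runs=10):
    """Mean runtime and its standard deviation for a threshold optimizer.

    `optimizer(hist, n_thresholds)` is any callable returning a threshold tuple;
    name and signature are hypothetical, for illustration only.
    """
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        optimizer(hist, n_thresholds)
        times.append(time.perf_counter() - t0)
    return statistics.mean(times), statistics.stdev(times)
```

Reporting the standard deviation alongside the mean, as the paper does, is what supports the stability claim: a low spread over the 10 runs indicates the optimizer's runtime does not depend on its random initialization.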

3.2. Centerline Detection Results for Corn Seedling Band Recognition

The purpose of detecting the centerlines of corn seedling bands during the seedling-free stage using the proposed algorithm is to enable precise row-based pesticide spraying by agricultural machinery. Since field operations require broad coverage, fast response times, and high precision, this algorithm was designed to detect multiple seedling band centerlines simultaneously.
To evaluate the accuracy of the proposed algorithm in detecting corn seedling band centerlines, manual annotations were conducted on seedling band images. These manually annotated centerlines served as the reference standard, created in strict accordance with agronomic requirements. According to the criteria for agricultural machinery navigation and pesticide spraying, the angular deviation between the manually annotated centerlines and those extracted by the algorithm must not exceed 5° [38]. If this threshold is exceeded, the algorithm-detected centerline is considered invalid.
Field images captured in November 2024 at the research base of the Changchun Institute of Agricultural Machinery were used for evaluation. A total of 200 corn seedling band centerlines were identified across 50 selected images, and the detection results for five of them are illustrated in Figure 10 as an example. The manually annotated reference centerline is displayed alongside the fitted centerline generated by the algorithm within the same image for direct comparison. The algorithm results, which included both least-squares and polynomial fitting methods, were evaluated against the reference by calculating the angular deviation between them. To assess the precision of the system, densely sampled interpolation points were taken along both the fitted and the reference centerlines. At each point, the tangent (heading) angle was computed using Equation (30) [39], and the difference in heading direction across the continuous spatial domain was used to quantify the deviation in trajectory between the two lines.
$\theta_i = \arctan\left(\dfrac{dy_i}{dx_i}\right)$
In the above equation, $dx_i$ and $dy_i$ represent the gradients of the curve at the $i$-th point, calculated through numerical differentiation, and the angular difference between the manually annotated centerline and the algorithm-fitted centerline at each interpolated point is denoted as $\Delta\theta_i = \mathrm{unwrap}\left(\theta_{\mathrm{ls},i} - \theta_{\mathrm{manual},i}\right)$. To strengthen the reliability of the evaluation, three mathematical metrics were employed to assess the angular deviations: mean error angle (MEA), root-mean-square error (RMSE), and mean relative error (MRE). These are defined as follows:
$\mathrm{MEA} = \dfrac{1}{N}\sum_{i=1}^{N}\left|\Delta\theta_i\right|$

$\mathrm{RMSE} = \sqrt{\dfrac{1}{N}\sum_{i=1}^{N}\left|\Delta\theta_i\right|^{2}}$

$\mathrm{MRE} = \dfrac{1}{N}\sum_{i=1}^{N}\dfrac{\left|\Delta\theta_i\right|}{\left|\theta_{\mathrm{manual},i}\right|}$
where $N$ denotes the total number of interpolated points. All resulting angular measurements were converted from radians to degrees to ensure interpretability and consistency in the evaluation.
A smaller MEA (mean error angle) indicates a lower overall angular deviation between the fitted line and the reference line. The RMSE (root-mean-square error) reflects both the magnitude and the distribution of the error; a lower RMSE suggests smaller fluctuations and higher stability across the dataset. The MRE (mean relative error) quantifies the angular error as a proportion of the reference line’s own angle, with lower values indicating better relative accuracy. However, the MRE may become unreliable when the reference angle approaches zero, as small denominators can exaggerate the relative error.
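Under these definitions, the evaluation reduces to a few lines of NumPy. The sketch below is our own helper, not the paper's code: it substitutes `arctan2` for the plain arctangent of Equation (30) to avoid division by zero at vertical steps, and returns the MRE as a dimensionless ratio:

```python
import numpy as np

def angle_metrics(fit_xy, ref_xy):
    """MEA and RMSE (degrees) and MRE (ratio) between two sampled centerlines.

    Each input is an (N, 2) array of interpolation points along a line.
    """
    def headings(xy):
        dx = np.gradient(xy[:, 0])             # numerical differentiation
        dy = np.gradient(xy[:, 1])
        return np.arctan2(dy, dx)              # safer than arctan(dy/dx)

    dtheta = np.unwrap(headings(fit_xy) - headings(ref_xy))
    ref = headings(ref_xy)                     # MRE is unstable if ref ~ 0
    mea = np.mean(np.abs(dtheta))
    rmse = np.sqrt(np.mean(dtheta ** 2))
    mre = np.mean(np.abs(dtheta) / np.abs(ref))
    return np.degrees(mea), np.degrees(rmse), mre
```

Because both heading arrays are in radians, the MRE ratio is unit-free, while the MEA and RMSE are converted to degrees for comparison against the 5° agronomic tolerance.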
Based on the aforementioned angular deviation evaluation framework, a comparative assessment was conducted between two centerline fitting algorithms and the manually annotated reference lines for Images 1 through 5. The evaluation results are presented in Figure 11.
Figure 11 illustrates the comparison between two centerline fitting algorithms and the manually annotated reference lines for corn seedling rows under straw alignment conditions during the seedling-absent growth stage. The first five subfigures (Figure 11a–e) show results from the least-squares linear fitting method, which demonstrated high consistency in detecting centerlines, especially in relatively straight seedling row regions, indicating stable performance. In contrast, the last five subfigures (Figure 11f–j) present results from the polynomial fitting method, which exhibited a stronger capacity to accommodate curved trajectories but also showed signs of overfitting fluctuations in complex terrain.
Table 3 compares the performance of the least-squares method (LSM) and polynomial fitting (PF) across five test images using the mean error angle (MEA), root-mean-square error (RMSE), and mean relative error (MRE). For Image 1, the LSM demonstrated lower errors across all metrics (MEA: 0.52°, RMSE: 0.59°, MRE: 0.30%) compared to PF (MEA: 0.65°, RMSE: 0.73°, MRE: 0.37%). The 20.0% reduction in MEA and 19.2% lower RMSE highlight LSM’s superior consistency in bias control, as its linear optimization framework avoids the overfitting risk inherent in PF’s higher-order term adjustments.
In Image 2, this advantage becomes even more pronounced: the LSM maintained stable performance (MEA: 0.52°, RMSE: 0.56°, MRE: 0.29%), while PF exhibited a significant degradation (MEA: 2.05°, RMSE: 2.36°, MRE: 1.17%). Here, PF’s error metrics surged by over 300% relative to those of LSM, likely due to overfitting or instability in handling complex data patterns—LSM’s simplicity in linear fitting proved more robust against noise or subtle nonlinearities that PF amplified.
For Image 3, the LSM continued to dominate with minimal errors (MEA: 0.34°, RMSE: 0.52°, MRE: 0.20%), whereas PF struggled with higher deviations (MEA: 1.06°, RMSE: 1.30°, MRE: 0.61%). The 212% increase in PF’s MEA and 150% higher RMSE further emphasize its susceptibility to nonlinear trends; LSM’s reliance on least-squares minimization effectively smoothed out irregularities, unlike PF’s tendency to overfit local data fluctuations.
In Image 4, the LSM achieved near-optimal RMSE (0.37°) and MRE (0.20%), outperforming PF (RMSE: 0.52°, MRE: 0.24%), despite a marginal MEA difference (LSM: 0.34° vs. PF: 0.41°). This suggests better error distribution control in LSM: its linear model achieves a more favorable bias–variance tradeoff, whereas PF’s flexibility introduced uneven error contributions (e.g., larger residuals in edge cases).
Image 5 presents a unique case: for PF there was a marginal improvement in MEA (0.64°) and MRE (0.36%) compared to LSM (MEA: 0.67°, MRE: 0.38%), but its higher RMSE (0.75° vs. 0.73°) indicates inconsistent performance. This anomaly stemmed from PF’s localized fitting gains in certain data segments, which came at the cost of increased variance in others, reinforcing LSM’s reliability as a globally stable solution.
Overall, the least-squares method consistently delivered lower errors and greater stability across diverse scenarios, while polynomial fitting showed conditional effectiveness but remained prone to instability in complex or noisy environments.
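Both fitting schemes reduce to polynomial least squares of different degree. A minimal sketch with NumPy (synthetic points rather than field data; the degree choices and seed are illustrative assumptions) shows how the two behave on a straight, noisy row:

```python
import numpy as np

def fit_centerline(points, degree=1):
    """Fit x = f(y) to centerline candidate points via polynomial least squares.

    degree=1 reproduces the least-squares line (LSM); degree>=2 gives the
    polynomial fit (PF) that can track curved rows but may chase noise.
    """
    y, x = points[:, 1], points[:, 0]
    return np.poly1d(np.polyfit(y, x, degree))

# Synthetic straight row: x = 0.2*y + 5 plus Gaussian pixel noise.
rng = np.random.default_rng(0)
y = np.linspace(0.0, 100.0, 60)
x = 0.2 * y + 5.0 + rng.normal(0.0, 0.5, y.size)
pts = np.column_stack([x, y])

lsm = fit_centerline(pts, degree=1)   # linear fit
pf = fit_centerline(pts, degree=3)    # polynomial fit
```

On a straight row both models recover the true slope; the extra degrees of freedom in the polynomial fit only start to matter, for better or worse, once the row curves or the noise is spatially structured, which is consistent with the error pattern reported in Table 3.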

3.3. Field Spraying Results of Agricultural Machinery

The field spraying trial using agricultural machinery was conducted on 18 May 2025 in corn fields under straw-banding planting patterns. The spraying results are shown in Figure 12. During the non-seedling period, the field operation achieved consistent coverage and effective droplet distribution along the anticipated corn seedling rows. The integration of automatic spray nozzles with real-time positioning based on centerline detection enabled uniform pesticide application while minimizing the overlap between adjacent seedling rows. Throughout the experiment, the spraying precision remained within acceptable agricultural standards, with minimal liquid waste, further confirming the high accuracy of the centerline detection method.

4. Discussion

4.1. Segmentation Result Analysis

In this study, the segmentation results were evaluated from multiple dimensions, including visual quality, quantitative metrics, and algorithmic efficiency. The proposed SADE-WOA-based Otsu multi-threshold segmentation algorithm was compared against several benchmark methods, such as GWO, DE, DE-GWO, and WOA+, under varying threshold levels (2, 3, and 4). The experimental results demonstrated that the proposed SADE-WOA-based multi-threshold segmentation algorithm achieved superior performance in both segmentation accuracy and computational efficiency. Visual analysis across three maize field images indicated that the two-threshold segmentation method effectively separated soil from straw, offering a simplified image structure and low computational cost. However, this simplicity came at the expense of finer detail, particularly at soil–straw boundaries. As the number of thresholds increased, segmentation granularity improved—three-threshold segmentation introduced an intermediate class, enhancing the clarity of inter-row structures and straw regions. Four-threshold segmentation achieved the most detailed and stratified results, clearly delineating different agricultural elements, albeit with a slight increase in processing time and occasional local noise. Quantitatively, the SADE-WOA consistently yielded the highest or near-highest between-class variance values across all threshold levels, particularly excelling under higher thresholds. This confirmed the algorithm’s ability to optimize the results in high-dimensional solution spaces. Furthermore, the convergence analysis revealed that SADE-WOA maintained a fast and stable convergence profile across all tests, outperforming comparative algorithms including GWO, DE, and DE-GWO. 
Runtime evaluations further validated the algorithm’s practicality, as it achieved the shortest execution time in eight out of nine cases and exhibited the lowest standard deviation, which underscores its robustness and suitability for real-time agricultural applications.
Compared with recent studies, the proposed SADE-WOA-based segmentation method demonstrates competitive performance in both segmentation accuracy and computational efficiency. For instance, Ning et al. (2022) [40] employed a hybrid Whale Optimization Algorithm to enhance two-dimensional Otsu thresholding, reporting improved segmentation quality. However, their approach did not guarantee efficient runtime performance, particularly in processing large-scale images. Similarly, Liu et al. (2024) [32] applied an optimized Otsu-based method for the segmentation of electrical equipment images, achieving segmentation accuracy comparable to that of conventional algorithms. The quality of segmentation was validated using the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Nevertheless, their method exhibited reduced accuracy when dealing with images containing multiple regions with similar grayscale intensities.
In another recent study, Zheng et al. (2022) [41] utilized an improved particle swarm optimization (IPSO) algorithm to optimize Otsu multi-threshold segmentation. Their results indicated that IPSO outperformed other algorithms in terms of average value and standard deviation across most test functions. Moreover, the segmentation accuracy improved significantly with the increase in the number of thresholds. However, the computational load also increased substantially, rendering it less suitable for scenarios that require high real-time performance.
In contrast, our proposed SADE-WOA method integrates a self-adaptive mutation mechanism and an enhanced population diversity strategy, leading to more effective convergence and superior threshold optimization—especially in high-dimensional segmentation tasks. Across all tested scenarios, our algorithm consistently maintained the lowest standard deviation in runtime, which highlights its robustness and stability. These findings suggest that SADE-WOA offers a well-balanced tradeoff between segmentation precision and computational cost, making it particularly suitable for real-time segmentation applications in complex agricultural environments.
Despite the promising performance, the proposed SADE-WOA still has certain limitations. First, although the algorithm maintains high segmentation accuracy and fast convergence, the computational complexity slightly increases with the number of thresholds, which may affect efficiency in resource-constrained environments. Second, in scenarios with severe background noise or extreme lighting variations, the algorithm may produce small misclassifications or fragmented regions. Lastly, while the current approach effectively segments straw and soil, it lacks adaptive adjustment mechanisms for dynamically varying field conditions, which could impact its generalizability across different crop types or growth stages. Future work will focus on integrating contextual information and adaptive thresholding strategies to enhance robustness and scalability.

4.2. Analysis of the Corn Seedling Band Centerline Detection Results

After applying the improved SADE-WOA for multi-threshold segmentation, the resulting segmented images were further processed to detect the centerlines of the maize seedling bands. The centerline detection results for the maize seedling bands demonstrated that the proposed algorithm performed with high accuracy and robustness under complex field conditions. By comparing the algorithmically extracted centerlines with manually annotated reference lines, it was observed that the minimum deviation angles consistently remained within the agronomic threshold of 5°, confirming the validity of the method for guiding agricultural machinery operations such as row-based spraying. The least-squares fitting approach provided stable performance in relatively straight seedling bands, while the polynomial fitting method showed better adaptability in curved or irregular regions, albeit with occasional overfitting. A quantitative evaluation using MEA, RMSE, and MRE further validated the reliability of the proposed approach, with the least-squares method generally achieving lower error values.
Compared with recent studies, our centerline detection method demonstrated notable improvements in both accuracy and robustness to visual noise. For instance, Ronchetti et al. (2020) [42] validated the feasibility of using UAV imagery for crop row detection, employing RGB sensors and simple algorithms such as Bayesian segmentation. While their approach achieved competitive accuracy and maintained low operational costs, it was sensitive to environmental changes, required substantial manual intervention, and exhibited relatively high computational latency. Similarly, Zhang et al. (2023) [43] proposed an enhanced U-Net-based architecture for corn row detection, which delivered high precision in structured field environments. However, their method heavily relies on large-scale annotated datasets, involves a complex network structure, and demands high-end computational resources.
In contrast, our improved SADE-WOA-based pipeline achieved reliable detection accuracy and operational stability under varying field conditions without the need for extensive labeled data or GPU acceleration. Unlike deep learning-based approaches, our method is lightweight and computationally efficient, which makes it well suited for deployment on embedded platforms. Furthermore, the use of the least-squares fitting technique effectively smooths the centerlines and mitigates local distortions without requiring additional postprocessing. These characteristics underscore the practical advantages of our method for real-time applications in precision agriculture, particularly in scenarios where computing resources are limited.
Although the centerline detection was generally effective, certain limitations were observed. In areas where straw distribution was uneven or the contrast between soil and straw was relatively low, the extracted centerlines occasionally exhibited slight deviations or discontinuities. The proposed method also demonstrated sensitivity to external environmental factors. Specifically, the time of day had a significant impact on the detection results. Trials conducted during early morning or late afternoon, when ambient light is low, often lead to poor image quality due to insufficient illumination, which in turn affects the accuracy of centerline extraction. On the other hand, operations at noon under strong sunlight can introduce excessive shadows cast by agricultural machinery, also causing detection errors. Moreover, the weather conditions play a crucial role. For example, on cloudy days, the diffuse lighting softens the contrast between different field elements, potentially leading to less precise centerline detection. These findings indicate that the method’s performance is closely related to variations in field texture, illumination conditions, and weather circumstances.

4.3. Future Work

Future work will focus on addressing the identified limitations in both the segmentation and the centerline detection processes. To enhance segmentation performance in complex environments, especially under variable lighting and soil–stover contrast conditions, we plan to incorporate adaptive thresholding strategies and explore lightweight deep learning models that balance precision and efficiency. For centerline detection, efforts will be directed toward developing more robust fitting algorithms capable of maintaining continuity and accuracy in the presence of irregular textures and sparse features. Additionally, real-time deployment on embedded systems will be considered to support in-field agricultural operations with high responsiveness and reliability.

5. Conclusions

This study addressed the challenge of centerline detection in corn fields during the non-seedling stage under a straw row grouping pattern. An enhanced SADE-WOA is proposed and was employed for the adaptive multi-threshold Otsu segmentation of field images, serving as a key preprocessing step for subsequent seedling band centerline extraction.
Specifically designed for complex visual conditions in no-seedling-period cornfields with straw row grouping patterns, the SADE-WOA-optimized segmentation effectively separated soil regions while retaining straw texture details, demonstrating dual adaptability to heterogeneous image features. Quantitatively, the method consistently outperformed conventional algorithms in between-class variance, indicating superior class separability. It also achieved faster convergence and remarkable efficiency: the average computational time was reduced to 1/10th of that of GWO and 1/9th of that of DE, with low standard deviation, highlighting its stability.
When applied to centerline detection, the least-squares fitting method, used in combination with the preprocessed images, delivered reliable and accurate results. The mean error angle (MEA) of the yaw angle remained within 0.34–0.67°, the root-mean-square error (RMSE) fell between 0.37° and 0.73°, and the mean relative error (MRE) stayed below 0.2%. These metrics underscore the method’s strong error control and resistance to overfitting. Furthermore, the linear constraints imposed by the least-squares model helped suppress noise and correct local deformations, thereby minimizing the cumulative error and ensuring accurate alignment with the seedling bands’ spatial structure.
Beyond its strong segmentation performance and improved centerline detection accuracy, the proposed SADE-WOA-based framework also exhibits excellent scalability and generalizability. Specifically, it is highly compatible with modular spraying systems, in which multiple nozzles can be independently controlled according to the detected seedling bands. This unit-based design allows for flexible adaptation to various field sizes and machinery configurations, enhancing the method’s applicability across a wide range of agricultural scenarios. Moreover, the method is computationally lightweight and does not rely on GPU acceleration or large-scale annotated datasets, which makes it well suited for deployment on embedded systems and low-power edge devices. These characteristics will enable real-time operation under resource-constrained field conditions, offering a practical and scalable solution for precision agriculture.
In conclusion, this study presents a targeted and effective solution for centerline detection in corn fields during the non-seedling stage under straw row grouping conditions. The proposed method demonstrated clear advantages in segmentation accuracy, computational efficiency, and detection precision within this specific agricultural context. Nonetheless, it should be noted that the validation of the method is currently limited to the non-seedling stage under a straw row pattern. Future research should aim to extend its applicability to other growth stages and different straw management practices, broadening the scope of its agricultural utility.

Author Contributions

Conceptualization, Y.L.; methodology, Y.D.; data curation, Z.W. and J.Z.; formal analysis, K.Z. and H.Y.; funding acquisition, J.W.; project administration, Y.W.; validation, X.T., J.C. and F.L.; resources, M.L., Y.W. and J.W.; software, Y.L. and Y.D.; supervision, Y.W., K.Z. and J.W.; validation, Y.L., J.Z. and J.C.; visualization, X.T.; writing—review and editing, Y.D. and J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by “Research on regionalized surface straw cover information detection methods in complex contexts for conservation tillage”, the National Natural Science Foundation of China, grant number 42001256, and the Jilin Science and Technology Development Program, grant numbers 20250601052RC, 20220402023GH, 20230202039NC, and 20220203004SF.

Data Availability Statement

The datasets in this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Crystal-Ornelas, R.; Thapa, R.; Tully, K.L. Soil organic carbon is affected by organic amendments, conservation tillage, and cover cropping in organic farming systems: A meta-analysis. Agric. Ecosyst. Environ. 2021, 312, 107356. [Google Scholar] [CrossRef]
  2. Dong, X.; Zhang, Y.; Munyampirwa, T.; Tao, H.; Shen, Y. Effects of Long-Term Conservation Tillage on Soil Carbon Content and Invertase Activity in Dry Farmland on the Loess Plateau. Sci. Agric. Sin. 2023, 56, 907–919. [Google Scholar]
  3. Wang, Y.; Cui, S.; Liu, Y.; He, M.; Wang, P.; Wang, J. Construction and test of the precise spraying device for the stalk set-row planting mode. Trans. Chin. Soc. Agric. Eng. 2022, 38, 12–20. [Google Scholar]
  4. Ahmad, F.; Qiu, B.; Dong, X.; Ma, J.; Huang, X.; Ahmed, S.; Chandio, F.A. Effect of operational parameters of UAV sprayer on spray deposition pattern in target and off-target zones during outer field weed control application. Comput. Electron. Agric. 2020, 172, 105350. [Google Scholar] [CrossRef]
  5. Jin, Y.; Liu, J.; Xu, Z.; Yuan, S.; Li, P.; Wang, J. Development status and trend of agricultural robot technology. Int. J. Agric. Biol. Eng. 2021, 14, 1–19. [Google Scholar] [CrossRef]
  6. Cheng, C.; Fu, J.; Su, H.; Ren, L. Recent advancements in agriculture robots: Benefits and challenges. Machines 2023, 11, 48. [Google Scholar] [CrossRef]
  7. Vakilian, K.A.; Massah, J. A farmer-assistant robot for nitrogen fertilizing management of greenhouse crops. Comput. Electron. Agric. 2017, 139, 153–163. [Google Scholar] [CrossRef]
  8. Li, Y.; Guo, Z.; Shuang, F.; Zhang, M.; Li, X. Key technologies of machine vision for weeding robots: A review and benchmark. Comput. Electron. Agric. 2022, 196, 106880. [Google Scholar] [CrossRef]
  9. Xiao, X.; Wang, Y.; Zhou, B.; Jiang, Y. Flexible Hand Claw Picking Method for Citrus-Picking Robot Based on Target Fruit Recognition. Agriculture 2024, 14, 1227. [Google Scholar] [CrossRef]
  10. Huang, M.; He, L.; Choi, D.; Pecchia, J.; Li, Y. Picking dynamic analysis for robotic harvesting of Agaricus bisporus mushrooms. Comput. Electron. Agric. 2021, 185, 106145. [Google Scholar] [CrossRef]
  11. He, X. Research and development of efficient plant protection equipment and precision spraying technology in China: A review. J. Plant Prot. 2022, 49, 389–397. [Google Scholar] [CrossRef]
  12. Bochtis, D.D.; Sørensen, C.G.; Busato, P. Advances in agricultural machinery management: A review. Biosyst. Eng. 2014, 126, 69–81. [Google Scholar] [CrossRef]
  13. Chen, H.; Lan, Y.; Fritz, B.K.; Hoffmann, W.C.; Liu, S. Review of agricultural spraying technologies for plant protection using unmanned aerial vehicle (UAV). Int. J. Agric. Biol. Eng. 2021, 14, 38–49. [Google Scholar] [CrossRef]
  14. Li, L.; He, X.; Song, J.; Wang, X.; Jia, X.; Liu, C. Design and experiment of automatic profiling orchard sprayerbased on variable air volume and flow rate. Trans. Chin. Soc. Agric. Eng. 2017, 33, 70–76. [Google Scholar]
  15. Zhu, H.; Wang, M.; Bai, L.; Zhang, Y.; Liu, Q.; Li, R. Maize Pest and Disease Detection and Precise Variable Spraying System Based on Visual Recognition. Trans. Chin. Soc. Agric. Mach. 2024, 55, 210–221. [Google Scholar]
  16. Zhou, J.; Geng, S.; Qiu, Q.; Shao, Y.; Zhang, M. A deep-learning extraction method for orchard visual navigation lines. Agriculture 2022, 12, 1650. [Google Scholar] [CrossRef]
  17. Liu, X.; Qi, J.; Zhang, W.; Bao, Z.; Wang, K.; Li, N. Recognition method of maize crop rows at the seedling stage based on MS-ERFNet model. Comput. Electron. Agric. 2023, 211, 107964. [Google Scholar] [CrossRef]
  18. Diao, Z.; Guo, P.; Zhang, B.; Zhang, D.; Yan, J.; He, Z.; Zhao, S.; Zhao, C. Maize crop row recognition algorithm based on improved UNet network. Comput. Electron. Agric. 2023, 210, 107940. [Google Scholar] [CrossRef]
  19. Shi, J.; Bai, Y.; Diao, Z.; Zhou, J.; Yao, X.; Zhang, B. Row detection BASED navigation and guidance for agricultural robots and autonomous vehicles in row-crop fields: Methods and applications. Agronomy 2023, 13, 1780. [Google Scholar] [CrossRef]
  20. Diao, Z.; Guo, P.; Zhang, B.; Zhang, D.; Yan, J.; He, Z.; Zhao, S.; Zhao, C.; Zhang, J. Navigation line extraction algorithm for corn spraying robot based on improved YOLOv8s network. Comput. Electron. Agric. 2023, 212, 108049. [Google Scholar] [CrossRef]
  21. Liu, Y.; Wang, Y.; Yu, H.; Qin, M.; Sun, J. Detection of Straw Coverage Rate Based on Multi-threshold Image Segmentation Algorithm. Trans. Chin. Soc. Agric. Mach. 2018, 49, 27–35+55. [Google Scholar]
  22. Storn, R.; Price, K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  23. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  24. Wang, Y.; Gao, X.; Sun, Y.; Liu, Y.; Wang, L.; Liu, M. Semantic segmentation-based conservation tillage corn straw return cover type recognition. Comput. Electron. Agric. 2025, 229, 109792. [Google Scholar] [CrossRef]
  25. Xu, X.; Gao, Y.; Fu, C.; Qiu, J.; Zhang, W. Research on the Corn Stover Image Segmentation Method via an Unmanned Aerial Vehicle (UAV) and Improved U-Net Network. Agriculture 2024, 14, 217. [Google Scholar] [CrossRef]
  26. Wang, L.; Xu, L.; Wei, S.; Wei, C.; Zhao, B.; Yuan, Y.; Fan, J. Straw Coverage Detection Method Based on Sauvola and Otsu Segmentation Algorithm. Trans. Chin. Soc. Agric. Eng. 2017, 7, 29–35. [Google Scholar]
  27. Santana-Cedrés, D.; Gomez, L.; Alemán-Flores, M.; Salgado, A.; Esclarín, J.; Mazorra, L.; Alvarez, L. Automatic correction of perspective and optical distortions. Comput. Vis. Image Underst. 2017, 161, 1–10. [Google Scholar] [CrossRef]
  28. Wang, Q.; Chen, W.; Wu, X.; Li, Z. Detail-enhanced multi-scale exposure fusion in YUV color space. IEEE Trans. Circuits Syst. Video Technol. 2019, 30, 2418–2429. [Google Scholar] [CrossRef]
  29. Zhang, M.; Wang, H.; Guo, Y. YUVDR: A residual network for image deblurring in YUV color space. Multimed. Tools Appl. 2024, 83, 19541–19561. [Google Scholar] [CrossRef]
  30. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  31. Chen, H.; Flann, N.S.; Watson, D.W. Parallel genetic simulated annealing: A massively parallel SIMD algorithm. IEEE Trans. Parallel Distrib. Syst. 1998, 9, 126–136. [Google Scholar] [CrossRef]
  32. Liu, X.; Zhang, Z.; Hao, Y.; Zhao, H.; Yang, Y. Optimized OTSU segmentation algorithm-based temperature feature extraction method for infrared images of electrical equipment. Sensors 2024, 24, 1126. [Google Scholar] [CrossRef]
  33. Nguyen, H.T.; Smeulders, A.W. Robust tracking using foreground-background texture discrimination. Int. J. Comput. Vis. 2006, 69, 277–293. [Google Scholar] [CrossRef]
  34. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  35. Wang, C.; Wu, X.; Zhang, Y.; Wang, W. Recognition and segmentation of maize seedlings in field based on dual-attention semantic segmentation network. Trans. Chin. Soc. Agric. Eng. 2021, 37, 211–221. [Google Scholar]
  36. Zhang, Q.; Chen, Q.; Xu, L.; Xu, X.; Liang, Z. Wheat lodging direction detection for combine harvesters based on improved K-means and bag of visual words. Agronomy 2023, 13, 2227. [Google Scholar] [CrossRef]
  37. Yu, G.; Wang, Y.; Gan, S.; Xu, H.; Chen, Y.; Wang, L. Extracting the navigation lines of crop-free ridges using improved DeepLabV3+. Trans. Chin. Soc. Agric. Eng. 2024, 40, 168–175. [Google Scholar]
  38. Fu, D.; Jiang, Q.; Qi, L.; Xing, H.; Chen, Z.; Yang, X. Transactions of the Chinese Society of Agricultural Engineering. Trans. Chin. Soc. Agric. Eng. 2023, 39, 47–57. [Google Scholar]
  39. Li, H.; Lai, X.; Mo, Y.; He, D.; Wu, T. Pixel-wise navigation line extraction of cross-growth-stage seedlings in complex sugarcane fields and extension to corn and rice. Front. Plant Sci. 2025, 15, 1499896. [Google Scholar] [CrossRef]
  40. Ning, G. Two-dimensional Otsu multi-threshold image segmentation based on hybrid whale optimization algorithm. Multimed. Tools Appl. 2023, 82, 15007–15026. [Google Scholar] [CrossRef]
  41. Zheng, J.; Gao, Y.; Zhang, H.; Lei, Y.; Zhang, J. OTSU multi-threshold image segmentation based on improved particle swarm algorithm. Appl. Sci. 2022, 12, 11514. [Google Scholar] [CrossRef]
  42. Ronchetti, G.; Mayer, A.; Facchi, A.; Ortuani, B.; Sona, G. Crop row detection through UAV surveys to optimize on-farm irrigation management. Remote Sens. 2020, 12, 1967. [Google Scholar] [CrossRef]
  43. Zhang, X.; Wang, Q.; Wang, X.; Li, H.; He, J.; Lu, C.; Yang, Y.; Jiang, S. Automated detection of Crop-Row lines and measurement of maize width for boom spraying. Comput. Electron. Agric. 2023, 215, 108406. [Google Scholar] [CrossRef]
Figure 1. Trial site and planting pattern. (a) overview of the experimental site; (b) planting pattern.
Figure 2. Image acquisition setup diagram. (a) DJI Osmo Action; (b) step motor; (c) nozzle; (d) spray tank.
Figure 3. Flowchart of the corn seedling band centerline detection method.
Figure 4. YUV luminance-based grayscale conversion. (a) original image; (b) grayscale image.
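The grayscale conversion in Figure 4 takes the luminance (Y) channel of the YUV color space. A minimal sketch, assuming the standard BT.601 luma weights (the paper does not restate its exact coefficients):

```python
import numpy as np

def rgb_to_luma(rgb):
    """Grayscale via the YUV luminance channel,
    Y = 0.299*R + 0.587*G + 0.114*B (BT.601 weights)."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb.astype(np.float64) @ weights

# Pure red, green, and blue pixels map to their luma weight times 255.
pixels = np.array([[[255, 0, 0], [0, 255, 0], [0, 0, 255]]])
luma = rgb_to_luma(pixels)
```

Green contributes the most to luminance, which is why straw (yellowish) and soil (dark) separate well in this channel.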
Figure 5. Flowchart of the Improved Simulated Annealing-Based Differential Evolution–Whale Optimization Algorithm (SADE-WOA). (a) image preprocessing pipeline; (b) main loop of the algorithm.
Figure 6. Otsu threshold segmentation.
Figure 7. Morphological postprocessing. (a) Removal of small connected components; (b) closing operation and overall connected component analysis; (c) hole filling; (d) opening operation using a 3 × 3 elliptical structuring element; (e) opening operation using a 50 × 50 square structuring element.
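The postprocessing chain of Figure 7 can be sketched with SciPy's morphology routines (a stand-in for the paper's implementation; the minimum-area cutoff and the 3 × 3 square structuring element used here are illustrative, not the elliptical/50 × 50 elements named in the caption):

```python
import numpy as np
from scipy import ndimage

def clean_mask(mask, min_area=50):
    """Sketch of the Figure 7 pipeline on a binary straw/soil mask:
    drop small connected components, close gaps, fill holes, then open."""
    # (a) remove small connected components
    labels, n = ndimage.label(mask)
    areas = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(areas >= min_area))
    # (b) closing to merge nearby fragments
    keep = ndimage.binary_closing(keep, structure=np.ones((3, 3)))
    # (c) fill internal holes
    keep = ndimage.binary_fill_holes(keep)
    # (d, e) opening to smooth boundaries and detach thin bridges
    return ndimage.binary_opening(keep, structure=np.ones((3, 3)))
```

Each step maps to one panel of the figure; in practice the structuring-element sizes trade boundary fidelity against noise suppression.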
Figure 8. Original images and segmentation results. (ac) original images; (df) two-threshold segmentation; (gi) three-threshold segmentation; (jl) four-threshold segmentation.
Figure 9. Multi-threshold segmentation convergence curves for the analyzed three groups of images (the legends and axes in the insets are consistent with those of the main plots). (ac) Two-level threshold segmentation; (df) three-level threshold segmentation; (gi) four-level threshold segmentation.
Figure 10. Centerline detection results of corn seedling bands.
Figure 11. Comparison of reference lines (the red solid line is the manually annotated reference line). (ae) Results of least-squares method; (fj) results of polynomial fitting.
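Figure 11 contrasts straight-line least-squares fits with polynomial fits of the extracted centerline points. A minimal sketch of the fitting step using `np.polyfit` (the point-extraction stage and coordinate convention are assumptions, not the paper's exact pipeline):

```python
import numpy as np

def fit_centerline(points, degree=1):
    """Least-squares fit of a centerline x = f(y) through seedling-band
    points; degree=1 gives a straight line, higher degrees give the
    polynomial alternative compared in Figure 11."""
    pts = np.asarray(points, dtype=float)
    # crop rows run roughly vertically, so model column x against row y
    return np.polyfit(pts[:, 1], pts[:, 0], degree)

# Noiseless points on x = 0.5*y + 10 recover the line exactly.
ys = np.arange(0.0, 100.0, 10.0)
pts = np.column_stack([0.5 * ys + 10.0, ys])
coeffs = fit_centerline(pts, degree=1)
```

The straight-line model is less sensitive to outlier centroids than a higher-degree polynomial, which is consistent with the lower errors reported for the least-squares method in Table 3.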
Figure 12. Field spraying operation by agricultural machinery.
Table 1. The threshold segmentation results of each algorithm.
| Test Image | Algorithm | Thresholds (K = 2) | Between-Class Variance (K = 2) | Thresholds (K = 3) | Between-Class Variance (K = 3) | Thresholds (K = 4) | Between-Class Variance (K = 4) |
|---|---|---|---|---|---|---|---|
| Figure 8a | GWO | (125,191) | 2408.2167 | (102,155,206) | 2591.3793 | (86,136,170,212) | 2674.5070 |
| Figure 8a | DE | (125,191) | 2408.2167 | (104,150,203) | 2590.8469 | (89,136,173,215) | 2676.9807 |
| Figure 8a | WOA+ | (125,191) | 2408.2167 | (102,156,206) | 2592.3765 | (91,130,170,214) | 2678.2265 |
| Figure 8a | DE-GWO | (125,191) | 2408.2167 | (104,153,204) | 2591.5117 | (91,134,172,214) | 2676.8084 |
| Figure 8a | SADE-WOA | (125,191) | 2408.2167 | (103,154,204) | 2592.3765 | (90,133,172,214) | 2678.2265 |
| Figure 8b | GWO | (124,190) | 2440.6189 | (103,153,202) | 2623.8787 | (87,132,172,213) | 2708.5482 |
| Figure 8b | DE | (124,190) | 2440.6189 | (100,149,202) | 2623.0514 | (91,132,168,218) | 2705.5175 |
| Figure 8b | WOA+ | (123,189) | 2440.6189 | (102,153,206) | 2623.9783 | (89,134,172,215) | 2709.4822 |
| Figure 8b | DE-GWO | (123,189) | 2440.6189 | (102,151,202) | 2621.9620 | (90,132,170,213) | 2706.6236 |
| Figure 8b | SADE-WOA | (122,189) | 2440.6189 | (101,151,203) | 2623.9783 | (89,131,168,211) | 2709.4822 |
| Figure 8c | GWO | (126,190) | 2250.8556 | (106,153,202) | 2422.6150 | (96,137,173,214) | 2504.7693 |
| Figure 8c | DE | (124,190) | 2250.7113 | (107,153,202) | 2421.7750 | (93,136,172,214) | 2504.4313 |
| Figure 8c | WOA+ | (125,190) | 2250.8556 | (106,153,203) | 2422.6151 | (94,138,175,218) | 2504.7694 |
| Figure 8c | DE-GWO | (125,189) | 2250.8556 | (105,152,202) | 2421.5220 | (95,136,172,214) | 2503.3908 |
| Figure 8c | SADE-WOA | (124,188) | 2550.8556 | (105,153,202) | 2422.6151 | (94,137,174,213) | 2504.7303 |
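All five optimizers in Table 1 maximize the same fitness: the multi-threshold extension of Otsu's between-class variance. A brute-force reference sketch of that criterion (tractable only for few thresholds and small gray ranges; SADE-WOA and the other metaheuristics search the same space stochastically):

```python
import numpy as np
from itertools import combinations

def between_class_variance(hist, thresholds):
    """Otsu's criterion for an arbitrary number of thresholds: the sum
    over classes of w_k * (mu_k - mu_total)^2, where w_k and mu_k are
    the probability mass and mean gray level of class k."""
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    mu_total = (p * levels).sum()
    edges = [0, *sorted(thresholds), len(hist)]
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var

def exhaustive_otsu(hist, k=2):
    """Brute-force reference: the k thresholds maximizing the criterion
    above, found by exhaustive search over all threshold combinations."""
    return max(combinations(range(1, len(hist)), k),
               key=lambda t: between_class_variance(hist, t))
```

For a 256-level histogram and K = 4 thresholds the exhaustive search is already prohibitive, which is why the paper resorts to population-based optimizers.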
Table 2. The average runtime of the five algorithms during multi-threshold segmentation.
| Test Image | Thresholds | GWO | DE | WOA+ | DE-GWO | SADE-WOA |
|---|---|---|---|---|---|---|
| Figure 8a | 2 | 0.6785 | 0.4969 | 0.1883 | 0.2746 | 0.0640 |
| Figure 8a | 3 | 0.6395 | 0.5856 | 0.1875 | 0.3052 | 0.1439 |
| Figure 8a | 4 | 0.6919 | 0.6578 | 0.2105 | 0.3065 | 0.0922 |
| Figure 8b | 2 | 0.8896 | 0.3786 | 0.1875 | 0.2878 | 0.0491 |
| Figure 8b | 3 | 0.6229 | 0.6585 | 0.2114 | 0.3000 | 0.1537 |
| Figure 8b | 4 | 0.6815 | 0.7047 | 0.1920 | 0.3322 | 0.1437 |
| Figure 8c | 2 | 0.5728 | 0.4319 | 0.2032 | 0.2796 | 0.0550 |
| Figure 8c | 3 | 0.6230 | 0.5785 | 0.1892 | 0.3023 | 0.2252 |
| Figure 8c | 4 | 0.7421 | 0.6474 | 0.1983 | 0.3399 | 0.1589 |
Table 3. Comparison of evaluation metric results.
| Test Image | Fitting Line Algorithm | MEA | RMSE | MRE |
|---|---|---|---|---|
| Image 1 | Least-Squares Method | 0.52° | 0.59° | 0.30% |
| Image 1 | Polynomial Fitting | 0.65° | 0.73° | 0.37% |
| Image 2 | Least-Squares Method | 0.52° | 0.56° | 0.29% |
| Image 2 | Polynomial Fitting | 2.05° | 2.36° | 1.17% |
| Image 3 | Least-Squares Method | 0.34° | 0.52° | 0.20% |
| Image 3 | Polynomial Fitting | 1.06° | 1.30° | 0.61% |
| Image 4 | Least-Squares Method | 0.34° | 0.37° | 0.20% |
| Image 4 | Polynomial Fitting | 0.41° | 0.52° | 0.24% |
| Image 5 | Least-Squares Method | 0.67° | 0.73° | 0.38% |
| Image 5 | Polynomial Fitting | 0.64° | 0.75° | 0.36% |
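The Table 3 metrics can be computed from fitted and manually annotated yaw angles as in the sketch below. Normalizing the MRE by 180° is an assumption inferred from the MEA/MRE ratios in the table, not stated by the source:

```python
import numpy as np

def yaw_error_metrics(pred_deg, ref_deg, full_scale=180.0):
    """MEA, RMSE and MRE between fitted and reference yaw angles (deg).
    full_scale=180 for the MRE is an assumption, not from the paper."""
    err = np.abs(np.asarray(pred_deg, float) - np.asarray(ref_deg, float))
    mea = err.mean()
    rmse = np.sqrt((err ** 2).mean())
    mre = (err / full_scale).mean()
    return mea, rmse, mre
```

All least-squares MEA values in the table sit well inside the 5° yaw tolerance cited for agricultural navigation.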

Liu, Y.; Du, Y.; Zhang, K.; Yan, H.; Wu, Z.; Zhang, J.; Tong, X.; Chen, J.; Li, F.; Liu, M.; et al. Enhanced Multi-Threshold Otsu Algorithm for Corn Seedling Band Centerline Extraction in Straw Row Grouping. Agronomy 2025, 15, 1575. https://doi.org/10.3390/agronomy15071575
