Article

A Corn Point Cloud Stem-Leaf Segmentation Method Based on Octree Voxelization and Region Growing

College of Computer and Control Engineering, Northeast Forestry University, Harbin 150006, China
*
Author to whom correspondence should be addressed.
Agronomy 2025, 15(3), 740; https://doi.org/10.3390/agronomy15030740
Submission received: 4 February 2025 / Revised: 7 March 2025 / Accepted: 18 March 2025 / Published: 19 March 2025
(This article belongs to the Section Precision and Digital Agriculture)

Abstract

Plant phenotyping is crucial for advancing precision agriculture and modern breeding, with 3D point cloud segmentation of plant organs being essential for phenotypic parameter extraction. Although existing approaches maintain segmentation precision, they struggle to efficiently process complex geometric configurations and large-scale point cloud datasets, significantly increasing computational costs. Furthermore, their heavy reliance on high-quality annotated data restricts their use in high-throughput settings. To address these limitations, we propose a novel multi-stage region-growing algorithm based on an octree structure for efficient stem-leaf segmentation in maize point cloud data. The method first extracts key geometric features through octree voxelization, significantly improving segmentation efficiency. In the region-growing phase, a preliminary structural segmentation strategy using fitted cylinder parameters is applied, followed by a refinement strategy that improves segmentation accuracy in complex regions. Finally, stem segmentation consistency is enhanced through central axis fitting and distance-based filtering. In this study, we use the Pheno4D dataset, which comprises three-dimensional point cloud data of maize plants at different growth stages collected in greenhouse environments. Experimental results show that the proposed algorithm achieves an average precision of 98.15% and an IoU of 84.81% on the Pheno4D dataset, demonstrating strong robustness across growth stages. Segmentation time per instance is reduced to 4.8 s, a more than fourfold improvement over PointNet while maintaining high accuracy and efficiency. Additionally, validation experiments on tomato point cloud data confirm the proposed method's strong generalization capability. In this paper, we present an algorithm that addresses the shortcomings of traditional methods in complex agricultural environments. Specifically, our approach improves efficiency and accuracy while reducing dependency on high-quality annotated data. This solution not only delivers high precision and faster computational performance but also lays a strong technical foundation for high-throughput crop management and precision breeding.

1. Introduction

Rapid advancements in precision agriculture and plant phenotyping technologies have made acquiring 3D structural information of plants a central focus of agricultural research [1]. LiDAR technology enables efficient and precise acquisition of 3D point cloud data of crops. Terrestrial Laser Scanning (TLS), in particular, excels in analyzing complex crop structures due to its high resolution and accuracy [2,3].
Segmenting 3D point clouds with LiDAR technology is a crucial task in precision agriculture [4,5]. In particular, segmenting maize stems and leaves is vital for advancing plant phenotyping and supporting high-throughput crop management. However, existing segmentation methods face significant challenges: (1) Traditional methods are inefficient when processing large-scale point clouds, particularly with millions of points, leading to high computational demands and limiting their real-time applicability in high-throughput environments [6]. (2) The complex geometric structures of maize plants, including overlapping and interlacing stems and leaves, challenge algorithms to balance efficiency and accuracy, often leading to mis-segmentation and undersegmentation [7]. (3) The lack of high-quality annotated data limits the use of deep learning methods in agricultural point cloud segmentation, resulting in poor generalization and significant variability across different scenarios [8,9].
To overcome these challenges, this study proposes a novel maize plant point cloud segmentation method that combines an octree structure with a region-growing algorithm. The proposed method includes four key stages: (1) geometric features are extracted using octree voxelization, (2) coarse segmentation is performed with a region-growing algorithm using fitted cylinder parameters, (3) complex regions are refined with rapid and detailed strategies, and (4) segmentation accuracy is improved through central axis fitting and distance-based filtering. The study aims to (1) design a multi-stage region-growing algorithm using an octree structure for efficient stem-leaf segmentation in maize point clouds, (2) evaluate its performance across growth stages on the Pheno4D dataset, and (3) validate its generalization using tomato point clouds.
This study tackles the challenges of processing large-scale point clouds by developing a segmentation algorithm that meets high-throughput requirements. It also improves segmentation accuracy in complex geometric regions (e.g., overlapping stems and leaves) and reduces reliance on high-quality annotated data, providing a robust and generalizable solution for complex agricultural scenarios [10,11].
In summary, the contributions of this work are as follows:
(1) We present an algorithm for maize plant point cloud segmentation that employs octree voxelization combined with multi-stage region growing. Additionally, we develop a comprehensive segmentation pipeline.
(2) This approach reduces the dependency on high-quality annotated data, thereby providing robust and efficient support for real-time plant phenotyping analysis and precision breeding.

2. Related Works

Recent advancements in 3D point cloud segmentation have significantly improved plant structure analysis and phenotypic measurement [12,13]. These techniques are categorized into classical algorithms based on geometric analysis and machine learning, and deep learning models using neural networks [14,15]. While deep learning excels in complex pattern recognition and automated segmentation [16], traditional methods remain vital in agriculture due to their computational efficiency and low data dependency [17].
Traditional methods mainly utilize geometric feature extraction, clustering algorithms, and region-growing techniques [18]. Based on geometric morphology and spatial attributes, these methods have proven effective for segmenting complex plant structures. Wang et al. [19] applied voxelization and the vertical distribution of leaf area density (LAD) with LiDAR point clouds to segment maize structures. This approach achieved high precision in ear-height segmentation, proving robust across various planting densities. Miao et al. [20] proposed a framework for single-plant maize segmentation, combining Euclidean clustering and the K-means algorithm. This method was effective in high-density planting conditions and across different growth stages. Gao et al. [21] developed an enhanced QuickShift algorithm with color weights for accurate segmentation of maize 3D point clouds and simultaneous phenotypic parameter estimation in field conditions. Similarly, Wen et al. [22] and Zhu et al. [23] applied geometric methods, such as supervoxel clustering and Euclidean distance, to successfully segment maize kernels and ear regions.
Combining skeleton extraction with region-growing segmentation has significantly advanced the processing of complex stem and leaf point cloud data. Zhu et al. [24] and Miao et al. [25] combined skeleton extraction with region-growing algorithms, achieving precise stem-leaf segmentation and robustness across various growth stages. Li et al. [26] used 3D reconstruction from multi-view image sequences and region-growing algorithms to improve the efficiency of stem and leaf recognition in maize seedlings.
Deep learning-based point cloud segmentation has recently gained significant attention. Leveraging neural networks’ adaptive learning capabilities, these methods offer notable advantages in automating intricate plant structure segmentation. Wang et al. [27] combined the Minkowski distance field with QuickShift++ to create an efficient automated stem-leaf segmentation model. The model achieves high efficiency by encoding global and local structural relationships of maize plants. Zhang et al. [28] used PointNet++ with the shortest path algorithm to segment 3D maize ear point clouds and extract phenotypic traits. Ao et al. [29] proposed a two-stage model for maize stem-leaf segmentation in field environments, integrating PointCNN with morphological features. Herrero et al. [30] and Guo et al. [31] achieved precise maize plant point cloud segmentation using PointNet and the Feature Fusion Network (FF-Net), respectively. Integrating skeleton extraction and attention mechanisms significantly improved segmentation accuracy. Fan et al. [32] measured maize stem diameters in complex field conditions by combining Faster R-CNN with point cloud clustering techniques. Sun et al. [33] present a window Transformer-based semantic segmentation method, called Win-Former, that uses localized self-attention and cross-window information exchange to precisely segment maize stems. Luo et al. [34] present Eff-PlantNet, a framework that leverages three-dimensional self-supervised learning to extract global features from plant point clouds, facilitating fine semantic segmentation of plant stems. Notably, its performance is comparable to that of fully supervised approaches.
Although maize plant point cloud segmentation has advanced, traditional geometric methods often lack accuracy when processing complex plant structures. In contrast, deep learning methods require large labeled datasets and are computationally intensive [35]. To address these challenges, this study proposes a multi-stage region-growing algorithm that uses voxelization and multi-stage segmentation to handle the complexities of maize plant structures. The proposed method improves segmentation accuracy in overlapping stem-leaf regions, increases computational efficiency, and reduces dependence on labeled data. Compared to existing methods, the proposed algorithm offers a more efficient and versatile solution, excelling in performance and applicability.

3. Materials and Methods

3.1. Data Preparation

In this study, we perform experiments using three-dimensional point cloud data of maize plants from the Pheno4D dataset [36]. This dataset was acquired under controlled greenhouse conditions with an integrated system that includes the Perceptron Scan Works V5 laser triangulation scanner (660 nm wavelength) and the ROMER Infinite 2.0 measurement arm (45 μm end precision). The dataset comprises time-series point clouds of seven maize plants, scanned daily over two weeks starting from the emergence of the first seedling. A total of 84 point clouds were acquired, containing approximately 90 million points. Among these, 49 point clouds (about 60 million points) are manually annotated into three categories: “ground”, “stem”, and “leaf”, with each leaf uniquely and consistently labeled. Figure 1 shows examples of 3D point clouds of maize plants at different growth stages from the Pheno4D dataset. Known for its exceptional precision (spatial resolution < 0.1 mm) and high resolution, the Pheno4D dataset is extensively used in research areas such as plant phenotyping, 3D reconstruction, and non-rigid registration. The dataset is available at https://www.ipb.uni-bonn.de/data/pheno4d/ (accessed on 20 January 2025).
To focus on semantic segmentation of plant organs, we first removed points labeled as “ground” using CloudCompare v2.12. Next, we used the leaf-tip method from the Pheno4D dataset to reclassify all leaf points into a unified “leaf” category, retaining the original “stem” label. This process produced two main semantic categories: “stem” and “leaf”.
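The label preparation described above can be sketched as follows. This is an illustrative sketch, not the paper's code: the integer label codes (0 = ground, 1 = stem, ≥ 2 = individual leaf instances) are assumptions for the example, not the actual Pheno4D file format.

```python
import numpy as np

# Assumed label codes for illustration only.
GROUND, STEM = 0, 1

def prepare_labels(points, labels):
    """Drop ground points and collapse per-leaf instance labels into one
    'leaf' class, returning binary semantic labels (0 = stem, 1 = leaf)."""
    keep = labels != GROUND                 # remove all ground points
    pts, lab = points[keep], labels[keep]
    semantic = np.where(lab == STEM, 0, 1)  # merge leaf instances into "leaf"
    return pts, semantic
```

In practice this step was performed interactively in CloudCompare; the snippet only shows the equivalent relabeling logic.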

3.2. Point Cloud Sampling

Processing maize plant point clouds reveals distinct variations in point cloud density across regions. Leaf regions have large surface areas and high point cloud density but lack intricate geometric details, whereas stem regions have lower density but more complex geometric structures. Traditional uniform sampling strategies often lead to redundant sampling in leaf regions and loss of critical details in stem regions, reducing accuracy and increasing computational costs.
To address this issue, we propose a voxel-based adaptive weighted sampling method that balances point cloud densities, preserves critical geometric details in complex structures, and minimizes redundant points. Figure 2 shows the point cloud sampling process. The point cloud data are first divided into fixed-size cubic units (voxels) using voxelization, segmenting the space into discrete elements. Subsequently, we dynamically adjust the sampling strategy based on each region’s geometric attributes, such as point cloud density and local structural complexity (i.e., irregular point distribution and rich detail). The weighting scheme is formulated based on these factors, enabling adaptive modulation of sampling density across regions. In leaf regions with high density and minimal geometric detail, only the point nearest to the centroid is kept per voxel, reducing data redundancy. In stem regions with higher geometric complexity, up to k points nearest to the centroid are retained per voxel (or all points if fewer than k exist), ensuring structural information is preserved. To refine sampling, we introduced an adjustable sampling ratio parameter, r, to control the ratio of sampled points in stem and leaf regions. After voxelization and weighted sampling, we used Farthest Point Sampling (FPS) to ensure uniform distribution of sampled points. This method reduces redundancy by selecting points that maximize spatial separation, ensuring more uniform spatial coverage. The process starts by randomly selecting an initial point from the point cloud as the first sampled point. Next, the minimum distance between unselected and selected points is calculated, and the farthest unselected point is chosen as the next sampled point. This process repeats until the required number of sampled points is achieved.
In our experiments, a voxel size of 2 mm was used, retaining up to 3 points per voxel in the stem region and 1 point in the leaf region, maintaining a 3:1 sampling ratio. This configuration, based on experimental validation and prior research, balances sampling efficiency and geometric detail retention, significantly enhancing subsequent processing effectiveness.
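A minimal sketch of the adaptive voxel sampling described above, assuming a per-point stem/leaf mask is already available; per voxel it keeps the k points nearest the voxel centroid (k = 3 in stem voxels, k = 1 in leaf voxels, matching the 3:1 ratio). The grouping strategy is illustrative, and the final Farthest Point Sampling pass is omitted for brevity.

```python
import numpy as np

def adaptive_voxel_sample(points, is_stem, voxel=0.002, k_stem=3, k_leaf=1):
    """Adaptive weighted voxel sampling: denser retention in stem voxels."""
    keys = np.floor(points / voxel).astype(np.int64)
    order = np.lexsort(keys.T)                       # group equal voxel keys
    keys_s, pts_s, stem_s = keys[order], points[order], is_stem[order]
    starts = np.flatnonzero(np.any(np.diff(keys_s, axis=0) != 0, axis=1)) + 1
    out = []
    for seg in np.split(np.arange(len(points)), starts):
        pts = pts_s[seg]
        k = k_stem if stem_s[seg].any() else k_leaf  # region-dependent budget
        d = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
        out.append(pts[np.argsort(d)[:k]])           # k nearest to centroid
    return np.vstack(out)
```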

3.3. Stem Segmentation

The proposed algorithm consists of four stages: (1) Key geometric features are extracted through an octree-based voxelization process, forming a strong foundation for segmentation. (2) Preliminary structural partitioning is performed using a region-growing algorithm with fitted cylinder parameters. (3) Rapid refinement is applied to points similar to known planes, while detailed geometric analysis is used for complex points, enhancing coarse segmentation outcomes. (4) Stem structure segmentation is refined through central axis fitting, followed by geometric feature-based filtering for optimization. Figure 3 illustrates the segmentation process, highlighting the key steps and logic of each stage.

3.3.1. Octree-Based Voxelization

The maize plant point cloud is voxelized using an octree structure to efficiently manage large data volumes and uneven distributions. The octree recursively divides 3D space into hierarchical voxels, improving the efficiency of large-scale data processing. This structure effectively manages complex geometric features, enabling quick identification of stems and leaves and improving segmentation precision. The hierarchical design of the octree accommodates the diverse morphology of maize plants, enabling precise localization and classification during segmentation. Compared to KD-trees, octrees are more efficient for processing 3D spatial data because they avoid traversing back to the root node for neighborhood queries. This makes octrees ideal for efficiently segmenting 3D geometric features in point clouds. Figure 4 shows the octree structure.
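The recursive subdivision can be sketched as below. This is an illustrative octree, not the paper's implementation: the stopping parameters (`max_points`, `max_depth`) are assumptions, and each leaf node plays the role of a voxel from which geometric features could be extracted.

```python
import numpy as np

class Octree:
    """Recursively split a cubic cell into 8 octants until each leaf holds
    at most `max_points` points or `max_depth` is reached."""
    def __init__(self, points, center, half, depth=0, max_points=32, max_depth=6):
        self.center, self.half = center, half
        self.children, self.points = [], points
        if len(points) > max_points and depth < max_depth:
            for octant in range(8):
                sign = np.array([(octant >> k) & 1 for k in range(3)]) * 2 - 1
                c = center + sign * half / 2          # child octant center
                mask = np.all((points >= c - half / 2) &
                              (points < c + half / 2), axis=1)
                if mask.any():
                    self.children.append(Octree(points[mask], c, half / 2,
                                                depth + 1, max_points, max_depth))
            self.points = None                        # interior nodes keep no points

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for child in self.children for leaf in child.leaves()]
```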

3.3.2. Coarse Segmentation

In the coarse segmentation stage, region growing guided by fitted cylinder parameters segments the point cloud into clusters with similar geometric features. Each cluster represents a locally consistent geometric region on the surface of maize plant stems or leaves. The process uses the cylinder's radius $r$, height $h$, and axis direction $a$ to ensure accurate growth direction and consistent neighborhood point selection. The growth process is controlled by the neighborhood search radius $r_1$ and the normal vector angle threshold $\theta_0$. Increasing $r_1$ expands the neighborhood range and accelerates growth but reduces precision, while decreasing $\theta_0$ improves boundary clarity but may increase segmentation time.
1. Selection of Initial Seed Points and Initialization
Two initial seed points, $s_0$ and $s_n$, are chosen from the point cloud to serve as the starting point and termination condition for the growth process. These points are identified based on the height information $z$ and the geometric distribution of the point cloud. The process starts by initializing an empty cluster region $C = \emptyset$ and an empty seed point sequence $Q = \emptyset$. The seed point $s_0$ is added to $Q$, and the initial cylinder parameters (radius $r_0$, height $h_0$, and axis direction $a_0$) are fitted:
$r = \frac{1}{n}\sum_{i=1}^{n}\frac{\|(p_i - s_0)\times a\|}{\|a\|}, \qquad h = z_{\max} - z_{\min}, \qquad a = \frac{s_n - s_0}{\|s_n - s_0\|}$
2. Region-Growing Iteration
The following steps are iteratively performed until the sequence Q becomes empty:
(1) Current Seed Point Selection
The current seed point $s_k$ is extracted from $Q$ and subsequently removed from the sequence.
(2) Neighborhood Search and Feature Matching
A spherical neighborhood search is performed around the seed point $s_k$ with radius $r_1$, producing the neighboring point set $N_k = \{\, p_i \in P : \|p_i - s_k\| \le r_1 \,\}$, where $P$ is the input point cloud and $\|p_i - s_k\|$ is the Euclidean distance. The following conditions are checked sequentially:
Directional Consistency Matching: To align the normal vector $n_i$ of a neighboring point $p_i$ with a reference vector $d$, a maximum angle threshold $\theta$ is defined. The matching condition is met when the angle between $n_i$ and $d$ is within $\theta$, expressed as follows:
$\frac{n_i \cdot d}{\|n_i\|\,\|d\|} \ge \cos(\theta)$
When $d = n_k$ and $\theta = \theta_0$, normal vector consistency ensures alignment between neighboring points' normal vectors and the seed point's direction. Conversely, when $d = a_k$ and $\theta = \theta_a$, axial consistency ensures alignment between neighboring points and the cylinder axis.
Cylinder Parameter Fitting Matching: Geometric consistency requires the fitted cylinder radius $r_i$ and height $h_i$ of neighboring points to fall within specified tolerances $\epsilon_r$ and $\epsilon_h$ relative to the seed point's radius $r_k$ and height $h_k$:
$|r_i - r_k| \le \epsilon_r, \qquad |h_i - h_k| \le \epsilon_h$
Neighboring points $p_i$ that meet the conditions are added to the growth region, ensuring their directional and geometric features align with the fitted cylindrical structure.
(3) Growth Direction Calculation and Update
Axial and radial position computation: The projection $p_i'$ of a neighboring point $p_i$ onto the cylinder axis $a_k$, and the perpendicular distance $d_{\mathrm{axis}}$ from the axis, are calculated as follows:
$p_i' = \frac{(p_i - s_k)\cdot a_k}{\|a_k\|^2}\, a_k, \qquad d_{\mathrm{axis}} = \frac{\|(p_i - s_k)\times a_k\|}{\|a_k\|}$
To ensure $p_i$ conforms to the cylindrical geometric characteristics, the deviation of $d_{\mathrm{axis}}$ from the radius $r_k$ must stay within the tolerance $\epsilon_r$, expressed as $|d_{\mathrm{axis}} - r_k| \le \epsilon_r$.
Growth Direction Calculation: The growth direction $v_k$ is calculated by combining the axial component with a radial correction component, ensuring alignment with the axis and proximity to the cylindrical surface:
$v_k = p_i' + \lambda \cdot \frac{(p_i - s_k) - p_i'}{\|(p_i - s_k) - p_i'\|}$
where $\lambda$ represents the step size, controlling the precision of the growth process.
Seed Point Update: The seed point's updated position is $s_{k+1} = s_k + v_k$, enabling continuous axial growth while maintaining geometric consistency with neighboring points.
Dynamic Step Size Adjustment: The step size $\lambda$ is adaptively tuned to balance precision and growth speed, based on the cylinder's radius $r_k$ and height $h_k$:
$\lambda = \alpha \cdot r_k + \beta \cdot h_k$
where $\alpha$ and $\beta$ represent control weights, empirically set to $\alpha = 0.9$ and $\beta = 0.3$ in this study.
(4) Cluster Region Update
The current seed point $s_k$ is incorporated into the cluster region $C$.
(5) Refinement
After completing the region-growing process, a median filter is applied to the segmented point cloud clusters to suppress noise and reduce over-segmentation artifacts. The median filter calculates the median in a local neighborhood, replacing each point with the median of its neighbors:
$p_{\mathrm{new}} = \mathrm{median}\{\, p_i : \|p_i - p_{\mathrm{current}}\| \le r_2 \,\}$
where $r_2$ denotes the radius of the neighborhood used for median filtering.
(6) Termination Condition
The region-growing process ends when the sequence Q is empty, resulting in segmented point cloud clusters. Governed by geometric feature matching and growth path constraints, this process ensures the resulting clusters align with the stems and leaves of the maize plant.
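The seed-queue iteration above can be sketched compactly. This is a simplified illustration, not the paper's implementation: the cylinder-parameter and axial-consistency tests are collapsed into a single normal-angle check, the neighborhood search is brute force rather than octree-accelerated, and the thresholds are assumptions.

```python
import numpy as np

def region_grow(points, normals, seed_idx, r1=0.01, theta0_deg=10.0):
    """Grow a cluster from seed_idx: pop seeds from a queue, collect spherical
    neighbors within r1 whose normals deviate by at most theta0 degrees."""
    cos_t = np.cos(np.radians(theta0_deg))
    cluster, queue, visited = set(), [seed_idx], {seed_idx}
    while queue:                                   # terminate when Q is empty
        k = queue.pop()
        cluster.add(k)
        d = np.linalg.norm(points - points[k], axis=1)
        for i in np.flatnonzero(d <= r1):          # spherical neighborhood
            if i in visited:
                continue
            # directional consistency: angle(n_i, n_k) <= theta0
            if abs(np.dot(normals[i], normals[k])) >= cos_t:
                visited.add(i)
                queue.append(i)
    return sorted(cluster)
```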

3.3.3. Refined Segmentation

The refined segmentation stage addresses misclassified points from the coarse segmentation, focusing on detailed and boundary regions of the maize plant. Unlike coarse segmentation, which depends on the geometric properties of fitted cylinders, refined segmentation improves accuracy by using more detailed geometric features. To reduce errors in normal vector estimation caused by noise and uneven sampling, we preprocess all point normal vectors before the refinement phase. Specifically, each point’s local neighborhood is defined by either a fixed radius or a predetermined number of neighbors. Then, the normal vectors in each neighborhood are processed with mean filtering and normalization, resulting in a set of smoothed normal vectors. This stage consists of two approaches: (1) Rapid Refinement: Unassigned points are grouped with the fitted plane if their orthogonal distance to the plane and the angle between their normal vector and the plane’s normal vector meet predefined thresholds. (2) General Refinement: Points that do not meet the rapid refinement criteria are treated as new seed points, and further segmentation is conducted using a region-growing strategy. These approaches are detailed in the following sections.
1. Rapid Refinement
Rapid refinement assigns unallocated points, caused by occlusion or noise during coarse segmentation, to known planes (e.g., stem surfaces) based on geometric similarity. The process uses an orthogonal distance threshold $\epsilon_d$ and a normal vector angle threshold $\epsilon_\theta$ to enable efficient geometric feature matching through neighborhood expansion.
(1) Boundary Point Selection: Boundary points $p_b$ and their neighbors on the plane $\Pi_i$, fitted in the coarse segmentation stage, are used as the initial input. The neighborhood includes points within a radius $r$:
$N_b = \{\, p_n \in P : \|p_n - p_b\| < r \,\}$
(2) Neighborhood Check and Feature Matching: Unallocated points $p_u$ in the neighborhood $N_b$ are evaluated for geometric similarity based on the orthogonal distance $d$ and the normal vector angle deviation $\theta$:
$d = \frac{|n_i \cdot (p_u - p_b)|}{\|n_i\|} \le \epsilon_d, \qquad \theta = \arccos\left(\frac{n_i \cdot n_u}{\|n_i\|\,\|n_u\|}\right) \le \epsilon_\theta$
Here, $\epsilon_d$ controls the point-to-plane distance, and $\epsilon_\theta$ constrains normal vector alignment to ensure accurate classification. Experiments show that the best segmentation performance is achieved with $\epsilon_d = 0.02$ m and $\epsilon_\theta = 10^\circ$.
(3) Boundary Point List Expansion: Points $p_u$ that meet the criteria are considered geometrically similar to the plane $\Pi_i$ and added to the boundary point list $L_i$. Neighborhood checks are iteratively performed on points in $L_i$, enabling efficient boundary expansion.
(4) Plane Update: If a significant number of points are added during expansion, the qualified points are integrated into $\Pi_i$, and the plane is re-fitted to enhance segmentation accuracy.
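The per-point test in the rapid-refinement step can be written directly from the two thresholds above. A sketch under the assumption that the plane is given by its unit normal and an anchor point; the default thresholds follow the reported values (0.02 m, 10°).

```python
import numpy as np

def matches_plane(p_u, n_u, n_plane, p0, eps_d=0.02, eps_theta_deg=10.0):
    """True if p_u lies within eps_d of the plane (n_plane, p0) and its
    normal n_u deviates from the plane normal by at most eps_theta degrees."""
    n = n_plane / np.linalg.norm(n_plane)
    d = abs(np.dot(n, p_u - p0))                         # orthogonal distance
    cos_angle = abs(np.dot(n, n_u / np.linalg.norm(n_u)))
    theta = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return d <= eps_d and theta <= eps_theta_deg
```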
2. General Refinement
If unallocated points fail to meet the rapid refinement criteria, a general refinement method is applied. Each newly selected seed point initiates an independent region-growing process. By leveraging a region-growing algorithm that integrates complex geometric attributes—such as normal vectors, curvature differences, and fitting residuals—this method achieves precise segmentation of complex structures (e.g., maize plant leaves), thereby enhancing segmentation accuracy and reducing under-segmentation.
(1) Seed Point and Neighborhood Selection: Boundary points $p_b$ from the coarse segmentation stage are used as the initial seed point $s_0$, initializing the point set $C$. A neighborhood $N_g = \{\, p_u : \|p_u - s_0\| < r_g \,\}$, defined by radius $r_g$, is used for region growing.
(2) Feature Matching: Candidate points $p_u$ in the neighborhood $N_g$ are evaluated for geometric consistency with the seed point $s_0$. The matching criteria include normal vector angle deviation, curvature difference, and fitting residuals, as defined by the following conditions:
$\theta = \arccos\left(\frac{n_s \cdot n_u}{\|n_s\|\,\|n_u\|}\right) \le \epsilon_\theta, \qquad |\kappa_s - \kappa_u| \le \epsilon_\kappa, \qquad |r_s - r_u| \le \epsilon_r$
Here, $\epsilon_\theta$, $\epsilon_\kappa$, and $\epsilon_r$ represent the thresholds for angle, curvature, and residuals, ensuring geometric compatibility between neighborhood points and the seed point. Points that meet these criteria are added to the point set $C$, enabling region expansion.
(3) Region-Growing and Termination Conditions: The process iterates over each qualifying neighboring point during region growing. The process terminates if any of the following conditions are met: no qualifying neighboring points remain, the point set $C$ reaches the maximum size $C_{\max}$, or the iteration count exceeds the upper limit $T_{\max}$:
$|N_g| < \tau, \qquad |C| \ge C_{\max}, \qquad t \ge T_{\max}$
Here, $\tau$ is the minimum number of matched points, while $C_{\max}$ and $T_{\max}$ regulate region size and iteration limits, balancing computational efficiency and segmentation precision.
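The three-way matching test in general refinement can be sketched as a single predicate. The threshold defaults here are assumptions for illustration (only the 10° angle threshold is reported in the text), and per-point curvature and residual values are assumed to be precomputed.

```python
import numpy as np

def matches_seed(n_s, n_u, k_s, k_u, r_s, r_u,
                 eps_theta_deg=10.0, eps_kappa=0.05, eps_r=0.01):
    """True if a candidate point matches the seed on all three criteria:
    normal angle, curvature difference, and fitting residual difference."""
    cos_a = np.dot(n_s, n_u) / (np.linalg.norm(n_s) * np.linalg.norm(n_u))
    theta = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return (theta <= eps_theta_deg
            and abs(k_s - k_u) <= eps_kappa       # curvature consistency
            and abs(r_s - r_u) <= eps_r)          # residual consistency
```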

3.3.4. Post-Processing Stage

The post-processing stage refines the segmentation results from the coarse and refined segmentation stages, improving precision and consistency. This stage includes boundary smoothing, stalk central axis fitting, and segmentation operations to align the results with the geometric features of maize plants while maintaining computational efficiency.
1. Boundary Smoothing
To improve segmentation boundary smoothness and uniformity, a smoothing algorithm optimizes boundary points by averaging over their local neighborhoods:
$p_{i,\mathrm{new}} = \frac{1}{|N_i|}\sum_{p_j \in N_i} p_j$
where $N_i$ is the neighborhood of point $p_i$. This method effectively reduces sharp edges and boundary discontinuities, ensuring smoother transitions.
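A minimal sketch of this smoothing step: each point is replaced by the mean of its radius-$r$ neighborhood (a neighborhood mean, as in the formula). The radius value and the brute-force neighbor search are assumptions for the example.

```python
import numpy as np

def smooth_boundary(points, r=0.005):
    """Replace each point with the mean of its neighbors within radius r
    (the neighborhood N_i includes the point itself)."""
    out = np.empty_like(points)
    for i, p in enumerate(points):
        mask = np.linalg.norm(points - p, axis=1) <= r
        out[i] = points[mask].mean(axis=0)   # average over N_i
    return out
```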
2. Stalk Central Axis Fitting and Segmentation
To accurately characterize the stalk's geometric structure, a RANSAC-based central axis fitting and segmentation method is used. First, the stalk point set $C_{\mathrm{stalk}}$ undergoes central axis fitting to determine the optimal axis $a$ while minimizing noise effects, by minimizing the summed point-to-axis distances:
$a^{*} = \arg\min_{a} \sum_{p_i \in C_{\mathrm{stalk}}} d_{\mathrm{axis}}(p_i, a)$
where $d_{\mathrm{axis}}(p_i, a)$ is the distance from point $p_i$ to the axis $a$. This fitting process establishes a geometric foundation for precisely determining stalk morphological parameters, including length and diameter.
Based on this, the fitted central axis is further segmented to distinguish the connection regions between the stalk and leaves. Structural changes are detected by calculating the distance variation $\Delta d$ and angular variation $\Delta\theta$ between adjacent axis points:
$\Delta d = |d_i - d_{i+1}|, \qquad \Delta\theta = |\theta_i - \theta_{i+1}|$
where $\Delta d$ and $\Delta\theta$ denote the distance and angle changes between successive points. This approach clearly delineates the boundaries between the stalk and leaf regions, improving segmentation accuracy and maintaining geometric consistency in complex structures.
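The RANSAC-style axis fit can be sketched as follows: repeatedly sample two stalk points, form a candidate axis, and keep the axis with the most inliers by point-to-line distance. This is an illustrative sketch, not the authors' implementation; `n_iter` and `tol` are assumed parameters.

```python
import numpy as np

def ransac_axis(points, n_iter=200, tol=0.01, rng=None):
    """Fit a central axis to a point set by RANSAC line fitting.
    Returns (unit axis direction, a point on the axis, inlier count)."""
    rng = np.random.default_rng(0) if rng is None else rng
    best_axis, best_p0, best_count = None, None, -1
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        a = points[j] - points[i]
        norm = np.linalg.norm(a)
        if norm < 1e-9:
            continue                                   # degenerate sample
        a = a / norm
        # distance of every point to the candidate line through points[i]
        d = np.linalg.norm(np.cross(points - points[i], a), axis=1)
        count = int((d <= tol).sum())
        if count > best_count:
            best_axis, best_p0, best_count = a, points[i], count
    return best_axis, best_p0, best_count
```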

3.4. Evaluation Metrics

To evaluate the segmentation quality of maize plant point clouds, the ground truth was manually annotated using CloudCompare software, and the model's segmentation results were compared against it. The evaluation metrics included Precision (P), Recall (R), $F_1$-score, Intersection over Union (IoU), and Overall Accuracy (Accuracy).
$P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}, \qquad F_1\text{-score} = \frac{2 \times P \times R}{P + R}, \qquad IoU = \frac{TP}{TP + FP + FN}, \qquad Accuracy = \frac{TP + TN}{TP + TN + FP + FN}$
where TP (true positives) denotes the number of points correctly classified as the target class; FP (false positives) denotes the number of points incorrectly classified as the target class; FN (false negatives) denotes the number of points misclassified as the non-target class; and TN (true negatives) denotes the number of points correctly classified as the non-target class. These metrics range from 0 to 1, where higher values indicate better segmentation performance.
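The five metrics follow directly from the four confusion-matrix counts:

```python
def segmentation_metrics(tp, fp, fn, tn):
    """Compute Precision, Recall, F1, IoU, and Accuracy from confusion counts."""
    p = tp / (tp + fp)                      # precision
    r = tp / (tp + fn)                      # recall
    f1 = 2 * p * r / (p + r)                # harmonic mean of P and R
    iou = tp / (tp + fp + fn)               # intersection over union
    acc = (tp + tn) / (tp + tn + fp + fn)   # overall accuracy
    return {"precision": p, "recall": r, "f1": f1, "iou": iou, "accuracy": acc}
```

For example, with 80 true positives, 10 false positives, 10 false negatives, and 100 true negatives, IoU is 80/100 = 0.8 and accuracy is 180/200 = 0.9.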

4. Results

4.1. Parameter Sensitivity Analysis

The neighborhood search radius $r_1$ and normal vector angle threshold $\theta_0$ are key parameters that greatly impact the accuracy and processing efficiency of maize plant point cloud segmentation. Increasing $r_1$ accelerates region growth but reduces accuracy, while decreasing $\theta_0$ enhances boundary clarity but increases processing time. Using point cloud data from different growth stages in the Pheno4D dataset, this study systematically examines how these parameters influence segmentation performance, assessed by segmentation accuracy (IoU) and segmentation time.
Figure 5 illustrates how different $r_1$ and $\theta_0$ configurations affect segmentation accuracy and processing time. In Figure 5a, darker shades represent higher accuracy, while in Figure 5b, they indicate shorter processing times. The findings show that smaller $r_1$ and $\theta_0$ values (e.g., $r_1 = 0.5$ cm) improve segmentation accuracy but increase processing time. Conversely, larger $r_1$ and $\theta_0$ values (e.g., $r_1 = 1.5$ cm) significantly shorten processing time but reduce boundary precision, highlighting the critical trade-off between segmentation accuracy and efficiency. A comprehensive analysis determines that $r_1 = 1.0$ cm and $\theta_0 = 10^\circ$ form the optimal parameter combination, balancing segmentation accuracy and efficiency across different growth stages. This setting achieves an IoU of 83.17% and completes segmentation in 4.8 s.

4.2. Segmentation Results

This study employs a multi-stage region-growing algorithm for stem and leaf segmentation in maize plant point clouds. Figure 6 illustrates the segmentation results of the proposed algorithm, where red regions denote segmented stems and green regions represent leaves. As shown, the algorithm effectively extracts stem regions, accurately segmenting most stem points. Minor misclassifications occur at some stem-leaf junctions due to geometric complexities, but overall segmentation performance remains robust.
To evaluate the segmentation performance quantitatively, multiple standard metrics were computed. To ensure representative results, point cloud data across all growth stages in the dataset were tested. Table 1 presents the segmentation results for maize plant point clouds. The results indicate that the algorithm performs exceptionally well in maize stem segmentation, achieving 98.15% accuracy and an IoU of 83.17%.

4.3. Ablation Analysis

To assess the efficacy of our fine segmentation strategy in resolving complex geometrical configurations at the stem-leaf junction, we conducted an ablation study comparing the performance of using only a coarse segmentation approach with that of integrating an additional fine segmentation technique. These experiments quantitatively assess the fine segmentation strategy’s ability to improve boundary clarity, reduce misclassification and omission rates, and enhance overall segmentation accuracy, while also evaluating its impact on segmentation efficiency. Experimental data were obtained from the Pheno4D dataset, which contains point clouds of corn plants exhibiting typical complex structural features. Table 2 presents the outcomes of the ablation experiments.
Experimental results indicate that incorporating the fine segmentation strategy yielded an approximate 13.1% improvement in overall accuracy, a 17.41% increase in IoU, and notable enhancements in the F1 score and other evaluation metrics. Although the improved method slightly increased segmentation time, the significant enhancement in segmentation accuracy justifies the additional computational overhead.
Figure 7 presents a comparative illustration of segmentation outcomes in the stem-leaf junction region. The left panel depicts the results obtained using only the coarse segmentation approach, characterized by blurred boundaries and evident misclassifications. The right panel illustrates the outcomes following the integration of the fine segmentation strategy, exhibiting distinctly defined boundaries and a precise delineation of the transition between the stem and the leaf.

4.4. Segmentation Efficiency

The proposed algorithm achieves high segmentation efficiency due to several key factors: (1) The octree structure significantly enhances spatial indexing and neighborhood search efficiency. (2) The multi-stage processing strategy, transitioning from coarse to fine segmentation, effectively identifies primary structures and refines them incrementally, reducing computational redundancy. (3) The rapid refinement mechanism selects optimal refinement methods based on point features, greatly accelerating detailed processing.
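Octree voxelization, factor (1) above, can be sketched as a recursive eight-way split of the bounding cube. This minimal version (illustrative, not the paper's implementation) stops subdividing once a node holds few enough points or the depth limit is reached, yielding the leaf voxels used for spatial indexing:

```python
import numpy as np

def octree_voxelize(points, max_points=32, max_depth=6):
    """Recursively split the bounding cube into 8 octants; return leaves
    as (center, half_size, point_index_array) tuples."""
    pts = np.asarray(points, float)
    lo, hi = pts.min(0), pts.max(0)
    root_center = (lo + hi) / 2
    root_half = float((hi - lo).max()) / 2 or 1.0   # degenerate-cloud guard
    leaves = []

    def split(center, half, idx, depth):
        if len(idx) <= max_points or depth == max_depth:
            leaves.append((center, half, idx))
            return
        # 3-bit octant code per point: one bit per axis (>= center or not)
        code = (pts[idx] >= center).astype(int) @ np.array([1, 2, 4])
        for octant in range(8):
            sub = idx[code == octant]
            if len(sub):
                sign = np.array([(octant >> k) & 1 for k in range(3)]) * 2 - 1
                split(center + sign * half / 2, half / 2, sub, depth + 1)

    split(root_center, root_half, np.arange(len(pts)), 0)
    return leaves
```

Each point lands in exactly one leaf, so downstream neighborhood searches only need to inspect a handful of nearby voxels instead of the whole cloud.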
To evaluate the algorithm’s efficiency quantitatively, point cloud data from five maize plant growth stages in the Pheno4D dataset were selected, and segmentation times were recorded for each stage. Table 3 presents the segmentation time results. This evaluation assesses the algorithm’s stability and performance across different plant structure complexities and scales. All experiments were conducted on a standard desktop computer with an Intel Core i7-10700K CPU and 32 GB of RAM.

4.5. Comparison with Other Methods

To thoroughly evaluate the segmentation performance of the proposed multi-stage region-growing algorithm, we compared it with several mainstream point cloud segmentation methods, including the PCL region-growing algorithm, PointNet [37], PointNet++ [38], and DGCNN [39]. The PCL region-growing algorithm, a traditional unsupervised method, is similar to our approach as it does not require labeled data. In contrast, PointNet, PointNet++, and DGCNN are representative deep learning models: PointNet directly processes unordered point sets, PointNet++ employs a hierarchical structure to capture local features, and DGCNN uses dynamic graph structures to enhance relational modeling between points.
The deep learning models were trained on the Pheno4D dataset, which includes point cloud data from 50 maize plants at various growth stages, with 40 used for training and 10 for validation. Each model was trained with the recommended parameter settings from their original papers on an Nvidia GeForce 3090 GPU. Training was conducted for 200 epochs with a learning rate of 0.001 to ensure convergence. Testing was performed on a point cloud of a single maize plant at the VT growth stage, initially containing approximately 3.8 million points, downsampled to 760,000. Table 4 summarizes the segmentation performance of each method on this test sample, reporting overall accuracy (OA), Intersection over Union (IoU), and segmentation time.
Table 4 shows that the proposed multi-stage region-growing algorithm achieves the highest overall accuracy among the compared methods (98.15%), with a precision of 90.00% and a recall of 91.70%. While its Intersection over Union (IoU) score (83.17%) is slightly lower than DGCNN's (84.81%), the proposed algorithm achieves a better balance between segmentation accuracy and processing efficiency. Notably, it completes segmentation in only 4.8 s, significantly outperforming PointNet's 21.3 s and the other deep learning methods. These findings highlight the proposed algorithm's ability to achieve high accuracy and efficiency without requiring labeled data or complex training, making it well suited for rapid and effective plant phenotyping analysis.

5. Discussion

5.1. Performance on Other Plants

To evaluate the generalizability and robustness of the multi-stage region-growing algorithm, it was applied to tomato plant stem-leaf segmentation. Unlike maize, tomato plants have more intricate structures, with higher leaf density and closely packed stem-leaf junctions, making segmentation more challenging [33]. The test data consisted of high-resolution tomato point clouds from different growth stages in the Pheno4D dataset. Figure 8 shows the stem-leaf segmentation results of the proposed algorithm on tomato plants.
The results indicate that the algorithm effectively differentiates stems in most regions and performs particularly well in segmenting primary plant structures. However, segmentation errors occur in geometrically complex regions, such as stem-leaf junctions and branch points, especially in blurred transitions between emerging leaves and stems. Overall, the experimental results highlight the algorithm’s strong generalization capability, allowing effective adaptation to stem-leaf segmentation across various plant types. Although challenges persist in localized complex regions, the algorithm exhibits strong overall performance, highlighting its potential for broad applications in plant phenotyping analysis. To quantitatively evaluate our algorithm’s segmentation performance on tomato plants, we thoroughly analyzed the experimental data and systematically compiled the point cloud segmentation results, as shown in Table 5.
Experimental results show an overall accuracy of 95.31%, a precision of 87.50%, and an Intersection-over-Union (IoU) of 81.15%, which confirms the algorithm’s robust generalization capabilities. Although segmentation performance on tomato plants is slightly lower than on maize plants, the proposed approach remains highly effective in segmenting both stems and leaves across diverse plant species.

5.2. Adaptability and Robustness of Algorithm Parameters

The proposed octree-based multi-stage region-growing algorithm demonstrates high precision and efficiency in stem-leaf segmentation. However, its performance is constrained by dependence on environmental and data-specific parameters. Key parameters, including the neighborhood search radius r 1 and normal vector angle threshold θ 0 , are critical for balancing segmentation accuracy and efficiency; however, their optimal values differ across plant structures and environmental conditions. Fixed parameter settings restrict the algorithm’s adaptability to different crop types and growth stages. Future research should explore adaptive parameter adjustment mechanisms based on data characteristics, such as leveraging machine learning models to dynamically optimize parameters, enhancing the algorithm’s robustness and scalability across diverse scenarios [40].
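One simple form such an adaptive mechanism could take (an assumption for illustration, not a method from the paper) is to derive the search radius from local point density, e.g., scaling the mean k-nearest-neighbor distance so that dense canopy regions get small radii and sparse stem regions larger ones:

```python
import numpy as np

def adaptive_radius(points, k=10, scale=1.5):
    """Per-point search radius = scale * mean distance to the k nearest
    neighbors (brute-force pairwise distances; fine for small clouds)."""
    pts = np.asarray(points, float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    d.sort(axis=1)                         # column 0 is the self-distance 0
    return scale * d[:, 1:k + 1].mean(axis=1)
```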

5.3. Segmentation Accuracy in Complex Structures

Although the algorithm achieves high segmentation accuracy in most regions, some complex geometric areas, such as stem-leaf junctions and branch points, are still prone to misclassification. This limitation arises from insufficient geometric consistency criteria in these regions. In overlapping stem-leaf regions, ambiguous geometric features and constraints of fitted cylinder parameters (e.g., radius and axial alignment) hinder precise differentiation between stems and leaves. Future improvements may include integrating additional geometric features, such as local curvature or boundary characteristics, or leveraging deep learning for structural feature recognition. Additionally, multi-scale segmentation approaches could be explored to enhance differentiation of complex structures by analyzing features at different scales [41].
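Local curvature, one of the additional geometric features suggested above, can be estimated from the PCA eigenvalues of a point's neighborhood. The "surface variation" sketch below (illustrative, not from the paper) is near zero on flat leaf blades and grows in curved or cluttered regions such as stem-leaf junctions:

```python
import numpy as np

def surface_variation(neighborhood):
    """lambda_min / (lambda_0 + lambda_1 + lambda_2) of the neighborhood
    covariance: ~0 for a plane, up to 1/3 for an isotropic blob."""
    pts = np.asarray(neighborhood, float)
    cov = np.cov((pts - pts.mean(0)).T)          # 3x3 covariance
    lam = np.sort(np.linalg.eigvalsh(cov))       # ascending eigenvalues
    return float(lam[0] / lam.sum())
```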

5.4. Efficiency in Processing High-Density Point Clouds

Although the algorithm has demonstrated efficiency on the Pheno4D dataset, it may encounter scalability challenges with higher-density point cloud datasets. In scenarios with densely packed leaves or complex growth stages, computational demands increase significantly. Future research could investigate integrating parallel computing techniques or optimizing the octree voxelization strategy. For example, adaptive voxel resolution, which increases resolution in complex regions while reducing it in simpler areas, could enhance efficiency. Additionally, integrating GPU acceleration and distributed processing frameworks could significantly enhance the algorithm’s processing capacity, enabling real-time analysis of high-density datasets while maintaining segmentation accuracy [42].

5.5. Impact of Algorithm on Phenotypic Parameter Extraction

The accuracy of stem-leaf segmentation is crucial for ensuring the quality of phenotypic parameter extraction, including stem length and diameter. Although the proposed algorithm performs well in this context, further optimization is necessary for high-precision applications, such as advanced breeding analysis. Future research should focus on enhancing segmentation accuracy to reduce error propagation in phenotypic calculations. Furthermore, extending the algorithm’s application to multi-temporal phenotypic monitoring could evaluate its stability and reliability across various growth stages, providing robust data support for crop monitoring and precision breeding [43].

5.6. Advantages of Deep Learning Models and Future Application Prospects

In recent years, deep neural network (DNN)-based point cloud segmentation methods have demonstrated exceptional flexibility and robustness in processing large-scale, heterogeneous plant datasets. By leveraging end-to-end training, DNNs automatically extract multi-scale features that effectively address complex structural challenges, such as stem-leaf intersections and branching [44,45]. Their performance generally surpasses that of conventional geometric approaches. Furthermore, some studies have integrated attention mechanisms and multi-branch network architectures to efficiently fuse local and global information, reducing classification errors and eliminating the need for laborious parameter tuning across different plant types [46,47].
In contrast, although the algorithm proposed in this paper—based on octree and multi-stage region growing—has achieved high segmentation accuracy in most tests, it still faces missegmentation challenges when processing complex plant structures, such as stem-leaf intersections and branching regions. Moreover, error mitigation measures that rely on geometric consistency checks and region-growing strategies often require structure-specific parameter adjustments, which limits their general applicability.
Therefore, future research should explore integrating traditional geometric methods with deep learning techniques, harnessing the strengths of both: geometric approaches for efficient spatial segmentation and DNNs for advanced feature extraction and adaptive learning. This integration promises to further enhance segmentation precision and robustness in complex regions, ultimately providing a more comprehensive and effective technical framework for plant phenotyping in precision agriculture [48].

6. Conclusions

This study presents a multi-stage region-growing algorithm that utilizes octree-based voxelization to efficiently segment stems and leaves in maize plant point clouds. Experimental results confirm the algorithm’s high accuracy and efficiency in segmenting complex plant structures, demonstrating notable robustness and adaptability in regions with intertwined stems and leaves. On the Pheno4D dataset, the algorithm achieved an overall accuracy of 98.15% and an IoU of 83.17%, completing each segmentation in 4.8 s, a more than fourfold speedup over PointNet. By employing octree voxelization and a multi-stage region-growing approach, the algorithm reduces reliance on high-quality labeled data, demonstrating versatility and broad applicability across diverse plant structures. Validation across different maize growth stages and other plants, such as tomatoes, highlights its broad potential in complex agricultural environments.
In conclusion, this study proposes an innovative approach for efficiently segmenting plant point cloud data, providing significant value for the segmentation and analysis of complex plant structures. The algorithm has strong potential in precision agriculture, with promising applications in high-throughput phenotyping, precision fertilization, automated breeding selection, and other agricultural processes [49,50]. By enabling rapid data acquisition and comprehensive analysis, the algorithm provides strong technical support for modern agricultural research. As an efficient and adaptive plant point cloud segmentation solution, it is anticipated to play a crucial role in advancing agricultural digitization, improving crop management efficiency, and promoting the adoption of intelligent field management technologies, ultimately supporting the digital transformation of modern agriculture [50,51].

Author Contributions

Conceptualization, Q.Z. and M.Y.; methodology, Q.Z.; software, Q.Z.; validation, Q.Z. and M.Y.; formal analysis, Q.Z.; investigation, Q.Z.; resources, Q.Z.; data curation, Q.Z.; writing—original draft preparation, Q.Z. and M.Y.; writing—review and editing, Q.Z.; visualization, Q.Z.; supervision, M.Y.; funding acquisition, M.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Key Research and Development Program of Heilongjiang Province, China (Grant No. 2023ZX01A23).

Data Availability Statement

The data used in this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Sun, B.; Zain, M.; Zhang, L.; Han, D.; Sun, C. Stem-Leaf Segmentation and Morphological Traits Extraction in Rapeseed Seedlings Using a Three-Dimensional Point Cloud. Agronomy 2025, 15, 276. [Google Scholar] [CrossRef]
  2. Husin, N.A.; Khairunniza-Bejo, S.; Abdullah, A.F.; Kassim, M.S.M.; Ahmad, D.; Aziz, M.H.A. Classification of basal stem rot disease in oil palm plantations using terrestrial laser scanning data and machine learning. Agronomy 2020, 10, 1624. [Google Scholar] [CrossRef]
  3. Young, T.J.; Chiranjeevi, S.; Elango, D.; Sarkar, S.; Singh, A.K.; Singh, A.; Ganapathysubramanian, B.; Jubery, T.Z. Soybean Canopy Stress Classification Using 3D Point Cloud Data. Agronomy 2024, 14, 1181. [Google Scholar] [CrossRef]
  4. Zhang, H.; Liu, N.; Xia, J.; Chen, L.; Chen, S. Plant Height Estimation in Corn Fields Based on Column Space Segmentation Algorithm. Agriculture 2025, 15, 236. [Google Scholar] [CrossRef]
  5. Wang, Y.; Zhang, Z. Segment Any Leaf 3D: A Zero-Shot 3D Leaf Instance Segmentation Method Based on Multi-View Images. Sensors 2025, 25, 526. [Google Scholar] [CrossRef]
  6. Cui, D.; Liu, P.; Liu, Y.; Zhao, Z.; Feng, J. Automated Phenotypic Analysis of Mature Soybean Using Multi-View Stereo 3D Reconstruction and Point Cloud Segmentation. Agriculture 2025, 15, 175. [Google Scholar] [CrossRef]
  7. Miao, Y.; Wang, L.; Peng, C.; Li, H.; Zhang, M. Single plant segmentation and growth parameters measurement of maize seedling stage based on point cloud intensity. Smart Agric. Technol. 2024, 9, 100665. [Google Scholar] [CrossRef]
  8. Zhu, Q.; Bai, M.; Yu, M. Maize Phenotypic Parameters Based on the Constrained Region Point Cloud Phenotyping Algorithm as a Developed Method. Agronomy 2024, 14, 2446. [Google Scholar] [CrossRef]
  9. Yang, X.; Miao, T.; Tian, X.; Wang, D.; Zhao, J.; Lin, L.; Zhu, C.; Yang, T.; Xu, T. Maize stem–leaf segmentation framework based on deformable point clouds. ISPRS J. Photogramm. Remote Sens. 2024, 211, 49–66. [Google Scholar] [CrossRef]
  10. Shamshiri, R.R.; Navas, E.; Käthner, J.; Höfner, N.; Koch, K.; Dworak, V.; Hameed, I.; Paraforos, D.S.; Fernández, R.; Weltzien, C. Agricultural robotics to revolutionize farming: Requirements and challenges. In Mobile Robots for Digital Farming; CRC Press: Boca Raton, FL, USA, 2025; pp. 107–155. [Google Scholar]
  11. Islam, M.M.; Himel, G.M.S.; Moazzam, M.G.; Uddin, M.S. Artificial Intelligence-based Rice Variety Classification: A State-of-the-Art Review and Future Directions. Smart Agric. Technol. 2025, 10, 100788. [Google Scholar] [CrossRef]
  12. Song, H.; Wen, W.; Wu, S.; Guo, X. Comprehensive review on 3D point cloud segmentation in plants. Artif. Intell. Agric. 2025, in press. [Google Scholar] [CrossRef]
  13. Yao, J.; Gong, Y.; Xia, Z.; Nie, P.; Xu, H.; Zhang, H.; Chen, Y.; Li, X.; Li, Z.; Li, Y. Facility of tomato plant organ segmentation and phenotypic trait extraction via deep learning. Comput. Electron. Agric. 2025, 231, 109957. [Google Scholar] [CrossRef]
  14. Zhang, L.; Shi, S.; Zain, M.; Sun, B.; Han, D.; Sun, C. Evaluation of Rapeseed Leave Segmentation Accuracy Using Binocular Stereo Vision 3D Point Clouds. Agronomy 2025, 15, 245. [Google Scholar] [CrossRef]
  15. Liang, X.; Yu, W.; Qin, L.; Wang, J.; Jia, P.; Liu, Q.; Lei, X.; Yang, M. Stem and Leaf Segmentation and Phenotypic Parameter Extraction of Tomato Seedlings Based on 3D Point. Agronomy 2025, 15, 120. [Google Scholar] [CrossRef]
  16. Navone, A.; Martini, M.; Ambrosio, M.; Ostuni, A.; Angarano, S.; Chiaberge, M. GPS-free autonomous navigation in cluttered tree rows with deep semantic segmentation. Robot. Auton. Syst. 2025, 183, 104854. [Google Scholar] [CrossRef]
  17. Zhang, W.; Dang, L.M.; Nguyen, L.Q.; Alam, N.; Bui, N.D.; Park, H.Y.; Moon, H. Adapting the Segment Anything Model for Plant Recognition and Automated Phenotypic Parameter Measurement. Horticulturae 2024, 10, 398. [Google Scholar] [CrossRef]
  18. Bhatti, M.A.; Zeeshan, Z.; Syam, M.S.; Bhatti, U.A.; Khan, A.; Ghadi, Y.Y.; Alsenan, S.; Li, Y.; Asif, M.; Afzal, T. Advanced plant disease segmentation in precision agriculture using optimal dimensionality reduction with fuzzy c-means clustering and deep learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 18264–18277. [Google Scholar] [CrossRef]
  19. Wang, H.; Zhang, W.; Yang, G.; Lei, L.; Han, S.; Xu, W.; Chen, R.; Zhang, C.; Yang, H. Maize ear height and ear–plant height ratio estimation with LiDAR data and vertical leaf area profile. Remote Sens. 2023, 15, 964. [Google Scholar] [CrossRef]
  20. Miao, Y.; Li, S.; Wang, L.; Li, H.; Qiu, R.; Zhang, M. A single plant segmentation method of maize point cloud based on Euclidean clustering and K-means clustering. Comput. Electron. Agric. 2023, 210, 107951. [Google Scholar] [CrossRef]
  21. Gao, R.; Cui, S.; Xu, H.; Kong, Q.; Su, Z.; Li, J. A method for obtaining maize phenotypic parameters based on improved QuickShift algorithm. Comput. Electron. Agric. 2023, 214, 108341. [Google Scholar] [CrossRef]
  22. Wen, W.; Guo, X.; Tao, Y.; Zhao, D.; Teng, M.; Zhu, H.; Dong, C. Point cloud segmentation method of maize ear. J. Syst. Simul. 2020, 29, 3030–3035. [Google Scholar]
  23. Chao, Z.; Wu, W.; Liu, C.; Zhao, J.; Lin, L.; Tian, X.; Miao, T. Tassel segmentation of maize point cloud based on super voxels clustering and local features. Smart Agric. 2021, 3, 75. [Google Scholar]
  24. Zhu, C.; Miao, T.; Xu, T.; Yang, T.; Li, N. Stem-leaf segmentation and phenotypic trait extraction of maize shoots from three-dimensional point cloud. arXiv 2020, arXiv:2009.03108. [Google Scholar]
  25. Miao, T.; Zhu, C.; Xu, T.; Yang, T.; Li, N.; Zhou, Y.; Deng, H. Automatic stem-leaf segmentation of maize shoots using three-dimensional point cloud. Comput. Electron. Agric. 2021, 187, 106310. [Google Scholar] [CrossRef]
  26. Li, Y.; Liu, J.; Zhang, B.; Wang, Y.; Yao, J.; Zhang, X.; Fan, B.; Li, X.; Hai, Y.; Fan, X. Three-dimensional reconstruction and phenotype measurement of maize seedlings based on multi-view image sequences. Front. Plant Sci. 2022, 13, 974339. [Google Scholar] [CrossRef]
  27. Wang, D.; Song, Z.; Miao, T.; Zhu, C.; Yang, X.; Yang, T.; Zhou, Y.; Den, H.; Xu, T. DFSP: A fast and automatic distance field-based stem-leaf segmentation pipeline for point cloud of maize shoot. Front. Plant Sci. 2023, 14, 1109314. [Google Scholar] [CrossRef]
  28. Zhang, W.; Wu, S.; Wen, W.; Lu, X.; Wang, C.; Gou, W.; Li, Y.; Guo, X.; Zhao, C. Three-dimensional branch segmentation and phenotype extraction of maize tassel based on deep learning. Plant Methods 2023, 19, 76. [Google Scholar] [CrossRef]
  29. Ao, Z.; Wu, F.; Hu, S.; Sun, Y.; Su, Y.; Guo, Q.; Xin, Q. Automatic segmentation of stem and leaf components and individual maize plants in field terrestrial LiDAR data using convolutional neural networks. Crop J. 2022, 10, 1239–1250. [Google Scholar] [CrossRef]
  30. Herrero-Huerta, M.; Gonzalez-Aguilera, D.; Yang, Y. Structural component phenotypic traits from individual maize skeletonization by UAS-based structure-from-motion photogrammetry. Drones 2023, 7, 108. [Google Scholar] [CrossRef]
  31. Guo, X.; Sun, Y.; Yang, H. FF-Net: Feature-Fusion-Based Network for Semantic Segmentation of 3D Plant Point Cloud. Plants 2023, 12, 1867. [Google Scholar] [CrossRef]
  32. Fan, Z.; Sun, N.; Qiu, Q.; Li, T.; Feng, Q.; Zhao, C. In situ measuring stem diameters of maize crops with a high-throughput phenotyping robot. Remote Sens. 2022, 14, 1030. [Google Scholar] [CrossRef]
  33. Sun, Y.; Guo, X.; Yang, H. Win-Former: Window-Based Transformer for Maize Plant Point Cloud Semantic Segmentation. Agronomy 2023, 13, 2723. [Google Scholar] [CrossRef]
  34. Luo, L.; Jiang, X.; Yang, Y.; Samy, E.R.A.; Lefsrud, M.; Hoyos-Villegas, V.; Sun, S. Eff-3dpseg: 3d organ-level plant shoot segmentation using annotation-efficient deep learning. Plant Phenomics 2023, 5, 0080. [Google Scholar] [CrossRef]
  35. Yan, J.; Wang, X. Unsupervised and semi-supervised learning: The next frontier in machine learning for plant systems biology. Plant J. 2022, 111, 1527–1538. [Google Scholar] [CrossRef]
  36. Schunck, D.; Magistri, F.; Rosu, R.A.; Cornelißen, A.; Chebrolu, N.; Paulus, S.; Léon, J.; Behnke, S.; Stachniss, C.; Kuhlmann, H.; et al. Pheno4D: A spatio-temporal dataset of maize and tomato plant point clouds for phenotyping and advanced plant analysis. PLoS ONE 2021, 16, e0256340. [Google Scholar] [CrossRef]
  37. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 652–660. [Google Scholar]
  38. Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Adv. Neural Inf. Process. Syst. 2017, 30. [Google Scholar]
  39. Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S.E.; Bronstein, M.M.; Solomon, J.M. Dynamic graph CNN for learning on point clouds. ACM Trans. Graph. 2019, 38, 146. [Google Scholar] [CrossRef]
  40. Qin, Y.; Chen, J.; Jin, L.; Yao, R.; Gong, Z. Task offloading optimization in mobile edge computing based on a deep reinforcement learning algorithm using density clustering and ensemble learning. Sci. Rep. 2025, 15, 211. [Google Scholar] [CrossRef]
  41. Cui, J.; Tan, F.; Bai, N.; Fu, Y. Improving U-net network for semantic segmentation of corns and weeds during corn seedling stage in field. Front. Plant Sci. 2024, 15, 1344958. [Google Scholar] [CrossRef]
  42. Luo, Y.; Han, T.; Liu, Y.; Su, J.; Chen, Y.; Li, J.; Wu, Y.; Cai, G. CSFNet: Cross-modal Semantic Focus Network for Sematic Segmentation of Large-Scale Point Clouds. IEEE Trans. Geosci. Remote Sens. 2025, 63, 1–15. [Google Scholar] [CrossRef]
  43. Yang, H.-C.; Zhou, J.-P.; Zheng, C.; Wu, Z.; Li, Y.; Li, L.-G. PhenologyNet: A fine-grained approach for crop-phenology classification fusing convolutional neural network and phenotypic similarity. Comput. Electron. Agric. 2025, 229, 109728. [Google Scholar] [CrossRef]
  44. Thapa, S.; Zhu, F.; Walia, H.; Yu, H.; Ge, Y. A novel LiDAR-based instrument for high-throughput, 3D measurement of morphological traits in maize and sorghum. Sensors 2018, 18, 1187. [Google Scholar] [CrossRef] [PubMed]
  45. Zermas, D.; Morellas, V.; Mulla, D.; Papanikolopoulos, N. Extracting phenotypic characteristics of corn crops through the use of reconstructed 3D models. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 8247–8254. [Google Scholar]
  46. Hämmerle, M.; Höfle, B. Mobile low-cost 3D camera maize crop height measurements under field conditions. Precis. Agric. 2018, 19, 630–647. [Google Scholar] [CrossRef]
  47. Vázquez-Arellano, M.; Reiser, D.; Paraforos, D.S.; Garrido-Izard, M.; Burce, M.E.C.; Griepentrog, H.W. 3-D reconstruction of maize plants using a time-of-flight camera. Comput. Electron. Agric. 2018, 145, 235–247. [Google Scholar] [CrossRef]
  48. Meiyan, S.; Mengyuan, S.; Qizhou, D.; Xiaohong, Y.; Baoguo, L.; Yuntao, M. Estimating the maize above-ground biomass by constructing the tridimensional concept model based on UAV-based digital and multi-spectral images. Field Crop. Res. 2022, 282, 108491. [Google Scholar] [CrossRef]
  49. Wang, D.; Zhao, M.; Li, Z.; Xu, S.; Wu, X.; Ma, X.; Liu, X. A survey of unmanned aerial vehicles and deep learning in precision agriculture. Eur. J. Agron. 2025, 164, 127477. [Google Scholar] [CrossRef]
  50. Filippi, P.; Han, S.Y.; Bishop, T.F.A. On crop yield modelling, predicting, and forecasting and addressing the common issues in published studies. Precis. Agric. 2025, 26, 1–19. [Google Scholar] [CrossRef]
  51. Li, Y.; Gao, G.; Wen, J.; Zhao, N.; Du, G.; Stanny, M. The measurement of agricultural disaster vulnerability in China and implications for land-supported agricultural resilience building. Land Use Policy 2025, 148, 107400. [Google Scholar] [CrossRef]
Figure 1. Examples of 3D point clouds of maize plants at different growth stages. The sequential illustration, arranged from left to right, depicts the growth trajectory of maize plants from the initial V1 stage to V5. It shows that an increase in collar count is accompanied by gradual improvements in both plant height and leaf number.
Figure 2. Schematic of the point cloud sampling method.
Figure 3. Schematic of the segmentation process.
Figure 4. Schematic of the octree structure.
Figure 5. (a) The effect of search radius r 1 and angle threshold θ 0 on segmentation accuracy (IoU). (b) The effect of search radius r 1 and angle threshold θ 0 on segmentation time.
Figure 6. Segmentation results of the maize stem region.
Figure 7. A comparative analysis of segmentation outcomes with and without the refined segmentation stage.
Figure 8. Schematic of stem-leaf segmentation on tomato plants.
Table 1. Segmentation results of the maize stem point cloud.
Evaluation Metric | Segmentation Result (%)
Overall Accuracy (OA) | 98.15
Precision (P) | 90.00
Recall (R) | 91.70
F1 Score | 90.80
IoU | 83.17
Table 2. Ablation analysis of coarse and fine segmentation strategies.
Method | OA (%) | P (%) | R (%) | F1 (%) | IoU (%) | Time (s)
Coarse Segmentation Only | 81.35 | 75.41 | 72.71 | 73.50 | 62.89 | 3.5
Coarse Segmentation + Refined Segmentation | 94.45 | 91.91 | 93.34 | 92.15 | 80.30 | 5.1
Table 3. Segmentation efficiency of corn plant point clouds at different growth stages.
Growth Stage (Leaf Count) | Number of Input Points | Number of Downsampled Points | Segmentation Time (s)
V3 (3 leaves) | 500,000 | 100,000 | 1.8
V6 (6 leaves) | 1,200,000 | 240,000 | 4.3
V9 (9 leaves) | 2,500,000 | 500,000 | 8.9
VT (Tasseling) | 3,800,000 | 760,000 | 13.6
R1 (Maturity) | 5,000,000 | 1,200,000 | 17.8
Table 4. Performance comparison of different segmentation methods.
Method | OA (%) | P (%) | R (%) | F1 (%) | IoU (%) | Time (s)
PCL Region Growing | 89.67 | 82.31 | 83.50 | 82.90 | 75.80 | 6.2
PointNet | 92.56 | 88.42 | 90.02 | 89.21 | 80.37 | 21.3
PointNet++ | 93.84 | 89.11 | 91.43 | 90.25 | 82.40 | 11.9
DGCNN | 95.67 | 92.79 | 93.85 | 93.32 | 84.81 | 10.7
Our Method | 98.15 | 90.00 | 91.70 | 90.80 | 83.17 | 4.8
Table 5. Segmentation outcomes for tomato plant stem point clouds.
Evaluation Metric | Segmentation Result (%)
Overall Accuracy (OA) | 95.31
Precision (P) | 87.50
Recall (R) | 89.20
F1 Score | 88.35
IoU | 81.15
