1. Introduction
Rice (
Oryza sativa L.) is one of the most significant staple crops, serving as a primary food source for over half of the global population [
1,
2,
3]. Effective field management is essential for maximizing crop yield, and precise monitoring of rice vegetative growth underpins understanding of phenological development and data-driven agronomic decisions. Such monitoring is also crucial for understanding plant responses to environmental changes at different growth stages. Measurement of key vegetative growth parameters such as canopy height (CH), canopy volume, row distance, and plant spacing is vital for optimizing field management and improving yield predictions [
4,
5]. Geometric plant features are widely recognized for their influence on rice yield potential [
6,
7]. Proximal sensing technologies refer to non-contact sensing methods that collect detailed information about crops from a short distance, typically within a few meters. Advancements in proximal sensing technologies have contributed to precision agriculture by identifying optimal methods for crop-specific applications [
8]. These technologies are often mounted on ground-based platforms such as tractors, handheld rigs, tripods, or robots and are used directly in the field.
Crop geometry is the arrangement of plants in different rows and columns in an area to efficiently utilize natural resources [
9]. Traditionally, crop geometric characterization has relied on manual measurements [
10,
11], which, despite their continued use, are prone to inaccuracies, labor-intensive procedures, and time constraints [
12]. Furthermore, manual plant feature characterization requires skilled labor [
13,
14]. In large-scale fields, manual sampling is often performed on a small subset of plants, leading to potential errors when extrapolating results to entire fields [
15]. Selecting a precise and reliable sensing technology is crucial for accurate vegetative growth monitoring. Although high-resolution sensor data can enhance measurement accuracy, the associated costs and complex data processing requirements may outweigh the benefits for certain agricultural applications [
14]. Therefore, sensor selection should align with specific objectives in precision agriculture while balancing accuracy, cost, and data-processing efficiency.
Sensor selection in unstructured environments is particularly challenging due to the complexity of plant and crop geometry [
8]. In precision agricultural operations, appropriate sensing techniques are necessary to ensure reliable and accurate measurements. The rapid development of three-dimensional (3D) sensing technologies has improved high-throughput height measurements [
5]. Crop height is a fundamental indicator of growth status, typically measured from the soil surface to the top of the plants, and can serve as an alternative method for biomass estimation [
16]. Traditional destructive methods for determining the leaf area index (LAI) are labor-intensive and impractical for large-scale sampling. Although non-destructive LAI estimation methods have been developed, they remain prone to operator-induced measurement errors, and no standardized technique exists for non-destructive crop LAI estimation throughout the entire growing season [
17]. Additionally, the relationship between LAI and vegetation height varies across plant types and growth stages [
18,
19]. Because no single sensor can comprehensively capture all plant geometric characteristics, the integration of multiple sensors and techniques is often required to achieve accurate measurements.
LiDAR is a prominent sensor for high-precision measurements, enabling non-invasive 3D data acquisition using narrow laser beams. This makes it highly suitable for crop phenotyping [
20] and detailed canopy structure analysis [
21]. While UAV-based photogrammetry can also generate 3D point clouds comparable to those obtained from LiDAR [
22,
23], the point cloud density from photogrammetry depends on image resolution and overlap. While photogrammetry is generally more cost-effective than LiDAR [
24], ground-mounted LiDAR typically produces denser 3D point clouds compared to UAV-derived datasets [
25]. UAV imagery has demonstrated reliability in plant canopy characterization for crop growth monitoring [
26,
27]. However, combinations of sensing technologies, including RGB cameras [
28], ultrasonic sensors [
29], multispectral and hyperspectral sensors [
30], and laser scanners [
31], have been used to characterize crop canopies [
32].
Stereo vision has also been utilized to capture detailed 3D plant structures using multiview images [
33]. However, these methods face challenges such as stereo matching errors under varying illumination, incomplete reconstructions due to occlusions, and trade-offs between accuracy and efficiency [
34]. Unlike active RGB-D cameras, stereo cameras passively determine depth by identifying corresponding pixels in paired images and measuring their displacement (disparity), which is inversely proportional to the distance of the objects from the sensor [
35]. These systems perform well in outdoor conditions, providing high-resolution depth measurements that remain robust under varying lighting conditions. However, they also present challenges such as correspondence errors and high computational demands [
36]. Although depth cameras have improved in resolution, they still struggle in brightly lit environments [
37,
38,
39,
40]. Compared to expensive laser-based systems and gaming sensors such as the Kinect (Microsoft, Redmond, WA, USA), which have limited range and perform poorly under high illumination [
41], stereo vision remains a cost-effective alternative for generating high-resolution 3D point clouds. The integration of 3D spatial data with color information enhances segmentation accuracy during point cloud preprocessing.
Proximal sensing technologies, including LiDAR, enable the capture of high-resolution images with spatial resolutions below 1 cm, making them suitable for distinguishing individual plant organs [
42]. While aerial remote sensing faces challenges in achieving this level of detail, recent advancements have improved the ability to detect fine plant features [
43]. Ultrasonic sensors are cost-effective and simple to operate but suffer from reduced measurement accuracy and susceptibility to environmental interference. Spectral imaging provides valuable texture and reflectance information but lacks the structural detail necessary for comprehensive plant morphology assessment [
44]. Optical imaging methods can extract 3D features effectively in controlled settings but struggle with full plant architecture reconstruction in outdoor environments [
44]. Recent advances in LiDAR technology have facilitated the acquisition of high-resolution 3D structural data for agricultural applications. Consequently, LiDAR has gained increasing interest in plant geometric features characterization [
31]. LiDAR mounted on UAVs or ground sensing platforms has been used to assess the geometric characterization of crops such as peanuts, maize, and fruit trees [
45,
46,
47]. However, rice crops, which become denser in the middle to late growth stages, present challenges for measuring row distance and plant spacing due to leaf overlap [
44]. Additionally, rice is often cultivated in flooded paddies, posing accessibility challenges for conventional ground sensing platforms, while water-filled channels and uneven terrain can interfere with LiDAR operation [
44]. Furthermore, UAV-generated airflow can disturb the canopy structure, affecting data consistency [
44]. Compared to airborne LiDAR, terrestrial laser scanning (TLS) offers a cost-effective and user-friendly solution for acquiring high-density, accurate, and repeatable data. This makes it particularly suitable for monitoring specific crops and has led to widespread use in the precise extraction of geometric features of individual crops [
48]. Considering these complexities, mobile terrestrial laser scanning (MTLS) techniques have been increasingly adopted because they improve plant geometric characterization and the measurement reliability of vegetative growth monitoring parameters.
Based on recent advancements in viticulture research [
12,
13], the advantages of proximal LiDAR sensing are emphasized, including the ability to accurately capture plant structure in complex field conditions. Although LiDAR has been used for high-resolution 3D canopy characterization in other crops, such as vineyards [
12,
13], its use in rice cultivation across different growth stages remains limited. Specific gaps include the lack of studies capturing the structural dynamics of rice throughout the early, middle, and late growth stages; the challenges posed by dense leaf overlap and flooded paddies; the absence of standardized high-precision protocols for extracting key geometric traits such as canopy height, canopy volume, row distance, and plant spacing; and the lack of comparative assessments between LiDAR-derived metrics and traditional manual methods for rice growth monitoring. To address these limitations, this study focused on enhancing the accuracy and consistency of vegetative growth monitoring in rice cultivation across different growth stages by introducing the systematic use of terrestrial LiDAR to extract high-resolution geometric characteristics throughout the growth cycle. The objective was therefore to characterize the geometric features of rice plants using proximal LiDAR sensing for monitoring vegetative growth at three vegetative growth stages.
2. Materials and Methods
2.1. LiDAR Sensor Selection and Rice Field Selection
In this study, a commercial LiDAR (model: VLP-16, Velodyne Lidar, San Jose, CA, USA) was used for data collection, and the detailed specifications are shown in
Table 1. The sensor scans distances up to 100 m, providing sufficient coverage for the rice field, and is designed for efficiency with low power usage, a lightweight and compact build, and single and dual return functionality. It is equipped with 16 scanning channels, enabling it to collect approximately 300,000 data points per second. This high point density is critical for achieving accurate and detailed measurements of plant height, canopy volume, and other geometric features of rice plants. The sensor provides a 360° horizontal field of view and a 30° vertical field of view (±15° vertical tilt), a diverse scanning range that is important for capturing data from different angles in a rice field, especially when the plants are densely packed, and enables the collection of comprehensive data over a wide area. It is well suited to applications such as autonomous vehicles, robotics, and terrestrial 3D mapping in precision agriculture for the characterization of plant features. Despite having visible rotating parts, it demonstrates strong durability and reliable performance over a wide range of temperatures, providing detailed, high-resolution 3D data of the surrounding environment. The data was collected in a rice field located at Jeollabukdo Agriculture Research and Extension Services, Iksan, Republic of Korea.
Figure 1 shows the rice field for LiDAR data collection. This site was selected as a representative rice field condition, providing an ideal setting for testing and validating the use of LiDAR sensors in estimating and analyzing the geometric features of rice plants. LiDAR data was collected from a selected rice plot that was 12 m long and 8 m wide in size.
2.2. Sampling Strategy
To capture the variability in rice plant structures, a systematic sampling strategy was employed. Twenty data frames were selected from the acquired data frames exhibiting diverse plant heights, shapes, and sizes. For geometric feature characterization, these data frames were captured from six consecutive rice plant rows, exhibiting different sizes and shapes to represent the variability of the rice plant geometric features. This allowed for comprehensive analysis and ensured that the results were representative of the diversity under field conditions. The plant rows selected for data collection were aligned with the field layout, and a region of interest (ROI) of 1 m by 0.9 m was segmented from each data frame for further analysis.
No experimental treatments, such as different rice varieties or fertilization regimes, were implemented in this study, as the primary goal was to validate LiDAR sensor technology across rice plants in a natural, unmodified field setting. The selected sampling method aimed to capture multiple growth stages (early, middle, and late) of the rice plants to assess the performance of LiDAR in monitoring plant development over time.
2.3. Data Collection with Customized Structural Platform
Point cloud data were collected using a proximal LiDAR sensor. The LiDAR was mounted on a custom aluminum frame designed for stable movement along rails between field plots. All necessary components such as LiDAR sensor, terminal box, GPS, battery, microcontroller, display monitor, mounts, and wheels are detailed in the schematic provided in
Figure 2. The sensor height was kept at 80 cm above the crop canopy, adjustable according to growth stage. During scanning, the structure was manually moved at walking speed to maintain data quality. A small region of interest (ROI) of 1 m by 0.9 m was segmented from each frame for data processing and measurements. The LiDAR, aligned vertically, captured 360° point cloud data to assess plant height, canopy volume, row distance, and plant spacing. Data acquisition covered six rows, and Python was used for analysis.
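The ROI cropping step can be sketched as follows; this is an illustrative NumPy version (the function name and the assumption that coordinates are expressed in meters with the ROI at the origin are ours, not the study's):

```python
import numpy as np

def crop_roi(points, x_range=(0.0, 1.0), y_range=(0.0, 0.9)):
    """Keep only points whose (x, y) coordinates fall inside the
    1 m x 0.9 m region of interest (bounds in meters)."""
    x, y = points[:, 0], points[:, 1]
    inside = (x >= x_range[0]) & (x <= x_range[1]) & \
             (y >= y_range[0]) & (y <= y_range[1])
    return points[inside]

# Example: only the first of these three points lies inside the ROI
pts = np.array([[0.5, 0.4, 0.2],   # inside
                [1.5, 0.4, 0.2],   # x outside
                [0.5, 1.2, 0.2]])  # y outside
roi = crop_roi(pts)
```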
Figure 3 illustrates the sensor setup used to collect high-resolution 3D point cloud data of rice plants through proximal LiDAR sensing. A commercial LiDAR sensor was mounted on a customized mobile frame and integrated with essential components for field operation. The sensor was connected to a terminal box, which distributed power and enabled communication between the LiDAR and a compact data processing unit. Two 12 V batteries powered the entire system, ensuring portability and continuous operation in field conditions. The terminal box also stabilized power supply and managed data transmission via a high-speed Ethernet connection. Real-time visualization of LiDAR scans was possible through an attached display monitor, and a USB drive was used to store the collected data. This configuration allowed close-range scanning of rice crop geometry with precise spatial detail, enabling accurate measurement of plant structure under actual field conditions. Additionally, an external GPS unit (model: GPS18x LVC, Garmin, Martinez, CA, USA) was used for accurate geolocation and data synchronization.
Commercial software (VeloView, Ver 5.1.0, Kitware, Inc., Clifton Park, NY, USA) was used for data acquisition, visualization, and analysis of LiDAR data. The software allowed visualization of distance measurements as point cloud data and offered customizable color maps for variables including laser ID, intensity of return, dual return type, azimuth, time, and distance. It also supports data export in CSV format (x, y, z coordinates). In combination with Python, it facilitated comprehensive 3D point cloud processing and analysis, as well as further data visualization and measurement, as described by Zulkifli et al. [
50].
Figure 3 presents a schematic diagram of this configuration, detailing how each component was integrated to optimize data collection and system performance.
Manual measurement in the field is shown in
Figure 4. A measuring ruler was used for collecting manually measured plant height, canopy volume, row distance, and plant spacing, as shown in
Figure 5. Plant height was measured from the soil surface at the base of the stem to the highest point of the canopy, usually the tip of the flag leaf; the leaves were not lifted or extended. To obtain accurate results, three representative plant hills were selected from each region of interest, and the average manually measured height was calculated. To ensure a good representation of overall canopy conditions, plant height was estimated from the data points of each synchronized data collection region of interest. For manual measurement, the vertical distance from the ground to the top of the canopy was measured with the measuring ruler, as shown in
Figure 4a. The individual height measurements were then recorded for further analysis and to compare with the LiDAR measurement of plant height. Similarly, to obtain accurate measurement results of the canopy volume, representative sample plots were selected within each region of interest, and the average canopy volume was calculated. The canopy volume was calculated by multiplying the average height by the plot area while considering the lateral spread of the canopy and the density, as shown in
Figure 4b. This process involved a combination of direct measurements and visual estimations, where the recorded measurements were used for further analysis and comparison with LiDAR measurement of canopy volume, as shown in
Figure 5. However, no formal record was kept of observer bias during the visual estimation process. Although some bias was likely introduced by the subjective nature of visual estimation, it could not be quantified owing to the lack of recorded observations, so a specific error estimate cannot be provided. Future work could benefit from a more structured approach, including recording observations and assessing potential bias, to better understand and account for estimation errors.
2.4. Plant Geometric Feature Characterization from LiDAR Data
A Python program was developed to manage data processing, visualization, and the characterization of rice plant features for vegetative growth monitoring. The data processing workflow included steps such as data conversion, targeted data frame selection, outlier removal, down-sampling, denoising, ground point removal, voxelization, and the preparation of three-dimensional point cloud density (PCD) maps. Data analysis was performed to estimate and interpret the results. Data acquisition started with a LiDAR sensor, which captured 3D point cloud data from a rice field. The raw point cloud data captured by the LiDAR was stored in a PCAP file format.
Point cloud data processing algorithms for the visualization of the point cloud and extraction of measurement data for vegetative growth monitoring of rice are shown in
Figure 6. The data processing pipeline begins with importing essential Python libraries and defining custom functions for convex hull volume calculation and 3D visualization of LiDAR point cloud data. A LiDAR point cloud (PCD) file is loaded and validated to ensure the presence of valid data. The data is then converted into a structured format suitable for numerical analysis. Spatial dimensions such as height, width, and depth are computed, followed by the calculation of geometric features including average plant spacing and row distances. A convex hull is then computed to estimate the canopy volume. The point cloud data undergoes preprocessing, including outlier removal, noise reduction, and downsampling. Downsampling is performed using a voxel grid technique, where the point cloud is divided into small cubic units, or voxels. For each voxel, a representative point is selected, typically by averaging the points within that voxel. This reduces the overall number of points in the dataset while preserving key features of the plant canopy structure. The downsampling process minimizes computational load, making it more efficient to handle large point cloud datasets while maintaining the necessary resolution for accurate canopy feature extraction. Voxelization segments the data, and a convex hull is computed to visualize the plant canopy.
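The voxel-grid downsampling step described above can be sketched in plain NumPy; the study's pipeline used dedicated point cloud libraries, so this standalone version is illustrative only, and the 5 cm voxel size is an assumed value:

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.05):
    """Replace all points falling in each cubic voxel with their centroid,
    reducing point count while preserving canopy structure."""
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Map each point to its voxel; count points per occupied voxel
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True,
                                   return_counts=True)
    sums = np.zeros((counts.size, points.shape[1]))
    np.add.at(sums, inverse, points)   # accumulate per-voxel coordinate sums
    return sums / counts[:, None]      # per-voxel centroid

# Two points share one voxel, the third occupies another
pts = np.array([[0.01, 0.01, 0.01],
                [0.02, 0.02, 0.02],
                [0.30, 0.30, 0.30]])
down = voxel_downsample(pts, voxel_size=0.05)
```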
To generate the convex hull boundary, the QuickHull algorithm was used. The algorithm begins by selecting two extreme points, typically those with the minimum and maximum values along one dimension, which define a line segment forming part of the convex hull. The remaining points are then divided into two subsets, one on each side of the line segment. For each subset, the farthest point from the line or plane is identified and becomes part of the hull; together with the original two points, it forms a new triangle or tetrahedron that is added to the hull. The process continues recursively, checking the points outside the current shape and adding the farthest points, until no more points can be added, at which point the algorithm terminates with a minimal convex shape enclosing all the points. Subsequently, the convex hull was converted into a triangular mesh and smoothed using Laplacian filtering to refine the mesh. The smoothed hull was saved and visualized with the original point cloud as an overlay, and the visualizations were exported in PNG format. A voxel grid was then created by defining grid boundaries and computing voxel centers, and for each voxel center the nearest point from the original point cloud was extracted to build a representative object point cloud. The extracted objects were then colored, saved as a new point cloud, and exported as both image and CSV files. The entire workflow facilitated an automated, detailed, and accurate characterization of rice canopy structures, enabling further quantitative analysis and detailed growth monitoring.
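As a minimal sketch of the hull-volume step, SciPy's `ConvexHull` wraps the Qhull library, which implements the QuickHull algorithm just described; the Laplacian mesh smoothing is omitted here, as it requires a mesh-processing library:

```python
import numpy as np
from scipy.spatial import ConvexHull

def canopy_volume(points):
    """Enclosed volume (m^3) of the convex hull of a 3D point cloud."""
    hull = ConvexHull(points)   # Qhull's QuickHull under the hood
    return hull.volume

# Sanity check: the hull of a unit cube's corners encloses exactly 1 m^3
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                dtype=float)
```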
2.4.1. Plant Height and Canopy Volume
Figure 7 illustrates the workflow for processing LiDAR point cloud data and extracting two key parameters for vegetative growth monitoring of rice, namely plant height and canopy volume. The procedure begins with data collection and conversion: the LiDAR sensor captured data in pcap (packet capture) format, which stored the raw laser returns, and the pcap files were then converted into PCD (point cloud data) files so that each file could be parsed for (x, y, z) coordinates. In the preprocessing stage, the 3D point clouds were structured into a format suitable for further analysis. Region of interest (ROI) segmentation removed background objects and untargeted ground points, denoising algorithms eliminated noise and outliers, radius outlier removal discarded inconsistent points beyond typical density thresholds, and voxel grid downsampling reduced overall point density while retaining essential structural features. The point cloud coordinates were then transformed to align the data with a specific reference frame and to correct for any distortions. Points that did not represent the objects of interest (rice plants), particularly the ground plane, were removed, and outlier points that deviated significantly from the rest of the data, potentially due to noise or acquisition errors, were eliminated. Ground sampling further refined the dataset by focusing on relevant points, and the denoising step reduced residual noise to improve the accuracy and clarity of the data.
In the ground removal step, the ground points were excluded from the dataset so that only the points representing the rice plants remained. For the extraction of plant data, points were selected from the region of interest of the plants only and automatically segmented using Python code. The selected points were then used to prepare a 3D point cloud density map, visually representing the density and distribution of the points describing the rice plants.
Using Python code, rice plant metrics such as height and canopy volume were estimated. The data was preprocessed to meet the requirements of the intended applications. Plant height estimation was performed by isolating the vertical (z) axis and identifying the minimum (H_min) and maximum (H_max) z-values, representing the ground level and the highest point of the canopy, respectively. The plant height (H_plant) was then calculated by subtracting the minimum z-value (H_min) from the maximum z-value (H_max), according to Equation (1) [49,50], which also assists in quantifying the canopy elevation. In comparison to existing LiDAR data preprocessing workflows, our pipeline integrates a series of robust preprocessing steps. This customized approach ensures the precise extraction of key plant geometric features while addressing challenges such as canopy occlusion and water interference in rice fields. Unlike other approaches, it maintains high data accuracy across three growth stages and performs well in dense rice canopies, significantly improving the accuracy of plant height and canopy volume measurements.
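Equation (1), H_plant = H_max − H_min, reduces to a one-line computation on the vertical axis; a minimal sketch (the function name is ours, not the study's):

```python
import numpy as np

def plant_height(points):
    """H_plant = H_max - H_min along the vertical (z) axis, per Equation (1)."""
    z = points[:, 2]
    return z.max() - z.min()

# Synthetic canopy: lowest return at ground level, highest at 0.85 m
pts = np.array([[0.1, 0.1, 0.00],
                [0.2, 0.3, 0.40],
                [0.3, 0.2, 0.85]])
height = plant_height(pts)
```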
Canopy volume estimation involved extracting a refined set of inlier points to construct a surface or mesh model. A convex hull was generated to enclose the canopy points and approximate the outer shape, smoothing the initially generated hull to produce a smooth surface and smoothed convex hull. Then, the enclosed volume of the smoothed convex hull was calculated. Steps integrated in
Figure 7 illustrate that the raw LiDAR data were efficiently transformed into meaningful agronomic metrics such as plant height and canopy volume. These metrics are critical for monitoring crop growth in rice fields.
Figure 8a–e demonstrate how LiDAR measurement point cloud data were preprocessed to monitor rice plant growth by estimating both canopy height and volume. As illustrated in
Figure 8, the first step (
Figure 8a) involves identifying and extracting a region of interest (ROI) from the raw LiDAR point cloud, specifically targeting six rows of rice plants (R1–R6), as depicted in
Figure 8b–e. Ground points were removed to isolate the canopy, and any extraneous or noisy points outside the ROI were removed.
Figure 9 exhibits that the refined point cloud underwent additional processing to ensure accuracy and computational efficiency. Downsampling reduced data density while preserving structural detail, and filtering algorithms were then applied to remove outliers and noise. The resulting refined point cloud was used to generate a canopy mesh, from which a convex hull was created to approximate the outline of the plant canopy. A Laplacian smoothing step helped to eliminate surface irregularities and produced a more continuous and realistic canopy representation. Finally, the volume enclosed by the convex hull was calculated to estimate canopy volume, while plant height was derived by comparing minimum and maximum vertical coordinates (z-values). These plant feature metrics, canopy height and volume, offer quantitative insights into rice plant geometry over time.
2.4.2. Plant Spacing and Row Distance
Raw LiDAR data (pcap format) were converted to PCD files and parsed into three-dimensional (x, y, z) coordinates. The data underwent multiple processing steps such as filtering and outlier removal to eliminate noise, downsampling to reduce point density, and segmentation to isolate the plant rows of interest. Clustering algorithms grouped points into distinct rows or plant clusters, and the centroids of the clusters were calculated to enable center-to-center distance measurements between consecutive points.
Figure 10 provides a visual context comprising the LiDAR sensor setup in the field, a color-coded point cloud, and the final processed point cloud with clearly delineated rows for precise calculation of row spacing and plant distance. The integrated approach yielded quantitative metrics crucial for agronomic decision-making, including characterization of planting uniformity and canopy structure in the real field.
Python was used as the primary framework for data handling and analysis, enabling the conversion of raw LiDAR data (pcap format data) into a workable point cloud format (PCD). Subsequent filtering and outlier removal processes, as shown in
Figure 10, ensured that only high-quality points remained. Python libraries such as Open3D (version 0.18.0) and NumPy (version 1.24.3) were used for processing. The point cloud was then segmented and clustered into individual plant rows, enabling centroid calculations for each cluster and, in turn, accurate row-to-row distance measurements. Plant spacing within each row was determined by identifying and measuring the distance between consecutive plant clusters. This Python-oriented workflow provided a robust and flexible platform for automating data preprocessing, segmentation, and distance calculation, ultimately offering reliable metrics for evaluating planting uniformity (plant spacing and row distance) and canopy structure. The raw LiDAR point cloud was initially loaded and subjected to outlier removal (via radius-based filtering) and voxel downsampling, minimizing noise and reducing data density while preserving geometric plant features. Ground segmentation using the random sample consensus (RANSAC) plane fitting algorithm then isolated the canopy or plant rows, and any remaining irrelevant data points were excluded. Depending on whether row distance (Y-axis) or plant (hill-to-hill) spacing (X-axis) was being measured in the ROI, the algorithm sorted the remaining points along the relevant axis. Using prior knowledge of the experimental layout, such as the total width (or length) occupied by a fixed number of rows, including the number of gaps between them, the average plant spacing was calculated as the total estimated dimension divided by the number of gaps plus one. This approach might be refined with clustering algorithms such as the density-based spatial clustering of applications with noise (DBSCAN) algorithm to distinguish individual plant clusters.
The result was a robust, semi-automated pipeline in Python that integrated data denoising, ground plane removal, coordinate-based filtering, and geometric calculations of row distance and plant spacing.
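The clustering and spacing steps above can be sketched with a simple gap-based split along the sorted axis as a stand-in for DBSCAN; the function names, the 10 cm gap threshold, and the row layout below are illustrative assumptions, not the study's parameters:

```python
import numpy as np

def row_centroids(points, axis=1, gap=0.10):
    """Cluster points along one axis by splitting wherever consecutive
    sorted coordinates jump by more than `gap`, then return the centroid
    of each cluster (a lightweight stand-in for DBSCAN)."""
    order = np.argsort(points[:, axis])
    coords = points[order, axis]
    breaks = np.where(np.diff(coords) > gap)[0] + 1   # cluster boundaries
    clusters = np.split(points[order], breaks)
    return np.array([c.mean(axis=0) for c in clusters])

def mean_spacing(centroids, axis=1):
    """Average center-to-center distance between consecutive clusters."""
    c = np.sort(centroids[:, axis])
    return np.diff(c).mean()

# Three synthetic rows centered at y = 0.0, 0.3, 0.6 m (two points each)
pts = np.array([[0.1, y0 + dy, 0.2]
                for y0 in (0.0, 0.3, 0.6) for dy in (0.0, 0.02)])
spacing = mean_spacing(row_centroids(pts))   # expected ~0.3 m
```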
2.5. Statistical and Analytical Procedures
The geometric features, including plant height, canopy volume, row distance, and plant spacing, were compared between manual measurement and LiDAR measurement data through linear regression analysis. For better demonstration and understanding of the estimation of geometric features of rice plants, the mean error was calculated. The mean error in rice plant feature estimation better reflected the accuracy of the estimation method and accounted for the variability between individual plants of different heights, canopy volumes, plant spacing, and row distance. To assess the accuracy of the developed data processing algorithm, the coefficient of determination (r²), root mean squared error (RMSE), mean absolute error (MAE), bias (mean difference), confidence interval (CI) with 95% confidence for the mean difference, standard deviation of the differences (s_d), t-test statistic, and accuracy (%) were calculated using Equations (2)–(9), respectively, as follows [5,51]:
where
and
are the measured and sensor-estimated values, respectively, and
is the average of the sensor estimated values.
and
are the sensor-estimated values for
observations. n is the number of observations. Bias indicates whether the sensor estimated values consistently overestimated (>0) or underestimated (<0) the manual measurement values.
is the mean difference (bias).
indicates standard deviation of the differences
, and
is the critical value of t-distribution with
degrees of freedom at the desired confidence level (95%).
Equation (9) determined how closely the LiDAR measurement results aligned with the manual (ground truth) measurements, where higher accuracy (%) indicated better agreement and lower accuracy (%) indicated deviations in the measurement of vegetative growth monitoring parameters due to factors such as vegetation density and occlusion.
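As an illustration, these agreement statistics can be computed with NumPy alone. The sample height values, the fixed critical t value, and the mean-relative-agreement form of accuracy (%) below are assumptions made for this sketch, not the study's data or its exact equation forms.

```python
import numpy as np

def agreement_metrics(measured, estimated, t_crit=2.262):
    """Agreement statistics between manual (measured) and LiDAR (estimated)
    values. t_crit is the two-sided 95% critical t value for the relevant
    degrees of freedom (2.262 assumes n = 10, i.e., df = 9)."""
    y = np.asarray(measured, dtype=float)
    yhat = np.asarray(estimated, dtype=float)
    n = y.size
    d = yhat - y                               # differences (sensor - manual)
    r2 = np.corrcoef(y, yhat)[0, 1] ** 2       # squared Pearson correlation
    rmse = np.sqrt(np.mean((y - yhat) ** 2))
    mae = np.mean(np.abs(y - yhat))
    bias = d.mean()                            # >0 over-, <0 underestimation
    sd = d.std(ddof=1)                         # SD of the differences
    half = t_crit * sd / np.sqrt(n)
    t_stat = bias / (sd / np.sqrt(n))
    accuracy = 100 * np.mean(1 - np.abs(y - yhat) / y)  # assumed form
    return dict(r2=r2, rmse=rmse, mae=mae, bias=bias, sd=sd,
                ci=(bias - half, bias + half), t=t_stat, accuracy=accuracy)

# Hypothetical example: plant height (m), manual vs. LiDAR
manual = np.array([0.52, 0.61, 0.70, 0.83, 0.95, 1.02, 1.10, 1.15, 1.18, 1.20])
lidar = manual + np.array([-0.02, 0.01, -0.01, 0.00, -0.02,
                           0.01, -0.01, -0.02, 0.00, -0.01])
m = agreement_metrics(manual, lidar)
print({k: np.round(v, 3) for k, v in m.items() if k != "ci"})
```

A negative bias here would indicate systematic LiDAR underestimation, mirroring the stage-dependent underestimation of canopy volume and spacing reported in this study.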
4. Discussion
This study introduces a customized point cloud processing algorithm integrating frame screening, outlier removal, denoising, voxelization, and geometric feature extraction designed specifically for accurate and efficient rice canopy characterization across growth stages, setting it apart from standard LiDAR processing approaches. Due to the complex plant geometry of rice, accurately measuring vegetative growth parameters has long been challenging, often requiring specialized instruments like those for projected leaf area. LiDAR effectively captures vertical structural features, enabling detailed reconstruction of plant geometry. In this study, a rice-specific geometric feature extraction algorithm was developed that efficiently acquired high-resolution point cloud data, including features that are difficult to measure with traditional methods. Plant geometric features such as plant height, canopy volume, row distance, and plant spacing were extracted across three different growth stages, demonstrating strong correlation and high reliability [
52]. The performance of LiDAR sensing for precise plant height measurement is also proved by numerous studies; for example, a commercial LiDAR was used for plant height estimation and showed that the number of laser pulses reaching the ground surface decreases as rice growth progressed. Their study also mentioned that since paddy fields are typically filled with water, laser pulses often struggle to reach the ground surface, as certain laser wavelengths tend to be fully or partially absorbed by water with more than 4 cm of clear water depth [
53]. Furthermore, it also demonstrated that due to the vegetation coverage being 20%, more than 50% of laser pulses cannot reach the ground surface and are absorbed by the water. This is one of the reasons behind the error of plant height estimation in early growth stages of rice. Since, in this critical stage, most of the time there was standing water, as it is an early vegetative growth stage, LiDAR measurement may struggle with this limitation. In this study, there was also standing water with around 4–5 cm water depth during the data acquisition, which might have some effects on height measurement. Rice plant height was estimated using LiDAR measurement and reported estimation errors of 14, nearly 10, and 5 cm, respectively [
54,
55,
56], whereas an error of 2–3 cm was obtained for LiDAR-based plant height measurement in this study.
Figure 18 effectively illustrates the dynamics of rice plant growth and highlights the differences between the growth stages.
Figure 18a,b represent the growth curves of rice plant height and canopy volume, respectively, across the early, middle, and late growth stages, comparing LiDAR measurements with manual measurements. Both parameters showed a consistent upward trend from the early to the middle stage, followed by a slight decline at the late stage. The LiDAR and manual measurements exhibited similar patterns, indicating strong agreement. However, manual measurements tended to slightly overestimate plant height in the late stage, while LiDAR consistently reported higher canopy volume. These results demonstrated that LiDAR sensing is a reliable, nondestructive alternative for monitoring rice growth dynamics with comparable accuracy to manual methods.
However, canopy volume estimation needs further refinement, as LiDAR tends to slightly underestimate the actual volume. Future improvements, such as multi-angle LiDAR scanning or sensor adjustments, could enhance accuracy, particularly for canopy structure estimations. Jing et al. [
44] investigated the use of LiDAR for estimating rice canopy height, emphasizing the necessity of accurate ground elevation data for precise measurements; in that study, the LiDAR was fixed above the canopy at a consistent nominal distance from the ground. This supports the LiDAR setup and data acquisition structure followed in this study in the rice field. The results showed that during the early growth stage, LiDAR measurements closely matched the manual measurements, with accuracy ranging from 95% to 98% across the vegetative growth monitoring parameters of rice (plant height, canopy volume, row distance, and plant spacing). In the middle growth stage, errors increased in canopy volume and row distance measurement due to greater plant density and occlusion effects. By the late growth stage, accuracy declined to 85.85% for canopy volume, 94.34% for row distance, and 88.15% for plant spacing, as the denser canopy limited the ability of LiDAR to capture precise measurements. This finding is also supported by the findings of the study by Jing et al. [
44]; for example, as rice plant density increased in the later growth stages, laser beam penetration was significantly limited, causing most LiDAR points to originate from the canopy top rather than the ground. That study also reported that height accuracy was affected even in a small area (12 × 6 m), with errors ranging from 0.24% to 12.98%.
Sensitivity analysis in this study confirmed that LiDAR underestimation increased gradually with growth stage advancement, particularly for canopy volume and plant spacing. The bias analysis further revealed that measurement discrepancies became more pronounced in the middle and late growth stages, indicating a decline in the effectiveness of LiDAR, particularly in measuring plant spacing and row distance, as plant density increases. In this study, variation was found between the manual and LiDAR measurement results, reflected in the underestimation or overestimation of individual parameters. This is also supported by a study that indicated that LiDAR is more accurate than manual inspection [
57].
LiDAR has growing practical significance in precision agriculture. As highlighted in recent reviews, LiDAR supports a wide range of agricultural applications from crop growth estimation and disease detection to weed control and plant health evaluation. In the context of rice production, high-resolution LiDAR data can aid plant breeding programs by enabling the selection of desirable traits such as canopy architecture and plant spacing, which influence light interception and yield [
58]. This study may also contribute to serving these purposes for rice cultivation. For real-time growth monitoring, LiDAR allows non-destructive, repeatable measurements that help track vegetative development and detect stress conditions, which is another practical application of LiDAR aligned with this study. This study’s results can play a vital role in providing an inexpensive way of enhancing site-specific monitoring of rice, which is supported by the study’s findings of crop monitoring in agriculture [
59,
60,
61].
Moreover, LiDAR measurements are valuable for site-specific, judicious application of agricultural inputs such as pesticides and fertilizers, which will assist in improving efficiency and reducing environmental impact. As a practical application of this study, in fertilizer and pesticide application planning, LiDAR measurement results can help identify areas of uneven canopy growth or plant density, allowing for targeted input application that may enhance resource efficiency and reduce environmental impact [
62]. Based on these measurements, LiDAR may be integrated into automated crop modeling systems and may enhance predictions of crop growth and yield under varying field conditions. This would make it not only a critical tool for phenotyping but also for broader agronomic decision-making, as supported by its expanding use in precision agriculture platforms, including autonomous field systems for rice cultivation. This technique may also be used for growth monitoring of other similar crops (e.g., wheat, maize, pulse and oilseed crops), which may help in pruning/thinning, spraying, and other intercultural operations [
63].
However, the use of LiDAR in agriculture, particularly in rice fields, presents several challenges. Despite being a powerful sensor for non-contact growth monitoring, LiDAR-estimated feature characterization faced challenges, particularly in the late growth stages of rice, due to canopy occlusion, measurement biases, and sensor positioning limitations, when dense rice canopies may block LiDAR scans. Among these, canopy occlusion and leaf overlapping were major issues, obstructing LiDAR beams and leading to the underestimation of plant spacing and row distance, especially in the middle to late growth stages [
44]. Similarly, plant height measurements could be underestimated due to leaf bending, while canopy volume fluctuated between overestimation (due to multiple reflections from overlapping leaves) and underestimation (caused by occluded lower canopy layers). This occlusion results in incomplete point cloud data, which can reduce measurement accuracy. Environmental factors such as wind-induced plant movement and ground reflection effects introduced additional variability, particularly in the early growth stages, when the canopy was sparse. Furthermore, preprocessing of LiDAR-scanned point clouds, such as filtering noise, segmenting plant features, and correcting measurement errors, requires computational power and advanced algorithms. In this study, some recorded measurements were used for additional analysis based on visual estimation, and observer bias during the visual estimation process was not formally recorded. Owing to the subjective nature of visual estimation, some bias may have been introduced that could not be quantified or measured due to the lack of recorded observations. Consequently, a specific error estimate could not be provided, which was one of the limitations of the study. Therefore, it is recommended that future work adopt a more structured approach, including recording observations and assessing potential bias, to better understand and account for the estimation errors.
Future Research Directions and Limitations
While LiDAR is an effective tool for monitoring rice plant growth, there are still some limitations, particularly related to canopy occlusion. As rice plants grow, their dense canopies can obstruct LiDAR scans, leading to incomplete data in the later growth stages. Additional challenges, including water reflection and variable plant architecture, reduce LiDAR measurement accuracy, and these were also encountered in this study, especially in the middle to late growth stages. To overcome these challenges, future research could explore the integration of complementary sensors, such as multispectral or hyperspectral cameras, which might provide additional data to fill gaps caused by canopy occlusion. Future research should also explore integrating LiDAR with other sensing techniques (e.g., RGB, ultrasonic, depth, and multispectral cameras), using multi-angle scanning setups, and improving point cloud filtering and occlusion correction algorithms.
Additionally, refining the ground-point removal algorithms and canopy height extraction methods could further improve the precision of LiDAR measurements, particularly for dense rice canopies. Future studies should also explore automatic algorithms for volume estimation and investigate ways to reduce the computational load while maintaining high data resolution.
Therefore, adapting LiDAR systems to various rice cultivation practices and working environments may further enhance the application of LiDAR in smart farming systems, particularly in the cultivation of rice.
5. Conclusions
This study evaluated the performance of LiDAR in estimating rice plant geometric features for vegetative growth monitoring across three different growth stages, namely the early, middle, and late growth stages. The LiDAR-estimated results of plant height, canopy volume, row distance, and plant spacing measurement were compared with manual measurement results using statistical metrics such as RMSE, r2, MAE, bias, t-statistic, p-value, and 95% CI. The results demonstrated that LiDAR estimated the plant height accurately across the three growth stages, with a strong correlation (r2 > 0.95) and a minimal measurement bias ranging from −0.02 m to 0.003 m. However, canopy volume, row distance, and plant spacing estimations exhibited deviation depending on the growth stage. The canopy volume showed an underestimation of up to −0.11 m3 in the early growth stage and −0.112 m3 in the middle growth stage and an overestimation of up to 0.25 m3 in the late growth stage. The row distance estimation exhibited an underestimation of up to −0.025 m in the early growth stage, −0.030 m in the middle growth stage, and −0.05 m in the late stage. Plant spacing estimation showed an underestimation of up to −0.018 m in the early growth stage, −0.03 m in the middle growth stage, and −0.040 m in the late stage.
In the early growth stage, LiDAR-estimated results closely matched the manual measurements, with accuracy of around 95% to 98% for all parameters. The middle growth stage showed slightly increased errors in canopy volume and row distance, primarily due to increased plant density and occlusion effects. In the late growth stage, accuracy declined for canopy volume (85.85%), row distance (94.34%), and plant spacing (88.15%) as dense canopy structures interfered with the ability of LiDAR to capture precise measurements. Sensitivity analysis confirmed that LiDAR underestimation increased gradually with growth stage advancement, particularly for canopy volume and plant spacing. Bias analysis further revealed that measurement discrepancies became more pronounced in the middle and late growth stages, indicating a decline in the effectiveness of LiDAR, particularly in measuring plant spacing and row distance, as plant density increases. The study confirms that LiDAR is a reliable tool for monitoring plant height and canopy volume in the early and middle growth stages. However, as canopy complexity increases, modifications in scanning strategies, such as multi-angle scanning or enhanced data processing techniques, may improve the accuracy of LiDAR-estimated measurements.