Article

Improving the Quality of LiDAR Point Cloud Data in Greenhouse Environments

1 Department of Food, Agricultural and Biological Engineering, The Ohio State University, Columbus, OH 43210, USA
2 Department of Food, Agricultural and Biological Engineering, The Ohio State University, Wooster, OH 44691, USA
3 USDA-ARS Application Technology Research Unit, Wooster, OH 44691, USA
* Author to whom correspondence should be addressed.
Agronomy 2025, 15(9), 2200; https://doi.org/10.3390/agronomy15092200
Submission received: 11 August 2025 / Revised: 8 September 2025 / Accepted: 12 September 2025 / Published: 16 September 2025

Abstract

Automated crop monitoring in controlled environments is imperative for enhancing crop productivity. The availability of small unmanned aerial systems (sUAS) and cost-effective LiDAR sensors presents an opportunity to conveniently gather high-quality data for crop monitoring. LiDAR-collected point cloud data, however, often suffer from occlusions and low point density, which can be addressed by acquiring additional data from multiple flight paths. This study evaluated the performance of an Iterative Closest Point (ICP)-based algorithm for registering sUAS-based LiDAR point clouds collected in a greenhouse environment. To address objects that may cause ICP or local feature-based registration to mismatch correspondences, this study developed a robust registration pipeline. First, the geometric centroid of the ground floor boundary was leveraged to improve the initial alignment; piecewise ICP was then applied for fine registration. Registration performance was evaluated through visualization, root mean square error (RMSE), volume estimation of reference objects, and the distribution of point cloud density. The best RMSE dropped from 20.4 cm to 2.4 cm, point cloud density improved after registration, and the volume-estimation error for reference objects dropped from 72% (single view) to 6% (post-registration). This study presents a promising approach to point cloud registration that outperforms conventional ICP in greenhouse layouts while eliminating the need for artificial reference objects.

1. Introduction

The escalating demand for food, fiber, and fuel due to population growth, coupled with challenges from climate change and diminishing arable land and water resources, presents significant challenges to crop production and protection. Overcoming these challenges requires advanced, timely, reliable, and cost-effective crop monitoring technologies that enable efficient and sustainable high-volume food production [1]. Controlled Environment Agriculture (CEA) is key for efficiently producing high-quality and high-quantity crops more sustainably compared to traditional open-field farming systems [2]. Nevertheless, efficient monitoring of plant growth, nutrients, water, and other stress factors in CEA requires substantial labor inputs, making automation crucial for precise and economically viable plant management [3].
While plant monitoring techniques in CEA environments have evolved over the past 30 years, traditional methods often rely on a single sensor set placed in a “representative” area, lacking the ability to capture high spatial variability within a greenhouse. Despite the advancement of wireless sensor networks, their widespread application is hindered by the high costs associated with their acquisition and maintenance [4]. Therefore, manual or semi-automatic plant monitoring approaches are still prevalent.
The emergence of small unmanned aerial systems (sUAS) presents a cost-effective opportunity for plant health monitoring. Their benefits include small size, affordability, portability, and the potential of providing high spatial resolution information about plant canopies [5]. Researchers have explored various potential applications of sUAS-collected data [6], such as barley biomass estimation [7], soybean yield prediction [8], and vineyard management [9]. Compared to unmanned ground vehicles (UGVs), sUAS are more suitable for greenhouse environments due to their ability to navigate above crops without requiring floor space, which is often maximized for planting. Mechanical arms or irrigation booms are alternative solutions but require significant investment for their permanent installation. sUAS offer greater flexibility as they do not require installation, can be used in multiple greenhouse compartments, and operate autonomously, making them a practical and cost-effective choice.
Spatial, structural, and spectral information is critical in assessing crop growth and health. Light Detection and Ranging (LiDAR) instruments have been used for assessing canopy density and modeling crop structures [10]. However, LiDAR point cloud data collected from a single viewpoint often encounters occlusions and uneven point density, necessitating data acquisition from multiple viewpoints [11,12]. Point cloud registration is the process of aligning data from multiple views into a common coordinate system.
Iterative Closest Point (ICP) [13] is one of the most well-known algorithms to register two point clouds and has been widely used in numerous real-world applications. It iteratively estimates the rigid-body transformation (rotation and translation) by optimizing the Euclidean distance between nearest-neighbor correspondences. While ICP is fundamental and effective, it can converge to local minima when the initial alignment is poor [14]. Additionally, its performance relies on forming correct correspondences [15]. Its performance in greenhouse environments with repetitive objects/structures or poor initial alignment remains underexplored.
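Conceptually, each ICP iteration matches every source point to its nearest target point, solves the rigid fit in closed form, applies it, and repeats. The following minimal sketch (NumPy/SciPy) illustrates that loop; it is not the implementation used in this study, and parameters such as max_iter and tol are assumed values.

```python
# Minimal point-to-point ICP sketch: nearest-neighbor correspondences,
# closed-form (SVD/Kabsch) rigid fit, iterate until the RMSE stops improving.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, max_iter=50, tol=1e-6):
    """Align source (N,3) to target (M,3); returns a 4x4 rigid transform."""
    src = source.copy()
    T_total = np.eye(4)
    tree = cKDTree(target)                  # nearest-neighbor search structure
    prev_err = np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(src)         # correspondences by distance only
        matched = target[idx]
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)         # closed-form rotation estimate
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:            # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        T_step = np.eye(4)
        T_step[:3, :3], T_step[:3, 3] = R, t
        T_total = T_step @ T_total          # accumulate the rigid transform
        err = np.sqrt((dist ** 2).mean())   # RMSE of current correspondences
        if abs(prev_err - err) < tol:       # converged (possibly a local minimum)
            break
        prev_err = err
    return T_total
```

Because correspondences are chosen purely by distance, a poor starting pose or repetitive geometry can lock the loop onto wrong matches, which is the failure mode discussed below.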
Beyond local refinement, globally optimal methods such as Go-ICP explore the entire 3D motion space with a branch-and-bound scheme. Go-ICP performs well even with poor initial alignment but can be computationally heavy [14]. In contrast, Fast Global Registration (FGR) optimizes a global robust objective over candidate correspondences derived from Fast Point Feature Histograms (FPFH) [15,16]; while it does not need a good initial pose, it inherits descriptor ambiguity in repetitive scenes. In greenhouses, repetitive objects make feature correspondences less distinctive.
Feature-based registration methods are another way to avoid trapping into local minima. Generally, a feature-based approach consists of two steps: global alignment (coarse registration) using key local feature points, followed by a refined alignment. Compared with conventional ICP, which defines correspondences purely by nearest distance, feature-based methods select correspondences using feature similarity and distance. For example, geometric features such as curvature and normal vectors have been incorporated into ICP-based methods to improve performance under poor initial alignment [17]. Reference objects like calibration balls have been used to enhance registration for forestry crops [18]. Additionally, conical surface fitting combined with ICP has been proposed for maize plant registration [19], demonstrating that customized features can be crucial for accurate registration.
Key features can be hand-crafted or learned from data. Deep learning-based approaches, such as Deep Closest Point (DCP) [20], PointNetLK [21], Geometric Transformer (GeoTransformer) [22], and Prior-Embedded Explicit Attention Learning (PEAL) [23], learn local/global features to establish robust correspondences. For example, GeoTransformer learns geometric features from a dataset and then estimates the transformation in closed form using weighted singular value decomposition. Building on GeoTransformer, PEAL incorporates priors to improve registration under low overlap. However, in environments with highly repetitive geometry, such methods can still struggle to find correct correspondences. Moreover, deep learning-based approaches typically require extensive labeled training data and are prone to overfitting, making them prohibitively expensive for most small-scale greenhouse operations.
Among non-ICP methods, probabilistic approaches such as the Normal Distributions Transform (NDT) model point clouds as probability distributions and align them by minimizing the difference between corresponding distributions [24]. While robust to noise, these methods require careful parameter tuning and substantial computational resources, which may not be practical for real-time applications in greenhouse crop monitoring.
A key distinction in these studies is that they primarily focus on registering individual objects, whereas our study focuses on a larger, greenhouse scale. In a controlled single-plant setting, the distinctiveness of a canopy structure provides sufficient information for registration, but in greenhouse environments where plants are highly repetitive, such features fail to provide reliable correspondence pairs. This distinction underscores the necessity of identifying a global feature descriptor that remains effective across the entire greenhouse environment.
To address these challenges, we proposed a novel and straightforward coarse registration method specifically designed for greenhouse environments. Unlike local feature descriptors, we leveraged the mass centroid of the point cloud as a global feature descriptor for coarse registration. Preliminary tests demonstrated that direct use of ICP, NDT, and feature-based ICP methods (FPFH-ICP) resulted in failed registration in our environment. This failure highlights the need for a more generalized global descriptor rather than relying solely on local features.
The goal of this research is to enhance the quality of LiDAR point cloud data collected in greenhouse environments using sUAS. Specific objectives include the following: (1) improve the quality of point clouds through preprocessing, (2) design a pipeline of registering point clouds from multiple viewpoints in the greenhouse environment, and (3) evaluate the quality of post-registration point clouds in terms of estimated volume and point cloud density.

2. Materials and Methods

2.1. Experimental Design

2.1.1. Experimental Setup

The experiment was conducted in a commercial greenhouse at Cedar Lane Farms in West Salem, Ohio, in December 2021. The greenhouse, structured as a hoop house, measured 7.9 m × 46 m with a truss height of approximately 2.4 m. The raw point cloud data were collected using a LiDAR (Model UST-10LX, Hokuyo Automatic Co., Ltd., Osaka, Japan) mounted on a sUAS (Model DJI Matrice 100, SZ DJI Technology Co., Ltd., Shenzhen, China). The LiDAR had a 120° field of view and an angular resolution of 0.25°, producing 480 measurement points per scan every 25 milliseconds. Its operating range is 0.06 m to 10 m, with an accuracy of 40 mm, and its operating ambient conditions are −10 °C to +50 °C and below 85% relative humidity. The greenhouse environment was maintained within the LiDAR's operating range to ensure high-quality signals. To maintain homogeneity across all scans, the sUAS was manually flown at a constant speed of 0.6 m/s. The X-axis was designated as the sUAS travel direction and the Y-axis as the LiDAR scanning direction. Seven basketballs-on-pots (each 4837 cm³ in volume) and six traffic cones (each 5957 cm³) were arranged within the greenhouse, forming four distinct layout patterns.

2.1.2. Greenhouse Layout and Data Collection

This experiment was designed to evaluate how different greenhouse layouts may impact point cloud registration. Four distinct layout patterns (P1–P4) were created, each varying in structural complexity and presence of artificial objects (Figure 1):
  • Pattern 1 (P1): No plants were removed, and no artificial objects were added. This represents an undisturbed standard greenhouse layout, where registration relies primarily on the natural floor boundary and structural elements.
  • Pattern 2 (P2): Selected plants were removed from the growing area, creating a more complex boundary shape with additional notches and vertices. Since the proposed registration method utilizes the floor boundary as a key feature, this pattern was designed to evaluate whether increased geometric complexity improves registration accuracy by providing more distinct feature points.
  • Pattern 3 (P3): No plants were removed; artificial objects (traffic cones and basketballs) were added to the growing area instead.
  • Pattern 4 (P4): Both plant removal and artificial objects were introduced. This pattern combined the complex boundary shape from P2 and the artificial object placement from P3 to assess whether integrating these two strategies could further improve the registration.
Each pattern was scanned three times using different flight paths: right-side crop area (R), middle walkway (M), and left-side crop area (L). The overlap ratio was about 72% for consecutive scans (L+M or R+M) and about 37% for nonconsecutive scans (L+R). This setup resulted in 12 different point cloud combinations for registration testing. However, the P4-L dataset (the left-side crop area of Pattern 4) was excluded due to an unexpected deviation in the sUAS flight trajectory, leaving 10 valid dataset combinations for analysis.
By comparing registration accuracy across these four patterns, this study aims to determine which modifications provide the most reliable improvements for greenhouse point cloud registration.
Patterns P2 and P4 introduced additional notches and vertices in the ground boundary due to plant removal. These features were expected to improve registration performance by providing unique feature points in the greenhouse environment. In practical applications, local farmers may not want to introduce artificial objects but could achieve better registration by selectively removing plants—a low-cost and straightforward approach.
Patterns P3 and P4 included known-volume objects (traffic cones and basketballs). Since plants in the greenhouse were in an early growth stage and their canopy volume was difficult to measure manually, these objects provided a measurable reference for evaluating how registration improved volume estimation.

2.2. Point Cloud Registration

A pipeline for greenhouse crop point cloud registration was constructed, incorporating preprocessing, coarse registration, and fine registration stages (Figure 2). The Point Cloud Library (PCL, version 1.12.1), an open-source library available on GitHub (https://github.com/PointCloudLibrary/pcl (accessed on 1 December 2021)), was utilized for the pipeline construction. First, preprocessing steps, such as filtering outliers, correcting yaw errors, and selecting the Region of Interest (ROI), were performed to improve dataset quality. Subsequently, a subset of points was selected and extracted. The transformation matrix generated from coarse registration was used to transform the original source point cloud (S) to improve the initial alignment between the target cloud (T) and S. Finally, the ICP algorithm was applied for fine registration of T and the transformed S.

2.3. Preprocessing Data

A three-step preprocessing procedure was implemented to enhance data quality before registration: outlier removal, yaw error correction, and ROI selection. Outliers were identified and removed using a Statistical Outlier (SO) filter available in the PCL library, with parameters set based on mean point density and a standard deviation threshold of 3.0. The threshold is an empirical value under the defined sUAS flight attitude.
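As an illustration, a statistical outlier filter of this kind can be sketched as follows, mirroring the mean k-NN distance criterion of PCL's StatisticalOutlierRemoval; the neighborhood size k below is an assumed value, not the setting used in this study.

```python
# Statistical outlier (SO) filter sketch: discard points whose mean distance
# to their k nearest neighbors exceeds the global mean by 3 standard deviations.
import numpy as np
from scipy.spatial import cKDTree

def statistical_outlier_filter(points, k=8, std_mul=3.0):
    tree = cKDTree(points)
    # k + 1 because each point's nearest neighbor is itself (distance 0)
    dists, _ = tree.query(points, k=k + 1)
    mean_knn = dists[:, 1:].mean(axis=1)     # mean distance to k neighbors
    thresh = mean_knn.mean() + std_mul * mean_knn.std()
    return points[mean_knn < thresh]
```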
Since the sUAS was flown without GPS information, yaw errors occurred, causing misalignment in some datasets, such as P4-R, where the greenhouse walls were not parallel with the X-axis. To correct this, two manually selected anchor points—one at the entry and one at the exit of the experimental area—were used to estimate the yaw correction angle (θ). These points were positioned along the same walkway boundary, assuming that their connecting line should be parallel to the X-axis. A 3 × 3 rotation matrix (Equation (1)) was applied to align the dataset uniformly, ensuring a consistent initial condition in which the X-axis always represents the sUAS travel direction.
$$R_z(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad (1)$$
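A minimal sketch of this correction is shown below, assuming two hypothetical anchor coordinates p_entry and p_exit picked on the same walkway boundary; θ is signed so that the rotation brings the anchor line parallel to the X-axis.

```python
# Yaw correction sketch following Equation (1): estimate theta from two anchor
# points on the walkway boundary, then rotate the whole cloud about the Z-axis.
import numpy as np

def correct_yaw(points, p_entry, p_exit):
    dx = p_exit[0] - p_entry[0]
    dy = p_exit[1] - p_entry[1]
    theta = -np.arctan2(dy, dx)        # rotate the anchor line onto the X-axis
    c, s = np.cos(theta), np.sin(theta)
    Rz = np.array([[c,  -s,  0.0],
                   [s,   c,  0.0],
                   [0.0, 0.0, 1.0]])   # Equation (1)
    return points @ Rz.T
```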
The yaw error correction standardized alignment across datasets for better registration performance, although minor inconsistencies were expected due to the subjective manual point selection. Because the three point clouds were collected from different flight paths, each covered a distinct greenhouse section while maintaining some overlap. However, as the LiDAR off-nadir angle increases, point density decreases, making it necessary to extract a well-defined Region of Interest (ROI) for improved registration.
The ROI was defined as the overlapped section of the middle walkway and extracted using a Conditional Filter (CF) based on predefined X and Y boundaries. The X-axis limits were set at the sUAS entry and exit positions. The Y-axis limits were determined by (1) the Y coordinate of the middle walkway center point (Yc) and (2) the greenhouse walkway width, measured at 1.5 m. To include both the walkway and part of the crop-growing area, the Y boundaries were set at Yc ± 1.6 m for all datasets.
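A boolean-mask equivalent of this conditional filter is sketched below; x_entry, x_exit, and y_c are dataset-specific inputs, and the 1.6 m half-width follows the text.

```python
# ROI extraction sketch analogous to PCL's ConditionalRemoval: keep points
# within the sUAS entry/exit X limits and within Yc +/- 1.6 m of the walkway.
import numpy as np

def select_roi(points, x_entry, x_exit, y_c, half_width=1.6):
    x, y = points[:, 0], points[:, 1]
    mask = ((x >= min(x_entry, x_exit)) & (x <= max(x_entry, x_exit))
            & (y >= y_c - half_width) & (y <= y_c + half_width))
    return points[mask]
```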

2.4. Coarse Registration

A two-step coarse registration process was initiated after extracting the ground floor boundary. First, a mass centroid alignment was applied to optimize the initial alignment between the target and source key subsets in Euclidean space. This was followed by an Iterative Closest Point (ICP) algorithm [13] to refine the alignment. The greenhouse ground floor boundary points were selected as the key subset for generating a transformation matrix due to their distinct and stable properties. The boundary extraction process involved four steps: (a) clustering the point cloud using a Region Growing (RG) algorithm, which clusters points based on local surface similarity; (b) selecting the ground floor cluster based on the comparison of mean altitude values; (c) removing noise points through a point density threshold; and (d) extracting the ground floor boundary using the BoundaryExtraction function in PCL, which identifies boundary points by computing differences in surface normal angles between a given point and its neighbors.
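As an illustration of step (d), the sketch below applies an angle-gap criterion (the idea behind boundary estimation from local neighborhoods) to the near-planar ground cluster projected onto the XY plane; the neighborhood size and angle threshold are assumptions, not the study's settings.

```python
# Boundary detection sketch: a point is a boundary point when its neighbors
# leave a large angular gap around it (interior points are surrounded evenly).
import numpy as np
from scipy.spatial import cKDTree

def boundary_points_2d(ground_xy, k=12, angle_thresh=np.pi / 2):
    tree = cKDTree(ground_xy)
    _, idx = tree.query(ground_xy, k=k + 1)      # first neighbor is the point itself
    keep = np.zeros(len(ground_xy), dtype=bool)
    for i, nbrs in enumerate(idx):
        d = ground_xy[nbrs[1:]] - ground_xy[i]   # offsets to the k neighbors
        ang = np.sort(np.arctan2(d[:, 1], d[:, 0]))
        gaps = np.diff(np.concatenate([ang, [ang[0] + 2 * np.pi]]))
        keep[i] = gaps.max() > angle_thresh      # one-sided neighborhood
    return ground_xy[keep]
```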
The selection of the ground floor boundary as the key subset for registration was based on its unique geometric properties compared to other potential candidates, such as crops. In the greenhouse environment of this study, two primary objects could serve as candidates: points representing crops and ground floor boundary (including the boundary of the middle walkway, the floor between crop blocks, and the floor between individual crops). The ground floor boundary was chosen for several reasons. First, compared to crops, the ground floor boundary shape is more distinguishable and non-repetitive. Given that repetitive patterns introduce ambiguity in correspondence pair matching, crop-based registration would likely result in mismatches due to similar geometric characteristics (e.g., curvature, size, shape, and surface normal vector) among crops within the greenhouse. To further ensure the uniqueness of the ground floor boundary, different greenhouse layout patterns were designed to create distinct boundary shapes for each configuration. Second, the ground floor boundary is well-defined, simple, and has stable contours that remain temporally invariant. In contrast, crops may experience movement due to the down-draft wind from the sUAS, particularly those with softer stems. Additionally, the complex shape of crops presents greater challenges in feature estimation, making them less suitable as reference objects for coarse registration. Based on these considerations, the ground floor boundary points were selected as the key subset for segmentation from the entire point cloud to facilitate robust registration.
The ground floor boundary points were divided into six equal-length subsections along the sUAS flight path for piecewise registration. This was an attempt to mitigate the effects of accumulated local distortions along the long flight path on the registration performance. By dividing the point cloud into smaller sections, each section is aligned independently for better registration accuracy, thus improving the overall registration performance.
The number of subsections was determined by requiring that each segment contain at least one notch. A notch is a vertex of the floor boundary whose interior angle exceeds π [25], providing a distinctive anchor for coarse alignment and reducing drift along long, nearly straight spans. Under our greenhouse layout, six is the largest value that still satisfies this criterion.
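Given an ordered boundary polygon, the notch count can be computed by testing each vertex for a reflex turn (interior angle > π), as in the sketch below; vertex ordering along the boundary is assumed.

```python
# Notch (reflex vertex) counting sketch for a simple 2D boundary polygon [25].
import numpy as np

def count_notches(poly):
    """poly: (N, 2) array of polygon vertices in boundary order."""
    p_prev = np.roll(poly, 1, axis=0)
    p_next = np.roll(poly, -1, axis=0)
    # z-component of the cross product of the incoming and outgoing edges
    cross = ((poly[:, 0] - p_prev[:, 0]) * (p_next[:, 1] - poly[:, 1])
             - (poly[:, 1] - p_prev[:, 1]) * (p_next[:, 0] - poly[:, 0]))
    # signed area > 0 means the vertices are ordered counter-clockwise
    area2 = np.sum(p_prev[:, 0] * poly[:, 1] - poly[:, 0] * p_prev[:, 1])
    # a vertex is reflex when its turn opposes the polygon's orientation
    return int(np.sum(cross < 0)) if area2 > 0 else int(np.sum(cross > 0))
```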
The 3D centroid of each subsection was computed as the mass centroid, which was then aligned between the target and source clouds to determine a rigid-body transformation matrix (M1) for each subsection. The alignment was refined by shifting the centroid in both X and Y directions using adjustment factors (F1 and F2) within a range from −1 to 1, with an incremental step size of 0.05, ensuring controlled centroid alignment in the XY plane (Equations (2)–(6)). The [−1, 1] bounds reflect the known partial overlap between the two point clouds and accommodate the unknown shift direction.
Since point cloud distribution varies due to factors such as geometric distortions, occlusions, and varying resolutions during data acquisition, the centroid of the same object may differ when viewed from different perspectives. Therefore, directly aligning centroids without compensation can result in misregistration. Figure 3 conceptually illustrates this issue: it is not a real point cloud but a simplified schematic to demonstrate the rationale behind centroid adjustment. Specifically, Figure 3a,b represent the original source and target clouds captured from different viewpoints: (c) shows the misalignment due to the natural shift in centroids, and (d) demonstrates how applying a centroid shift helps achieve coarse alignment. This visual aid emphasizes why centroid-based adjustment is a necessary step for achieving reliable initial registration in greenhouse environments.
No Z-adjustment was applied due to the minimal altitude variance observed in floor boundary points (Equations (2)–(6)). The optimal adjustment factors were determined by evaluating the root mean square error (RMSE) (Equation (7)) between the two point clouds post-alignment. The combination yielding the lowest RMSE was selected. With an improved initial alignment achieved through mass centroid alignment, the ICP algorithm was then applied to each key subsection to compute the transformation matrix (M2) and refine the registration precision.
$$X_{move} = F_1 (X_{cs} - X_{ct}) \quad (2)$$
$$Y_{move} = F_2 (Y_{cs} - Y_{ct}) \quad (3)$$
$$Z_{move} = Z_{cs} - Z_{ct} \quad (4)$$
$$F_1, F_2 \in [-1, 1] \quad (5)$$
$$M_1 = \begin{bmatrix} 1 & 0 & 0 & X_{move} \\ 0 & 1 & 0 & Y_{move} \\ 0 & 0 & 1 & Z_{move} \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (6)$$
where $(X_{cs}, Y_{cs}, Z_{cs})$ and $(X_{ct}, Y_{ct}, Z_{ct})$ are the XYZ coordinates of the centroids of the source (cs) and target (ct) point clouds, respectively, and $F_1$ and $F_2$ are adjustment factors ranging from −1 to 1 with a step size of 0.05 for determining $X_{move}$ and $Y_{move}$.
$$RMSE = \sqrt{\frac{\sum_{i=1}^{N} d_i^2}{N}} \quad (7)$$
where $d_i^2$ is the squared distance between the $i$-th aligned source point and its closest neighbor point in the target cloud, and $N$ is the number of points in the source cloud.
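Putting Equations (2)–(7) together, the coarse centroid alignment reduces to a grid search over F1 and F2, as sketched below; because this sketch translates the source cloud, the offsets are written target-minus-source, which mirrors the magnitudes in Equations (2)–(4).

```python
# Centroid-shift grid search sketch: sweep F1, F2 over [-1, 1] in 0.05 steps
# and keep the translation with the lowest post-alignment RMSE (Equation (7)).
import numpy as np
from scipy.spatial import cKDTree

def coarse_centroid_align(source, target, step=0.05):
    c_s, c_t = source.mean(0), target.mean(0)
    tree = cKDTree(target)
    factors = np.arange(-1.0, 1.0 + step / 2, step)     # -1.00, -0.95, ..., 1.00
    best_rmse, best_t = np.inf, None
    for f1 in factors:
        for f2 in factors:
            t = np.array([f1 * (c_t[0] - c_s[0]),       # X shift, Equation (2)
                          f2 * (c_t[1] - c_s[1]),       # Y shift, Equation (3)
                          c_t[2] - c_s[2]])             # Z shift, no factor (Eq. (4))
            d, _ = tree.query(source + t)               # nearest-neighbor distances
            rmse = np.sqrt((d ** 2).mean())             # Equation (7)
            if rmse < best_rmse:
                best_rmse, best_t = rmse, t
    M1 = np.eye(4)
    M1[:3, 3] = best_t                                  # Equation (6) layout
    return M1, best_rmse
```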

2.5. Fine Registration

Following the completion of the coarse registration using the ground floor boundary points, the next step involved fine registration. In this stage, the ICP algorithm was reapplied, but this time to the entire point cloud. This dataset encompassed ground floor points, ground floor boundary points, and crop points, providing a more comprehensive input for the ICP algorithm compared to the coarse registration solely on the ground floor boundary points. The execution of the fine registration for each subsection followed a three-step sequence:
1. Transform the original source cloud using matrix M3 (M3 = M1·M2), where M1 and M2 were determined in the coarse registration.
2. Apply the ICP algorithm to the selected ROI within the transformed source cloud and the target cloud to derive a new matrix, M4. M4 captures the precise rigid transformation that optimizes the alignment between the source and target clouds, further refining the alignment before the final step.
3. Transform the original source cloud using matrix M5 (M5 = M3·M4).
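A minimal sketch of this matrix chaining is given below. Note that with column-vector homogeneous coordinates, applying M1 first and M2 second corresponds to the product M2·M1; we adopt that convention here and take it to be the intent of the paper's M3 = M1·M2 notation. The icp and select_roi helpers refer to the earlier sketches; the remaining names are hypothetical.

```python
# Sketch of the fine-registration chaining (our reading of M3 = M1*M2 and
# M5 = M3*M4 under the column-vector convention); not the authors' code.
import numpy as np

def apply_h(points, M):
    """Apply a 4x4 homogeneous transform to an (N, 3) point array."""
    return points @ M[:3, :3].T + M[:3, 3]

# M1 (centroid shift) and M2 (boundary ICP) come from the coarse stage:
# M3 = M2 @ M1                      # coarse transform, M1 applied first
# src_coarse = apply_h(source, M3)
# M4 = icp(select_roi(src_coarse, x_in, x_out, y_c),
#          select_roi(target, x_in, x_out, y_c))
# M5 = M4 @ M3                      # final fine-registration transform
# registered = apply_h(source, M5)
```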

2.6. Evaluation

The quality of registered point clouds was evaluated via quantitative and qualitative metrics. The quantitative evaluation included calculating the RMSE of the Euclidean distances between registered correspondence pairs at three stages: pre-registration, post-coarse registration, and post-fine registration. Point cloud quality was further assessed through the point density distribution along a horizontal scanning line, pre- and post-registration. Lastly, volumes of artificial objects (the traffic cones and the basketball-on-pot) in the right-side crop area, from single and multiple view perspectives, were estimated using a 3D convex hull computation method [26] and then compared with the ground truth (actual volume). Additionally, the complexity of the boundary shape of each pattern was quantified by the number of notches in a polygon.
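The convex hull volume computation itself is available off the shelf; the sketch below uses SciPy's Qhull wrapper, which is not necessarily the tool used in [26], and cone_points is a hypothetical segmented object cluster.

```python
# Convex hull volume sketch; inputs assumed in centimeters so the result is cm^3.
from scipy.spatial import ConvexHull

def convex_hull_volume(object_points):
    """object_points: (N, 3) array of one segmented object's points."""
    return ConvexHull(object_points).volume

# Percent difference from ground truth, as in Tables 2 and 3
# (traffic cone GT = 5957 cm^3):
# est = convex_hull_volume(cone_points)
# diff_pct = 100.0 * abs(est - 5957.0) / 5957.0
```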
In the qualitative evaluation, visual inspection of the registered point clouds was used as a primary measure. The inspections included several key aspects: consistency in space between crop blocks, point cloud deformations, gaps in crop blocks, alignment of artificial objects, correct superimposition of the source cloud on the target cloud, and rectification of the occlusion area post-registration. These manual inspections were crucial, as RMSE evaluations were only relevant when two clouds were correctly registered. Notably, a low RMSE does not always signify well-aligned point clouds, as it does not consider the quality of correspondence pairs.

3. Results and Discussion

3.1. Preprocessing

After preprocessing, the point cloud of the greenhouse wall was aligned parallel to the X-axis, which corresponds to the sUAS flight direction. This resolved the initial issue of the non-parallel appearance of the greenhouse wall, which was caused by dynamic heading errors of the sUAS. Additionally, as Figure 4 demonstrates, outliers at the edge of the crop area were identified and removed. The percentage of points recognized as outliers fell within a range of 1.1% to 1.7%, with an average of 1.6%.
Without the ROI selection, the point clouds primarily captured the plant area, with a small portion representing the middle walkway. Figure 5a shows that the point cloud of the greenhouse wall fragmented into six misaligned segments. Moreover, the point clouds representing the crops and reference objects (i.e., traffic cones and basketballs-on-pots) also exhibited poor alignment. Selecting an ROI to shift the focus predominantly to the middle walkway, on the other hand, led to a registered point cloud with a straightened wall and more accurate layouts of plants and reference objects (Figure 5b). The lack of strong unique features in the broad search area, which consisted predominantly of homogeneous plant canopies, likely led to poor correspondence pairs. By narrowing the search area through the ROI selection, the process could focus on areas with a higher percentage of high-quality correspondence pairs, improving overall correspondence quality. The center walkway boundary, being unique and clearly defined, serves as an excellent feature to enhance the quality of correspondence pairs, contributing to more accurate and reliable registration results. Consequently, the ROI selection proved crucial for successful registration: it ensured improved pairing of correspondence points between the two point clouds, a prerequisite for the ICP-based registration process.

3.2. Coarse Registration

Figure 6 illustrates the process and outcome of a coarse registration of P1-M and P1-R in both XY and XZ plane views. Utilizing the RG algorithm, ground floor clusters were effectively isolated, all crop clusters were excluded from the ROI, and boundary points were successfully extracted. For ease of comparison, the target boundary is depicted in white, while the source boundary is color-coded, with red, green, dark blue, yellow, light blue, and purple for subsections 1, 2, 3, 4, 5, and 6, respectively.
In Figure 6a, prior to coarse registration, the boundary subsections of both the target and source were misaligned in the XY and XZ planes. Figure 6b shows that after mass centroid alignment, each source subsection coarsely matched its corresponding target subsection, particularly in the XZ plane. Figure 6c demonstrates the result of applying ICP to the boundary points, further improving the coarse registration: compared with Figure 6b, the source boundary points were more precisely aligned with the target in both the XY and XZ planes. These improvements were quantified using the RMSE between the target and source clouds, as detailed in Table 1 and discussed later. Directly applying the ICP algorithm to boundary points without the coarse registration yielded inaccurate results in most cases, primarily due to suboptimal initial positions. Although one case, P2(M+R), achieved good registration without it, the coarse registration step provided a more reliable approach to accurate registration.

3.3. Fine Registration

3.3.1. Qualitative Performance

Figure 7 demonstrates the fine registration results, using P4(M+R) as an example. The point clouds are color-coded by height; yellowish colors indicate higher Z coordinates. In Figure 7a, a red box highlights the occlusion issue arising from a single right-viewpoint scan, which captures only a partial view of the greenhouse environment. This view has fewer points in the left-side crop area due to the large off-nadir angle when the sUAS flew on the right side. In contrast, Figure 7b shows that while the middle-viewpoint scan failed to capture the greenhouse wall on the right side, it collected more points in the left-side crop area and did not exhibit the occlusion seen in Figure 7a because of its different LiDAR viewpoint. Figure 7c shows that the post-registration point cloud captured a more comprehensive view, encompassing both the right-side and left-side crop areas.
To further emphasize the improvements in point cloud quality achieved through registration, Figure 8 compares the point clouds of the traffic cones, used as reference objects, with and without registration. In Figure 8a, only five of the six traffic cones were captured, with only the left-side information of cone #1 and some ground floor points highlighted. Figure 8b shows an increase in the number of captured traffic cones and ground floor points as the Y coordinate increases, bringing the number of cone clusters to the correct count of six. However, as indicated by the red circle, Figure 8b captured only the right-side information of cone #1. Figure 8c shows that, after registering the two single-viewpoint scans, both sides of cone #1 were captured, as highlighted. Nonetheless, cones #3, #4, #5, and #6 still exhibited one-sided information, as the LiDAR sensor could not collect data from the occluded sides of these objects.
These visual results demonstrated the enhanced quality of the registered point clouds collected within the greenhouse environment. Prior to registration, the point clouds contained only single-sided information. Through the registration, the occlusion areas were revealed, leading to the recovery of hidden information. More quantitative comparison and evaluations of the registered point clouds are shown in the next section.

3.3.2. Quantitative Performance

RMSE
Table 1 displays the RMSE values for each dataset combination, along with the outcomes of visual inspections. Higher RMSE values indicate poorer registration performance and greater distances between corresponding points of the two registered point clouds. The RMSE values before registration varied significantly due to randomness in the local coordinate frames between target and source point clouds. The average RMSEs were 27.6 cm, 15.5 cm, 16.9 cm, and 16.7 cm for P1, P2, P3, and P4, respectively. The coarse registration improved these RMSE values, enabling successful fine registration; Figure 9 demonstrates a registration failure without the proposed coarse registration. These reductions indicate a closer alignment between point clouds, but a low RMSE alone did not guarantee accurate point alignment, necessitating manual visual inspection. In preliminary tests, traditional ICP, FPFH-ICP, and NDT failed to register our datasets, and therefore their results are not included in Table 1.
Visual inspection revealed that not all registered datasets with lower RMSE passed the quality check (Table 1). The primary causes of failed registrations were insufficient overlap, low ground floor boundary complexity (few notches), and local rotational distortion.
For P1, the failures of (M+R) and (M+L) are attributed to low boundary complexity, quantified by the notch number. Notches provide distinctive anchors that yield reliable correspondences. P1 failed with 15 notches, while P2 succeeded with 26 notches; likewise, P4 succeeded with 19 notches. These comparisons indicate that a more complex floor boundary improves registration reliability. However, this study does not suggest a universal notch-count threshold for success; additional work is needed to determine thresholds for other layouts.
For P2, the failure of (L+R) is due to low overlap relative to (M+R) and (M+L). Successful registration depends on sufficient shared regions to form correct correspondences, and the overlap ratio bounds the number of reliable matches. In this dataset, (M+R) and (M+L) had ~70% overlap, whereas (L+R) had only ~37%. When the overlap dropped from ~70% to ~37%, registration failed.
For P3, failure is driven by local geometric distortion. The P3-M scan exhibited noticeable distortion: the basketball-on-pots row, expected to be parallel to the Y-axis, rotated, and crop blocks, expected to be rectangular, were skewed toward a diamond shape (Figure S1). This distortion degraded (M+R) and (M+L).
The number of subsections used in the coarse registration also influenced registration accuracy. Table 1 presents the RMSE results of the registered point clouds when using three sections versus six sections. Across all datasets, the RMSE values after the fine registration were consistently lower when using six subsections. The improvement demonstrates that a finer division of the point cloud into six sections provided better alignment by mitigating the impact of localized distortions and reducing the propagation of registration errors.
To improve registration under greenhouse conditions, it is recommended to maintain overlap near ~70% where feasible (using M as a bridge between R and L), increase boundary complexity when possible (e.g., selective plant removal near walkways) to raise notch number, and correct local distortions before registration. Future studies should also conduct sensitivity analyses to quantify the effects of overlap ratio, boundary complexity, and geometric distortion on registration performance. Implementing these strategies can reduce misalignment and improve data usability for downstream applications.
Point Density Distribution
Figure 10 illustrates the distribution of point density, measured as the number of points per 737 cm³, along the Y-axis (note: 737 cm³ corresponds to 45 in³, the single pot size used in this study). The graph confirms that point density from a single viewpoint, represented by the blue and yellow lines, decreased as the off-nadir angle of the LiDAR sensor increased. After registration, the point density in the overlapped area, represented by the red line, increased, while matching the blue line outside the overlap area.
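For reference, one way to compute such a profile is sketched below, assuming a cubic cell of 737 cm³ (side ≈ 9.0 cm) swept along the Y-axis around a scanning line at (x0, z0); the authors' exact cell geometry is not stated, so this is an approximation, and coordinates are assumed to be in centimeters.

```python
# Point density profile sketch: count points per 737 cm^3 cell along Y.
import numpy as np

def density_profile(points, x0, z0, y_min, y_max, cell_vol=737.0):
    side = cell_vol ** (1.0 / 3.0)                 # ~9.03 cm cube edge
    # keep a cube-wide strip around the horizontal scanning line (x0, z0)
    strip = points[(np.abs(points[:, 0] - x0) < side / 2)
                   & (np.abs(points[:, 2] - z0) < side / 2)]
    edges = np.arange(y_min, y_max + side, side)
    counts, _ = np.histogram(strip[:, 1], bins=edges)
    return edges[:-1] + side / 2, counts           # bin centers, points per cell
```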
Volume of Artificial Objects
Table 2 and Table 3 display estimated volumes and their differences from the actual volumes (Ground Truth, GT) of the artificial objects—basketballs-on-pots and traffic cones—positioned in Pattern 4 (P4). These volumes were compared among single-viewpoint scans (R and M) and a registered scan (M+R). The differences are errors relative to the ground truth, expressed as percentages (Difference with GT). As none of the point cloud combinations for P3 passed visual inspection, the volumes of artificial objects in P3 were not evaluated.
The “Distance (cm)” parameter in Table 2 and Table 3 was the horizontal distance between the object and the LiDAR sensor at the zero off-nadir angle point. A smaller distance value corresponds to a smaller off-nadir angle. This distance was crucial because the error percentage (“Difference with GT (%)”) was positively correlated with it. As the off-nadir angle increased, both the point density and the 2D LiDAR measurement accuracy decreased.
Table 2 shows that object #3 in scan R and object #1 in scan M, which had the smallest distances, exhibited the least error. Conversely, the volume of object #7 in scan M was not computable due to a sparse point distribution caused by a high off-nadir angle. Single-view scans (M and R) were prone to occlusion issues, affecting the accuracy of volume estimation. This limitation was substantially mitigated in the multiple-view scan (M+R), where the occlusion areas were reduced. For example, the difference with GT at position #1 was 72% for the R scan and dropped to 6% for the (M+R) scan. The average differences with GT before registration were 57% and 75% for the R and M scans, respectively; after registration, the average difference was 32%.
Table 3 shows similar trends in Difference with GT for the traffic cones as Table 2 does for the basketballs-on-pots. Overall, the error in volume measurement tended to increase with distance from the LiDAR sensor. In addition, Table 3 reveals that scan M missed additional objects (#5 and #6) because the taller traffic cones created larger occlusion areas than the basketballs-on-pots. Excluding objects #5 and #6, the average Difference with GT before registration was 38% and 47% for the R and M scans, respectively; it dropped to 19% after fine registration of the (M+R) combination. Therefore, fine registration of the point cloud datasets was necessary to improve the accuracy of measuring artificial object volumes with the sUAS-mounted LiDAR.
Although the overall error decreased, basketball-on-pot #7 and traffic cone #6 still exhibited large deviations. Both objects were located at the edge of the crop zone, adjacent to the greenhouse wall, so only their interior-facing sides were scanned by the LiDAR while the wall-facing sides were never observed. Capturing the complementary side would require placing the LiDAR outside the wall, which was not feasible in our setup. Consequently, registration cannot recover geometry that was never observed, and residual errors persist even after registration.

3.4. Limitations and Future Work

While the proposed registration method has been validated in our greenhouse environment, its applicability to other facilities remains to be determined, e.g., some greenhouses may lack a well-defined, visible floor boundary for feature extraction. In densely packed greenhouses where floor visibility is limited, or when crop rearrangement alters floor patterns over time, the registration performance may degrade. Future work will explore more general greenhouse objects such as greenhouse walls, benches, and pre-existing structures (e.g., irrigation pipes, hydroponic gutters/rails, fixed posts) as alternative features, especially where floor space is limited or occluded. Because these objects are common across commercial greenhouses, substituting or augmenting the floor boundary with such structures has the potential to make the registration pipeline more adaptable to diverse layouts. If neither the floor boundary nor other structural objects are available, a deep learning approach could be used to learn a global feature descriptor directly from the point cloud.
Overlap is another critical factor. This study did not test the proposed method under low-overlap conditions, and the recommended overlap in this study is approximately 70%. Future work should evaluate performance across a range of overlap ratios and investigate learned feature descriptors that work well in low-overlap scenarios in non-agricultural environments [21], while noting that their effectiveness in greenhouse environments remains unknown.
The current pipeline is semi-automated and requires manual input (e.g., selecting an anchor point for yaw correction and defining the ROI). Future work can integrate pattern-recognition tools to automatically detect structural anchors and extract the ROI, moving toward a fully automated workflow.

4. Conclusions

This method contributes to improving automation pipelines in greenhouses by offering a low-cost, scalable registration approach for multi-view LiDAR data. A data processing pipeline was developed to register point cloud data gathered in a greenhouse setting using a LiDAR sensor mounted on a sUAS. The pipeline consisted of three main steps: (1) point cloud preprocessing, (2) coarse registration, and (3) fine registration. This methodology enhanced data quality, which is critical for effective plant monitoring within greenhouse environments. The varying orientations and altitudes of the LiDAR sensor during flights influenced the rigid-body assumption of the ICP algorithm, making a piecewise registration approach more suitable. Successful registration was achieved by focusing on well-defined subsections, with the ground floor boundary serving as a robust feature due to its unique and stable geometric properties. The implemented registration method resulted in improved precision, with RMSE values decreasing from 7.3 to 2.7 cm, 20.4 to 2.4 cm, and 16.7 to 2.5 cm for three successfully registered datasets.
This study identified that an effective key subset for point cloud registration in a greenhouse environment should be rich in distinctive features, such as a higher notch number, to improve correspondence pair quality. Additionally, the degree of overlap between point clouds emerged as a critical factor influencing registration success. The six-section piecewise registration consistently outperformed the three-section approach, demonstrating that dividing the point cloud into smaller subsections mitigates localized distortions caused by sUAS flight instability and airflow variations, ultimately improving registration accuracy.
An ICP-based registration method was successfully implemented in a greenhouse environment characterized by repetitive structures and low feature distinctiveness. By leveraging the greenhouse ground floor as a stable reference, this approach improved initial alignment and enhanced fine registration accuracy without requiring artificial markers. Future work could involve adaptive registration strategies that leverage other stable features, such as posts, benches, or pre-existing structures, alongside pattern recognition techniques for their detection, making LiDAR-based monitoring more adaptable for crop monitoring in greenhouse environments. The registered point cloud can then be used to improve plant counting, crop uniformity assessment, and early yield forecasting based on volume and height surrogates.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/agronomy15092200/s1: Figure S1: Local geometric distortion in P3.

Author Contributions

Conceptualization, P.L. and S.K.; methodology, G.S.; validation, G.S.; formal analysis, G.S.; writing—original draft preparation, G.S.; writing—review and editing, P.L., G.S., S.K. and H.Z.; supervision, P.L. and S.K.; project administration, P.L.; funding acquisition, P.L. and S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by USDA-ARS #58-5082-4-045 and Food, Agricultural and Biological Engineering Department, The Ohio State University.

Data Availability Statement

Data will be made available upon request.

Acknowledgments

We would like to express our gratitude to Aditya Raj for his assistance with data collection and for providing the software for creating point clouds. His contributions were instrumental to the success of this research. We also express our sincere thanks to Jingyue Zhang for his expertise and assistance in programming, which significantly enhanced our research process. Their support and collaboration were pivotal in bringing this project to fruition.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Bauer, J.; Aschenbruck, N. Design and implementation of an agricultural monitoring system for smart farming. In Proceedings of the 2018 IoT Vertical and Topical Summit on Agriculture—Tuscany (IOT Tuscany), Tuscany, Italy, 8 May 2018; pp. 1–6. [Google Scholar]
  2. Ragaveena, S.; Shirly Edward, A.; Surendran, U. Smart controlled environment agriculture methods: A holistic review. Rev. Environ. Sci. Bio/Technol. 2021, 20, 887–913. [Google Scholar] [CrossRef]
  3. Oliveira, J.; Boaventura-Cunha, J.; Oliveira, P.M. Automation and control in greenhouses: State-of-the-art and future trends. In CONTROLO 2016: Proceedings of the 12th Portuguese Conference on Automatic Control; Springer International Publishing: Cham, Switzerland, 2017; pp. 597–606. [Google Scholar]
  4. Bai, X.; Wang, Z.; Sheng, L.; Wang, Z. Reliable Data Fusion of Hierarchical Wireless Sensor Networks With Asynchronous Measurement for Greenhouse Monitoring. IEEE Trans. Control Syst. Technol. 2019, 27, 1036–1046. [Google Scholar] [CrossRef]
  5. Rokhmana, C.A. The potential of UAV-based remote sensing for supporting precision agriculture in Indonesia. Procedia Environ. Sci. 2015, 24, 245–253. [Google Scholar] [CrossRef]
  6. Dey, B.; Ahmed, R. A Comprehensive Review of AI-Driven Plant Stress Monitoring and Embedded Sensor Technology: Agriculture 5.0. J. Ind. Inf. Integr. 2025, 47, 100931. [Google Scholar] [CrossRef]
  7. Bendig, J.; Bolten, A.; Bennertz, S.; Broscheit, J.; Eichfuss, S.; Bareth, G. Estimating biomass of barley using crop surface models (CSMs) derived from UAV-based RGB imaging. Remote Sens. 2014, 6, 10395–10412. [Google Scholar]
  8. Maimaitijiang, M.; Sagan, V.; Sidike, P.; Daloye, A.; Erkbol, H.; Fritschi, F. Crop monitoring using satellite/UAV data fusion and machine learning. Remote Sens. 2020, 12, 1357. [Google Scholar] [CrossRef]
  9. Campos, J.; Llop, J.; Gallart, M.; García-Ruiz, F.; Gras, A.; Salcedo, R.; Gil, E. Development of canopy vigour maps using UAV for site-specific management during vineyard spraying process. Precis. Agric. 2019, 20, 1136–1156. [Google Scholar] [CrossRef]
  10. Yan, T.; Zhu, H.; Sun, L.; Wang, X.; Ling, P. Detection of 3-D objects with a 2-D laser scanning sensor for greenhouse spray applications. Comput. Electron. Agric. 2018, 152, 363–374. [Google Scholar] [CrossRef]
  11. Rosell, J.R.; Llorens, J.; Sanz, R.; Arnó, J.; Ribes-Dasi, M.; Masip, J.; Escolà, A.; Camp, F.; Solanelles, F.; Gràcia, F.; et al. Obtaining the three-dimensional structure of tree orchards from remote 2D terrestrial LIDAR scanning. Agric. For. Meteorol. 2009, 149, 1505–1515. [Google Scholar] [CrossRef]
  12. Arnó, J.; Escolà, A.; Vallès, J.M.; Llorens, J.; Sanz, R.; Masip, J.; Palacín, J.; Rosell-Polo, J.R. Leaf area index estimation in vineyards using a ground-based LiDAR scanner. Precis. Agric. 2013, 14, 290–306. [Google Scholar] [CrossRef]
  13. Chen, Y.; Medioni, G. Object modelling by registration of multiple range images. Image Vis. Comput. 1992, 10, 145–155. [Google Scholar] [CrossRef]
  14. Yang, J.; Li, H.; Campbell, D.; Jia, Y. Go-ICP: A globally optimal solution to 3D ICP point-set registration. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 2241–2254. [Google Scholar] [CrossRef] [PubMed]
  15. Zhou, Q.Y.; Park, J.; Koltun, V. Fast global registration. In European Conference on Computer Vision; Springer International Publishing: Cham, Switzerland, 2016; pp. 766–782. [Google Scholar]
  16. Rusu, R.B. Semantic 3D object maps for everyday manipulation in human living environments. KI-Künstliche Intell. 2010, 24, 345–348. [Google Scholar] [CrossRef]
  17. He, Y.; Liang, B.; Yang, J.; Li, S.; He, J. An iterative closest points algorithm for registration of 3D laser scanner point clouds with geometric features. Sensors 2017, 17, 1862. [Google Scholar] [CrossRef] [PubMed]
  18. Zhou, S.; Kang, F.; Li, W.; Kan, J.; Zheng, Y. Point cloud registration for agriculture and forestry crops based on calibration balls using Kinect V2. Int. J. Agric. Biol. Eng. 2020, 13, 198–205. [Google Scholar] [CrossRef]
  19. Zhang, K.; Chen, H.; Wu, H.; Zhao, X.; Zhou, C. Point cloud registration method for maize plants based on conical surface fitting—ICP. Sci. Rep. 2022, 12, 6852. [Google Scholar] [CrossRef] [PubMed]
  20. Wang, Y.; Solomon, J.M. Deep closest point: Learning representations for point cloud registration. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 3523–3532. [Google Scholar]
  21. Aoki, Y.; Goforth, H.; Srivatsan, R.A.; Lucey, S. Pointnetlk: Robust & efficient point cloud registration using pointnet. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 7163–7172. [Google Scholar]
  22. Qin, Z.; Yu, H.; Wang, C.; Guo, Y.; Peng, Y.; Ilic, S.; Hu, D.; Xu, K. Geotransformer: Fast and robust point cloud registration with geometric transformer. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 9806–9821. [Google Scholar] [CrossRef] [PubMed]
  23. Yu, J.; Ren, L.; Zhang, Y.; Zhou, W.; Lin, L.; Dai, G. PEAL: Prior-embedded explicit attention learning for low-overlap point cloud registration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 17702–17711. [Google Scholar]
  24. Magnusson, M.; Lilienthal, A.; Duckett, T. Scan registration for autonomous mining vehicles using 3D-NDT. J. Field Robot. 2007, 24, 803–827. [Google Scholar]
  25. Brinkhoff, T.; Kriegel, H.-P.; Schneider, R.; Braun, A. Measuring the Complexity of Polygonal Objects. In Proceedings of the ACM-GIS, Baltimore, MD, USA, 1–2 December 1995; Volume 109. [Google Scholar]
  26. Nair, U.; Ling, P.P.; Zhu, H. Improved canopy characterization with laser scanning sensor for greenhouse spray applications. Trans. ASABE 2021, 64, 2125–2136. [Google Scholar] [CrossRef]
Figure 1. Four greenhouse crop layout patterns for tests. (a) P1—No crops were removed, and one basketball-on-pot was placed in the walkway; (b) P2—selected plants were removed; (c) P3—no crops were removed. One row of basketballs on pots and one row of traffic cones was placed in the right-side crop area; (d) P4—selected plants were removed from the growing area, and the rows of basketballs and traffic cones were kept in the same place as (c). Each layout pattern was scanned three times by the LiDAR with three sUAS flight paths (right, middle, and left views) shown as dotted blue lines in the graphs.
Figure 2. Flow chart of the designed pipeline for the proposed point cloud registration. Major steps of the operation consist of data quality improvement (preprocessing) and a two-step registration.
Figure 3. Conceptual illustration of the necessity of centroid shift during coarse registration. (a) Source point cloud (blue dots) viewed from one direction. (b) Target point cloud (yellow dots) viewed from a different direction. (c) Misalignment when mass centroids (red dots) are naively aligned without adjustment. (d) Improved alignment after applying centroid shift using adjustment factors. Note: this figure is a schematic representation intended to demonstrate the concept of centroid misalignment and correction; it does not represent actual point cloud data.
Figure 4. A comparison of the outlier removal. The sUAS flight direction is indicated by the red arrow. (a) The right scan of greenhouse layout pattern 3 (P3-R). (b) The zoomed point cloud within the red box. (c) The same region after removing outliers. As highlighted by the red circle, (b) contains sparse outlier points, whereas (c) has none in the highlighted area. Note: the outliers are manually highlighted in red because the original color is hard to observe against the black background. The figure is color-coded by height; yellowish colors indicate higher Z coordinates.
Figure 5. A comparison between registration with and without ROI selection. (a) A failed registration without ROI selection: the greenhouse wall is split (highlighted in the oval), and the crops, traffic cones, and basketballs-on-pots are not aligned well (highlighted in red boxes). (b) A successful registration with ROI selection. The sUAS flight direction is indicated by the red arrow in (b). Note: the figure is color-coded by height; yellowish colors indicate higher Z coordinates.
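ROI selection of the kind compared in Figure 5 amounts to cropping the cloud before registration. A minimal sketch follows, assuming Open3D's axis-aligned crop; the bounds and file name are placeholders, not the study's values.

```python
# Cropping a scan to an ROI before registration (cf. Figure 5).
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.pcd")   # hypothetical file name
roi = o3d.geometry.AxisAlignedBoundingBox(
    min_bound=(-5.0, -10.0, 0.0),           # illustrative bounds (m)
    max_bound=(5.0, 10.0, 3.0))
pcd_roi = pcd.crop(roi)                     # keep only points inside the ROI
```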
Figure 6. Visualization of the two-step coarse registration. (a) Boundary points before coarse registration; the top panel is viewed in the XY plane and the bottom panel in the XZ plane. The target and source boundary points are poorly aligned in both planes. (b) Boundary points after mass-centroid alignment, with each subsection aligned to its target subsection. Compared with (a), the alignment between the target and source boundaries is improved, especially in the XZ plane. (c) After mass-centroid alignment, the ICP algorithm was applied to the boundary points to further refine the registration. Compared with (b), the source boundary points align better with the target in both the XY and XZ planes.
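The two-step coarse registration of Figure 6 can be sketched as per-subsection centroid alignment followed by ICP restricted to the boundary points. The helper inputs below (lists of per-subsection (N, 3) arrays) are assumed for illustration, as is the use of Open3D.

```python
# Sketch of the two-step coarse registration on boundary points (Figure 6).
import numpy as np
import open3d as o3d

def coarse_register(src_sections, tgt_sections, icp_dist=0.10):
    """src_sections, tgt_sections: lists of (N, 3) boundary subsections."""
    # Step 1: mass-centroid alignment, applied per subsection.
    aligned = [s + (t.mean(axis=0) - s.mean(axis=0))
               for s, t in zip(src_sections, tgt_sections)]

    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(np.vstack(aligned)))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(np.vstack(tgt_sections)))

    # Step 2: ICP on the boundary points only, refining the coarse pose.
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, icp_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```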
Figure 7. Comparison between registered and single-view point clouds. The red arrow indicates the flight direction. (a) Dataset R P4. As highlighted in the red box, the right view suffered from occlusion caused by the larger off-nadir angle of the LiDAR and blockage by crops; the dataset also captured only part of the greenhouse environment, with less information collected in the left-side crop area. (b) Dataset M P4. Compared with (a), it did not capture the greenhouse wall but contained more points in the left-side crop area, and it had no occluded area at the position highlighted in (a) because its LiDAR viewpoint differed. (c) Dataset (M+R) P4 after fine registration. It captured more points in both the right-side and left-side crop areas and had no occluded area, as highlighted by the red box. Note: the figure is color-coded by height; yellowish colors indicate higher Z coordinates.
Figure 8. Visual comparison of six traffic cones (#1–#6) in P4 with and without registration. (a) The row of traffic cones in dataset M P4. Only five of the six cones are visible, and fewer points were captured as the Y coordinate (horizontal axis) increased. Only the left side of cone #1 was captured (red oval), and only part of the ground floor is visible. (b) The row of traffic cones in dataset R P4. More points were captured on the cones and the ground floor as the Y coordinate increased, and the number of cone clusters is correct (six). However, as highlighted by the red oval, only the right side of cone #1 was captured. This one-sided coverage caused by occlusion is the main motivation for registration. (c) The registered point cloud. Compared with (a,b), both sides of cone #1 were captured (red oval), the cone count is correct (six), and more ground points were captured than in (a).
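Counting cone clusters as in Figure 8 can be done with density-based clustering. The paper does not specify its clustering method; the sketch below assumes Open3D's DBSCAN with illustrative eps/min_points values and a hypothetical ROI file.

```python
# Counting traffic-cone clusters in a cropped region (cf. Figure 8).
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("cones_roi.pcd")          # hypothetical ROI file
labels = np.asarray(pcd.cluster_dbscan(eps=0.10, min_points=20))
n_cones = labels.max() + 1                              # label -1 marks noise
print(f"Detected {n_cones} cone clusters")              # expected: 6 for P4
```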
Figure 9. Without coarse registration, the initial alignment is poor and the fine registration converges to an incorrect result.
Figure 10. P4 point density distribution along the Y-axis. The point cloud density (PCD) of a single view is affected by the off-nadir angle (OA); for example, the M scan has the highest PCD near Y ≈ 0, where OA ≈ 0°, and PCD degrades as OA grows with the absolute value of the Y coordinate. Registration (M+R) increases the overall PCD, especially in the overlap region.
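A density profile like Figure 10's can be computed by binning points along the Y-axis. The sketch below uses NumPy with an illustrative bin width; the paper's exact binning is not restated here.

```python
# Point-density profile along the Y-axis (cf. Figure 10).
import numpy as np

def density_along_y(points, bin_width=0.10):
    """points: (N, 3) array; returns bin centers and point counts per bin."""
    y = points[:, 1]
    edges = np.arange(y.min(), y.max() + bin_width, bin_width)
    counts, _ = np.histogram(y, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, counts
```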
Table 1. The RMSE values for all scan combinations of each greenhouse layout pattern before and after registration, and whether each combination passed visual inspection.
| Layout Pattern | Notch Number | Scan | Pass Visual Inspection (Yes/No) | Before Registration (cm) | After Coarse Reg., 3 Subsections (cm) | After Coarse Reg., 6 Subsections (cm) | After Fine Reg., 3 Subsections (cm) | After Fine Reg., 6 Subsections (cm) |
|---|---|---|---|---|---|---|---|---|
| P1 | 15 | M+R | No | 36.2 | 4.1 | 3.7 | 3.7 | 3.3 |
| P1 | 15 | M+L | No | 25.9 | 3.3 | 3.8 | 3.0 | 2.6 |
| P1 | 15 | L+R | No | 20.7 | 5.4 | 5.3 | 5.1 | 4.7 |
| P2 | 26 | M+R | Yes | 7.3 | 3.4 | 3.4 | 3.4 | 2.7 |
| P2 | 26 | M+L | Yes | 20.4 | 2.9 | 2.7 | 2.7 | 2.4 |
| P2 | 26 | L+R | No | 18.8 | 5.7 | 3.9 | 5.3 | 3.2 |
| P3 | 15 | M+R | No | 12.2 | 4.3 | 4.2 | 3.6 | 3.0 |
| P3 | 15 | M+L | No | 25.2 | 3.6 | 3.3 | 3.0 | 2.7 |
| P3 | 15 | L+R | No | 13.2 | 6.8 | 6.9 | 5.9 | 5.1 |
| P4 | 19 | M+R | Yes | 16.7 | 3.6 | 3.3 | 2.7 | 2.5 |
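An RMSE of the kind reported in Table 1 measures the residual distance between registered clouds. The sketch below assumes nearest-neighbor correspondences found with SciPy's KD-tree; the authors' exact correspondence rule is not restated here.

```python
# Registration RMSE from nearest-neighbor correspondences (cf. Table 1).
import numpy as np
from scipy.spatial import cKDTree

def registration_rmse(source_pts, target_pts):
    """source_pts, target_pts: (N, 3) arrays after registration."""
    dists, _ = cKDTree(target_pts).query(source_pts)  # closest-point distances
    return float(np.sqrt(np.mean(dists ** 2)))        # RMSE in input units
```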
Table 2. Comparison of estimated basketball-on-pot volume with ground truth, before and after point cloud registration in P4.
| Scan | Calculated Parameter * | #1 | #2 | #3 | #4 | #5 | #6 | #7 |
|---|---|---|---|---|---|---|---|---|
| R | Estimated volume (cm³) | 1348 | 2165 | 3101 | 2378 | 2230 | 2001 | 1273 |
| R | Distance (cm) | 101 | 50 | 13 | 64 | 120 | 175 | 238 |
| R | Difference with GT (%) | 72 | 56 | 36 | 51 | 54 | 59 | 74 |
| M | Estimated volume (cm³) | 1951 | 1917 | 1564 | 1482 | 378 | 64 | N/A ** |
| M | Distance (cm) | 70 | 130 | 193 | 244 | 300 | 350 | N/A |
| M | Difference with GT (%) | 60 | 60 | 68 | 69 | 92 | 99 | N/A |
| M+R | Estimated volume (cm³) | 4559 | 5080 | 4315 | 3559 | 2612 | 2080 | 1273 |
| M+R | Difference with GT (%) | 6 | 4 | 11 | 27 | 46 | 57 | 74 |
| M+R | Difference with GT (cm³) | 292 | 195 | 536 | 1316 | 2242 | 2778 | 3606 |
* GT—the ground truth volume = 4873 cm³. Distance—distance from the sUAS flight path. ** N/A—the value is not computable because the object was not detected from the current view, or too few points were detected to estimate its volume.
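One common way to estimate an object's volume from its point cluster, as compared against ground truth in Tables 2 and 3, is a convex-hull estimate. The sketch below assumes SciPy's ConvexHull; the paper's actual estimator may differ.

```python
# Convex-hull volume estimate for a single object cluster (cf. Tables 2-3).
import numpy as np
from scipy.spatial import ConvexHull

def cluster_volume_cm3(points_m):
    """points_m: (N, 3) array of one object's points, in meters."""
    hull = ConvexHull(points_m)   # convex hull enclosing the cluster
    return hull.volume * 1e6      # convert m^3 to cm^3
```

Note that a convex hull overestimates concave shapes such as cones, and it underestimates objects whose far side is occluded, which is consistent with single-view errors shrinking after registration.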
Table 3. Comparison of estimated traffic cone volume with ground truth before and after point cloud registration in P4.
| Scan | Calculated Parameter * | #1 | #2 | #3 | #4 | #5 | #6 |
|---|---|---|---|---|---|---|---|
| R | Estimated volume (cm³) | 3214 | 5432 | 3343 | 2847 | 2665 | 1854 |
| R | Distance (cm) | 5 | 35 | 66 | 124 | 184 | 242 |
| R | Difference with GT (%) | 46 | 9 | 44 | 52 | 55 | 69 |
| M | Estimated volume (cm³) | 4331 | 3646 | 3103 | 1482 | N/A ** | N/A |
| M | Distance (cm) | 13 | 18 | 25 | 30 | N/A | N/A |
| M | Difference with GT (%) | 27 | 39 | 48 | 75 | N/A | N/A |
| M+R | Estimated volume (cm³) | 6829 | 6387 | 4527 | 4224 | 3057 | 1780 |
| M+R | Difference with GT (%) | 15 | 7 | 24 | 29 | 49 | 70 |
| M+R | Difference with GT (cm³) | 894 | 417 | 1430 | 1728 | 2919 | 4170 |
* GT—the ground truth volume = 5957 cm³. Distance—distance from the sUAS flight path. ** N/A—the value is not computable because the object was not detected from the current view, or too few points were detected to estimate its volume.