1. Introduction
The escalating demand for food, fiber, and fuel due to population growth, coupled with climate change and diminishing arable land and water resources, poses significant challenges to crop production and protection. Overcoming these challenges requires advanced, timely, reliable, and cost-effective crop monitoring technologies that enable efficient and sustainable high-volume food production [1]. Controlled Environment Agriculture (CEA) is key to producing high-quality, high-quantity crops more sustainably than traditional open-field farming systems [2]. Nevertheless, efficient monitoring of plant growth, nutrients, water, and other stress factors in CEA requires substantial labor inputs, making automation crucial for precise and economically viable plant management [3].
While plant monitoring techniques in CEA environments have evolved over the past 30 years, traditional methods often rely on a single sensor set placed in a “representative” area and cannot capture the high spatial variability within a greenhouse. Despite the advancement of wireless sensor networks, their widespread application is hindered by the high costs of acquisition and maintenance [4]. Therefore, manual or semi-automatic plant monitoring approaches are still prevalent.
The emergence of small unmanned aerial systems (sUAS) presents a cost-effective opportunity for plant health monitoring. Their benefits include small size, affordability, portability, and the potential to provide high-spatial-resolution information about plant canopies [5]. Researchers have explored various applications of sUAS-collected data [6], such as barley biomass estimation [7], soybean yield prediction [8], and vineyard management [9]. Compared to unmanned ground vehicles (UGVs), sUAS are more suitable for greenhouse environments because they navigate above crops without requiring floor space, which is often maximized for planting. Mechanical arms or irrigation booms are alternative solutions but require significant investment for permanent installation. sUAS offer greater flexibility: they do not require installation, can be used in multiple greenhouse compartments, and can operate autonomously, making them a practical and cost-effective choice.
Spatial, structural, and spectral information is critical in assessing crop growth and health. Light Detection and Ranging (LiDAR) instruments have been used for assessing canopy density and modeling crop structures [10]. However, LiDAR point cloud data collected from a single viewpoint often suffer from occlusions and uneven point density, necessitating data acquisition from multiple viewpoints [11,12]. Point cloud registration is the process of aligning data from multiple views into a common coordinate system.
Iterative Closest Point (ICP) [13] is one of the most well-known algorithms for registering two point clouds and has been widely used in numerous real-world applications. It iteratively estimates the rigid-body transformation (rotation and translation) by minimizing the Euclidean distance between nearest-neighbor correspondences. While ICP is fundamental and effective, it can converge to local minima when the initial alignment is poor [14]. Additionally, its performance relies on forming correct correspondences [15]. Its behavior in greenhouse environments with repetitive objects/structures or poor initial alignment remains underexplored.
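For readers unfamiliar with the mechanics, the snippet below is a minimal sketch of point-to-point ICP using PCL's pcl::IterativeClosestPoint (the same library employed later in this study); the input file names and stopping criteria are illustrative assumptions, not values from our pipeline.

```cpp
// Minimal point-to-point ICP sketch with PCL; file names and parameters are illustrative.
#include <iostream>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

int main()
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr source(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ>::Ptr target(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::io::loadPCDFile("source.pcd", *source);   // hypothetical input files
  pcl::io::loadPCDFile("target.pcd", *target);

  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(source);
  icp.setInputTarget(target);
  icp.setMaximumIterations(50);                  // assumed stopping criteria
  icp.setMaxCorrespondenceDistance(0.10);        // meters; assumed
  icp.setTransformationEpsilon(1e-8);

  pcl::PointCloud<pcl::PointXYZ> aligned;
  icp.align(aligned);                            // iterates NN matching + rigid update

  if (icp.hasConverged())
    std::cout << "Fitness: " << icp.getFitnessScore() << "\n"
              << icp.getFinalTransformation() << std::endl;
  return 0;
}
```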
Beyond local refinement, globally optimal methods such as Go-ICP explore the entire 3D motion space with a branch-and-bound scheme. They perform well even with poor initial alignment but can be computationally heavy [14]. In contrast, Fast Global Registration (FGR) optimizes a global robust objective over candidate correspondences from Fast Point Feature Histograms (FPFH) [15,16]; while it does not need a good initial pose, it inherits descriptor ambiguity in repetitive scenes. In greenhouses, repetitive objects make feature correspondences less distinctive.
Feature-based registration methods are another way to avoid being trapped in local minima. Generally, a feature-based approach consists of two steps: global alignment (coarse registration) using key local feature points, followed by a refined alignment. Compared with conventional ICP, which defines correspondences purely by nearest distance, feature-based methods select correspondences using both feature similarity and distance. For example, geometric features such as curvature and normal vectors have been incorporated into ICP-based methods to improve performance under poor initial alignment [17]. Reference objects such as calibration balls have been used to enhance registration for forestry crops [18]. Additionally, conical surface fitting combined with ICP has been proposed for maize plant registration [19], demonstrating that customized features can be crucial for accurate registration.
Key features can be hand-crafted or learned from data. Deep learning-based approaches, such as Deep Closest Point (DCP) [20], PointNetLK [21], Geometric Transformer (GeoTransformer) [22], and Prior-Embedded Explicit Attention Learning (PEAL) [23], learn local/global features to establish robust correspondences. For example, GeoTransformer learns geometric features from a dataset and then estimates the transformation in closed form using weighted singular value decomposition. Building on GeoTransformer, PEAL incorporates priors to improve registration under low overlap. However, in environments with highly repetitive geometry, such methods can still struggle to find correct correspondences. Moreover, deep learning-based approaches typically require extensive labeled training data and are prone to overfitting, which is prohibitively expensive for most small-scale greenhouse operations.
Among non-ICP methods, probabilistic approaches such as the Normal Distributions Transform (NDT) model point clouds as probability distributions and align them by minimizing the difference between corresponding distributions [24]. While robust to noise, these methods require careful parameter tuning and substantial computational resources, which may not be practical for real-time applications in greenhouse crop monitoring.
A key distinction in these studies is that they primarily focus on registering individual objects, whereas our study focuses on a larger, greenhouse scale. In a controlled single-plant setting, the distinctiveness of a canopy structure provides sufficient information for registration, but in greenhouse environments where plants are highly repetitive, such features fail to provide reliable correspondence pairs. This distinction underscores the necessity of identifying a global feature descriptor that remains effective across the entire greenhouse environment.
To address these challenges, we proposed a novel and straightforward coarse registration method specifically designed for greenhouse environments. Instead of relying on local feature descriptors, we leveraged the mass centroid of the point cloud as a global feature descriptor for coarse registration. Preliminary tests demonstrated that direct use of ICP, NDT, and feature-based ICP methods (FPFH-ICP) resulted in failed registration in our environment. This failure highlights the need for a more generalized global descriptor rather than relying solely on local features.
The goal of this research is to enhance the quality of LiDAR point cloud data collected in greenhouse environments using sUAS. Specific objectives include the following: (1) improve the quality of point clouds through preprocessing, (2) design a pipeline for registering point clouds from multiple viewpoints in the greenhouse environment, and (3) evaluate the quality of post-registration point clouds in terms of estimated volume and point cloud density.
2. Materials and Methods
2.1. Experimental Design
2.1.1. Experimental Setup
The experiment was conducted in a commercial greenhouse at Cedar Lane Farms in West Salem, Ohio, in December 2021. The greenhouse, structured as a hoop house, measured 7.9 m × 46 m with a truss height of approximately 2.4 m. The raw point cloud data were collected using a LiDAR (Model UST-10LX, Hokuyo Automatic Co., Ltd., Osaka, Japan) mounted on a sUAS (Model DJI Matrice 100, SZ DJI Technology Co., Ltd., Shenzhen, China). The LiDAR had a 120° field of view and an angular resolution of 0.25°, resulting in 480 measurement points in a single scan every 25 milliseconds. Its operating range was 0.06 m to 10 m, with an accuracy of 40 mm. The LiDAR's operating ambient conditions are −10 °C to +50 °C and below 85% relative humidity, and the greenhouse environment was maintained within this range to assure high-quality signals. To maintain homogeneity across all scans, the sUAS was manually flown at a constant speed of 0.6 m/s. The X-axis was designated as the sUAS travel direction and the Y-axis as the LiDAR scanning direction. Seven basketball-on-pots (volume of each is 4837 cm³) and six traffic cones (volume of each is 5957 cm³) were arranged within the greenhouse, forming four distinct layout patterns.
2.1.2. Greenhouse Layout and Data Collection
This experiment was designed to evaluate how different greenhouse layouts may impact point cloud registration. Four distinct layout patterns (P1–P4) were created, each varying in structural complexity and the presence of artificial objects (Figure 1):
Pattern 1 (P1): No plants were removed, and no artificial objects were added. This represents an undisturbed standard greenhouse layout, where registration relies primarily on the natural floor boundary and structural elements.
Pattern 2 (P2): Selected plants were removed from the growing area, creating a more complex boundary shape with additional notches and vertices. Since the proposed registration method utilizes the floor boundary as a key feature, this pattern was designed to evaluate whether increased geometric complexity improves registration accuracy by providing more distinct feature points.
Pattern 3 (P3): No plants were removed; artificial objects (traffic cones and basketballs) were added to the growing area instead.
Pattern 4 (P4): Both plant removal and artificial objects were introduced. This pattern combined the complex boundary shape from P2 and the artificial object placement from P3 to assess whether integrating these two strategies could further improve the registration.
Each pattern was scanned three times using different flight paths: right-side crop area (R), middle walkway (M), and left-side crop area (L). The overlap ratio for consecutive scans (L+M or R+M) was about 72%, and it was about 37% for nonconsecutive scans (L+R). This setup resulted in 12 different point cloud combinations for registration testing. However, the P4-L dataset (the left-side crop area of Pattern 4) was excluded due to an unexpected deviation in the sUAS flight trajectory, leaving 10 valid dataset combinations for analysis.
By comparing registration accuracy across these four patterns, this study aims to determine which modifications provide the most reliable improvements for greenhouse point cloud registration.
Patterns P2 and P4 introduced additional notches and vertices in the ground boundary due to plant removal. These features were expected to improve registration performance by providing unique feature points in the greenhouse environment. In practical applications, local farmers may not want to introduce artificial objects but could achieve better registration by selectively removing plants—a low-cost and straightforward approach.
Patterns P3 and P4 included known-volume objects (traffic cones and basketballs). Since plants in the greenhouse were in an early growth stage and their canopy volume was difficult to measure manually, these objects provided a measurable reference for evaluating how registration improved volume estimation.
2.2. Point Cloud Registration
A pipeline for greenhouse crop point cloud registration was constructed, incorporating preprocessing, coarse registration, and fine registration stages (Figure 2). The Point Cloud Library (PCL, version 1.12.1), an open-source library available on GitHub (https://github.com/PointCloudLibrary/pcl (accessed on 1 December 2021)), was utilized for the pipeline construction. First, the preprocessing steps, such as filtering outliers, correcting yaw errors, and selecting the Region of Interest (ROI), were performed to improve dataset quality. Subsequently, a subset of points was selected and extracted. The transformation matrix generated from coarse registration was used to transform the original source point cloud (S) to improve the initial alignment between the target cloud (T) and S. Finally, the ICP algorithm was applied for fine registration of T and the transformed S.
2.3. Preprocessing Data
A three-step preprocessing procedure was implemented to enhance data quality before registration: outlier removal, yaw error correction, and ROI selection. Outliers were identified and removed using a Statistical Outlier (SO) filter available in the PCL library, with parameters set based on mean point density and a standard deviation threshold of 3.0. The threshold is an empirical value under the defined sUAS flight attitude.
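As a sketch of the outlier-removal step, the snippet below uses PCL's StatisticalOutlierRemoval filter; the neighborhood size (setMeanK) is an assumed value, while the 3.0 standard deviation multiplier follows the text.

```cpp
// Statistical outlier (SO) filtering sketch; MeanK is assumed, threshold follows the text.
#include <pcl/filters/statistical_outlier_removal.h>
#include <pcl/point_types.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr
removeOutliers(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud)
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::StatisticalOutlierRemoval<pcl::PointXYZ> sor;
  sor.setInputCloud(cloud);
  sor.setMeanK(50);              // neighbors used to estimate mean point distances (assumed)
  sor.setStddevMulThresh(3.0);   // standard deviation threshold from the text
  sor.filter(*filtered);         // points beyond mean + 3*sigma are discarded
  return filtered;
}
```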
Since the sUAS was flown without GPS information, yaw errors occurred, causing misalignment in some datasets, such as P4-R, where the greenhouse walls were not parallel with the X-axis. To correct this, two manually selected anchor points, one at the entry and one at the exit of the experimental area, were used to estimate the yaw correction angle (θ). These points were positioned along the same walkway boundary, assuming that their connecting line should be parallel to the X-axis. A 3 × 3 rotation matrix (Equation (1)) was applied to align the dataset uniformly, ensuring a consistent initial condition where the X-axis always represents the sUAS travel direction.
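A minimal sketch of this correction is given below, assuming the axis convention stated earlier (X = travel direction, Z = up) and treating the rotation of Equation (1) as a rotation about the vertical axis; the anchor coordinates are supplied manually and are illustrative.

```cpp
// Yaw-error correction sketch: rotate the cloud so the anchor line becomes parallel to X.
#include <cmath>
#include <pcl/common/transforms.h>
#include <pcl/point_types.h>
#include <Eigen/Dense>

pcl::PointCloud<pcl::PointXYZ>::Ptr
correctYaw(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud,
           const Eigen::Vector3f& entryAnchor,    // manually picked anchor at the entry
           const Eigen::Vector3f& exitAnchor)     // manually picked anchor at the exit
{
  // Angle between the anchor line and the X-axis gives the yaw error theta.
  const Eigen::Vector3f d = exitAnchor - entryAnchor;
  const float theta = std::atan2(d.y(), d.x());

  // Rotation about the vertical (Z) axis by -theta, applied as a rigid transform.
  Eigen::Affine3f R = Eigen::Affine3f::Identity();
  R.rotate(Eigen::AngleAxisf(-theta, Eigen::Vector3f::UnitZ()));

  pcl::PointCloud<pcl::PointXYZ>::Ptr corrected(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::transformPointCloud(*cloud, *corrected, R);
  return corrected;
}
```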
The yaw error correction standardized alignment across datasets for better registration performance, even though minor inconsistencies were expected due to the subjective manual point selection process. The three point clouds, collected from different flight paths, each covered a distinct greenhouse section while maintaining some overlap. However, as the LiDAR off-nadir angle increases, point density decreases, making it necessary to extract a well-defined Region of Interest (ROI) for improved registration.
The ROI was defined as the overlapped section of the middle walkway and extracted using a Conditional Filter (CF) based on predefined X and Y boundaries. The X-axis limits were set at the sUAS entry and exit positions. The Y-axis limits were determined by (1) the Y coordinate of the middle walkway center point (Yc) and (2) the greenhouse walkway width, measured at 1.5 m. To include both the walkway and part of the crop-growing area, the Y boundaries were set at Yc ± 1.6 m for all datasets.
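A sketch of the ROI extraction with PCL's ConditionalRemoval filter is shown below; xEntry, xExit, and yc are assumed to come from the flight log and the detected walkway center, and the ±1.6 m band follows the text.

```cpp
// ROI extraction sketch using a Conditional Filter on predefined X and Y boundaries.
#include <pcl/filters/conditional_removal.h>
#include <pcl/point_types.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr
extractROI(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud,
           float xEntry, float xExit, float yc)
{
  using FieldCmp = pcl::FieldComparison<pcl::PointXYZ>;
  pcl::ConditionAnd<pcl::PointXYZ>::Ptr cond(new pcl::ConditionAnd<pcl::PointXYZ>);
  cond->addComparison(FieldCmp::ConstPtr(new FieldCmp("x", pcl::ComparisonOps::GE, xEntry)));
  cond->addComparison(FieldCmp::ConstPtr(new FieldCmp("x", pcl::ComparisonOps::LE, xExit)));
  cond->addComparison(FieldCmp::ConstPtr(new FieldCmp("y", pcl::ComparisonOps::GE, yc - 1.6f)));
  cond->addComparison(FieldCmp::ConstPtr(new FieldCmp("y", pcl::ComparisonOps::LE, yc + 1.6f)));

  pcl::ConditionalRemoval<pcl::PointXYZ> cf;
  cf.setCondition(cond);
  cf.setInputCloud(cloud);

  pcl::PointCloud<pcl::PointXYZ>::Ptr roi(new pcl::PointCloud<pcl::PointXYZ>);
  cf.filter(*roi);
  return roi;
}
```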
2.4. Coarse Registration
A two-step coarse registration process was initiated after extracting the ground floor boundary. First, a mass centroid alignment was applied to optimize the initial alignment between the target and source key subsets in Euclidean space. This was followed by an Iterative Closest Point (ICP) algorithm [13] to refine the alignment. The greenhouse ground floor boundary points were selected as the key subset for generating a transformation matrix due to their distinct and stable properties. The boundary extraction process involved four steps: (a) clustering the point cloud using a Region Growing (RG) algorithm, which clusters points based on local surface similarity; (b) selecting the ground floor cluster based on the comparison of mean altitude values; (c) removing noise points through a point density threshold; and (d) extracting the ground floor boundary using the BoundaryExtraction function in PCL, which identifies boundary points by computing differences in surface normal angles between a given point and its neighbors.
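A condensed sketch of steps (a), (b), and (d) with PCL is shown below; step (c), the density-based noise removal, is omitted for brevity, and all neighborhood sizes and thresholds are assumptions rather than the tuned values used in this study. Selecting the lowest-mean-altitude cluster as the ground floor is likewise an assumed implementation of step (b).

```cpp
// Ground-floor boundary extraction sketch: region growing, ground selection, boundary flags.
#include <cmath>
#include <limits>
#include <vector>
#include <pcl/common/io.h>
#include <pcl/features/boundary.h>
#include <pcl/features/normal_3d.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/region_growing.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr
extractFloorBoundary(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud)
{
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setSearchMethod(tree);
  ne.setInputCloud(cloud);
  ne.setKSearch(50);                         // assumed neighborhood for normals
  ne.compute(*normals);

  // (a) Region Growing clustering based on local surface similarity.
  pcl::RegionGrowing<pcl::PointXYZ, pcl::Normal> rg;
  rg.setSearchMethod(tree);
  rg.setInputCloud(cloud);
  rg.setInputNormals(normals);
  rg.setNumberOfNeighbours(30);              // assumed
  rg.setSmoothnessThreshold(3.0f / 180.0f * static_cast<float>(M_PI));
  rg.setCurvatureThreshold(1.0f);
  std::vector<pcl::PointIndices> clusters;
  rg.extract(clusters);

  // (b) Pick the cluster with the lowest mean altitude (Z) as the ground floor.
  std::size_t groundIdx = 0;
  double lowestMeanZ = std::numeric_limits<double>::max();
  for (std::size_t c = 0; c < clusters.size(); ++c)
  {
    double sumZ = 0.0;
    for (int i : clusters[c].indices) sumZ += (*cloud)[i].z;
    const double meanZ = sumZ / static_cast<double>(clusters[c].indices.size());
    if (meanZ < lowestMeanZ) { lowestMeanZ = meanZ; groundIdx = c; }
  }
  pcl::PointCloud<pcl::PointXYZ>::Ptr ground(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::Normal>::Ptr groundNormals(new pcl::PointCloud<pcl::Normal>);
  pcl::copyPointCloud(*cloud, clusters[groundIdx], *ground);
  pcl::copyPointCloud(*normals, clusters[groundIdx], *groundNormals);

  // (d) Flag boundary points from differences in surface normal angles.
  pcl::PointCloud<pcl::Boundary> flags;
  pcl::BoundaryEstimation<pcl::PointXYZ, pcl::Normal, pcl::Boundary> be;
  be.setInputCloud(ground);
  be.setInputNormals(groundNormals);
  be.setSearchMethod(pcl::search::KdTree<pcl::PointXYZ>::Ptr(new pcl::search::KdTree<pcl::PointXYZ>));
  be.setKSearch(30);                         // assumed
  be.compute(flags);

  pcl::PointCloud<pcl::PointXYZ>::Ptr boundary(new pcl::PointCloud<pcl::PointXYZ>);
  for (std::size_t i = 0; i < flags.size(); ++i)
    if (flags[i].boundary_point > 0) boundary->push_back((*ground)[i]);
  return boundary;
}
```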
The selection of the ground floor boundary as the key subset for registration was based on its unique geometric properties compared to other potential candidates, such as crops. In the greenhouse environment of this study, two primary objects could serve as candidates: points representing crops and ground floor boundary (including the boundary of the middle walkway, the floor between crop blocks, and the floor between individual crops). The ground floor boundary was chosen for several reasons. First, compared to crops, the ground floor boundary shape is more distinguishable and non-repetitive. Given that repetitive patterns introduce ambiguity in correspondence pair matching, crop-based registration would likely result in mismatches due to similar geometric characteristics (e.g., curvature, size, shape, and surface normal vector) among crops within the greenhouse. To further ensure the uniqueness of the ground floor boundary, different greenhouse layout patterns were designed to create distinct boundary shapes for each configuration. Second, the ground floor boundary is well-defined, simple, and has stable contours that remain temporally invariant. In contrast, crops may experience movement due to the down-draft wind from the sUAS, particularly those with softer stems. Additionally, the complex shape of crops presents greater challenges in feature estimation, making them less suitable as reference objects for coarse registration. Based on these considerations, the ground floor boundary points were selected as the key subset for segmentation from the entire point cloud to facilitate robust registration.
The ground floor boundary points were divided into six equal-length subsections along the sUAS flight path for piecewise registration. This was an attempt to mitigate the effects of accumulated local distortions along the long flight path on the registration performance. By dividing the point cloud into smaller sections, each section is aligned independently for better registration accuracy, thus improving the overall registration performance.
The number of subsections was determined by requiring that each segment contain at least one notch. A notch is a vertex of the floor boundary whose interior angle exceeds π [25], providing a distinctive anchor for coarse alignment and reducing drift along long, nearly straight spans. Under our greenhouse layout, six is the largest value that still satisfies this criterion.
The 3D centroid of each subsection was computed as the mass centroid, which was then aligned between the target and source clouds to determine a rigid-body transformation matrix (M1) for each subsection. The alignment was refined by shifting the centroid in both X and Y directions using adjustment factors (F1 and F2) within a range from −1 to 1, with an incremental step size of 0.05, ensuring controlled centroid alignment in the XY plane (Equations (2)–(6)). The [−1, 1] bounds reflect the known partial overlap between the two point clouds and accommodate the unknown shift direction.
Since point cloud distribution varies due to factors such as geometric distortions, occlusions, and varying resolutions during data acquisition, the centroid of the same object may differ when viewed from different perspectives. Therefore, directly aligning centroids without compensation can result in misregistration.
Figure 3 conceptually illustrates this issue: it is not a real point cloud but a simplified schematic to demonstrate the rationale behind centroid adjustment. Specifically, Figure 3a,b represent the original source and target clouds captured from different viewpoints; (c) shows the misalignment due to the natural shift in centroids, and (d) demonstrates how applying a centroid shift helps achieve coarse alignment. This visual aid emphasizes why centroid-based adjustment is a necessary step for achieving reliable initial registration in greenhouse environments.
No Z-adjustment was applied due to the minimal altitude variance observed in floor boundary points (Equations (2)–(6)). The optimal adjustment factors were determined by evaluating the root mean square error (RMSE) (Equation (7)) between the two point clouds post-alignment. The combination yielding the lowest RMSE was selected. With an improved initial alignment achieved through mass centroid alignment, the ICP algorithm was then applied to each key subsection to compute the transformation matrix (M2) and refine the registration precision.
where (Xcs, Ycs, Zcs) and (Xct, Yct, Zct) are the XYZ coordinates of the centroids of the source (cs) and target (ct) point clouds, respectively, and F is the adjustment factor, ranging from −1 to 1 with a step size of 0.05, used for the Xmove and Ymove determination.

$$\mathrm{RMSE}=\sqrt{\frac{1}{N}\sum_{i=1}^{N} d_i^{2}} \qquad (7)$$

where $d_i^{2}$ is the squared distance between the i-th aligned source point and its closest neighbor in the target, and N is the number of points in the source cloud.
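The following sketch illustrates the centroid-based coarse alignment for one subsection. It is one plausible reading of Equations (2)–(6), in which F1 and F2 act as additive X/Y offsets (in meters) on top of the nominal centroid difference; the [−1, 1] range, the 0.05 step, the absence of a Z adjustment, and the lowest-RMSE selection follow the text, while the rest is illustrative.

```cpp
// Centroid-alignment grid search (F1, F2) selecting the shift with the lowest RMSE (Eq. (7)).
#include <cmath>
#include <limits>
#include <vector>
#include <pcl/common/centroid.h>
#include <pcl/common/transforms.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/point_types.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

// RMSE between each source point and its nearest neighbor in the target.
double rmse(const Cloud::Ptr& src, const pcl::KdTreeFLANN<pcl::PointXYZ>& targetTree)
{
  std::vector<int> idx(1);
  std::vector<float> sqDist(1);
  double sum = 0.0;
  for (const auto& p : src->points)
  {
    targetTree.nearestKSearch(p, 1, idx, sqDist);
    sum += sqDist[0];
  }
  return std::sqrt(sum / static_cast<double>(src->size()));
}

Eigen::Affine3f coarseAlign(const Cloud::Ptr& source, const Cloud::Ptr& target)
{
  Eigen::Vector4f cs, ct;
  pcl::compute3DCentroid(*source, cs);   // mass centroid of the source subsection
  pcl::compute3DCentroid(*target, ct);   // mass centroid of the target subsection

  pcl::KdTreeFLANN<pcl::PointXYZ> tree;
  tree.setInputCloud(target);

  double bestRmse = std::numeric_limits<double>::max();
  Eigen::Affine3f best = Eigen::Affine3f::Identity();
  for (int i = -20; i <= 20; ++i)        // F1: X adjustment factor, -1.0 ... 1.0
    for (int j = -20; j <= 20; ++j)      // F2: Y adjustment factor, -1.0 ... 1.0
    {
      const float f1 = 0.05f * static_cast<float>(i);
      const float f2 = 0.05f * static_cast<float>(j);
      Eigen::Affine3f t = Eigen::Affine3f::Identity();
      // Z receives only the nominal centroid difference: no adjustment factor applied.
      t.translation() << (ct.x() - cs.x()) + f1, (ct.y() - cs.y()) + f2, ct.z() - cs.z();
      Cloud::Ptr moved(new Cloud);
      pcl::transformPointCloud(*source, *moved, t);
      const double e = rmse(moved, tree);
      if (e < bestRmse) { bestRmse = e; best = t; }   // keep the lowest-RMSE shift
    }
  return best;   // M1; ICP is then run on the shifted subsection to obtain M2
}
```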
2.5. Fine Registration
Following the completion of the coarse registration using the ground floor boundary points, the next step involved fine registration. In this stage, the ICP algorithm was reapplied, but this time to the entire point cloud. This dataset encompassed ground floor points, ground floor boundary points, and crop points, providing a more comprehensive input for the ICP algorithm than the coarse registration, which operated solely on the ground floor boundary points. The fine registration of each subsection followed a three-step sequence:
(1) The original source cloud was transformed using matrix M3 (M3 = M1·M2), where M1 and M2 were determined during the coarse registration.
(2) The ICP algorithm was applied to the selected ROI within the transformed source cloud and the target cloud to derive a new matrix, M4, which captured the remaining rigid transformation and further refined the alignment between the source and target clouds.
(3) The original source cloud was then transformed using matrix M5 (M5 = M3·M4); a sketch of this sequence is given below.
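The snippet below sketches the three steps for one subsection, assuming the coarse matrices M1 and M2 are available as Eigen::Matrix4f and that the full-cloud ROIs have already been extracted. Note that with PCL's column-vector convention the transform applied second multiplies on the left; the paper writes the same compositions as M3 = M1·M2 and M5 = M3·M4.

```cpp
// Fine-registration sketch: compose coarse matrices, refine with ICP on the ROI, apply M5.
#include <pcl/common/transforms.h>
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

Cloud::Ptr fineRegister(const Cloud::Ptr& source,      // original source subsection
                        const Cloud::Ptr& sourceROI,   // ROI of the source subsection
                        const Cloud::Ptr& targetROI,   // ROI of the target subsection
                        const Eigen::Matrix4f& M1,
                        const Eigen::Matrix4f& M2)
{
  // Step 1: apply the composed coarse transformation to the original source.
  const Eigen::Matrix4f M3 = M2 * M1;
  Cloud::Ptr coarse(new Cloud), coarseROI(new Cloud);
  pcl::transformPointCloud(*source, *coarse, M3);
  pcl::transformPointCloud(*sourceROI, *coarseROI, M3);

  // Step 2: ICP between the transformed source ROI and the target ROI yields M4.
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(coarseROI);
  icp.setInputTarget(targetROI);
  Cloud aligned;
  icp.align(aligned);
  const Eigen::Matrix4f M4 = icp.getFinalTransformation();

  // Step 3: the final transform combines coarse and fine stages and is applied
  // to the original source cloud.
  const Eigen::Matrix4f M5 = M4 * M3;
  Cloud::Ptr registered(new Cloud);
  pcl::transformPointCloud(*source, *registered, M5);
  return registered;
}
```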
2.6. Evaluation
The quality of registered point clouds was evaluated via quantitative and qualitative metrics. The quantitative evaluation included calculating the RMSE of the Euclidean distances between registered correspondence pairs at three stages: pre-registration, post-coarse registration, and post-fine registration. Point cloud quality was further assessed through the point density distribution along a horizontal scanning line, pre- and post-registration. Lastly, volumes of artificial objects (the traffic cones and the basketball-on-pots) in the right-side crop area, from single and multiple view perspectives, were estimated using a 3D convex hull computation method [26] and then compared with the ground truth (actual volume). Additionally, the complexity of the boundary shape of each pattern was quantified by the number of notches in its polygon.
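As a sketch of the volume check, the snippet below computes a 3D convex hull with PCL and reports its volume for a segmented artificial object (e.g., one traffic cone); the prior segmentation of the object's points is assumed.

```cpp
// Convex-hull volume estimation sketch for one segmented object.
#include <iostream>
#include <pcl/point_types.h>
#include <pcl/surface/convex_hull.h>

double convexHullVolume(const pcl::PointCloud<pcl::PointXYZ>::Ptr& objectCloud)
{
  pcl::ConvexHull<pcl::PointXYZ> hull;
  hull.setInputCloud(objectCloud);
  hull.setDimension(3);                 // full 3D hull
  hull.setComputeAreaVolume(true);      // ask qhull to report area/volume

  pcl::PointCloud<pcl::PointXYZ> hullPoints;
  hull.reconstruct(hullPoints);
  const double volumeM3 = hull.getTotalVolume();   // cubic meters if the cloud is in meters
  std::cout << "Estimated volume: " << volumeM3 * 1e6 << " cm^3" << std::endl;
  return volumeM3;
}
```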
In the qualitative evaluation, visual inspection of the registered point clouds was used as a primary measure. The inspections included several key aspects: consistency in space between crop blocks, point cloud deformations, gaps in crop blocks, alignment of artificial objects, correct superimposition of the source cloud on the target cloud, and rectification of the occlusion area post-registration. These manual inspections were crucial, as RMSE evaluations were only relevant when two clouds were correctly registered. Notably, a low RMSE does not always signify well-aligned point clouds, as it does not consider the quality of correspondence pairs.
4. Conclusions
This method contributes to improving automation pipelines in greenhouses by offering a low-cost, scalable registration approach for multi-view LiDAR data. A data processing pipeline was developed to register point cloud data gathered in a greenhouse setting using a LiDAR sensor mounted on a sUAS. The pipeline consisted of three main steps: (1) point cloud preprocessing, (2) coarse registration, and (3) fine registration. This methodology enhanced data quality, which is critical for effective plant monitoring within greenhouse environments. The varying orientations and altitudes of the LiDAR sensor during flights challenged the rigid-body assumption of the ICP algorithm, making a piecewise registration approach more suitable. Successful registration was achieved by focusing on well-defined subsections, with the ground floor boundary serving as a robust feature due to its unique and stable geometric properties. The implemented registration method improved precision, with RMSE values decreasing from 7.3 to 2.7 cm, 20.4 to 2.4 cm, and 16.7 to 2.5 cm for the three successfully registered datasets.
This study identified that an effective key subset for point cloud registration in a greenhouse environment should be rich in distinctive features, such as a higher notch number, to improve correspondence pair quality. Additionally, the degree of overlap between point clouds emerged as a critical factor influencing registration success. The six-section piecewise registration consistently outperformed the three-section approach, demonstrating that dividing the point cloud into smaller subsections mitigates localized distortions caused by sUAS flight instability and airflow variations, ultimately improving registration accuracy.
An ICP-based registration method was successfully implemented in a greenhouse environment characterized by repetitive structures and low feature distinctiveness. By leveraging the greenhouse ground floor as a stable reference, this approach improved initial alignment and enhanced fine registration accuracy without requiring artificial markers. Future work could explore adaptive registration strategies that leverage other stable features, such as posts, benches, or pre-existing structures, together with pattern recognition techniques for detecting them, making LiDAR-based monitoring more adaptable for crop monitoring in greenhouse environments. The registered point clouds can then support plant counting, crop uniformity assessment, and early yield forecasting based on volume and height surrogates.