Article

Sensitivity Analysis of Canopy Structural and Radiative Transfer Parameters to Reconstructed Maize Structures Based on Terrestrial LiDAR Data

1 School of Instrument Science and Opto-Electronics Engineering, Beihang University of Aeronautics and Astronautics, Beijing 100191, China
2 School of Remote Sensing and Information Engineering, North China Institute of Aerospace Engineering, Langfang 065000, China
3 College of Land Science and Technology, China Agricultural University, Beijing 100083, China
4 Beijing Institute of Space Mechanics and Electricity, China Academy of Space Technology, Beijing 100094, China
5 Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100101, China
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(18), 3751; https://doi.org/10.3390/rs13183751
Submission received: 16 August 2021 / Revised: 11 September 2021 / Accepted: 13 September 2021 / Published: 18 September 2021
(This article belongs to the Special Issue Leaf and Canopy Biochemical and Biophysical Variables Retrieval)

Abstract
The maturity and affordability of light detection and ranging (LiDAR) sensors have made possible the quick acquisition of 3D point cloud data to monitor phenotypic traits of vegetation canopies. However, while the majority of studies have focused on the retrieval of macro-scale parameters of vegetation, few studies have addressed the reconstruction of explicit 3D structures from terrestrial LiDAR data and the retrieval of fine-scale parameters from such structures. A challenging problem arising from the latter is the large amount of data needed to represent the various components of the actual canopy, which can be time-consuming and resource-intensive to process and to use in further applications. In this study, we present a pipeline to reconstruct 3D maize structures composed of triangle primitives based on multi-view terrestrial LiDAR measurements. We then study the sensitivity of the level of detail with which the canopy architecture is represented for the computation of leaf angle distribution (LAD), leaf area index (LAI), gap fraction, and directional reflectance factors (DRF). Based on point clouds of a maize field at three stages of growth, we reconstructed the reference structures, which have the maximum number of triangles. To find a compromise between the detail of the structure and the accuracy retained for later applications, we carried out a simplification process to generate multiple configurations of detail based on the decimation rate and the Hausdorff distance. Results show that LAD is not highly sensitive to the detail of the structure (i.e., the number of triangles). However, LAI, gap fraction, and DRF are more sensitive and require a relatively high number of triangles. A choice of 100–500 triangles per leaf, while maintaining the overall shapes of the leaves and a low Hausdorff distance, is suggested as a good compromise to represent the canopy, giving an overall accuracy of 98% for the computation of the various parameters.

1. Introduction

Solar radiation is considered the major direct or indirect driver of most biophysical, hydrological, and biochemical processes occurring in plant ecosystems [1]. Thus, for vegetation canopies, light interception is considered a major ecosystem-scale driver of productivity [2,3]. Plant canopy reflection and absorption of solar radiation are the key factors in quantifying phenotypic traits within plants and in studying the light–vegetation interaction. Various physically based models have been developed to describe this interaction [4]. Radiative transfer (RT) models are among the most common methods used to relate the different properties of vegetation to canopy reflectance. Due to the complexity of vegetation canopies, many simplifications and abstractions are made in these models, which vary by scale (leaf or canopy), by the complexity of the structure considered (homogeneous or heterogeneous), and by the method of RT modeling, such as geometrical models, turbid medium models, hybrid models, or computer simulation models [5].
While homogeneous one-dimensional RT models are successful in describing the propagation of light in the turbid medium of phytoelements, the 3D nature of complex and heterogeneous vegetation covers can have a significant effect on light interception [6]. The Monte Carlo (MC) ray-tracing technique is one of the major approaches used to solve the RT equation for realistic 3D architectures. MC methods are based on the tracing of rays (photons) in a realistic 3D canopy architecture that contains a large number of individual objects coupled with their optical properties to compute the canopy reflectance in different viewing angles. The canopy architectures can influence light absorption and scattering within the vegetation. Therefore, a reasonable portrayal of canopy structure is critical to precisely simulate the RT within the canopy [7].
Different types of sensors are employed to obtain 3D information about vegetation canopies to parameterize RT models. While photogrammetry makes use of 2D images taken from passive sensors, such as an RGB camera, from different angles to generate the scenes [8], active sensors, such as Light Detection and Ranging (LiDAR), can directly capture the 3D structural information of vegetation objects by generating discretized point clouds [9]. The precision of photogrammetry methods is determined first by the quality of the “stereo” images of the scenes and second by the ability to precisely align the corresponding points in an image pair in order to obtain depth information. The alignment of images for photogrammetry can be affected by natural factors, making it inaccurate for low-textured scenes and computationally expensive for large images. Conversely, LiDAR sensors can offer better accuracy for vegetation while providing more depth information under difficult lighting scenarios. For terrestrial use, LiDAR instruments measure ranges (distances) by emitting laser signals in the Near Infrared (NIR) region at very high frequency. The laser beams emitted from LiDAR have a high penetration capacity, which allows the acquisition of accurate ranges. Laser ranges are then combined with other parameters (position, orientation, and calibration data) to obtain the point clouds.
The dense, richly detailed point clouds are used for a variety of terrestrial ecosystem applications. For forest ecosystems, LiDAR data have been widely used to improve systematic data collection practices [10]. Many structural attributes can be retrieved, including tree diameter [11], tree height [12], Leaf Area Index (LAI) [13], and leaf orientation [14]. For agricultural ecosystems, LiDAR applications are still in their infancy [10]. Individual plant traits are usually extracted using TLS (Terrestrial Laser Scanning) and mobile LiDAR platforms [15], while large-scale phenotypes are extracted using UAV (Unmanned Aerial Vehicle) and other airborne platforms [16,17]. However, current research mainly focuses on the extraction of statistical parameters of the canopy, such as canopy height and LAI, while ignoring the fine structural characterization of plants and the potential of integrating LiDAR data into crop growth modeling [10].
The reconstruction of the canopy based on point clouds can be challenging. The discretization of point clouds into realistic 3D scenes can be achieved by converting the points either to voxels or to geometric primitives. Voxel-based modeling consists of transforming the point clouds into a 3D matrix of rectangular parallelepipeds of the same size, where each foliage voxel is a turbid medium associated with information about its location and optical properties, and its side length is referred to as the resolution or voxel size [18]. Geometric-based modeling, on the other hand, describes the scenes with a set of geometric primitives such as triangles, discs, cones, cylinders, and ellipsoids [19]. Among these, the most common method uses triangular meshes; we use geometric-based modeling to refer to triangular modeling henceforth. A number of triangles describe each foliage element, and each triangle is described by three coordinates, so the information about foliage shape and position is preserved and individual leaves are resolved. Thus, geometric modeling is more suitable for the fine structural characterization of vegetation covers.
RT simulations based on structures from geometric modeling can be time-consuming and resource-intensive due to the high number of triangles that can describe the vegetation architecture. Simplifying the structure by reducing the number of triangles for the study at hand can help to extend the range of applications. Hence, a compromise should be made between accuracy, time consumption, and memory for geometric-based modeling of LiDAR data. Despite this interest, to the best of our knowledge, there is no study of the sensitivity of canopy structural and RT parameters to the level of detail considered in the LiDAR-based structure for vegetation.
Previous work to assess the sensitivity of RT models to the accuracy of canopy structure [20] has been limited to the use of a maize model that describes the 3D architecture of fully developed plants based on parametric mathematical expressions [7]. Moreover, this description has a few issues with the portrayal of the top part and the reorientation of leaves. These assumptions can be incorrect for real maize canopies where there is a strong correlation between plants in regard to intraspecific competition [21].
The aim of this paper is to report our study of the sensitivity of various parameters, e.g., leaf angle distribution (LAD), leaf area index (LAI), gap fraction, and the directional reflectance factor (DRF) of the canopy, to different levels of detail in the reconstructed 3D canopy structures based on LiDAR data. These structures are created from a canopy of maize plants at distinctly different growth stages. The scenes are described by converting the mesh of point clouds to simple geometric primitives (triangles) to reconstruct the leaves, stems, and soil. Various configurations of the 3D canopy are chosen depending on the total number of triangles and the Hausdorff distance [22].
We describe the methods used to generate the various maize structures along with the calculation methods for the parameters. The paper ends with a discussion of the sensitivity of the parameters to the choice of detail and a conclusion.

2. Material and Methods

2.1. TLS Data Acquisition

Terrestrial laser scanning was conducted using a Focus3D X330 scanner (FARO Technologies, Rugby, U.K.). The TLS scanner employs a 1550 nm laser, which emits a laser beam from a rotating mirror towards the targeted area. The laser beam is distributed over a vertical range of 300° and a horizontal range of 360° (Table 1) and is then reflected back to the scanner. The distance and the relative horizontal and vertical angles are then calculated and stored on a memory card for preprocessing using the SCENE software provided by FARO. Plot-level TLS scans were performed from six positions: the four corners of the canopy and two positions above the canopy. The two elevated positions consist of one on top of a building neighboring the maize field (7–8 m above ground level) and another on the stairs of the same building (4–5 m above ground level). The tripod system is around 1.6 m high; adding the height of the TLS system, the equipment stood around 1.8 m above ground level at the four corners. Four white balls were placed and scanned as registration targets (Figure 1a); their positions were subsequently used to co-register the point clouds generated from the different positions. Figure 1b shows the co-registered point clouds of 16 maize plants for the second stage.
The study site is located in Zhangjiakou, Hebei, China (latitude 40°21′00.9″N, longitude 115°47′50.9″E), and the maize cultivar used is Zhengdan 985. Within an area of 10.8 m × 10.4 m, 234 plants were planted on flat soil, with a drip irrigation system adopted to ensure a stable and homogeneous supply of water during the growing stages. The plants were sown on 9 August 2019 in the north–south direction, with a row spacing of 80 cm and a spacing of 60 cm between plants within the same row. Three LiDAR scan measurements were recorded, representing gradual growth: the first on 16 September 2019, the second on 25 September 2019, and the last on 8 October 2019.
A prior study suggests that at least 2.5 rows of canopy are representative enough for measuring the intrinsic reflectance of the row canopies [23]. Thus, the choice of 16 plants (4 × 4, 4 rows, 240 cm × 320 cm) is a good compromise for this study.

2.2. Canopy Reconstruction

The reconstruction of the 3D structures from the mesh of point clouds is achieved using the software Geomagic Wrap (Raindrop Geomagic, Morrisville, NC, USA). The software enables users to transform point cloud data into 3D polygon meshes and surface models. Zhu et al. [24] and Hui et al. [25] demonstrated the use of the Geomagic algorithm for the generation of 3D triangular meshes to retrieve canopy parameters from crops (leaf width, leaf length, etc.) for high-throughput 3D phenotyping.
The following steps were followed to reconstruct the canopy for each plant:
  • Separating the plants. The first step was to separate the point cloud belonging to each plant into separate files using CloudCompare. Available online: https://www.danielgm.net/cc/ (accessed on 1 August 2021).
  • Filtering. The resulting point cloud contained a comparatively high level of noise due to the device’s resolution limit and the fine structure of the plants [9], especially at the edges near the tips of maize leaves and for the uppermost leaves, which were close to each other. We detected spurious points and removed them.
  • Filling holes. In some cases, because of the relatively high number of plants, holes could exist between two parts of the same leaf. In this step, we carried out a visual inspection of such holes and filled the voids with ordered points.
  • Reducing scanner error. Minor wind may cause overlaps and offsets in the point clouds created during the 3D scanning process [26]. This results in making the smooth surfaces of the leaves rough. We used the module “Reduce Noise” in Geomagic Wrap, which moves points to statistically correct locations while removing more outliers to get smooth surfaces.
  • Wrapping. The process of converting a point object to a polygon object is commonly called “wrapping”. It can be described as stretching plastic sheeting around a point object and pulling it tight to reveal a polygon surface. 3D polygon meshes were then created while maximizing shape retention.
  • Smoothing the edges and filling holes. After the polygon mesh was generated, the leaves were treated separately. The small voids between the parts of each leaf were filled, and the edges were smoothed. The number of triangles was not constrained at this step, which means that the structure was generated with maximum retention (the maximum number of triangles). Naturally, the maximum number of triangles is considered the finest triangulation.
  • Adding stem. In our case, the stem was added manually, using the bottom and top points from the point clouds. The structure of the stem was approximated with a cone of 200 triangles. The base and top perimeter of the stem were chosen according to the stage of growth and the mesh of points.
  • Combining the reconstructed plants. After generating all the plants from the same stage of growth, we combined them using the same coordinates system in one structure.
  • Decimating. At this point, the structures with the finest configuration of the canopy were generated. We refer to these structures as T1 (4 161 209 triangles), T2 (4 981 651 triangles), and T3 (6 834 588 triangles) for the respective stages of growth, where T1 is the earliest stage and T3 the latest. These structures were used as the reference for the computation of the various parameters. The decimated structures were generated by reducing the high number of triangles without compromising surface detail, with respect to the targeted decimation rate and the Hausdorff distance (explained below). To apply the same scale of decimation to all leaves, we reduced the total number of triangles for the scenes containing only the leaves, without stem and soil. We chose four configurations for each stage of growth, corresponding to four decimation rates (50%, 0.2%, 0.05%, and 0.01%) and their corresponding Hausdorff distances (Table 2). Figure 2 shows a plant in the third stage of growth in the different decimation configurations.
  • Visual assessment using photos of the plants. We deleted overlapping triangles and checked the overall quality of the structures against photos of the plants taken at the same time as the LiDAR measurements. The software Geomagic Wrap 2015 was used from step 2 to step 10.
  • Generating object files. The last step is the generation of the input files describing the different structures by converting each STL file to an object file. This was achieved in Visual Studio 2013 (C++) by keeping only the coordinates of the triangles and deleting the extra information in the STL file. The object file is the input to the RT model WPS (Weighted Photon Spread, [27]). This process can be time-consuming, especially for the finest structures, T1, T2, and T3, which correspond to average file sizes of 34.52, 41.87, and 57.81 MB for one maize plant, respectively.
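The STL-to-object conversion in the last step can be sketched in a few lines. The minimal parser below is only an illustration: it extracts triangle vertices from an ASCII STL string and discards everything else, while the actual WPS input layout (not specified here) is an assumption.

```python
import re

def stl_to_triangles(stl_text):
    """Extract triangle vertex coordinates from an ASCII STL string,
    discarding facet normals and all other metadata."""
    verts = [tuple(float(g) for g in m.groups())
             for m in re.finditer(
                 r"vertex\s+(\S+)\s+(\S+)\s+(\S+)", stl_text)]
    # every consecutive group of three vertices forms one triangle
    return [verts[i:i + 3] for i in range(0, len(verts), 3)]
```

A binary STL (common for large meshes such as T1–T3) would need a struct-based reader instead; the ASCII case is shown only for brevity.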

2.3. Hausdorff Distance

To monitor the differences and quantify the efficiency of decimation algorithms, the Hausdorff distance [22] is often used as an error metric. Given two point sets that are subsets of a metric space, the two sets are within Hausdorff distance (d) of each other if every point of one set is within (d) of some point of the other set. It is, therefore, the greatest distance from a point in one set to the closest point in the other set. It is calculated as:
d_H(X, Y) = \max\left\{ \sup_{x \in X} \inf_{y \in Y} d(x, y),\; \sup_{y \in Y} \inf_{x \in X} d(x, y) \right\}   (1)
where sup represents the supremum and inf the infimum. The first term on the right-hand side of Equation (1) represents the maximum distance from points in the set X to the closest points in the set Y, while the second term represents the maximum distance from points in the set Y to the closest points in the set X.
For the computation of the Hausdorff distance (d), we used the Metro tool [28], which is integrated into the software MeshLab 2020. Available online: https://www.meshlab.net/ (accessed on 1 August 2021). MeshLab is software for processing, editing, and cleaning triangular meshes, while Metro is a tool widely used by the research community to calculate the differences and errors between surfaces (triangular meshes and their decimations) [29,30,31].
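For small point sets, Equation (1) can be evaluated directly by brute force. The NumPy sketch below is a simplification: Metro itself samples the surfaces of the meshes, which this point-to-point version ignores.

```python
import numpy as np

def hausdorff(X, Y):
    """Symmetric Hausdorff distance between two 3D point sets (Equation (1)).
    Brute-force pairwise distances; adequate for small sets only."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    # D[i, j] = Euclidean distance between X[i] and Y[j]
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    d_xy = D.min(axis=1).max()  # sup over X of inf over Y of d(x, y)
    d_yx = D.min(axis=0).max()  # sup over Y of inf over X of d(x, y)
    return max(d_xy, d_yx)
```

The two `min`/`max` reductions mirror the two sup–inf terms of Equation (1); taking the larger of the two makes the metric symmetric.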

3. Calculation of the Parameters

3.1. LAD and LAI

LAD is a crucial plant structural trait that determines light interception in vegetation canopies [1]. It is useful for understanding photosynthesis, evapotranspiration, and RT processes [32,33]. For single leaves, leaf angles consist of leaf zenith (inclination) and leaf azimuth. Due to the complexity of measurements, leaf inclination is generally approximated using mathematical expressions while leaf azimuth is considered uniformly distributed in the range of [0, 2π] for most species. However, these simplifications fail to represent the spatial and temporal variability of LAD [34], leading to inaccurate simulations of the canopy reflectance.
Leaf zenith and azimuth angles are calculated from the direction of the normal vector of the parametric surfaces of the leaf triangles. Each triangle i is described by three points. θi and φi are, respectively, the zenith and azimuth angles of the adaxial surface of leaf triangle i, described by the direction cosines of its normal vector Ni(ui, vi, wi) (Figure 3), and given by:
\theta_i = \arccos(w_i)   (2)
\varphi_i = \arctan(v_i / u_i)   (3)
We also calculated the frequencies of zenith and azimuth angles. The frequency was calculated within an interval of 10° from 0° to 180° for zenith, and an interval of 10° from 0° to 360° for azimuth:
F_j = \frac{\sum_{i=1}^{n} s_i}{S_{tot}}   (4)
where j = {1, 2, …, 18} for zenith and j = {1, 2, …, 36} for azimuth, n is the number of triangles with (j − 1) × 10° ≤ θi (or φi) ≤ j × 10°, si is the area of triangle i, and Stot is the total area of all leaf triangles in the scene.
The average and standard deviation (SD) of both angles were also calculated, given by:
\bar{\theta} = \frac{\sum_{i=1}^{N} \theta_i \, s_i}{S_{tot}}   (5)
\bar{\varphi} = \frac{\sum_{i=1}^{N} \varphi_i \, s_i}{S_{tot}}   (6)
SD_{\theta} = \sqrt{\frac{\sum_{i=1}^{N} (\theta_i - \bar{\theta})^2 \, s_i}{S_{tot}}}   (7)
SD_{\varphi} = \sqrt{\frac{\sum_{i=1}^{N} (\varphi_i - \bar{\varphi})^2 \, s_i}{S_{tot}}}   (8)
where N is the total number of triangles describing the leaves in the scene.
With Ssoil representing the area of the soil, we calculated LAI by dividing the total area of the leaf triangles, which represents the one-sided area of all the leaves, by the area of the soil:
LAI = \frac{S_{tot}}{S_{soil}}   (9)
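Equations (2)–(9) translate directly into array operations. The sketch below is an illustrative implementation, not the authors' code; note that it uses `arctan2` rather than the plain arctangent of Equation (3) to resolve the azimuth quadrant, which is our assumption.

```python
import numpy as np

def leaf_angle_stats(triangles, soil_area):
    """Area-weighted leaf angle statistics and LAI from an (n, 3, 3)
    array of leaf triangles (each row: three 3D vertices)."""
    tris = np.asarray(triangles, float)
    e1, e2 = tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0]
    cross = np.cross(e1, e2)                        # normal direction N_i
    s = 0.5 * np.linalg.norm(cross, axis=1)         # triangle areas s_i
    n = cross / np.linalg.norm(cross, axis=1, keepdims=True)
    theta = np.degrees(np.arccos(n[:, 2]))          # zenith, Eq. (2)
    phi = np.degrees(np.arctan2(n[:, 1], n[:, 0])) % 360.0  # azimuth, Eq. (3)
    s_tot = s.sum()
    theta_mean = (theta * s).sum() / s_tot          # Eq. (5)
    lai = s_tot / soil_area                         # Eq. (9)
    return theta, phi, theta_mean, lai
```

The 10° frequency bins of Equation (4) would follow from `np.histogram(theta, bins=18, range=(0, 180), weights=s)` divided by `s_tot`.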

3.2. Gap Fraction and DRF

The canopy gap fraction is a biophysical parameter defined as the probability of a photon passing through the canopy without being intercepted by foliage, stem, or other plant elements [35]. It is an important indicator to describe the canopy interception of light and the transfer of radiation in vegetation. It is also widely used to infer other structural parameters such as LAI [36] and clumping index [37].
For this study, the reflectance and gap fraction were computed using the most recent version of the WPS model [27,38]. The WPS model is a 3D Monte Carlo RT model that uses the weight reduction concept and the “Russian roulette” method to reduce execution time when tracing a large number of photons and to achieve hyperspectral reflectance simulations for 3D canopies. The WPS model also includes a sub-module that calculates the probability of gap fraction. The scenes were bombarded with a high number of photons, and we then simulated the reflectance and gap fraction for different viewing angles.
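The gap-fraction idea behind the WPS sub-module can be illustrated with a small Monte Carlo ray caster. Everything below (the ray-launching scheme, the Möller–Trumbore intersection test, the scene layout) is a simplified sketch under our own assumptions, not the WPS implementation.

```python
import numpy as np

def ray_hits_triangle(orig, d, tri, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection (returns True on a hit)."""
    v0, v1, v2 = (np.asarray(v, float) for v in tri)
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = e1.dot(p)
    if abs(det) < eps:                 # ray parallel to triangle plane
        return False
    inv = 1.0 / det
    tvec = orig - v0
    u = tvec.dot(p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(tvec, e1)
    v = d.dot(q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return e2.dot(q) * inv > eps       # intersection in front of the origin

def gap_fraction(triangles, extent, direction, n_rays=500, seed=0):
    """Monte Carlo gap fraction for one viewing direction: the fraction of
    rays launched over the scene footprint that miss every triangle."""
    rng = np.random.default_rng(seed)
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    misses = 0
    for _ in range(n_rays):
        x, y = rng.uniform(0.0, extent, 2)
        orig = np.array([x, y, 10.0])  # launch well above the canopy
        if not any(ray_hits_triangle(orig, d, t) for t in triangles):
            misses += 1
    return misses / n_rays
```

Repeating `gap_fraction` over the 505 viewing directions used in Section 4.3 would yield the polar contour diagrams; a production code would also use a spatial acceleration structure rather than testing every triangle per ray.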

4. Results and Discussion

4.1. LAD

We computed the zenith and azimuth angles with their respective frequencies, averages, and standard deviations for the different structures at all stages of growth. Our results demonstrate that the zenith angle for maize leaves varies in a non-uniform distribution from 0° to 180° (Figure 4a). However, the frequencies for zenith angles higher than 90° (investigated below) are low compared to those in the range 0° to 90°. For the azimuth angles, the distribution is also not uniform, and angles from 90° to 220° are less frequent (Figure 4b). This is probably due to the intraspecific competition for light and space between the leaves.
Since the leaves were described with triangles, the inclination angle’s computation is related to the surface area and the normal representing each geometric primitive (Figure 5). Some parts of the leaves, particularly for the lower leaves, tend to bend, creating triangles with zenith angles higher than 90°, and the top leaves tend to be perpendicular to the soil, resulting in a large number of triangles with a zenith angle between 90° and 100°, as shown in the red rectangles in Figure 5.
The frequencies of the zenith angles for the reference structures (maximum number of triangles) follow a skewed distribution for the three stages, while the mode shifts towards higher values for the later stages of growth (from 50° for the first stage to 70° for the last stage of growth). Figure 6 shows the error of estimation, i.e., the difference in frequency (delta frequency), by the decimated structures to the reference structures for an interval of 10°. Structures with fewer numbers of triangles estimated leaf zenith and azimuth angles with higher error compared to structures with high numbers of triangles. The structures with a decimation rate of 0.01% (lowest case) gave a maximum error of 7% and 3.5% for zenith and azimuth angles, respectively, both for the first stage. Moreover, the range of variation for azimuth angles is overall between −1% and +1% for all decimated structures, while it reached 5% for zenith angles. Thus, the computation of leaf zenith angles is more sensitive to the details of the reconstructed structure of the canopy in comparison to the computation of leaf azimuth angles.
The average leaf zenith and azimuth angles of the reference structures and all the decimated structures are close, and the difference is overall less than 6.5° (Table 3) with a maximum relative difference of 5.3%. For the standard deviation, the difference is lower than 4.5° for both zenith and azimuth angles. However, the relative difference reached 16%. Overall, the configurations T1-3, T2-3, and T3-3, which correspond to 0.05% decimation rate and Hausdorff distance lower than 7, give good results for the computation of leaf zenith and azimuth average and standard deviation with a maximum difference of 2° and an accuracy of 98%.
These results show that leaf zenith and azimuth angles are not highly sensitive to the choice of the number of triangles for the decimated structures. These findings substantiate the previous analysis [20] and support the use of structures with lower complexity and fewer triangles to calculate leaf zenith and azimuth angles.

4.2. LAI

We computed the LAI using Equation (9). Table 4 shows the variations of LAI and the relative difference for all stages of growth with respect to the decimation rate and the Hausdorff distance considered. Lowering the number of triangles decreases the total area of the leaves. This is expected, since maize leaves are not completely flat and straight: lowering the number of triangles used to describe the leaves means that the undulations are not described with precision. It is the same concept as using fractals to measure distance [39]: the finer the segments used, the more accurate the measured distance.
The difference is higher in the early stage, where the leaves are not mature (a maximum relative difference of −30% for T1-4), when compared to the two later stages (T2-4: −27%, T3-4: −24%) (Figure 7). Overall, the higher the number of triangles, the lower the error. For an accuracy of 95%, the 0.05% decimation rate is a good compromise in this case. However, for better accuracy, the 0.2% decimation rate offers better results with an accuracy of 99%, which is not far from the accuracy obtained from the structures with a 50% decimation rate despite the significant difference in triangle numbers. For instance, both T3-1 (50% decimation rate) and T3-2 (0.2% decimation rate) give comparable results with an error lower than 0.1%, while their numbers of triangles are 3 417 294 and 13 669, respectively, a difference of about 250 times.

4.3. Gap Fraction

We calculated gap fraction for the various structures over 505 viewing angles: zenith in the range 0° to 70° with a step of 5° and azimuth from 0° to 350° with a step of 10°. Results show that the gap fraction for plants in the first stage is relatively lower than the gap fraction of the plants in the second and last stage (Figure 8, Figure 9 and Figure 10). Nonetheless, we detected similar sensitivity results among the various stages. The polar contour diagrams (Figure 8, Figure 9 and Figure 10) show that the structures with a decimation rate of 0.01% overestimate the gap fraction for all stages of growth. Moreover, decimation rates of 50% and 0.2% give relatively comparable results in comparison to the polar contour diagrams for the reference structures: T1, T2, and T3.
To further investigate the difference, we calculated the relative difference for the three stages in two directions: the row direction (North) and the cross-row direction (East), as shown in Figure 11 and Figure 12. Results show that the relative difference in the east direction (Figure 12) is higher than that in the north direction (Figure 11) for all stages. It reaches a maximum of 18% for the structures with a 0.01% decimation rate in the east direction (φ = 90°), while it is lower than 8% for all structures in the row direction (φ = 0°). In addition, the sensitivity of the structures in the later stages of growth is higher, which is due to the relatively high values of LAI in those stages. Curves of the structures with decimation rates of 0.05% (green lines) and 0.2% (violet lines) are, for the three stages, within the accuracy ranges of 95% and 99%, respectively. As such, for an accuracy of 95%, the 0.05% decimation rate seems to be a good compromise. However, for an accuracy of 99%, T1-2, T2-2, and T3-2 (decimation rate of 0.2%) give overall better results.

4.4. DRF

The WPS model was used to investigate the DRF variations for all structures for the three stages of growth. DRF and its components, single scattering and multiple scattering contributions, were calculated over the spectrum range (400–1000 nm) in the solar principal plane for the viewing zenith angles (VZA) from −70° (forward direction) to +70° (backward direction) with an interval of 5°. Sun zenith and azimuth angles are 40° and 140°, respectively. The scene was bombarded by 9 million photons for the direct sun light and 9 million for the diffuse skylight, which gives the uncertainties for DRF and its components lower than 0.15%. Cyclic boundary conditions for the RT simulations are imposed to emulate an infinite maize canopy.
Calculated DRF and its components for the three stages are presented in Figure 13. As the maize grows (from stage 1 to stage 3), the crops show lower reflectance for single scattering (dashed) and higher reflectance for multiple scattering (dash-dotted). This is due to the increase in LAI across the stages, whose values nonetheless remain typically low (<1) for all three.
For the two characteristic wavelengths of vegetative canopies, 670 nm (red region) and 850 nm (NIR region), we found that the structures with fewer triangles overestimate the total DRF in the red region (Figure 14) and underestimate it in the NIR region (Figure 15). Figure 14 shows that all the decimated structures give similar inverse-bowl-shaped curves with a peak at VZA = 40° due to the hot spot effect. Similarly, in Figure 15, all the structures preserve the main features of the DRF (bowl shape and hot spot effect). However, the DRF values for the decimated structures are higher (red region) or lower (NIR region) than the DRF values for the reference structure (the curves in red lines). To quantify this difference, we calculated the relative difference for three wavelengths in the red region (650, 670, 700 nm) and three wavelengths in the NIR region (750, 800, 850 nm) for the three stages (Figure 16, Figure 17 and Figure 18).
The comparisons among these results (Figure 16, Figure 17 and Figure 18) confirm that structures with fewer triangles overestimate the total DRFs in the red region while underestimating them in the NIR region. However, this sensitivity is higher in the red region for all stages. The relative difference in the red region reached a maximum of 18% and 36% for the first and last stages, respectively. In contrast, the relative difference in the NIR region is around 10% for all stages and all decimation rates. To study this difference, we used the WPS model to calculate the single and multiple scattering contributions to the total DRF for the structures of the third growing stage (Figure 19). We found that in the red region, multiple scattering accounts for less than 5% of the total DRF, while in the NIR region it has a stronger effect and accounts for around 40%. Since the multiple scattering contribution to the total DRF is higher in the NIR region, we investigated the relative differences of multiple scattering and single scattering at the wavelength 850 nm.
Figure 20 shows that, overall, in the NIR region multiple scattering is underestimated by a relatively larger margin than single scattering is overestimated. As a result, the errors in the single and multiple scattering contributions partially cancel each other out and the total DRF is underestimated. For this reason, the sensitivity in the NIR region is lower than that in the red region. In the red region, the main contribution comes from single scattering, which is strongly overestimated (Figure 21).
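The partial cancellation can be made concrete with a small worked example. The numbers below are purely illustrative (they are not the paper's measured components); they only reproduce the sign pattern of Figure 20: a small positive single-scattering error and a larger negative multiple-scattering error yield a mildly negative total error.

```python
# Hypothetical NIR DRF components for a reference structure and a
# heavily decimated one (illustrative values, not measurements).
ref_single, ref_multiple = 0.25, 0.17   # reference structure
dec_single, dec_multiple = 0.26, 0.15   # decimated structure

# Component-wise and total signed relative errors (%).
err_single = 100 * (dec_single - ref_single) / ref_single          # +4.0%
err_multiple = 100 * (dec_multiple - ref_multiple) / ref_multiple  # about -11.8%
err_total = (100 * ((dec_single + dec_multiple) - (ref_single + ref_multiple))
             / (ref_single + ref_multiple))                        # about -2.4%
```

The total error is smaller in magnitude than the multiple-scattering error, matching the lower NIR sensitivity described above.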
The underestimation of multiple scattering in the NIR region is likely due to the underestimation of LAI, because the intensity of multiple scattering is highly correlated with the total area of the scattering elements. Figure 13 shows a positive correlation between LAI and the multiple scattering contribution in the NIR region: the multiple scattering contribution for the last stage of growth (high LAI) is higher than that for the first stage (low LAI). As such, an underestimation of LAI from 0.84 for the reference structure to 0.64 for T3-4 (0.01% decimation rate; Table 4) at the last stage of growth induces an underestimation of around 10–14% in the multiple scattering component (Figure 20b).
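The link between decimation and LAI underestimation follows directly when LAI is taken as the total one-sided area of the leaf triangles divided by the ground area: removing the triangles that carry leaf undulations removes surface area. A minimal sketch with a toy mesh of our own (the paper's reconstructed meshes are not reproduced here):

```python
import numpy as np

def lai_from_mesh(vertices, faces, ground_area):
    """LAI as total one-sided triangle area divided by ground area.
    vertices: (N, 3) float array; faces: (M, 3) integer index array.
    Decimating away triangles that follow the leaf undulations reduces
    the summed area, hence the LAI underestimation in Table 4."""
    v0 = vertices[faces[:, 0]]
    v1 = vertices[faces[:, 1]]
    v2 = vertices[faces[:, 2]]
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    return areas.sum() / ground_area

# Toy mesh: a 1 m x 1 m horizontal "leaf" (two triangles) over a
# 2 m^2 ground plot gives LAI = 0.5.
verts = np.array([[0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
faces = np.array([[0, 1, 2], [0, 2, 3]])
lai = lai_from_mesh(verts, faces, ground_area=2.0)
```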
These findings indicate that the sensitivity of the DRF is correlated with the number of triangles describing the architecture of the canopy, the wavelength, and the stage of growth. The results show that for an accuracy of 95%, the structures T1-3 (with an average file size of around 44.18 kilobytes (KB) per maize plant), T2-3 (47.81 KB), and T3-3 (55.37 KB), which correspond to a 0.05% decimation rate, are a good choice for the computation of the DRF. For better accuracy, T1-2 (around 96.43 KB per maize plant), T2-2 (106.25 KB), and T3-2 (141.62 KB), with a decimation rate of 0.2%, give better results. A decimation rate of 0.2% for the three stages corresponds to an average of 100–500 triangles per leaf. Therefore, for the computation of DRF, a total of 100–500 triangles per leaf (on average) can provide both high accuracy of the RT parameters and visual fidelity to the real vegetation architecture.
It should be noted that these results were obtained for a maize canopy and need to be confirmed over other types of vegetation. Moreover, the LAIs of the three stages considered in this study are lower than one, and cases with higher LAIs should also be examined. Nevertheless, within the scope of using LiDAR data to parameterize RT models and reconstruct vegetation structures, the current study provides a baseline and insight into the use of decimated structures for the computation of vegetation structural and RT parameters.

5. Conclusions

The description of canopy architecture can influence light absorption and scattering; therefore, a reasonable representation of canopy structure is important for characterizing canopy structural and RT parameters. In this paper, we investigated the sensitivity of the computation of canopy structural and RT parameters to the level of detail in the 3D architecture description of maize canopies for three distinct growing stages. For this purpose, we used geometric modeling to transform the point clouds from terrestrial LiDAR data into geometric primitives (triangles). This process generally results in structures with a number of triangles too high for further remote sensing applications. Thus, a proper reduction of the number of triangles of the reconstructed structures is essential to lower the execution time and resources needed to drive 3D RT models and growth models of vegetation.
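Once the canopy is expressed as triangle primitives, leaf zenith and azimuth angles follow from each triangle's normal vector (the approach of Figure 3). The sketch below shows one way to compute them; the angular conventions (zenith measured from the vertical, azimuth from x toward y, zenith allowed to exceed 90° for downward-curling leaf tips as in Figure 5) are our assumptions, not a reproduction of the paper's code.

```python
import numpy as np

def triangle_leaf_angles(v0, v1, v2):
    """Leaf zenith and azimuth angles (degrees) from one triangle's
    normal vector. Aggregating these over all triangles of a structure
    yields the LAD histograms of Figure 4."""
    n = np.cross(np.asarray(v1, dtype=float) - v0,
                 np.asarray(v2, dtype=float) - v0)
    n = n / np.linalg.norm(n)
    zenith = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
    azimuth = np.degrees(np.arctan2(n[1], n[0])) % 360.0
    return zenith, azimuth

# A horizontal facet has a vertical normal (zenith 0); raising one
# vertex by 1 unit tilts the facet to a 45-degree zenith angle.
z_flat, _ = triangle_leaf_angles([0, 0, 0], [1, 0, 0], [0, 1, 0])
z_tilt, _ = triangle_leaf_angles([0, 0, 0], [1, 0, 0], [0, 1, 1])
```

Because each triangle contributes one (zenith, azimuth) sample, the LAD histogram is comparatively insensitive to decimation, consistent with the low LAD sensitivity reported above.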
Structures of maize plants were generated by wrapping the point clouds from multi-view terrestrial LiDAR measurements into triangle meshes and simplifying them into different configurations according to the Hausdorff distance and the decimation rate. From the results, we conclude that the proper number of triangles for the description of the canopy's architecture varies with the targeted accuracy (95% or 99%) and the parameter of interest (LAD, LAI, gap fraction, or DRF). We demonstrated that the level of detail of the reconstructed maize structures can be critical for the computation of some parameters, such as DRF, LAI, and gap fraction, which require a high number of triangles to represent the undulations and curvy parts of the leaves. It is less critical for the computation of LAD: leaf zenith and azimuth angles can be computed with far fewer triangles. In summary, the simplification of the structures generated from the point cloud should be controlled by the Hausdorff distance, with the assistance of visual inspection.
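The Hausdorff distance used to control the simplification measures the worst-case deviation between the reference and the decimated surfaces. A minimal sketch using point sets sampled from the two meshes (the paper's exact Metro-style surface sampling is not reproduced; SciPy's `directed_hausdorff` is used here for illustration):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def symmetric_hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets, e.g. points
    sampled on the reference mesh and on a decimated mesh. The directed
    distance is not symmetric, so both directions are taken."""
    return max(directed_hausdorff(points_a, points_b)[0],
               directed_hausdorff(points_b, points_a)[0])

# Toy check: displacing one point by 3 units yields a distance of 3.
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.0, 0.0, 0.0], [1.0, 3.0, 0.0]])
d = symmetric_hausdorff(a, b)
```

In a decimation pipeline, a structure would be accepted only if this distance stays below the threshold chosen for the target configuration (e.g., around 5 for the 0.2% rate in Table 2).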
A decimation rate of 0.05% and a Hausdorff distance of around 7, which correspond to 50–100 triangles per leaf, give an overall relative error of 5% for all the parameters mentioned above. With 100–500 triangles per leaf, which corresponds to a 0.2% decimation rate and a Hausdorff distance of around 5, the accuracy of the computation of the various parameters reaches 98%. Less detailed structures (0.01% decimation rate and a Hausdorff distance of 13) succeeded in computing certain parameters such as LAD but failed to give good accuracy for LAI, gap fraction, and DRF, which emphasizes the difference in sensitivity among the various parameters.
More generally, these primary findings provide a meaningful reference and prior information for using 3D LiDAR-based structures in more in-depth studies of vegetation canopies, particularly given the increasing availability and affordability of LiDAR data for extracting phenotypic traits of vegetation.

Author Contributions

Original draft preparation and formal analysis, B.A. and F.Z.; review and editing, B.A., J.L. and F.Z.; methodology, B.A. and F.Z.; software, B.A., Z.L., Q.Z., J.G., L.W., P.T., Y.J., W.S. and Y.B. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Key R&D Program of China (2019YFE0127300), the Chinese Natural Science Foundation (Grant Nos. 41771382, 41401410, 41611530544), and full-time introduced top talents scientific research projects in Hebei Province (2020HBQZYC002). We thank our group members for their help in the field work.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. (a) Study area and the registration points, (b) the point clouds of the 16 maize plants.
Figure 2. Representation of one plant in different decimation configurations (last stage of growth).
Figure 3. Computation of LAD (leaf angle distribution) based on the normal vector of each triangle.
Figure 4. Frequencies of leaf zenith (a) and azimuth (b) angles for the reference structures. (T1: first stage, T2: second stage, and T3: third stage; this annotation will be used henceforth).
Figure 5. Parts of the leaves with zenith angles higher than 90° for the third stage. (a) Real photograph of the plant, (b) structure reconstructed from the point cloud.
Figure 6. Delta frequencies for leaf zenith angles (first row) and azimuth angles (second row) of the various structures, (a) first stage, (b) second stage, and (c) last stage. For each stage, the last index refers to the decimation rate (1: 50%, 2: 0.2%, 3: 0.05%, and 4: 0.01%); this annotation will be used henceforth.
Figure 7. Relative difference of LAI for the various configurations. The dashed lines show the relative differences of −1% and −5%.
Figure 8. Gap fraction polar contour diagrams for the first stage (16 September 2019).
Figure 9. Gap fraction polar contour diagrams for the second stage (25 September 2019).
Figure 10. Gap fraction polar contour diagrams for the third stage (25 October 2019).
Figure 11. Relative difference of gap fraction (north direction), (a) first stage, (b) second stage, and (c) last stage. The dashed lines in red show the relative differences of 1% and 5%.
Figure 12. Relative difference of gap fraction (east direction), (a) first stage, (b) second stage, and (c) last stage. The dashed lines in red show the relative differences of 1% and 5%.
Figure 13. Reflectance of the three stages in the spectral range 400–1000 nm (dashed line: single scattering; dash-dot line: multiple scattering).
Figure 14. Reflectance at the wavelength 670 nm for different zenith angles, (a) first stage, (b) second stage, and (c) last stage.
Figure 15. Reflectance at the wavelength 850 nm for different zenith angles, (a) first stage, (b) second stage, and (c) last stage.
Figure 16. Relative difference for the first stage, (a) 650 nm, (b) 670 nm, (c) 700 nm, (d) 750 nm, (e) 800 nm, and (f) 850 nm. The dashed lines in red represent the relative differences of 1% and 5%.
Figure 17. Relative difference for the second stage, (a) 650 nm, (b) 670 nm, (c) 700 nm, (d) 750 nm, (e) 800 nm, and (f) 850 nm. The dashed lines in red represent the relative differences of 1% and 5%.
Figure 18. Relative difference for the last stage, (a) 650 nm, (b) 670 nm, (c) 700 nm, (d) 750 nm, (e) 800 nm, and (f) 850 nm. The dashed lines in red represent the relative differences of 1% and 5%.
Figure 19. Contribution of single and multiple scattering to the DRF. (a) 670 nm, (b) 850 nm (third stage of growth).
Figure 20. Relative difference in the NIR region (850 nm) for the third stage of growth. (a) Single scattering, (b) multiple scattering.
Figure 21. Relative difference of the single scattering component in the red region (670 nm) for the third stage of growth.
Table 1. The parameters of the FARO Focus3D X330 LiDAR (Light Detection and Ranging) surveying instrument.

Parameter                          Range of Values
Scanning distance (m)              0.6 to 330
Scanning speed (points/s)          122,000 to 976,000
Ranging error (mm)                 ±2
Resolution (pixels)                7 × 10⁷
Vertical field of view (°)         300
Horizontal field of view (°)       360
Laser class                        Class 1
Wavelength (nm)                    1550
Global Positioning System (GPS)    Integrated GPS receiver
Table 2. The various scenarios chosen in the study.

Scenario          Decimation Rate    Hausdorff Distance
                                     Stage 1    Stage 2    Stage 3
T (stage N#)-1    50%                0.29       0.59       0.34
T (stage N#)-2    0.2%               2.01       4.10       2.44
T (stage N#)-3    0.05%              4.70       6.97       5.85
T (stage N#)-4    0.01%              13.13      13.71      13.95
Table 3. Average and standard deviation for leaf zenith and azimuth angles for the structures of all stages.

Structure    Mean θ    SD θ      Mean φ     SD φ
T1           55.775    27.163    189.291    106.310
T1-1         55.749    27.194    189.263    106.421
T1-2         55.352    27.550    188.929    106.422
T1-3         53.825    26.951    190.392    107.134
T1-4         52.772    22.871    195.698    107.753
T2           61.520    25.317    177.123    109.123
T2-1         61.428    25.395    177.404    109.278
T2-2         61.194    25.938    178.038    109.616
T2-3         60.345    25.549    178.684    110.242
T2-4         59.898    22.650    182.191    112.445
T3           67.475    24.496    182.264    106.617
T3-1         67.616    24.572    183.455    106.628
T3-2         67.503    25.208    183.687    106.983
T3-3         67.128    24.964    183.036    107.974
T3-4         66.524    21.386    183.758    108.815
Table 4. LAI (leaf area index) and relative difference of the various configurations for all growth stages.

Decimation Rate    Stage 1 (Relative Difference)    Stage 2 (Relative Difference)    Stage 3 (Relative Difference)
0.01%              0.2589 (−29.84%)                 0.4216 (−26.98%)                 0.6386 (−23.97%)
0.05%              0.3499 (−5.17%)                  0.5521 (−4.39%)                  0.8097 (−3.61%)
0.2%               0.3673 (−0.45%)                  0.5750 (−0.43%)                  0.8407 (0.08%)
50%                0.3692 (0.05%)                   0.5772 (0.04%)                   0.8401 (0.01%)
100%               0.3690                           0.5774                           0.8399
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Ali, B.; Zhao, F.; Li, Z.; Zhao, Q.; Gong, J.; Wang, L.; Tong, P.; Jiang, Y.; Su, W.; Bao, Y.; et al. Sensitivity Analysis of Canopy Structural and Radiative Transfer Parameters to Reconstructed Maize Structures Based on Terrestrial LiDAR Data. Remote Sens. 2021, 13, 3751. https://doi.org/10.3390/rs13183751