Article

Simulations of Leaf BSDF Effects on Lidar Waveforms

1 Rochester Institute of Technology, Rochester, NY 14623, USA
2 Battelle, NEON Program, Boulder, CO 80301, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(18), 2909; https://doi.org/10.3390/rs12182909
Submission received: 17 July 2020 / Revised: 30 August 2020 / Accepted: 3 September 2020 / Published: 8 September 2020
(This article belongs to the Special Issue Lidar Remote Sensing of Forest Structure, Biomass and Dynamics)

Abstract

Establishing linkages between light detection and ranging (lidar) data, produced from interrogating forest canopies, and the highly complex forest structures, composition, and traits that such forests contain remains an extremely difficult problem. Radiative transfer models have been developed to help solve this problem and to test new sensor platforms in a virtual environment. Many forest canopy studies include the major assumption of isotropic (Lambertian) reflecting and transmitting leaves, or of non-transmitting leaves. Here, we study when these assumptions may be valid and evaluate their impact on the lidar waveform, as well as its dependence on wavelength, lidar footprint, view angle, and leaf angle distribution (LAD), by using the Digital Imaging and Remote Sensing Image Generation (DIRSIG) remote sensing radiative transfer simulation model. The largest effects of Lambertian assumptions on the waveform are observed at visible wavelengths, small footprints, and oblique interrogation angles relative to the mean leaf angle. For example, a 77% increase in return signal was observed with a configuration of a 550 nm wavelength, 10 cm footprint, and 45° interrogation angle to planophile leaves. These effects are attributed to (i) the bidirectional scattering distribution function (BSDF) becoming almost purely specular in the visible, (ii) small footprints having fewer leaf angles to integrate over, and (iii) oblique angles causing diminished backscatter due to forward scattering. Non-transmitting leaf assumptions produce the greatest error for large footprints at near-infrared (NIR) wavelengths. Regardless of leaf angle distribution, all simulations with non-transmitting leaves at a 5 m footprint and 1064 nm wavelength showed around a 15% reduction in return signal. We attribute the signal reduction to the increased multiscatter contribution for larger fields of view and increased transmission at NIR wavelengths. Armed with the knowledge from this study, researchers will be able to select appropriate sensor configurations to account for, or limit, BSDF effects in forest lidar data.

Graphical Abstract

1. Introduction

1.1. Preface

As ecosystems worldwide become increasingly vulnerable due to threats such as global warming, increased human land use, pollution, invasive species, disease, parasitic insects, storm damage, and fire, monitoring and managing the Earth’s forests has arguably never been more important. Forest management requires assessing growth conditions and taking proper action to ensure sustainable use of this natural resource. Accurately measuring forest health is a vital need, as resources are continually extracted, populations expand land use, and the impacts of climate change grow. Many current assessment methods consist of ground campaigns to measure individual trees, which are time-consuming, costly, and often a poor representation of the entire forest that needs to be characterized [1]. Emerging technologies in remote sensing and subsequent data processing have begun to fill this gap. Remote sensing is generally defined as detection, recognition, and evaluation at a distance [2]. Airborne and spaceborne remote sensing, in particular, have emerged as important modalities for measuring the condition of forests. These technologies contribute to assessments at multiple spatial scales, with the ability to measure variations over time. However, a focus on improving remote sensing technology must also include turning collected data into interpretive products for decision-making. Franklin [2] outlines some of the important capabilities that remote sensing brings to the field of forest management. These include differentiating forest cover types and species; identifying locations of previous treatments such as thinning, plantings, or cutovers; locating areas of insect damage, windthrow, floods, and fire damage; and creating maps with metrics such as stand density, leaf area index (LAI), or biomass. As mentioned above, what is of particular use to forest scientists and practitioners is that remote sensing assessments can be performed at local, regional, and even continental scales.
Airborne and satellite platforms provide the capability to make environmental assessments at these required scales. Relevant sensing systems and associated data processing algorithms are continually evolving to provide accurate and reliable information. One of the most important tools for advancing remote sensing systems is the radiative transfer model (RTM), which is able to simulate sensor, environmental, and scene characteristics. Numerous RTMs have been used in the development of sensors and data processing algorithms [3,4,5,6,7,8,9,10,11,12]. Radiative transfer models, specifically in their use as a “virtual laboratory”, have been applied to improve a plethora of remote sensing applications, ranging from geophysics and urban development to weather forecasting [9]. Traditional applications include simulating canopy reflectance under different sensor and illumination conditions [13], and using these models for plant growth predictions via examination of canopy development as a function of incident radiation [14]. RTMs also have been leveraged extensively for the development of sensors, e.g., for simulating the sensor response function of the Landsat TM sensor [15]. The use of these models extends past system simulations, however: biophysical characteristics are often assessed via vegetation indices in RTMs, as well as the impact that environmental conditions, such as topographical changes, can have on such indices [16]. Furthermore, directional scattering has been explored through inversion of forest scene characteristics, such as leaf density, angle distributions, and size [17]. More recent uses include synthetic image generation for the improved understanding and identification of constituents within chemical plumes via remote sensing technologies [18]. Another recent application is the creation of a lidar waveform simulator for the prediction of signals from NASA’s Global Ecosystem Dynamics Investigation (GEDI), which contributed to data evaluation and algorithm development prior to launching the satellite [19].
However, simplifying assumptions are often used in simulations to enhance computational efficiency or because exact simulation properties, e.g., sensor, environmental, or scene-related, are not fully known. We contend that evaluating current assumptions, specifically radiometric ones, can improve radiative transfer models and, in turn, lead to the development of better sensors and data processing algorithms.

1.2. Errors Due to Leaf Scattering Assumptions in Optical Remote Sensing Simulations

One application of radiative transfer models has been to study the link between remote sensing data and biophysical elements of forest canopies [4,20,21,22,23,24]. A major assumption in many of these models is perfectly diffuse scattering leaves. This assumption was evaluated by Yang et al. [25] for medium resolution hyperspectral imagery. The authors evaluated the contribution of leaf specular reflection to the total canopy bidirectional reflectance factor (BRF). Imagery from the EO-1 Hyperion sensor and field measurements were used to identify parameters for the Stochastic Radiative Transfer Model (SRTM) [22,26]. Results showed an approximately 33% normalized root mean squared error (NRMSE) at 550 nm and 660 nm between a canopy with and without leaf specular components. Polarization measurements are one possible solution to this challenge, since specular components are largely linearly polarized. Xie et al. [27] performed polarized BRF measurements on corn leaves, along with field measurements and photographic methods, to construct a nearly identical three-dimensional maize canopy. The radiosity–graphics combined model (RGM) [24] was used to perform sensitivity studies toward quantifying the difference between Lambertian leaf assumptions and leaf specular attributes. The study evaluated the dependence of the specular portion of the maize canopy BRF on different leaf angle distributions (LAD), leaf area index (LAI) values, leaf surface properties, and solar angles. LAD and specular component influences on canopy BRF previously had been investigated through Monte Carlo techniques, but those efforts lacked computational power [28]. The authors found that near-horizontal leaves, large solar zenith angles, and wavelengths in the visible spectral domain resulted in the largest contribution of specular reflectance [27]. Walter-Shea [29], in turn, measured corn and soybean leaf optical scattering properties, fit the directional scattering properties with exponential curves, and then determined the contribution of the non-Lambertian component to the passive canopy reflected radiances using the 1D radiative transfer model Cupid. The modeled results were compared to field measurements and showed that the inclusion of non-Lambertian leaves improved the model prediction, with up to a 7% difference. The author also assessed the contribution of leaf orientation, reporting up to a 20% effect at a nadir view, with horizontal leaves having the highest reflectance. The errors identified in these studies demonstrate the compounding inaccuracy of biophysically derived values from remote sensing data when leaf specular contributions are ignored. Although most vegetation canopy BRDF studies have concentrated on multispectral or hyperspectral remote sensing, lidar data are also not immune to specular influences.

1.3. Normalizing Lidar Intensity Data

Lidar has become popular in the remote sensing community as it adds the third spatial dimension to data. A few examples of airborne ground scanning systems operated by the US government include the NEON AOP (NSF) [30], EAARL (USGS) [31], LVIS (NASA) [32], G-LiHT (NASA) [33], and SLICER (NASA) [34], with several commercial companies now manufacturing sensors, including Leica, Optech, Riegl, and TopEye [35]. Lidar systems have even been placed in space with NASA’s ICESat [36], ICESat-2 [37], and GEDI [38] missions. There are a variety of different lidar systems, often characterized by their laser wavelength, footprint size, pulse width, digitization method (discrete or waveform), and platform (terrestrial, airborne, and spaceborne) [39]. However, intensity data from lidar are often not used, as they depend on a number of environmental and system conditions in addition to target material properties, e.g., adaptive gain settings [40]. Linking the intensity to material properties through normalization techniques therefore has been a focus of much research. For example, over large areas, landscape elevations can vary, thereby requiring a range normalization of lidar intensity data [41]. Besides range effects, incidence angle also presents problems when attempting to use lidar data for classification or segmentation. The effect of the lidar incidence angle on targets’ BRDF-driven returns has previously been investigated for extended targets that are larger than the beam footprint. These studies, some of which are discussed below, typically take advantage of waveform lidar’s ability to capture the entire return energy from a target. However, many linear mode systems only record an arbitrary amplitude, which does not represent the true energy for different target geometries [42].
Jutzi and Gross [43] normalized first return intensity values from gabled roofs by considering range, incidence angle, and atmospheric attenuation. The normalization was accomplished by taking the energy contained in Gaussian returns from a full-waveform system and then normalizing by each of these factors. The incidence angle was found from the surface normal and the known lidar scan direction (angle). If there are many points on a flat surface, the surface normal vector can be found as the eigenvector of the covariance matrix of the three-dimensional points that corresponds to the smallest eigenvalue [44]. Both a Lambertian model and a Phong specular–diffuse model were tested, but due to the diffuse nature of the targets, the Lambertian correction was sufficient. Incidence angle normalization was also performed by Zhu et al. [45] in the interest of finding a correlation between leaf water content and intensity. These experiments were done at close range in a laboratory environment, resulting in extremely high point densities on plant leaves. Planar surfaces were extracted to determine incidence angles. A diffuse–specular model, based on the Beckmann law [46], was created by interpolating reference Spectralon panels at different angles and reflectances. The removal of the specular component increased the correlation with leaf water content from an R² of 0.01 to 0.76. It thus follows that a similar normalization for airborne lidar systems (ALS), used for characterizing forest canopies, could prove invaluable for extracting data needed for forest management. However, the complexity of separating individual scattering areas, orientation, and reflectivity within an ALS footprint has yet to be solved. Instead, regression models against reference (ground) metrics are often used to extract canopy characteristics, such as leaf area index (LAI) and leaf area density (LADen).
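As a concrete illustration of the corrections described above, the following sketch estimates a local surface normal from neighboring points via the covariance eigenvector and applies a simple Lambertian range and incidence-angle correction. The function names, the reference range, and the overall bookkeeping are our own and are not taken from the cited studies.

```python
import numpy as np

def estimate_normal(points):
    """Surface normal of a locally planar patch of 3D points (N x 3 array):
    the eigenvector of the covariance matrix with the smallest eigenvalue."""
    centered = points - points.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
    return eigvecs[:, 0]            # eigenvalues are returned in ascending order

def lambertian_normalization(intensity, beam_dir, normal, rng, ref_range=1000.0):
    """First-order Lambertian correction: undo the 1/range^2 fall-off and the
    cosine of the incidence angle (illustrative; atmospheric terms omitted)."""
    cos_inc = abs(np.dot(beam_dir, normal)) / (
        np.linalg.norm(beam_dir) * np.linalg.norm(normal))
    return intensity * (rng / ref_range) ** 2 / cos_inc
```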

1.4. Extracting LAI and LADen from Lidar Data

Lidar has been used to extract biophysical properties from forest canopies, including vegetation height [47], biomass [48], land cover classification [49], tree classification [50], and tree segmentation [51], to name a few. The leaf area index (LAI, one-sided leaf area per unit ground area) and leaf area density (LADen, one-sided leaf area per unit volume) [52] are forest metrics that can be estimated by lidar, but are susceptible to error due to leaf optical and physical properties. LAI and LADen are important parameters for ecological management and are often used as inputs to system models for understanding carbon sequestration and allocation processes [53,54]. LADen is especially important in understanding the vertical structure within forest canopies, holding the potential for more accurate above ground biomass estimates, as well as information on the impact of large-scale disturbances [55]. In fact, LADen can be mapped over the landscape in order to better understand the effects of forest disturbances (such as pathogens, invasive insects, fire, drought, and windthrow) and thus inform conservation decisions [56]. This may even be possible at a global scale, given the recent deployment of NASA’s Global Ecosystem Dynamics Investigation (GEDI) system [38]. This bodes well for future forest assessments, since publicly available lidar data have greatly increased in the last few years, especially ALS data from which regional forest metrics can be derived.
Traditional methods of determining LADen involve lowering a plumb line and recording contact with vegetation, or collecting photographs pointing upward in the canopy at different heights [57]. These methods are labor intensive, time-consuming, and can only be accomplished at fine scales. Remote sensing, specifically lidar, is able to overcome many of these problems by acting as a “plumb line” penetrating into the canopy, thereby producing information from which the three-dimensional internal structure can be interrogated [58]. Although lidar data hold great potential for standardizing and mapping LAI and LADen metrics, calibration of the data is still required with coincident ground reference data, often estimated from hemispherical photographs [59] or an LAI instrument such as a Li-Cor [60]. These instruments approximate LAI from gap fractions, estimated by looking up into the canopy, and actually return a plant area index (PAI), which is often substituted for LAI. Furthermore, many considerations must be made when using these instruments, such as having a uniform sky, so that canopy gaps exhibit similar pixel values, while one also has to adjust for the lidar scan angle [61].
A repeatable and accurate method for estimating LADen from lidar point clouds was presented by Kamoske et al., who compared LADen derived from NEON AOP [30] and NASA G-LiHT [33] data, which differ in sensor specifications and canopy penetration [54]. They found a dependence on voxel resolution, with higher correlations at coarser spatial scales and an R² value of 0.9 at a 10 m horizontal spatial resolution. Such a result is possible because the lidar sensor and operating specifications are directly linked to penetration into the canopy. Larger scan angle diversity allows the lidar pulses to find gaps under the top canopy layer, and the greater the pulse repetition frequency (PRF), the more pulses will reach the forest floor. The most significant factor, however, is the beam spread combined with operating altitude, both of which directly affect the signal-to-noise ratio [62]. The NASA G-LiHT lidar system provides larger scan angles, higher PRF, and smaller beam spread than the NEON AOP. Though NASA G-LiHT data seem superior for understanding internal forest structure, only point cloud data are available, while the NEON AOP provides small-footprint waveform data. Waveform data record the intensity of each return pulse with nanosecond resolution, thereby allowing for a more comprehensive look into the forest structure and complexities in finer temporal and spatial detail [63]. Waveform lidar (wlidar) is a relatively new technology whose capability has yet to be fully exploited; the data instead are usually downsampled to a discrete point cloud. We agree that a method for estimating LADen from the entire set of lidar waveform intensities may show higher accuracy than estimates from point clouds alone [64].

1.5. Simulating Lidar Signals

Despite the increased use of lidar to extract meaningful structural assessments from forest canopies, there remains a gap in our ability to understand vegetation–light interactions at fine scales. To this end, radiative transfer models have been created, which can help improve our understanding of signal interactions in the canopy, aid in the discovery of ideal sensor configurations, and contribute to the development of improved algorithms for extracting forest biophysical parameters. Simulations allow for complete knowledge of scene geometry, as well as system parameters. Different modeling techniques include semiempirical, geometric, and Monte Carlo ray tracing (MCRT). Semiempirical and geometric models have broad assumptions and simplifications that do not lend themselves to small-scale interactions within the canopy. Semiempirical models consist of either Gaussian or lognormal signal profiles and produce lidar returns through a convolution between the lidar pulse and the object distribution [65,66] (a minimal sketch of this approach is given below). MCRT models are more accurate, but require long rendering times. Some of the more prominent models include RAYTRAN [9], an MCRT model; FLiES [67], a 3D canopy MCRT with a coupled atmosphere model; RGM [68], a graphics-based scattering model; GORT [69], a hybrid geometric optic and radiative transfer model that only considers first-order returns; DIRSIG [70], a photon mapping ray tracer model; POVRAY [71], a ray tracer model; LITE [72], a forward ray tracer with voxelized scattering probabilities; Librat [8], an MCRT designed to be flexible and modular; FLIGHT [73], an MCRT using voxelized properties and scene facets; and DART [4], a quasi-MCRT model.
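To make the semiempirical approach concrete, the short sketch below convolves a Gaussian transmit pulse with an assumed vertical profile of intercepted surface area; all values and variable names are illustrative and are not drawn from the models cited above.

```python
import numpy as np

dz = 0.15                                             # range bin size (m)
z = np.arange(0.0, 40.0, dz)                          # height above ground (m)
canopy = np.exp(-0.5 * ((z - 25.0) / 3.0) ** 2)       # assumed intercepted area per bin
ground = np.zeros_like(z)
ground[0] = 2.0                                       # impulse-like ground return
target_profile = canopy + ground

sigma = 0.45 / 2.355                                  # ~3 ns pulse -> ~0.45 m in range (assumed FWHM)
t = np.arange(-4.0 * sigma, 4.0 * sigma + dz, dz)
pulse = np.exp(-0.5 * (t / sigma) ** 2)               # Gaussian transmit pulse

waveform = np.convolve(target_profile, pulse, mode="same")  # semiempirical return
```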
The effects of environmental conditions, sensor configurations, and modeling methods on wlidar signals have previously been investigated with radiative transfer simulations. The following are a few examples of how simulations have increased our knowledge of lidar signals in forest canopies. Kotchenova et al. [74] demonstrated, through stochastic radiative transfer theory, the effects of multiscatter within the canopy on large-footprint wlidar signals. When multiscatter was applied, an increase in signal amplitude was observed, especially lower in the canopy. Calders et al. [75] found that crown archetypes and subsequent clumping are important factors in reproducing lidar signals through simulation. Disney et al. [8], on the other hand, used the Librat model to show the effect of signal triggering, scan angle, and footprint size on discrete lidar retrieval of canopy heights. Other authors, such as Qin et al. [76], evaluated the effect of scanning angle, flying altitude, and pulse density on canopy profile retrieval by modeling the scene and extracting waveforms with DART, while producing canopy profiles with GORT. Morsdorf et al. [77] used the POVRAY model to investigate the effect of footprint size on ground returns, reproducing the effect of increased ground returns with larger footprint sizes. Finally, Gastellu-Etchegorry et al. [39] compared DART large-footprint wlidar signals to actual waveforms from the Laser Vegetation Imaging Sensor (LVIS) and concluded that the inclusion of multiscatter in the model resulted in improved accuracy.
The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, developed by the Rochester Institute of Technology, is especially suited for lidar phenomenology investigations as it seeks to simulate high-fidelity signals by incorporating all aspects of the imaging chain. As a synthetic imagery generation model, it is based on quantitative first principles, records sensor-reaching radiance, captures material spectral characteristics, and accounts for directional reflectance properties. When simulating lidar signals, the model accounts for the geometrical form factor, multiscatter from physical surfaces, speckle properties from rough surfaces, atmospheric conditions, and laser scintillation and other beam effects, and allows for the inclusion of advanced sensor models [70]. DIRSIG supports a number of different lidar systems, including discrete, full waveform, and Geiger mode, with many past studies maturing and validating this capability [64,70,78,79,80,81]. Previous wlidar studies with DIRSIG include an investigation by Wu et al. [82], who created simulated waveforms from which deconvolution techniques were evaluated. The authors also were able to extract branching structure and stem location estimates by preprocessing the waveforms through denoising, deconvolution, ground registration, and angular rectification. Romanczyk et al. [80] generated DIRSIG-simulated waveforms to study the effect of tree geometry components on the wlidar signal. They evaluated the level of scene-model complexity required to maximize simulation efficiency without sacrificing accuracy. Leaves were found to be the dominant structure, while leaf stems, trunks, boughs, and first-order branching showed no statistical significance. Although the effect on wlidar signals from many types of environmental conditions has been examined, the impact of directional light scattering from leaves, described by the bidirectional scattering distribution function (BSDF), has been given little attention.

1.6. Objectives

Although BSDF has been shown to impact remote sensing imagery, as well as lidar intensity returns from targets filling the field of view (FOV) [42,43,44,45], little is known regarding how much this effect is present in airborne lidar systems (ALS) when interrogating forest canopies. There are also very few studies on transmission contributions to the return signal [72], and none known to the authors on specular transmission contributions. This is likely because major limitations for forest canopy studies are the size of the forests and the associated complications in obtaining reference data. Unlike the corn canopy of Xie et al. [27], around which a frame could be built to measure the canopy BRF, such reference data are unavailable for forest canopies and lidar sensing. However, by implementing laboratory-measured leaf BSDF data on facets in the DIRSIG model, we can perform sensitivity studies that reveal the impact of leaf specular components on lidar data. The objectives of this study therefore were to:
  • Quantify intensity contribution from transmission, i.e., using opaque vs. realistic leaf transmissions with a waveform lidar sensor;
  • Determine lidar waveform sensitivity for Lambertian vs. realistic BSDF leaves; and
  • Analyze BSDF effects on LAI, derived from waveform intensity data.
Each of the objectives was evaluated by examining sensitivity to wavelength, lidar footprint, view angle, and leaf angle distribution (LAD). We hypothesized that (i) transmission has significant contributions to large-footprint lidar waveforms, but less so at small footprints, as photons scatter outside the sensor FOV, and (ii) lidar sensitivity to BSDF effects is greatest at visible wavelengths, small wlidar footprints, and oblique interrogation angles relative to the mean leaf angle. These hypotheses were based on the observations that (i) the BSDF becomes largely specular in the visible, (ii) small footprints have fewer leaf angles to integrate over, and (iii) oblique angles cause diminished backscatter due to forward scattering.

2. Materials and Methods

2.1. Introduction to the Simulation Method

Simulations with the DIRSIG model were performed for various sensor configurations and vegetation properties. A sensitivity study was performed to isolate the error introduced when purely diffuse scattering leaves or non-transmitting leaves are assumed in waveform lidar simulations. Specifically, the effect on the waveform return signal, and the subsequent compounding error when calculating LAI, were assessed. The dependence on wavelength, lidar footprint, view angle, and leaf angle distribution (LAD) was explored. We used a 10 m thick uniform vegetation layer, raised 2 m off the ground, as the principal scene in these studies. The vegetation layer consisted of 10 cm leaf disks with a spatially uniform distribution. We then analyzed leaf BSDF effects with a scene based on tree models, in an effort to capture geometry that may be expected to occur naturally.

2.2. RAMI Comparison

Before completing the sensitivity study, we first performed a cursory investigation into the validity of lidar waveforms generated by DIRSIG. Ideally, simulations should be compared to lidar data from a real system. However, this would require exact knowledge of all physical structure, which is next to impossible when simulating forest environments. Past studies that have performed such comparisons required ad hoc normalizations, often approximating the scene with simple geometries [39]. Instead, we evaluated DIRSIG results in terms of models that participated in the radiation transfer model intercomparison (RAMI-IV) campaign [83]. As part of RAMI-IV, lidar radiative transfer models were compared using a couple of abstract canopy scenes, one of which, named HET17, is composed of overstory and understory vegetation representations over a uniform background. The scene was recreated for insertion into DIRSIG by modeling each specified facet. The overstory was created with spheres made up of uniformly distributed 10 cm diameter leaf disks, while the understory consisted of 1 m spheres made from uniformly distributed 1 cm diameter disks. Both the overstory and understory had an LAI of 5. The abstract vegetation leaf disks had a near-infrared (NIR) reflectance of 0.44 and a transmittance of 0.5, while the ground had a reflectance of 0.16. For the intercomparison, the simulation configuration requirements call for a 50 m diameter instantaneous uniform cylinder beam and a 50 m diameter FOV, produced from a 2 m diameter detector and a 24 mrad FOV. These exact configurations are not possible within DIRSIG, since DIRSIG defines a laser beam from a source point with a divergence angle, and the FOV is defined by the detector and focal length. We approximated the required configuration with the parameters in Table 1. A number of simulations were run to find when the lidar signal reached an asymptote, based on the number of scattering events and photon bundles per pulse, along with the number of bounces each photon bundle could experience. We found, for the scenarios we evaluated, that 4,000,000 events, 8,000,000 photon bundles, and a maximum of 10 bounces were sufficient to produce high accuracy. These numbers are in line with those recommended by the DIRSIG lidar modality handbook [84]. Because vegetation scenes contain hundreds of thousands of facets, we also had to add a manual search radius, from which density and vector directions are calculated when interrogating the photon map. We set this search radius to 1 cm, to ensure that it falls within the overstory leaf geometry, while also balancing run-time efficiency.
The simulation results are shown in Figure 1. In the multiscatter scenario, where light is able to transmit through leaves, more energy is preserved, thereby creating larger magnitudes. Also, notice a slight downward shift of the waveform in height, due to the delayed response that results from multiple scattering. In the RAMI-IV study, lidar waveforms were normalized so that the integrated profiles equaled one. Different interpretations of the required configurations and reporting quantities led to the need for a normalization under which waveforms could be compared [83]. We converted our data in each bin to photons/meter, which we then normalized so that the integral of the waveform equaled one. The normalization causes a reduction in the DIRSIG multiscatter waveform as compared to the single scatter curve, mainly due to the near-uniform increase in scale caused by the multiscatter return.
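The normalization we applied can be sketched as follows; the function and argument names are ours, and the waveform is assumed to be recorded as photon counts per range bin.

```python
import numpy as np

def normalize_waveform(counts_per_bin, bin_size_m):
    """Convert per-bin photon counts to photons/meter and scale the waveform
    so that its integral over range equals one (RAMI-IV style normalization)."""
    photons_per_m = np.asarray(counts_per_bin, dtype=float) / bin_size_m
    integral = photons_per_m.sum() * bin_size_m
    return photons_per_m / integral
```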
Overlaying the normalized DIRSIG waveforms onto the lidar plots from the RAMI-IV study, seen in Figure 2, shows that peak magnitudes in the canopy are similar, but that the DIRSIG returns appear to be lower in the canopy, with some noticeable structure. The structure seen in the DIRSIG waveform, as compared to the other models, is most likely caused by modeling the scene as distinct facets instead of a turbid medium, and possibly by the coarse 0.6 m bins used to record the canopy in 20 range bins per the RAMI-IV requirements. The downward shift may be a time-of-flight recording difference, which is most noticeable when looking at the ground returns. The shift likewise may be attributed to different ground definitions, and the subsequent alignment to the leading edge of the ground return in the original comparison, which we did not apply. In general, DIRSIG exhibits the expected phenomenology when adding multiscatter, and produces similar returns to those reported in the RAMI-IV study.

2.3. Multiscatter Contribution Comparison

Another validation was completed by simulating the multiscatter contribution for different beam footprints and leaf scattering albedos, which was then compared to a sensitivity study previously published using the DART model [39]. The study was accomplished by varying the lidar beam size (footprint radius) and the total albedo, defined as the sum of the reflectance and transmittance, with the reflectance and transmittance always being equal. The FOV was kept at twice the beam footprint by varying the focal length. We simulated the platform at 1000 m, with a nadir-viewing wlidar system. The parameters set in DIRSIG are seen in Table 2. The scene consisted of a 10 m tall vegetation layer, made up of 10 cm diameter leaf disks with an LAI of 4 and a spherical leaf angle distribution, all over a perfectly absorbing ground. Because we simulated actual leaf facets, we found it overly computationally burdensome to create the vegetation layer beyond 30 × 30 m. The limit in scene size precluded modeling beyond a 7.5 m beam footprint radius, which was coupled with a 30 m FOV. The multiscatter percent contribution was calculated by simulating both the single scatter and multiscatter waveforms, taking the sum of each waveform, finding the difference, and then dividing by the sum of the multiscatter waveform. The result is shown in Figure 3. Comparing the result to that found in the DART study shows that the DIRSIG multiscatter contribution is about 75% of what had been previously reported. We attributed the discrepancy to differences between the scenes and the simulation settings. The DART study used a turbid medium for the vegetation layer, while we modeled individual facets. Moreover, the limited size of our scene may have reduced the multiscatter contribution at the large footprints. We also set a manual search radius of 1 cm, from which density and vector directions were calculated when interrogating the photon map. We were able to achieve slightly higher values by decreasing the search radius even further, but noise became an issue (e.g., a 0.5 cm search radius resulted in a 38.5% maximum multiscatter contribution). Furthermore, the precise lidar configurations were not specified in the DART study, so we were unable to replicate the study exactly. However, the general trend of an increasing multiscatter contribution with increasing footprint radius and total albedo matched between the studies. We also display the waveforms for an albedo of one and a 7.5 m footprint radius in Figure 3, thereby showing the expected increase in magnitude and downward shift when multiscatter is accounted for.
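The percent contribution calculation described above reduces to a few lines; the waveform arrays and names below are assumptions.

```python
import numpy as np

def multiscatter_contribution(single_scatter_wf, multiscatter_wf):
    """Percent of the total return attributable to multiple scattering:
    100 * (sum(multi) - sum(single)) / sum(multi)."""
    total_multi = float(np.sum(multiscatter_wf))
    total_single = float(np.sum(single_scatter_wf))
    return 100.0 * (total_multi - total_single) / total_multi
```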

2.4. Data-Driven BSDF in DIRSIG: Description and Verification

A recent addition to DIRSIG, specifically created for the DIRSIG5 release, is the ability to add data-driven BSDF descriptions to the material properties of facets within the scene. This allows for an open-ended insertion of optical scattering properties. Because DIRSIG5 has yet to support lidar, all simulations were completed in DIRSIG4. A beta version of the data-driven BSDF was incorporated into DIRSIG4, which we used to import our realistic leaf model into the simulations. The data-driven model uses a generic bidirectional weighting function, incorporating the projected area. The inputs include wavelengths, incident angles, view vectors, and associated BSDF data that are then processed into a spherical quad-tree (SQT) for efficient simulation processing [85]. The structure of the data is geometrically adaptable, thus concentrating tree nodes at high gradients. Also, for sampling efficiency, the file is set up with a hierarchical partitioning scheme to enable fast queries.
We tested the implementation of integrating measured BSDF data into DIRSIG with simulations of both a monostatic and a bistatic lidar. We first used a monostatic lidar simulation to evaluate the sum of the return pulse when scanning along a single azimuth with different zenith angles. The system looked at a single plate many times larger than the lidar spot size of 0.5 m and a ground sampling distance (GSD) of 1 m, from a fixed distance of 500 m. The leaf BSDF material properties, measured from a sweetgum (Liquidambar styraciflua) leaf and fit to the Smith GGX model for the BRDF and the modified Dual Microfacet model for the bidirectional transmittance distribution function (BTDF), were applied to the plate [86,87]. Radiometric samples were taken in 5° zenith steps, thereby capturing the backscatter from 0° to 85°. The same simulation was completed, but with the plate properties changed to those of a perfectly reflective and diffuse surface, from which the backscattered BRDF, also known as the monodirectional reflectance distribution function (MRDF) [88], could be determined. The resulting MRDF then was compared to the MRDF taken directly from the BRDF input “RAW” file, which was used to assign the BSDF to the plate. Plots of the comparisons made at 550, 1064, and 1550 nm are shown in Figure 4. Overall, the simulated MRDF values agree well with the input values from the RAW file, with slightly more noise at 1064 and 1550 nm due to the SQT sampling, since the MRDF structure is a smaller percentage of the MRDF magnitude at these wavelengths.
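A hedged sketch of the MRDF extraction is given below: because a perfectly reflective diffuse plate has a BRDF of 1/π, ratioing the leaf-plate return to the diffuse-plate return at each zenith step recovers the backscattered BRDF. The exact DIRSIG bookkeeping may differ, and the names are ours.

```python
import numpy as np

def mrdf_from_returns(leaf_signal, diffuse_signal):
    """Backscattered BRDF (MRDF) per zenith step, estimated by ratioing the
    leaf-plate return to the return from a perfectly reflective diffuse plate
    whose BRDF is 1/pi (illustrative)."""
    leaf = np.asarray(leaf_signal, dtype=float)
    diffuse = np.asarray(diffuse_signal, dtype=float)
    return (leaf / diffuse) / np.pi
```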
We also verified the data-driven BSDF material properties by creating a “virtual” goniometer within DIRSIG, per a bistatic lidar configuration. The scene consisted of a perfectly absorbing ground and a flat plate at an altitude of 500 m, with both reflectance and transmittance defined with SQT material properties based on a modeled sweetgum leaf. The lidar source was fixed at a distance of 499 m from the center of the plate, and the receiver moved to various viewing azimuths and zenith angles, also 499 m from the plate. By viewing the center of the plate from both above and below, both reflectance and transmittance could be measured. The signal at each viewing location was taken as the sum of the lidar waveform. A BRDF was then extracted by running the same scenarios with a perfectly Lambertian reflecting material property assigned to the plate. The BRDF and BTDF from the raw file, compared to the resulting BRDF and BTDF from the simulated lidar goniometer for a 45° source angle and view angles in 30° azimuth steps (0°–360°) and 15° zenith steps (0°–60°), are shown in Figure 5 for 550 nm and Figure 6 for 1064 nm. The figures display polar plots of the BRDF and BTDF at every viewing azimuth, 0°–360° clockwise starting at the top, and zenith, 0°–60° starting from the center. Spline interpolation in one degree azimuth and zenith increments is used to fill the simulated goniometer plots. Note the forward scattering of the light, with the source and the maximum signal at opposing azimuths.

2.5. Leaf Angle Distribution Usage

Part of this study evaluated the effect that LADs, in connection with leaf directional scattering properties, have on the returned wlidar signal. We define the distributions according to the “graphical” method used by Verhoef [89], who takes the cumulative distribution function (cdf) of the uniform distribution as the basis and builds the trigonometric functions a sin x and (1/2) b sin 2x on top of it with a coordinate transformation. All distributions are then defined by the two variables a and b. Verhoef [89] provides a simple algorithm, which we implemented (a sketch follows below) to produce the desired cdf, as seen in Figure 7. Of the leaf angle distributions first named by De Wit [90], the spherical, planophile, and plagiophile distributions specifically were examined in this study, as most deciduous broadleaf trees can be described by one of these three distributions [91].
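A minimal sketch of the fixed-point form of Verhoef's algorithm is shown below, assuming the widely used two-parameter formulation; the parameter pairs in the comment are those commonly associated with the classical distributions, and the details of our own implementation may differ.

```python
import numpy as np

def verhoef_cdf(a, b, theta_deg, eps=1e-8):
    """Cumulative leaf inclination distribution F(theta) for Verhoef's
    two-parameter family, obtained by fixed-point iteration of the
    coordinate-transformed construction described above."""
    x = 2.0 * np.deg2rad(theta_deg)
    p = x                                   # basis: cdf of the uniform distribution
    delx = 1.0
    y = 0.0
    while delx > eps:
        y = a * np.sin(x) + 0.5 * b * np.sin(2.0 * x)
        dx = 0.5 * (y - x + p)
        x += dx
        delx = abs(dx)
    return (2.0 * y + p) / np.pi

# Commonly quoted (a, b) pairs: planophile (1, 0), plagiophile (0, -1),
# spherical (-0.35, -0.15); e.g., verhoef_cdf(-0.35, -0.15, 45.0) ~ 0.26.
```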

2.6. Leaf Area Density and Leaf Area Index

One of the metrics used to evaluate lidar waveforms was an LAI derived directly from the waveform, given a reference. LAI and leaf area density (LADen, the “Den” in the acronym serving to distinguish it from LAD) were calculated via a voxel-based Beer–Lambert law approach, closely resembling that first introduced by MacArthur and Horn [57] and also used in a number of other publications [54,61,92,93]. Richardson et al. [94] showed that the Beer–Lambert law approach, when compared to a number of other techniques for estimating LAI from airborne lidar, had the highest correlation with LAI reference data from ground-based hemispherical photographs. We specifically used the formalism laid out by Kamoske et al. [54]:
LADen_i = ln(S_e / S_t) · 1/(k Δz),    (1)
where Δz is the voxel height and k is an extinction coefficient estimated from the reference data. In the Kamoske et al. [54] study, S_e and S_t were the number of pulses entering and exiting a voxel, respectively, otherwise known as “hit counting”. However, instead of hit counting, we propose a novel method to estimate LADen from a single waveform that takes advantage of wlidar’s ability to record the entire pulse by using the intensity values. Then, S_e and S_t are the integrated intensity entering and exiting a voxel, otherwise known as the cumulative power distribution after R1, CPD_after_R1, and the cumulative power distribution after R2, CPD_after_R2 [64]. The cumulative power distributions entering and exiting a voxel were calculated in the same manner in which Hagstrom [64] produced transmission voxels, as illustrated in Figure 8. A disadvantage of point counting is that it requires ground returns in the vertical column to ensure that the entire vertical structure was sampled. The requirement to have ground points in a vertical voxel column is “…the most significant limiting factor in the estimation of LADen…” [54]. Moreover, the thicker the canopy, the more error results from a point counting method. Intensity accounting, on the other hand, does not have this constraint for calculating LADen; however, ground returns are needed if an LAI is estimated for the entire column, and they are needed for the reference waveforms used to calculate an extinction coefficient. Intensity accounting therefore has the potential to increase accuracy between sensors at much finer resolutions. Hagstrom [64] showed that using the CPD instead of hit counting causes the calculated transmission through a voxel to converge to the truth much more quickly. Far fewer pulses are needed, thereby allowing for smaller voxels, and hence a higher resolution of 3D LADen. Once the voxelized LADen is calculated, the vertical summation yields LAI estimates.
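A minimal sketch of the intensity-accounting version of Equation (1) follows. The argument names are ours; the waveform is assumed to include the ground return so that the residual energy below the canopy is accounted for, and, as noted above, the ground return of a real waveform would first need to be scaled by the ground-to-vegetation reflectance ratio.

```python
import numpy as np

def laden_from_waveform(intensity, heights, dz=1.0, k=1.0):
    """LADen profile from a single waveform via cumulative power distributions:
    S_e = CPD after the voxel top (R1), S_t = CPD after the voxel bottom (R2)."""
    intensity = np.asarray(intensity, dtype=float)
    heights = np.asarray(heights, dtype=float)
    voxel_tops = np.arange(heights.min() + dz, heights.max() + dz, dz)
    laden = np.full(voxel_tops.shape, np.nan)
    for n, z_top in enumerate(voxel_tops):
        s_e = intensity[heights <= z_top].sum()        # power entering the voxel
        s_t = intensity[heights <= z_top - dz].sum()   # power exiting the voxel
        if s_e > 0.0 and s_t > 0.0:
            laden[n] = np.log(s_e / s_t) / (k * dz)    # Equation (1)
    return voxel_tops, laden
```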
The extinction coefficient, k, in Equation (1) is determined from ground reference data. This can be done with a single waveform, if a reference LAI is provided. The k value is first set to one to find an uncalibrated LADen from which an uncalibrated LAI is determined. A top and bottom of the canopy must also be chosen, since LAI is a function of vegetation, and not the ground return, but the ground return is used in accounting for energy after each voxel layer. The LAI is found by approximating an integral with a sum of LADen, scaled by the voxel height:
LAI = ∫ LADen dz ≈ Σ_{i = V_b}^{V_t} LADen_i · Δz,    (2)
where V_b is the bottom vegetation voxel, V_t is the top vegetation voxel, and Δz is the voxel height. A k value then is found by dividing the uncalibrated LAI value by the reference LAI. We investigated the LADen algorithm with a simulated scene in DIRSIG consisting of a 10 m thick vegetation layer, raised 2 m above the ground, with uniformly distributed 10 cm leaf disks, a spherical leaf angle distribution, and an LAI = 4. We discovered that a ground return with the same reflectance as the vegetation is needed for the LADen algorithm to work correctly, so that all energy is accounted for and not “lost”. In practice, on real lidar waveforms, the ground return would need to be scaled to correct for the ratio of the ground-to-vegetation reflectance. If there is no ground return, or if it is very small, the error will be minimal, since very little pulse power makes its way to the ground. We also found that the Δz value has a significant effect on the estimated LADen. To illustrate this, various LADen profiles with different Δz (m) are plotted in Figure 9 for a lidar system with a 5 m diameter footprint.
There are some noticeable differences between the Δz curves. A sawtooth pattern for Δz = 0.5 m is the result of partial pulses being within voxels. Range resolution is half the pulse length, and the pulse length here is 3 ns, or ~0.9 m for a round trip, or ~0.45 m in range. Therefore, it is suggested that each voxel should be at least 2ΔR, with ΔR being the range resolution, which guarantees that an entire pulse is contained within a voxel. We expect the density to be constant within the canopy, but variations exist due to randomness in the canopy and partially vegetation-filled voxels at the top and bottom of the canopy. In addition, “multibounce” returns will have an effect, as they cause a delay in the returned signal.
Another approach to finding the k value is to fit a line to LAI data points from varying reference LAIs, with the LAI estimated from lidar data using an extinction value of one. The slope of the line then is the extinction coefficient [54]. We tested this by making multiple vegetation layers with different LAI values, simulating the waveforms in DIRSIG, and investigating the increase in accuracy. In the investigation, we found a small bias, which can be accounted for to improve accuracy. The method for finding the extinction coefficient starts with finding uncalibrated LAI values for each vegetation layer, which are then plotted as a function of the reference LAI. Next, a best-fit line is found, from which the slope is the extinction coefficient and the intercept is a bias. Ideally, no bias would exist, as the uncalibrated LAI = 0 when the reference LAI = 0. The bias can be incorporated into LADen by dividing it by k and the number of voxels used in the LAI calculation, and then subtracting the scaled bias from each LADen. The LAI is then recalculated with the LADen values that incorporate this bias. The plots, seen in Figure 10, display the accuracy in determining LAI. Note that the estimated LAI, without subtracting the bias, lies below the one-to-one line. This is primarily due to the contribution of multibounce returns being delayed in the waveform, which makes the density voxels appear to be more transmissive than they truly are.
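The slope/intercept calibration can be sketched as follows. The per-voxel handling of the bias term is one algebraically consistent reading of the description above, and all names are ours.

```python
import numpy as np

def fit_extinction(reference_lai, uncalibrated_lai):
    """Fit uncalibrated lidar LAI (computed with k = 1) against reference LAI;
    the slope is the extinction coefficient k and the intercept is the bias."""
    k, bias = np.polyfit(reference_lai, uncalibrated_lai, 1)
    return k, bias

def calibrate_laden(laden_uncal, k, bias, dz):
    """Spread the bias over the vegetation voxels, apply k, and recompute LAI
    with Equation (2)."""
    laden_uncal = np.asarray(laden_uncal, dtype=float)
    n_voxels = len(laden_uncal)
    laden_cal = (laden_uncal - bias / (n_voxels * dz)) / k
    lai = np.nansum(laden_cal * dz)
    return laden_cal, lai
```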
When studying the sensitivity of lidar waveforms to leaf BSDF effects, we used the LADen-from-waveform method and then computed the associated LAI as a comparison metric, having only one reference waveform. Therefore, we estimated k from a single reference value, assuming that the line passes through the origin. Future studies could extend such analyses, comparable to those of Kamoske et al. [54] and Richardson et al. [94], to determine consistency across platforms and the accuracy of the LADen-from-waveform method as compared to other methods.

2.7. Sensitivity Study Overview

BSDF effects were studied within DIRSIG by analyzing ALS waveform sensitivity to wavelength, footprint size, sensor view angle, and LAD. A 10 m thick abstract vegetation layer scene, raised two meters off the ground with an LAI = 4, was used as the primary scene. The scene consisted of 10 cm diameter leaf disks with a uniform, random spatial distribution, to which different LADs were applied. Existing tree models in DIRSIG that closely resemble actual tree structure were then leveraged [95] to determine the impact of more natural scene geometry. The LAI and LAD were extracted to compare the results to what was found for the abstract vegetation layer.
The sensitivity study was accomplished by applying different permutations of wavelength, footprint size, view angle, and leaf angle distribution. Scattering properties were applied to the leaf disks as a sweetgum (Liquidambar styraciflua) leaf BSDF model (model) [86,87], a Lambertian BSDF (Lambertian), the leaf model reflectance with no transmittance (model-opaque), and the leaf model reflectance with Lambertian transmittance (model-Lambertian), as seen in Figure 11.
Common lidar system wavelengths of 550, 1064, and 1550 nm were included in the study. We chose the 550 nm wavelength (vs. the frequency-doubled Nd:YAG wavelength of 532 nm often used in bathymetry) to correspond to the green peak reflection of vegetation. Several multispectral terrain classification lidar systems, using tunable lasers, have been developed at or close to this wavelength [96,97,98]. The chosen footprint diameters of 0.1, 0.5, and 5 m also encompass the range of current ALS systems. In order to vary the FOV, the focal length was changed to keep the FOV at twice the diameter of the beam. Because the lidar beam shape is generally Gaussian, with a width defined at one sigma (60.6% of the peak height, 68.2% of the energy) [84], significant energy fills the entire FOV. The spherical, planophile, and plagiophile classical leaf angle distributions, commonly found for large trees [91,99], also were applied. The specific parameters investigated are listed in Table 3, from which all permutations were carried out (324 total simulations). The specific DIRSIG settings for the study are shown in Table 4. An important discovery was that the simple rad solver in DIRSIG4 cannot be used when specifying transmission, as it will collect multiple points along each ray instead of applying transmission properties to the first “hit”. Because vegetation scenes contain hundreds of thousands of facets, a manual search radius had to be implemented, from which density and vector directions are calculated when interrogating the photon map. We set this search radius to 1 cm to ensure that it falls within the leaf geometry.
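The footprint/FOV bookkeeping described above follows from simple small-angle geometry; the sketch below uses illustrative numbers rather than the actual settings in Tables 3 and 4.

```python
altitude = 1000.0            # platform height above the canopy (m), illustrative
footprint_diam = 0.5         # desired beam footprint diameter (m)
detector_size = 100e-6       # detector element size (m), assumed

beam_divergence = footprint_diam / altitude          # full divergence angle (rad)
fov_diam = 2.0 * footprint_diam                      # FOV kept at twice the footprint
focal_length = detector_size * altitude / fov_diam   # focal length yielding that FOV (m)
```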

2.8. Waveform Comparison Metrics

Three comparison metrics, namely percent increase, waveform overlap, and a novel method we call “LAI-from-waveform”, were used to determine the effect that simplified leaf BSDF assumptions have on the lidar waveform. The waveform generated from the leaf model BSDF served as the reference for each metric, while the three remaining BSDF configurations (Lambertian, model-opaque, model-Lambertian) at the same wavelength, footprint, zenith angle, and LAD were the tested waveforms. By completing these comparisons, the metrics highlight the lidar configurations where the simplified leaf BSDF assumptions are or are not valid. The percent increase was calculated as 100 × (Test − Reference)/Reference, and the LAI from waveform was calculated as already described in Equation (2). The waveform overlap metric is defined as the intersection divided by the union of the 95% Poisson confidence intervals about each waveform [95]. The metric assumes that DIRSIG generates the mean signal, that the variance can be described by a Poisson distribution (as waveforms are generated through MCRT), and that other noise is negligible. Mathematically, this can be written as a function O(w_i, w_j) in terms of waveforms w_i and w_j, with a binary operator Ψ defining the confidence interval space in terms of the probability of seeing signal S_b, given a signal w_b within a range bin, for a parameter α (0.05 used here for a 95% confidence interval):
O(w_i, w_j) = [ Σ_b ∫ Ψ(S_b | w_b^i, α) ∩ Ψ(S_b | w_b^j, α) dS_b ] / [ Σ_b ∫ Ψ(S_b | w_b^i, α) ∪ Ψ(S_b | w_b^j, α) dS_b ].    (3)
The overlap metric spans the range [0, 1], i.e., 0 for no agreement between the waveforms and 1 for total agreement. Because it depends on the overlap area, the metric is more sensitive than the other metrics, mainly due to a steeper fall-off from the point of agreement [95].
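A sketch of the overlap computation is shown below, assuming the waveforms are expressed in photon counts per range bin; the helper name and the use of scipy's Poisson interval are our own choices.

```python
import numpy as np
from scipy.stats import poisson

def waveform_overlap(w_i, w_j, alpha=0.05):
    """Equation (3): intersection over union of the per-bin (1 - alpha)
    Poisson confidence intervals about two waveforms."""
    lo_i, hi_i = poisson.interval(1.0 - alpha, np.asarray(w_i))
    lo_j, hi_j = poisson.interval(1.0 - alpha, np.asarray(w_j))
    intersection = np.clip(np.minimum(hi_i, hi_j) - np.maximum(lo_i, lo_j), 0.0, None)
    union = (hi_i - lo_i) + (hi_j - lo_j) - intersection   # inclusion-exclusion
    valid = union > 0
    return intersection[valid].sum() / union[valid].sum()
```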

2.9. Maple and Oak Grove Scene

Existing 3D tree models, previously created by Romanczyk [95], were used to create a small maple (Acer rubrum) and oak (Quercus rubra) grove to test BSDF effects on more realistic geometry that may be expected to exist naturally. The trees were randomly placed within the scene so as to have a maximum radius overlap of 40% with neighboring trees. The radius of each tree was determined as the distance in the x–y plane from the center of the tree model to its outermost facet. Seven different tree models were randomly selected for the creation of the grove, four of which were red maples and three red oaks. A description of the scene is shown in Figure 12, displaying the tree footprints, locations, model labels, and a thumbnail of each tree model.
Unlike the uniform vegetation layer, the maple and oak grove has different geometries and spatial statistics throughout the scene. We therefore interrogated the scene center and nine other random locations within a 10 m radius, centered over the canopy, in order to limit the impact of geometric variability on our assessment of leaf BSDF effects. A DIRSIG-rendered red-green-blue (RGB) image of the maple and oak grove from directly overhead is seen in Figure 13, with the interrogation site locations indicated (a side-view RGB image is also shown for perspective). Notice that there is no ground-level vegetation or material clutter that would be present in a natural environment. We limited our scene to trees because we wanted to focus on lidar effects within the canopy, and not introduce other geometries that may impact computations and associated assessments.
In performing sensitivity studies on the maple and oak grove, we did not change leaf geometries, and instead analyzed the existing LAI and LAD within the scene in one-meter pixels. The LAI was found by summing the one-sided area of leaf facets that fell within each one-meter pixel. For each 1 m² cell, histograms of leaf angle were also created in 5° increments of leaf facet zenith angle. We weighted each leaf angle contribution by the leaf facet area, since leaves consisted of more than one facet. This creates a histogram normalized by the LAI for each of the 1 m² cells. The histogram was then normalized to one, and the mean leaf angle was found by using the histogram bin values as weights for each corresponding bin leaf angle. Heat maps of the LAI and mean leaf angle over the scene are shown in Figure 14. The cumulative distribution function (cdf) for the LAD was also calculated via a cumulative sum over the normalized histogram values. We plotted the cdf for each of the pixels corresponding to the interrogation sites, shown in Figure 14. The planophile and spherical distributions are included as a comparison to the distributions that were used for the 10 m vegetation layer. Of the LADs implemented with the uniform vegetation layer, the spherical distribution is the closest to the LADs appearing in the grove scene, and we expect BSDF effects found with the grove scene to be somewhat similar to those results. The lidar interrogation coordinates, matching those on the heat maps in meters, the corresponding pixel LAI, and the mean leaf angle also are shown in Table 5.
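The per-cell leaf-angle bookkeeping can be sketched as follows; facet zenith angles and areas are assumed to have been extracted from the scene geometry beforehand, and the names are ours.

```python
import numpy as np

def leaf_angle_stats(facet_zenith_deg, facet_area, bin_width=5.0):
    """Area-weighted leaf-angle histogram, mean leaf angle, and LAD cdf
    for the leaf facets falling inside one 1 m^2 cell."""
    bins = np.arange(0.0, 90.0 + bin_width, bin_width)
    hist, _ = np.histogram(facet_zenith_deg, bins=bins, weights=facet_area)
    hist = hist / hist.sum()                      # normalize the histogram to one
    centers = 0.5 * (bins[:-1] + bins[1:])
    mean_angle = float(np.sum(hist * centers))    # histogram-weighted mean leaf angle
    cdf = np.cumsum(hist)                         # cumulative distribution of the LAD
    return hist, mean_angle, cdf
```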
DIRSIG lidar settings for the maple and oak grove simulations were kept to those shown in Table 4, except that simulations were only performed at 550 and 1064 nm, while the range gate was expanded to capture the larger vertical extent of the canopy. We limited the scans to a smaller subset of parameter permutations than those used on the uniform vegetation scene by performing all simulations at zero zenith and retaining the native leaf geometry of the trees. The simulation parameters from which all permutations were run are shown in Table 6.

3. Results

3.1. Vegetation Layer Results

The results of the vegetation layer sensitivity study are first presented as waveform plots for the zero zenith look angle and spherical leaf angle distribution at each of the three beam diameters and wavelengths, seen in Figure 15. When evaluating the general waveform shapes, the smaller 0.1 m footprint exhibited more vegetation structure and detail, the 0.5 m waveform smoothed out some of this detail, and the larger 5 m footprint yielded the expected result of an almost linear response through the canopy, due to the integration over many more objects in the FOV. Differences between the model and Lambertian leaf waveforms were observed, most pronounced at 550 nm. With the Lambertian leaves, the reflected energy was spread evenly into the hemisphere, while also accounting for the projected area. For the model leaves, much of the reflected energy was contained in the specular lobe at 550 nm; with a spherical leaf angle distribution, the majority of leaves are angled, which caused specular reflection away from the receiver. At wavelengths of 1064 and 1550 nm, leaves proved to be largely diffuse, showing less of a difference between the Lambertian and model leaves. Comparing the model-opaque to the model leaf at the various footprints and wavelengths revealed that at the 0.1 and 0.5 m footprints, the difference was fairly minimal. However, the difference is more significant at the 5 m footprint, most notably at the 1064 and 1550 nm wavelengths, where multiscatter is a larger contribution. The greater differences observed at the large-footprint NIR configurations are a result of a larger FOV capturing more scattering events and increased transmission contributing to multiscatter. An examination of the model versus the model-Lambertian leaf showed no distinguishable difference for any of the configurations.
After applying the three comparison metrics, we display the five highest and lowest values for each leaf BSDF simplification type and each metric as bar charts in Figure 16. At first glance, the Lambertian and model-opaque BSDF types can both create significant error. In contrast, the model-Lambertian leaves exhibited very little effect, with a maximum percent increase of just over 1%, a minimum overlap of over 0.75, and a largest LAI difference of −0.06. By comparison, the greatest overall errors across all permutations were a maximum percent increase approaching 80% for the Lambertian leaf, a minimum overlap of less than 0.05 for the model-opaque leaf, and an LAI increase of over 0.9 for the Lambertian leaf. The bar graphs also reveal trends in wavelength, footprint size, view zenith, and LAD for the various metrics and BSDF types. Interestingly, although some of the same configurations produced the most error across the three metrics, many differed, because each metric is sensitive to different features of the waveform. The percent increase statistic considers only the summed difference in intensity magnitudes, the overlap statistic examines how closely the waveforms align, and the LAI increase is sensitive to relative magnitudes between the upper and lower portions of the waveform. In fact, the LAI metric is insensitive to pure shifts in magnitude and specifically highlights cases where ground returns are diminished. A large LAI increase, for instance, occurred for the model reflectance, non-transmitting leaves at 1064 nm, with a 5 m footprint, 0° view zenith, and planophile leaf angle distribution. The ground returns were strongly affected by the opaque leaves, because the flat-angled, non-transmitting leaves block much of the energy traveling to or from the ground, causing a significantly reduced ground return. Because of the small ground return, the LAI algorithm estimated a much denser canopy, greatly overestimating the LAI. One trend in the percent increase metric is that all of the top Lambertian BSDF differences occurred at the 550 nm wavelength. As discussed earlier, this is due to the dominance of the specular lobe at this wavelength, which creates a significant difference in magnitude when energy is reflected away from the lidar platform. All of the largest percent increase differences for the model reflectance, non-transmitting leaves occurred at 1064 nm with a 5 m footprint, where multiscatter becomes a significant factor. Similar trends are seen for the overlap statistic: the Lambertian leaf was most similar to the model leaf at 1064 nm and most different at 550 nm, whereas the non-transmitting leaf showed the largest differences at 1064 nm and the greatest similarity at 550 nm.
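To make the comparison concrete, the following is a minimal sketch of how the first two metrics could be computed from a reference (model BSDF) waveform and a test waveform. The exact definitions used in this study are given in the methods; in particular, the overlap below is assumed to be a histogram intersection of unit-area-normalized waveforms, and lai_from_waveform is a hypothetical placeholder for the LAI-from-waveform routine:

```python
import numpy as np

def percent_increase(ref, test):
    """Percent change in total returned intensity relative to the reference
    waveform; assumed form: 100 * (sum(test) - sum(ref)) / sum(ref)."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    return 100.0 * (test.sum() - ref.sum()) / ref.sum()

def overlap(ref, test):
    """Overlap statistic in [0, 1]; assumed here to be the histogram
    intersection of the two waveforms after normalizing each to unit area,
    so it responds to shape differences rather than pure magnitude shifts."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    ref_n, test_n = ref / ref.sum(), test / test.sum()
    return float(np.minimum(ref_n, test_n).sum())

def lai_increase(ref, test, lai_from_waveform):
    """Difference in LAI estimated from each waveform using the
    LAI-from-waveform method, supplied here as a callable."""
    return lai_from_waveform(test) - lai_from_waveform(ref)
```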
Each metric for every permutation is shown in Appendix A for completeness (Table A1, Table A2 and Table A3). The top 10% of values are highlighted in yellow and the bottom 10% in blue (for the overlap statistic, only the top 10% are highlighted), making the worst offenders more easily identifiable. These tables are useful for simulating specific lidar systems and for understanding the possible error due to simplifying leaf BSDF assumptions. One reason to use simplified leaf BSDF properties is that computational time can be greatly reduced. For instance, if transmission through the leaves is not a significant contributor for a specific configuration, eliminating transmission rays can yield significant computational savings. It is also valuable to know whether Lambertian assumptions are valid for a specific configuration, since limited measured data exist on individual leaf BSDFs. For example, the NEON AOP [30] operates at 1064 nm, typically with a 0.5 m footprint and a scan angle of 0°–30°. We may want to know what error results when simulating Lambertian leaves in a radiative transfer model for the NEON AOP, assuming a spherical LAD and a 22.5° view zenith angle. From the tables, we can conclude that the percent difference is expected to be under 4%, the overlap approximately 0.36, and the LAI difference less than 0.1. These errors most likely are tolerable, making Lambertian leaves a reasonable assumption for the NEON AOP platform.

3.2. Maple and Oak Grove Scene Results

We investigated more realistic geometry, as present naturally in a forest canopy, in the maple and oak grove sensitivity study. Major differences compared to the abstract uniform vegetation layer included additional materials (e.g., leaves, trunks, stems, and branches), different leaf geometries, and different leaf distributions. As previously mentioned, because of the varying statistics in the scene, ten locations were interrogated for all permutations of the parameters listed in Table 6. The resulting waveforms from location zero (scene center) are shown in Figure 17 and exhibit trends similar to those seen with the vegetation layer spherical LAD in Figure 15. We noted no distinguishable difference between the model leaf and the model reflectance, Lambertian transmittance leaf (model-Lambertian) in any case, except for a very slight increase at the 1064 nm wavelength and 5 m footprint. The largest differences were observed with the Lambertian leaf, most prominently in the 550 nm wavelength, 5 m footprint plot. There was also a significant decline in intensity for the non-transmitting leaf in the 1064 nm, 5 m plot.
A further evaluation was completed by using the model leaf as a reference and making comparisons with each of the three metrics, namely percent increase, waveform overlap, and LAI from waveform, in the same manner as with the abstract vegetation layer simulations. However, to capture the variability due to the varying materials and geometries in a realistic canopy, we evaluated the statistics from the ten interrogation locations over the canopy. This was accomplished by producing box plots for each of the three metrics, shown in Figure 18. Each box encompasses the 25th–75th percentile of the data, also known as the interquartile range (IQR); the whiskers extend to the outermost data point within 1.5 times the IQR; and outliers beyond 1.5 times the IQR are plotted separately as small circles. Figure 18 shows several outliers, some fairly extreme, which we attributed to the photon mapping and search radius method having difficulty capturing the geometry accurately. First, in terms of the percent increase metric, the largest differences, as expected, occurred for the Lambertian BSDF at 550 nm, which was attributed to the specular contribution. The variability increased for the smaller footprints, resulting in data points with a larger difference, the maximum representing an almost 30% increase. The model-opaque BSDF exhibited little difference in percent increase at the 550 nm wavelength, but showed as much as a 15% decrease at 1064 nm with a 5 m lidar footprint. Lastly, the model-Lambertian waveforms showed almost no difference in intensity, although a few simulations resulted in outliers. The next statistic we examined with box plots was the waveform overlap metric, shown in the second row of Figure 18. Overall, there was a larger spread of data points due to the higher sensitivity and faster fall-off of this metric. Again, large differences (data points closest to zero) were seen for the 550 nm wavelength, Lambertian BSDF. Significant differences also were seen for the 1064 nm wavelength and model-opaque leaves, with the 5 m footprint data being the largest offenders. The 1064 nm Lambertian data, specifically at the 0.1 m footprint, also exhibited substantial dissimilarity. The lower overlap evident for the 1064 nm wavelength Lambertian BSDF, which is not registered by the percent increase statistic, suggests that the waveform can be considerably different even though the summed intensity is similar to that of the model BSDF. The last metric we evaluated, LAI from waveform, had to be converted to a relative value because the LAI varied between lidar interrogation locations; we therefore report the LAI from waveform as a percent increase. The 550 nm wavelength Lambertian data exhibited the greatest error, with the largest data point at just under 15%. The only other configuration with significant change was the model-opaque BSDF at 1064 nm with the 5 m lidar footprint, resulting in an approximately 6% decrease in estimated LAI. Note that some of the LAI metric means lie below the box and whiskers due to an extreme outlier; these extreme outliers arise from the differing geometries in the scene, which caused a single interrogation site to yield a poor LAI estimate.
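A minimal sketch of how the per-site metric values could be aggregated into such box plots is given below; the data layout and labels are hypothetical stand-ins, and matplotlib's whis=1.5 setting reproduces the 1.5 × IQR whisker convention described above:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical layout: one metric value per interrogation site (ten sites),
# keyed by (footprint in m, leaf BSDF simplification).
rng = np.random.default_rng(0)
percent_increase = {
    (fp, bsdf): rng.normal(0.0, 5.0, size=10)  # stand-in for the ten-site values
    for fp in (0.1, 0.5, 5.0)
    for bsdf in ("L", "MO", "ML")
}

labels = [f"{fp} m, {bsdf}" for (fp, bsdf) in percent_increase]
data = [percent_increase[key] for key in percent_increase]

fig, ax = plt.subplots(figsize=(8, 3))
# whis=1.5 draws whiskers to the outermost points within 1.5 x IQR;
# points beyond that are drawn individually as outlier circles.
ax.boxplot(data, whis=1.5)
ax.set_xticks(np.arange(1, len(data) + 1))
ax.set_xticklabels(labels, rotation=45, ha="right")
ax.set_ylabel("Percent increase (%)")
fig.tight_layout()
plt.show()
```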
In summary, evaluations with a scene consisting of natural geometries revealed the same trends as the abstract vegetation layer, suggesting that abstract scenes are effective for exploring the impacts that scene properties have on lidar returns. The largest differences between the abstract scene and the maple and oak grove scene resulted from the increased variability in waveform shapes and magnitudes caused by the inconsistency in scene geometry. This geometric variability produced a spread in the metric data from the ten interrogation sites that depended on the configuration and the applied metric, underscoring the need to evaluate multiple scene locations. The simulations and ensuing comparisons showed that simplifying BSDF assumptions are valid for certain configurations, depending on the error tolerance and the specific application. As a result, the impact that leaf BSDF has on lidar returns, in terms of both magnitude and shape, was quantified, providing an awareness of the possible effects on subsequent data processing.

4. Conclusions

The DIRSIG radiative transfer model was used to study the effect that individual leaf optical scattering properties have on lidar waveforms, as compared to traditional Lambertian and non-transmitting scattering assumptions. Validation comparisons were first completed to gain confidence in the simulated lidar waveforms produced by DIRSIG, specifically for the RAMI HET17 [83] scenario and the multiscatter contribution previously shown with the DART model [39]. DIRSIG simulations compared well against these scenarios, displaying the same general trends, effects, and magnitudes. Sensitivity studies were then performed by applying a leaf BSDF model [86,87] to leaf facets, with three simplified leaf BSDF assumptions: Lambertian scattering, non-transmitting, and model reflectance with Lambertian transmission. Three metrics were used to highlight differences, namely percent increase, waveform overlap, and a novel method we call "LAI-from-waveform". An abstract vegetation layer, as well as a realistic maple and oak grove based on 3D tree models, were used as the modeled scenes. Each metric was able to identify the effects of the different model assumptions on the resultant waveform. The percent increase statistic showed magnitude differences, while the LAI metric revealed asymmetries between the top and bottom of the waveform. The waveform overlap metric largely followed the errors highlighted by the LAI metric, while also revealing differences in waveform shape, noticeable for the NIR wavelengths, small footprints, and non-transmitting leaves. In general, the model reflectance, Lambertian transmittance assumption represented a sound approximation with minimal error. Simulations with non-transmitting leaves produced reasonably accurate waveforms for all 550 nm scenarios and for NIR wavelengths with small footprints; conversely, significant error was observed with non-transmitting leaves for NIR wavelength, large-footprint configurations. Simulations with leaves that are both Lambertian reflecting and transmitting showed that this assumption is valid for lidar systems operating in the NIR, where the specular lobe of the leaf BSDF is not a large contributing factor. However, significant errors can be incurred if Lambertian leaf assumptions are made for lidar systems operating at visible wavelengths. By quantifying the impacts that individual leaf BSDFs have on simulated waveforms, we ultimately obtained a better understanding of valid scattering assumptions, of light interactions within the forest canopy, and of methods to produce higher fidelity simulations. Such improved radiative transfer modeling paves the way for the development of next-generation remote sensing systems and data processing algorithms, enabling accurate and precise forest structural assessments.

Author Contributions

Conceptualization, methodology, validation, analysis, and writing—original draft preparation, B.D.R.; software, A.A.G., S.D.B., and M.G.S.; writing—review and editing, J.A.v.A.; conceptualization, K.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank the entire Digital Imaging and Remote Sensing Image Generation (DIRSIG) team past and present, notably Rolando Raqueno and Byron Eng for their contributions in making this work possible. We would also like to thank Jim Bodie and Brett Matzke for computer server support. We also acknowledge administrative support provided by Colleen McMahon and Melanie Warren.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Percent increase statistic for each configuration. Column headings give wavelength in nm and footprint in m, while row labels give the zenith view angle in degrees, LAD type (Sp–spherical, Plan–planophile, Plag–plagiophile), and leaf BSDF (L–Lambertian, MO–model reflectance non-transmitting, ML–model reflectance Lambertian transmittance). Yellow highlights mark the top 10% and blue highlights the bottom 10% of the data points.
Percent Increase | 550, 0.1 | 550, 0.5 | 550, 5 | 1064, 0.1 | 1064, 0.5 | 1064, 5 | 1550, 0.1 | 1550, 0.5 | 1550, 5
0 deg, Sp, L | 25.66% | 22.38% | 22.32% | 3.72% | 3.01% | 1.85% | 6.02% | 5.14% | 4.28%
0 deg, Sp, MO | −0.04% | −0.58% | −2.61% | −0.07% | −1.67% | −15.85% | −0.09% | −1.50% | −8.74%
0 deg, Sp, ML | 0.01% | 0.11% | 0.68% | 0.00% | −0.04% | 0.42% | 0.01% | 0.05% | 0.50%
0 deg, Plan, L | −19.03% | −19.07% | −17.44% | −4.42% | −4.52% | −3.97% | −6.47% | −6.55% | −5.82%
0 deg, Plan, MO | −0.10% | −0.90% | −2.24% | −0.32% | −2.94% | −13.12% | −0.29% | −2.47% | −7.98%
0 deg, Plan, ML | −0.03% | −0.10% | −0.02% | −0.05% | −0.25% | −0.09% | −0.06% | −0.24% | −0.14%
0 deg, Plag, L | 73.95% | 63.22% | 60.02% | 8.19% | 7.12% | 4.99% | 13.49% | 12.02% | 10.86%
0 deg, Plag, MO | −0.10% | −0.96% | −3.63% | −0.30% | −2.41% | −16.70% | −0.26% | −2.08% | −9.61%
0 deg, Plag, ML | 0.01% | 0.19% | 0.98% | −0.01% | 0.00% | 0.70% | −0.01% | 0.11% | 0.98%
22.5 deg, Sp, L | 33.82% | 30.23% | 29.43% | 4.61% | 3.91% | 2.43% | 7.51% | 6.69% | 5.68%
22.5 deg, Sp, MO | −0.13% | −0.75% | −3.00% | −0.01% | −1.90% | −15.83% | −0.20% | −1.71% | −8.78%
22.5 deg, Sp, ML | 0.03% | 0.13% | 0.66% | −0.01% | −0.03% | 0.69% | 0.01% | 0.10% | 0.66%
22.5 deg, Plan, L | 9.87% | 17.48% | 18.20% | 1.36% | 2.27% | 1.86% | 2.38% | 4.07% | 4.05%
22.5 deg, Plan, MO | −0.23% | −1.24% | −3.05% | −0.71% | −3.38% | −14.18% | −0.57% | −2.83% | −8.71%
22.5 deg, Plan, ML | 0.00% | 0.00% | 0.32% | −0.04% | −0.15% | −0.07% | −0.01% | −0.04% | 0.28%
22.5 deg, Plag, L | 41.30% | 40.11% | 39.35% | 5.32% | 4.97% | 3.29% | 8.75% | 8.47% | 7.68%
22.5 deg, Plag, MO | −0.18% | −0.93% | −3.38% | −0.10% | −2.40% | −15.97% | −0.28% | −2.13% | −9.23%
22.5 deg, Plag, ML | 0.00% | 0.20% | 0.73% | −0.03% | −0.03% | 0.33% | 0.00% | 0.09% | 0.73%
45 deg, Sp, L | 35.91% | 39.51% | 35.47% | 4.64% | 4.93% | 3.01% | 7.69% | 8.33% | 6.60%
45 deg, Sp, MO | 0.00% | −0.70% | −2.89% | −0.15% | −1.79% | −15.29% | −0.23% | −1.60% | −8.81%
45 deg, Sp, ML | −0.02% | 0.15% | 0.58% | 0.00% | −0.02% | 0.63% | −0.01% | 0.08% | 0.66%
45 deg, Plan, L | 77.44% | 68.79% | 65.75% | 8.35% | 7.57% | 5.60% | 13.86% | 12.78% | 11.70%
45 deg, Plan, MO | −0.59% | −1.41% | −5.11% | −0.94% | −3.05% | −17.24% | −0.91% | −2.64% | −10.83%
45 deg, Plan, ML | 0.09% | 0.29% | 1.11% | −0.01% | −0.01% | 0.56% | 0.07% | 0.19% | 1.17%
45 deg, Plag, L | 30.35% | 23.37% | 17.72% | 4.06% | 3.14% | 1.50% | 6.73% | 5.37% | 3.53%
45 deg, Plag, MO | −0.11% | −0.71% | −2.70% | −0.12% | −1.93% | −14.32% | −0.18% | −1.70% | −8.28%
45 deg, Plag, ML | 0.03% | 0.10% | 0.36% | −0.01% | 0.02% | 0.46% | 0.02% | 0.07% | 0.49%
Table A2. Overlap statistic for each configuration. Column headings give wavelength in nm and footprint in m, while row labels give the zenith view angle in degrees, LAD type (Sp–spherical, Plan–planophile, Plag–plagiophile), and leaf BSDF (L–Lambertian, MO–model reflectance non-transmitting, ML–model reflectance Lambertian transmittance). Yellow highlights mark the top 10% of the data points.
Overlap | 550, 0.1 | 550, 0.5 | 550, 5 | 1064, 0.1 | 1064, 0.5 | 1064, 5 | 1550, 0.1 | 1550, 0.5 | 1550, 5
0 deg, Sp, L | 0.2301 | 0.1865 | 0.278 | 0.3254 | 0.4424 | 0.6758 | 0.2509 | 0.2572 | 0.4672
0 deg, Sp, MO | 0.9975 | 0.9555 | 0.8668 | 0.9761 | 0.639 | 0.0689 | 0.9769 | 0.6928 | 0.1593
0 deg, Sp, ML | 0.999 | 0.9934 | 0.9611 | 0.9945 | 0.98 | 0.8796 | 0.9959 | 0.9869 | 0.919
0 deg, Plan, L | 0.0881 | 0.1194 | 0.1773 | 0.147 | 0.1878 | 0.3409 | 0.1017 | 0.1314 | 0.2101
0 deg, Plan, MO | 0.9863 | 0.8966 | 0.8277 | 0.8712 | 0.3311 | 0.0416 | 0.8908 | 0.413 | 0.1224
0 deg, Plan, ML | 0.9965 | 0.9872 | 0.9882 | 0.9772 | 0.9137 | 0.8608 | 0.977 | 0.922 | 0.9271
0 deg, Plag, L | 0.0804 | 0.063 | 0.0842 | 0.1143 | 0.1397 | 0.3807 | 0.0913 | 0.0759 | 0.1397
0 deg, Plag, MO | 0.9918 | 0.9381 | 0.837 | 0.9099 | 0.5044 | 0.0503 | 0.9298 | 0.5902 | 0.1355
0 deg, Plag, ML | 0.999 | 0.989 | 0.9487 | 0.9905 | 0.9863 | 0.859 | 0.9947 | 0.9716 | 0.8451
22.5 deg, Sp, L | 0.2287 | 0.1662 | 0.2524 | 0.3179 | 0.3596 | 0.6534 | 0.2557 | 0.2153 | 0.4231
22.5 deg, Sp, MO | 0.9896 | 0.9483 | 0.8753 | 0.9462 | 0.6135 | 0.065 | 0.9487 | 0.6688 | 0.2145
22.5 deg, Sp, ML | 0.9982 | 0.9914 | 0.9679 | 0.9926 | 0.9841 | 0.8548 | 0.9957 | 0.9712 | 0.9019
22.5 deg, Plan, L | 0.0763 | 0.196 | 0.3274 | 0.2255 | 0.4595 | 0.6776 | 0.1092 | 0.2628 | 0.4516
22.5 deg, Plan, MO | 0.9732 | 0.891 | 0.8358 | 0.7504 | 0.3313 | 0.0425 | 0.8086 | 0.4088 | 0.1525
22.5 deg, Plan, ML | 0.9989 | 0.9986 | 0.9821 | 0.9824 | 0.9493 | 0.8881 | 0.9949 | 0.984 | 0.9285
22.5 deg, Plag, L | 0.0687 | 0.1121 | 0.1689 | 0.1472 | 0.2646 | 0.5686 | 0.0866 | 0.1448 | 0.2694
22.5 deg, Plag, MO | 0.985 | 0.9374 | 0.8562 | 0.9414 | 0.5179 | 0.0526 | 0.9205 | 0.5924 | 0.1833
22.5 deg, Plag, ML | 0.9987 | 0.9868 | 0.9675 | 0.9868 | 0.9805 | 0.893 | 0.993 | 0.9688 | 0.8923
45 deg, Sp, L | 0.1812 | 0.1275 | 0.2485 | 0.291 | 0.3128 | 0.6369 | 0.2158 | 0.1794 | 0.4123
45 deg, Sp, MO | 0.9871 | 0.9599 | 0.8986 | 0.856 | 0.6701 | 0.0766 | 0.9033 | 0.7213 | 0.2662
45 deg, Sp, ML | 0.99 | 0.9909 | 0.9777 | 0.9795 | 0.9613 | 0.8877 | 0.975 | 0.975 | 0.9159
45 deg, Plan, L | 0.061 | 0.0576 | 0.1055 | 0.1267 | 0.1331 | 0.3971 | 0.0822 | 0.078 | 0.1474
45 deg, Plan, MO | 0.9584 | 0.9191 | 0.8274 | 0.7537 | 0.4523 | 0.0361 | 0.7833 | 0.5442 | 0.1678
45 deg, Plan, ML | 0.9934 | 0.9825 | 0.9588 | 0.993 | 0.9814 | 0.8743 | 0.9808 | 0.9582 | 0.8481
45 deg, Plag, L | 0.1395 | 0.223 | 0.4765 | 0.307 | 0.4612 | 0.7817 | 0.1974 | 0.3022 | 0.6126
45 deg, Plag, MO | 0.9903 | 0.9505 | 0.8922 | 0.9053 | 0.625 | 0.0673 | 0.9438 | 0.6854 | 0.2666
45 deg, Plag, ML | 0.9985 | 0.9937 | 0.986 | 0.9941 | 0.9792 | 0.8785 | 0.994 | 0.9809 | 0.9223
Table A3. LAI statistic for each configuration. Column headings give wavelength in nm and footprint in m, while row labels give the zenith view angle in degrees, LAD type (Sp–spherical, Plan–planophile, Plag–plagiophile), and leaf BSDF (L–Lambertian, MO–model reflectance non-transmitting, ML–model reflectance Lambertian transmittance). Yellow highlights mark the top 10% and blue highlights the bottom 10% of the data points.
LAI | 550, 0.1 | 550, 0.5 | 550, 5 | 1064, 0.1 | 1064, 0.5 | 1064, 5 | 1550, 0.1 | 1550, 0.5 | 1550, 5
0 deg, Sp, L | 5.0463 | 4.5001 | 4.5566 | 4.1367 | 4.0651 | 4.0431 | 4.2234 | 4.1114 | 4.1026
0 deg, Sp, MO | 3.9983 | 3.9859 | 3.9569 | 3.998 | 3.9756 | 4.3485 | 3.9967 | 3.9685 | 3.9645
0 deg, Sp, ML | 4.0006 | 4.0029 | 4.0106 | 4 | 3.9987 | 3.9754 | 4.0003 | 4.0005 | 3.9872
0 deg, Plan, L | 3.808 | 3.7838 | 3.8298 | 3.9577 | 3.9517 | 3.9523 | 3.9373 | 3.9288 | 3.9431
0 deg, Plan, MO | 3.9991 | 3.9908 | 3.9854 | 3.9978 | 3.9794 | 4.6313 | 3.9975 | 3.9748 | 4.0366
0 deg, Plan, ML | 3.9998 | 3.999 | 4.0008 | 3.9996 | 3.9971 | 3.9723 | 3.9995 | 3.9972 | 3.9881
0 deg, Plag, L | 4.8775 | 4.8639 | 4.859 | 4.1038 | 4.0983 | 4.068 | 4.1699 | 4.1672 | 4.1393
0 deg, Plag, MO | 3.9983 | 3.9821 | 3.9439 | 3.9967 | 3.9735 | 4.4781 | 3.9966 | 3.9683 | 3.9795
0 deg, Plag, ML | 4.0001 | 4.0035 | 4.0109 | 4 | 3.9983 | 3.9656 | 3.9999 | 4.0013 | 3.9888
22.5 deg, Sp, L | 4.498 | 4.5845 | 4.6436 | 4.069 | 4.0739 | 4.0435 | 4.1126 | 4.1274 | 4.1095
22.5 deg, Sp, MO | 3.9978 | 3.984 | 3.9606 | 4.0007 | 3.9755 | 4.3375 | 3.9969 | 3.9684 | 3.9599
22.5 deg, Sp, ML | 4.0005 | 4.0036 | 4.0214 | 3.9995 | 3.9989 | 3.9722 | 4 | 4.0016 | 3.9839
22.5 deg, Plan, L | 3.9771 | 4.1693 | 4.1501 | 3.9522 | 4.0227 | 3.9962 | 4.0191 | 4.0401 | 4.0256
22.5 deg, Plan, MO | 3.9997 | 3.9863 | 3.972 | 4.6291 | 3.9745 | 4.5246 | 4.1428 | 3.9707 | 4
22.5 deg, Plan, ML | 3.9999 | 3.9999 | 4.0037 | 4.0084 | 3.9979 | 3.9532 | 4.0324 | 3.9994 | 3.9812
22.5 deg, Plag, L | 4.3665 | 4.5566 | 4.5504 | 4.0506 | 4.0702 | 4.0225 | 4.0828 | 4.1196 | 4.1018
22.5 deg, Plag, MO | 3.9981 | 3.9842 | 3.9503 | 4.0002 | 3.9755 | 4.3937 | 3.9973 | 3.9692 | 3.9754
22.5 deg, Plag, ML | 4 | 4.0034 | 4.0071 | 3.9997 | 3.999 | 3.9612 | 4 | 4.0007 | 3.9881
45 deg, Sp, L | 4.6692 | 4.5464 | 4.3977 | 4.0864 | 4.0697 | 4.0145 | 4.1431 | 4.118 | 4.0733
45 deg, Sp, MO | 4.0001 | 3.9884 | 3.9646 | 3.9977 | 3.9829 | 4.43 | 3.9957 | 3.9772 | 3.9842
45 deg, Sp, ML | 3.9995 | 4.0025 | 4.0034 | 4.0001 | 3.9991 | 3.9778 | 3.9998 | 4.0009 | 3.9912
45 deg, Plan, L | 4.6175 | 4.5557 | 4.432 | 4.0761 | 4.0673 | 4.0214 | 4.1247 | 4.1135 | 4.0776
45 deg, Plan, MO | 3.9936 | 3.9843 | 3.9431 | 3.9915 | 3.9795 | 4.4404 | 3.9912 | 3.9742 | 3.9687
45 deg, Plan, ML | 4.001 | 4.0033 | 4.0112 | 3.9998 | 3.9987 | 3.9588 | 4.0007 | 4.0016 | 4.0005
45 deg, Plag, L | 4.3773 | 4.2722 | 4.1884 | 4.0525 | 4.0375 | 3.9813 | 4.0865 | 4.064 | 4.035
45 deg, Plag, MO | 3.9984 | 3.9908 | 3.9719 | 3.9989 | 3.9858 | 4.3718 | 3.9976 | 3.9799 | 3.9948
45 deg, Plag, ML | 4.0004 | 4.0012 | 4.0037 | 3.9999 | 3.9992 | 3.9416 | 4.0002 | 4.0004 | 3.9913

References

  1. McRoberts, R.E.; Tomppo, E.O. Remote sensing support for national forest inventories. Remote Sens. Environ. 2006, 110, 412–419. [Google Scholar] [CrossRef]
  2. Franklin, S. Remote Sensing for Sustainable Forest Management; CRC Press: Boca Raton, FL, USA, 2010. [Google Scholar]
  3. Berk, A.; Conforti, P.; Kennett, R.; Perkins, T.; Hawes, F.; Van Den Bosch, J. MODTRAN® 6: A major upgrade of the MODTRAN® radiative transfer code. In Proceedings of the 2014 6th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Lausanne, Switzerland, 24–27 June 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 1–4. [Google Scholar]
  4. Gastellu-Etchegorry, J.P.; Yin, T.; Lauret, N.; Cajgfinger, T.; Gregoire, T.; Grau, E.; Feret, J.B.; Lopes, M.; Guilleux, J.; Dedieu, G.; et al. Discrete anisotropic radiative transfer (DART 5) for modeling airborne and satellite spectroradiometer and LIDAR acquisitions of natural and urban landscapes. Remote Sens. 2015, 7, 1667–1701. [Google Scholar] [CrossRef] [Green Version]
  5. Han, Y. JCSDA Community Radiative Transfer Model (CRTM): Version 1; U.S. Department of Commerce: Washington, DC, USA, 2006.
  6. Stamnes, K.; Tsay, S.-C.; Wiscombe, W.; Laszlo, I. DISORT, A General-Purpose Fortran Program for Discrete-Ordinate-Method Radiative Transfer in Scattering and Emitting Layered Media: Documentation of Methodology; NASA Technical Reports; NASA: Washington, DC, USA, 2000.
  7. Fiorino, S.T.; Bartell, R.J.; Krizo, M.J.; Caylor, G.L.; Moore, K.P.; Harris, T.R.; Cusumano, S.J. A first principles atmospheric propagation & characterization tool: The laser environmental effects definition and reference (LEEDR). In Proceedings of the Atmospheric Propagation of Electromagnetic Waves II; International Society for Optics and Photonics: Bellingham, WA, USA, 2008; Volume 6878, p. 68780B. [Google Scholar]
  8. Disney, M.I.; Kalogirou, V.; Lewis, P.; Prieto-Blanco, A.; Hancock, S.; Pfeifer, M. Simulating the impact of discrete-return lidar system and survey characteristics over young conifer and broadleaf forests. Remote Sens. Environ. 2010, 114, 1546–1560. [Google Scholar] [CrossRef]
  9. Govaerts, Y.M.; Verstraete, M.M. Raytran: A Monte Carlo ray-tracing model to compute light scattering in three-dimensional heterogeneous media. IEEE Trans. Geosci. Remote Sens. 1998, 36, 493–505. [Google Scholar] [CrossRef]
  10. Ni, W.; Li, X.; Woodcock, C.E.; Caetano, M.R.; Strahler, A.H. An analytical hybrid GORT model for bidirectional reflectance over discontinuous plant canopies. IEEE Trans. Geosci. Remote Sens. 1999, 37, 987–999. [Google Scholar] [CrossRef] [Green Version]
  11. Goodenough, A.A.; Brown, S.D. DIRSIG 5: Core design and implementation. In Proceedings of the Algorithms Technol. Multispectral, Hyperspectral, Ultraspectral Imag XVIII, Baltimore, MD, USA, 9 May 2012; Volume 8390. [Google Scholar]
  12. Morsdorf, F.; Nichol, C.; Malthus, T.; Woodhouse, I.H. Assessing forest structural and physiological information content of multi-spectral LiDAR waveforms by radiative transfer modelling. Remote Sens. Environ. 2009, 113, 2152–2163. [Google Scholar] [CrossRef] [Green Version]
  13. Ross, J.K.; Marshak, A.L. Calculation of canopy bidirectional reflectance using the Monte Carlo method. Remote Sens. Environ. 1988, 24, 213–225. [Google Scholar] [CrossRef]
  14. Chelle, M.; Andrieu, B. Radiative models for architectural modeling. Agronomie 1999, 19, 225–240. [Google Scholar] [CrossRef] [Green Version]
  15. Newton, A.; Muller, J.; Pearson, J. Spot Dem Shading For Landsat-tm Topographic correction. In Proceedings of the IGARSS’91 Remote Sensing: Global Monitoring for Earth Management; IEEE: New York, NY, USA, 1991; Volume 2, pp. 655–659. [Google Scholar]
  16. Burgess, D.W.; Lewis, P.; Muller, J.-P. Topographic effects in AVHRR NDVI data. Remote Sens. Environ. 1995, 54, 223–232. [Google Scholar] [CrossRef]
  17. Antyufeev, V.S.; Marshak, A.L. Inversion of Monte Carlo model for estimating vegetation canopy parameters. Remote Sens. Environ. 1990, 33, 201–209. [Google Scholar] [CrossRef]
  18. Kuo, S.D.; Schott, J.R.; Chang, C.Y. Synthetic image generation of chemical plumes for hyperspectral applications. Opt. Eng. 2000, 39, 1047–1056. [Google Scholar] [CrossRef] [Green Version]
  19. Hancock, S.; Armston, J.; Hofton, M.; Sun, X.; Tang, H.; Duncanson, L.I.; Kellner, J.R.; Dubayah, R. The GEDI simulator: A large-footprint waveform lidar simulator for calibration and validation of spaceborne missions. Earth SP Sci. 2019, 6, 294–310. [Google Scholar] [CrossRef] [PubMed]
  20. Schott, J.R.; Brown, S.D.; Raqueño, R.V.; Gross, H.N.; Robinson, G. An Advanced Synthetic Image Generation Model and its Application to Multi/Hyperspectral Algorithm Development. Can. J. Remote Sens. 1999, 25, 99–111. [Google Scholar] [CrossRef]
  21. Kuusk, A. A two-layer canopy reflectance model. J. Quant. Spectrosc. Radiat. Transf. 2001, 71, 1–9. [Google Scholar] [CrossRef]
  22. Huang, D.; Knyazikhin, Y.; Wang, W.; Deering, D.W.; Stenberg, P.; Shabanov, N.; Tan, B.; Myneni, R.B. Stochastic transport theory for investigating the three-dimensional canopy structure from space measurements. Remote Sens. Environ. 2008, 112, 35–50. [Google Scholar] [CrossRef]
  23. Shabanov, N.V.; Huang, D.; Knjazikhin, Y.; Dickinson, R.E.; Myneni, R.B. Stochastic radiative transfer model for mixture of discontinuous vegetation canopies. J. Quant. Spectrosc. Radiat. Transf. 2007, 107, 236–262. [Google Scholar] [CrossRef] [Green Version]
  24. Qin, W.; Gerstl, S.A. 3-D Scene Modeling of Semidesert Vegetation Cover and its Radiation Regime. Remote Sens. Environ. 2000, 74, 145–162. [Google Scholar] [CrossRef]
  25. Yang, B.; Knyazikhin, Y.; Zhao, H.; Ma, Y. Contribution of leaf specular reflection to canopy reflectance under black soil case using stochastic radiative transfer model. Agric. For. Meteorol. 2018, 263, 477–482. [Google Scholar] [CrossRef] [Green Version]
  26. Yang, B.; Knyazikhin, Y.; Mõttus, M.; Rautiainen, M.; Stenberg, P.; Yan, L.; Chen, C.; Yan, K.; Choi, S.; Park, T.; et al. Estimation of leaf area index and its sunlit portion from DSCOVR EPIC data: Theoretical basis. Remote Sens. Environ. 2017, 198, 69–84. [Google Scholar] [CrossRef] [Green Version]
  27. Xie, D.; Qin, W.; Wang, P.; Shuai, Y.; Zhou, Y.; Zhu, Q. Influences of Leaf-Specular Reflection on Canopy BRF Characteristics: A Case Study of Real Maize Canopies with a 3-D Scene BRDF Model. IEEE Trans. Geosci. Remote Sens. 2017, 55, 619–631. [Google Scholar] [CrossRef]
  28. Ross, J.; Marshak, A. The influence of leaf orientation and the specular component of leaf reflectance on the canopy bidirectional reflectance. Remote Sens. Environ. 1989, 27, 251–260. [Google Scholar] [CrossRef]
  29. Walter-Shea, E.A. Laboratory and Field Measurements of Leaf Spectral Properties and Canopy Architecture and their Effects on Canopy Reflectance. Ph.D. Thesis, University of Nebraska, Lincoln, NE, USA, 1987. [Google Scholar]
  30. Kampe, T.U.; Johnson, B.R.; Kuester, M.A.; Keller, M. NEON: The first continental-scale ecological observatory with airborne remote sensing of vegetation canopy biochemistry and structure. J. Appl. Remote Sens. 2010, 4, 43510. [Google Scholar] [CrossRef]
  31. Brock, J.C.; Wright, C.W.; Clayton, T.D.; Nayegandhi, A. LIDAR optical rugosity of coral reefs in Biscayne National Park, Florida. Coral Reefs 2004, 23, 48–59. [Google Scholar] [CrossRef]
  32. Blair, J.B.; Rabine, D.L.; Hofton, M.A. The Laser Vegetation Imaging Sensor: A medium-altitude, digitisation-only, airborne laser altimeter for mapping vegetation and topography. ISPRS J. Photogramm. Remote Sens. 1999, 54, 115–122. [Google Scholar] [CrossRef]
  33. Cook, B.D.; Nelson, R.F.; Middleton, E.M.; Morton, D.C.; McCorkel, J.T.; Masek, J.G.; Ranson, K.J.; Ly, V.; Montesano, P.M. NASA Goddard’s LiDAR, hyperspectral and thermal (G-LiHT) airborne imager. Remote Sens. 2013, 5, 4045–4066. [Google Scholar] [CrossRef] [Green Version]
  34. Means, J.E.; Acker, S.A.; Harding, D.J.; Blair, J.B.; Lefsky, M.A.; Cohen, W.B.; Harmon, M.E.; McKee, W.A. Use of Large-Footprint Scanning Airborne Lidar To Estimate Forest Stand Characteristics in the Western Cascades of Oregon. Remote Sens. Environ. 1999, 67, 298–308. [Google Scholar] [CrossRef]
  35. Hollaus, M.; Mücke, W.; Roncat, A.; Pfeifer, N.; Briese, C. Full-waveform airborne laser scanning systems and their possibilities in forest applications. In Forestry Applications of Airborne Laser Scanning; Springer: Berlin/Heidelberg, Germany, 2014; pp. 43–61. [Google Scholar]
  36. Schutz, B.E.; Zwally, H.J.; Shuman, C.A.; Hancock, D.; DiMarzio, J.P. Overview of the ICESat mission. Geophys. Res. Lett. 2005, 32. [Google Scholar] [CrossRef] [Green Version]
  37. Abdalati, W.; Zwally, H.J.; Bindschadler, R.; Csatho, B.; Farrell, S.L.; Fricker, H.A.; Harding, D.; Kwok, R.; Lefsky, M.; Markus, T. The ICESat-2 laser altimetry mission. Proc. IEEE 2010, 98, 735–751. [Google Scholar] [CrossRef]
  38. Stavros, E.N.; Schimel, D.; Pavlick, R.; Serbin, S.; Swann, A.; Duncanson, L.; Fisher, J.B.; Fassnacht, F.; Ustin, S.; Dubayah, R. ISS observations offer insights into plant function. Nat. Ecol. Evol. 2017, 1, 1–5. [Google Scholar] [CrossRef]
  39. Gastellu-Etchegorry, J.-P.; Yin, T.; Lauret, N.; Grau, E.; Rubio, J.; Cook, B.D.; Morton, D.C.; Sun, G. Simulation of satellite, airborne and terrestrial LiDAR with DART (I): Waveform simulation with quasi-Monte Carlo ray tracing. Remote Sens. Environ. 2016, 184, 418–435. [Google Scholar] [CrossRef]
  40. Wagner, W.; Ullrich, A.; Ducic, V.; Melzer, T.; Studnicka, N. Gaussian decomposition and calibration of a novel small-footprint full-waveform digitising airborne laser scanner. ISPRS J. Photogramm. Remote Sens. 2006, 60, 100–112. [Google Scholar] [CrossRef]
  41. Korpela, I.; Ørka, H.O.; Hyyppä, J.; Heikkinen, V.; Tokola, T. Range and AGC normalization in airborne discrete-return LiDAR intensity data for forest canopies. ISPRS J. Photogramm. Remote Sens. 2010, 65, 369–379. [Google Scholar] [CrossRef]
  42. Wagner, W.; Hyyppa, J.; Ullrich, A.; Lehner, H.; Briese, C.; Kaasalainen, S. Radiometric calibration of full-waveform small-footprint airborne laser scanners. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 163–168. [Google Scholar]
  43. Jutzi, B.; Gross, H.; Sensing, R.; Karlsruhe, U. Normalization of Lidar Intensity Data Based on Range and Surface Incidence Angle. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2009, 38, 213–218. [Google Scholar]
  44. Gross, H.; Thoennessen, U. Extraction of lines from laser point clouds. Symp. ISPRS Comm. III Photogramm. Comput. Vis. 2006, 36, 86–91. [Google Scholar]
  45. Zhu, X.; Wang, T.; Darvishzadeh, R.; Skidmore, A.K.; Niemann, K.O. 3D leaf water content mapping using terrestrial laser scanner backscatter intensity with radiometric correction. ISPRS J. Photogramm. Remote Sens. 2015, 110, 14–23. [Google Scholar] [CrossRef]
  46. Beckmann, P.; Spizzichino, A. The scattering of Electromagnetic Waves from Rough Surfaces; Pergamon Press: New York, NY, USA, 1987. [Google Scholar]
  47. Lefsky, M.A.; Cohen, W.B.; Acker, S.A.; Parker, G.G.; Spies, T.A.; Harding, D. Lidar Remote Sensing of the Canopy Structure and Biophysical Properties of Douglas-Fir Western Hemlock Forests. Remote Sens. Environ. 1999, 70, 339–361. [Google Scholar] [CrossRef]
  48. McGlinchy, J.; Van Aardt, J.A.N.; Erasmus, B.; Asner, G.P.; Mathieu, R.; Wessels, K.; Knapp, D.; Kennedy-Bowdoin, T.; Rhody, H.; Kerekes, J.P. Extracting structural vegetation components from small-footprint waveform lidar for biomass estimation in savanna ecosystems. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 7, 480–490. [Google Scholar] [CrossRef]
  49. Sarrazin, M.J.D.; Van Aardt, J.A.N.; Asner, G.P.; McGlinchy, J.; Messinger, D.W.; Wu, J. Fusing small-footprint waveform LiDAR and hyperspectral data for canopy-level species classification and herbaceous biomass modeling in savanna ecosystems. Can. J. Remote Sens. 2012, 37, 653–665. [Google Scholar] [CrossRef]
  50. Heinzel, J.; Koch, B. Exploring full-waveform LiDAR parameters for tree species classification. Int. J. Appl. Earth Obs. Geoinf. 2011, 13, 152–160. [Google Scholar] [CrossRef]
  51. Höfle, B.; Hollaus, M.; Hagenauer, J. Urban vegetation detection using radiometrically calibrated small-footprint full-waveform airborne LiDAR data. ISPRS J. Photogramm. Remote Sens. 2012, 67, 134–147. [Google Scholar] [CrossRef]
  52. Weiss, M.; Baret, F.; Smith, G.J.; Jonckheere, I.; Coppin, P. Review of methods for in situ leaf area index (LAI) determination: Part II. Estimation of LAI, errors and sampling. Agric. For. Meteorol. 2004, 121, 37–53. [Google Scholar] [CrossRef]
  53. Bonan, G.B.; Williams, M.; Fisher, R.A.; Oleson, K.W. Modeling stomatal conductance in the earth system: Linking leaf water-use efficiency and water transport along the soil–plant–atmosphere continuum. Geosci. Model Dev. 2014, 7, 2193–2222. [Google Scholar] [CrossRef] [Green Version]
  54. Kamoske, A.G.; Dahlin, K.M.; Stark, S.C.; Serbin, S.P. Leaf area density from airborne LiDAR: Comparing sensors and resolutions in a temperate broadleaf forest ecosystem. For. Ecol. Manag. 2019, 433, 364–375. [Google Scholar] [CrossRef]
  55. Stark, S.C.; Leitold, V.; Wu, J.L.; Hunter, M.O.; de Castilho, C.V.; Costa, F.R.C.; McMahon, S.M.; Parker, G.G.; Shimabukuro, M.T.; Lefsky, M.A.; et al. Amazon forest carbon dynamics predicted by profiles of canopy leaf area and light environment. Ecol. Lett. 2012, 15, 1406–1414. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  56. Becknell, J.M.; Desai, A.R.; Dietze, M.C.; Schultz, C.A.; Starr, G.; Duffy, P.A.; Franklin, J.F.; Pourmokhtarian, A.; Hall, J.; Stoy, P.C. Assessing interactions among changing climate, management, and disturbance in forests: A macrosystems approach. Bioscience 2015, 65, 263–274. [Google Scholar] [CrossRef] [Green Version]
  57. MacArthur, R.H.; Horn, H.S. Foliage profile by vertical measurements. Ecology 1969, 50, 802–804. [Google Scholar] [CrossRef] [Green Version]
  58. Lefsky, M.A.; Cohen, W.B.; Parker, G.G.; Harding, D.J. Lidar remote sensing for ecosystem studies: Lidar, an emerging remote sensing technology that directly measures the three-dimensional distribution of plant canopies, can accurately estimate vegetation structural attributes and should be of particular inte. Bioscience 2002, 52, 19. [Google Scholar] [CrossRef]
  59. Leblanc, S.G.; Chen, J.M.; Fernandes, R.; Deering, D.W.; Conley, A. Methodology comparison for canopy structure parameters extraction from digital hemispherical photography in boreal forests. Agric. For. Meteorol. 2005, 129, 187–207. [Google Scholar] [CrossRef] [Green Version]
  60. Cutini, A.; Matteucci, G.; Mugnozza, G.S. Estimation of leaf area index with the Li-Cor LAI 2000 in deciduous forests. For. Ecol. Manag. 1998, 105, 55–65. [Google Scholar] [CrossRef]
  61. Solberg, S.; Næsset, E.; Hanssen, K.H.; Christiansen, E. Mapping defoliation during a severe insect attack on Scots pine using airborne laser scanning. Remote Sens. Environ. 2006, 102, 364–376. [Google Scholar] [CrossRef]
  62. Gatziolis, D.; Andersen, H.-E. A Guide to LIDAR Data Acquisition and Processing for the Forests of the Pacific Northwest; General Technical Report PNW-GTR-768; Pacific Northwest Research Station, Forest Service, United States Department of Agriculture: Portland, OR, USA, 2008; 768. [CrossRef]
  63. Wagner, W.; Ullrich, A.; Melzer, T.; Briese, C.; Kraus, K. From Single-Pulse to Full-Waveform Airborne Laser Scanners: Potential and Practical Challenges. Int. Arch. Photogramm. Remote Sens. Geoinf. Sci. 2004, 35, 414–419. [Google Scholar]
  64. Hagstrom, S.T. Voxel-Based LIDAR Analysis and Applications. Ph.D. Thesis, Rochester Institute of Technology, Rochester, NY, USA, 2014. [Google Scholar]
  65. Blair, J.B.; Hofton, M.A. Modeling laser altimeter return waveforms over complex vegetation using high-resolution elevation data. Geophys. Res. Lett. 1999, 26, 2509–2512. [Google Scholar] [CrossRef]
  66. Chauve, A.; Mallet, C.; Bretar, F.; Durrieu, S.; Deseilligny, M.P.; Puech, W. Processing full-waveform lidar data: Modelling raw signals. In Proceedings of the International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 2007; ISPRS: Hannover, Germany, 2008; pp. 102–107. [Google Scholar]
  67. Kobayashi, H.; Iwabuchi, H. A coupled 1-D atmosphere and 3-D canopy radiative transfer model for canopy reflectance, light environment, and photosynthesis simulation in a heterogeneous landscape. Remote Sens. Environ. 2008, 112, 173–185. [Google Scholar] [CrossRef]
  68. Goel, N.S.; Rozehnal, I.; Thompson, R.L. A computer graphics based model for scattering from objects of arbitrary shapes in the optical region. Remote Sens. Environ. 1991, 36, 73–104. [Google Scholar] [CrossRef]
  69. Ni-Meister, W.; Jupp, D.L.B.; Dubayah, R. Modeling lidar waveforms in heterogeneous and discrete canopies. IEEE Trans. Geosci. Remote Sens. 2001, 39, 1943–1958. [Google Scholar] [CrossRef] [Green Version]
  70. Burton, R.R.; Schott, J.R.; Brown, S.D. Elastic ladar modeling for synthetic imaging applications. In Proceedings of the International Symposium on Optical Science and Technology, Seattle, WA, USA, 7–11 July 2002; Volume 4816, p. 144. [Google Scholar]
  71. Plachetka, T. POV Ray: Persistence of vision parallel raytracer. In Proceedings of the Spring Conference on Computer Graphics, Budmerice, Slovakia, 23–25 April 1998; Volume 123. [Google Scholar]
  72. Goodwin, N.R.; Coops, N.C.; Culvenor, D.S. Development of a simulation model to predict LiDAR interception in forested environments. Remote Sens. Environ. 2007, 111, 481–492. [Google Scholar] [CrossRef]
  73. North, P.R.J.; Rosette, J.A.B.; Suárez, J.C.; Los, S.O. A Monte Carlo radiative transfer model of satellite waveform LiDAR. Int. J. Remote Sens. 2010, 31, 1343–1358. [Google Scholar] [CrossRef]
  74. Kotchenova, S.Y.; Shabanov, N.V.; Knyazikhin, Y.; Davis, A.B.; Dubayah, R.; Myneni, R.B. Modeling Lidar waveforms with time-dependent stochastic radiative transfer theory for remote estimations of forest structure. J. Geophys. Res. Atmos. 2003, 108. [Google Scholar] [CrossRef] [Green Version]
  75. Calders, K.; Lewis, P.; Disney, M.; Verbesselt, J.; Herold, M. Investigating assumptions of crown archetypes for modelling LiDAR returns. Remote Sens. Environ. 2013, 134, 39–49. [Google Scholar] [CrossRef]
  76. Qin, H.; Wang, C.; Xi, X.; Tian, J.; Zhou, G. Simulating the Effects of the Airborne Lidar Scanning Angle, Flying Altitude, and Pulse Density for Forest Foliage Profile Retrieval. Appl. Sci. 2017, 7, 712. [Google Scholar] [CrossRef] [Green Version]
  77. Morsdorf, F.; Frey, O.; Koetz, B.; Meier, E. Ray tracing for modeling of small footprint airborne laser scanning returns. In Proceedings of the ISPRS Workshop ‘Laser Scanning 2007 and SilviLaser 2007’, Espoo, Finland, 12–14 September 2007; ISPRS: Hanover, Germany; Volume 36, pp. 249–299. [Google Scholar]
  78. Blevins, D.D. Modeling Multiple Scattering and Absorption for a Differential Absorption LIDAR System. Ph.D. Thesis, Rochester Institute of Technology, Rochester, NY, USA, 2005. [Google Scholar]
  79. Wu, J.; Cawse-Nicholson, K.; VanAardt, J. 3D Tree Reconstruction from Simulated Small Footprint Waveform Lidar. Am. Soc. Photogramm. Remote Sens. 2013, 79, 1147–1157. [Google Scholar] [CrossRef]
  80. Romanczyk, P.; van Aardt, J.; Cawse-Nicholson, K.; Kelbe, D.; McGlinch, J.; Krause, K. Assessing the impact of broadleaf tree structure on airborne full-waveform small-footprint LiDAR signals through simulation. Can. J. Remote Sens. 2013, 39, S60–S72. [Google Scholar] [CrossRef]
  81. Wu, J.; Van Aardt, J.A.N.; McGlinchy, J.; Asner, G.P. A robust signal preprocessing chain for small-footprint waveform lidar. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3242–3255. [Google Scholar] [CrossRef]
  82. Wu, J.; Van Aardt, J.A.N.; Asner, G.P. A comparison of signal deconvolution algorithms based on small-footprint LiDAR waveform simulation. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2402–2414. [Google Scholar] [CrossRef]
  83. Widlowski, J.L.; Pinty, B.; Lopatka, M.; Atzberger, C.; Buzica, D.; Chelle, M.; Disney, M.; Gastellu-Etchegorry, J.P.; Gerboles, M.; Gobron, N.; et al. The fourth radiation transfer model intercomparison (RAMI-IV): Proficiency testing of canopy reflectance models with ISO-13528. J. Geophys. Res. Atmos. 2013, 118, 6869–6890. [Google Scholar] [CrossRef] [Green Version]
  84. Lidar Modality Handbook. Available online: https://dirsig.cis.rit.edu/docs/new/lidar.html (accessed on 11 June 2019).
  85. Goodenough, A.A.; Brown, S.D. DIRSIG5: Next-generation remote sensing data and image simulation framework. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 4818–4833. [Google Scholar] [CrossRef]
  86. Roth, B.D.; Saunders, M.G.; Bachmann, C.M.; van Aardt, J. On Leaf BRDF Estimates and Their Fit to Microfacet Models. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 1761–1771. [Google Scholar] [CrossRef]
  87. Roth, B. Broad Leaf Bidirectional Scattering Distribution Functions (BSDFs). Remote Sens. 2020. [Google Scholar] [CrossRef]
  88. Katsev, I.L.; Zege, E.P.; Prikhach, A.S.; Polonsky, I.N. Efficient technique to determine backscattered light power for various atmospheric and oceanic sounding and imaging systems. JOSA A 1997, 14, 1338–1346. [Google Scholar] [CrossRef]
  89. Verhoef, W. Light scattering by leaf layers with application to canopy reflectance modeling: The SAIL model. Remote Sens. Environ. 1984, 16, 125–141. [Google Scholar] [CrossRef] [Green Version]
  90. De Wit, C.T. Photosynthesis of Leaf Canopies; Wageningen University: Wageningen, The Netherlands, 1965. [Google Scholar]
  91. Pisek, J.; Sonnentag, O.; Richardson, A.D.; Mõttus, M. Is the spherical leaf inclination angle distribution a valid assumption for temperate and boreal broadleaf tree species? Agric. For. Meteorol. 2013, 169, 186–194. [Google Scholar] [CrossRef]
  92. Zhao, K.; Popescu, S. Lidar-based mapping of leaf area index and its use for validating GLOBCARBON satellite LAI product in a temperate forest of the southern USA. Remote Sens. Environ. 2009, 113, 1628–1645. [Google Scholar] [CrossRef]
  93. Bouvier, M.; Durrieu, S.; Fournier, R.A.; Renaud, J.-P. Generalizing predictive models of forest inventory attributes using an area-based approach with airborne LiDAR data. Remote Sens. Environ. 2015, 156, 322–334. [Google Scholar] [CrossRef]
  94. Richardson, J.J.; Moskal, L.M.; Kim, S.-H. Modeling approaches to estimate effective leaf area index from aerial discrete-return LIDAR. Agric. For. Meteorol. 2009, 149, 1152–1160. [Google Scholar] [CrossRef]
  95. Romanczyk, P. Extraction of Vegetation Biophysical Structure from Small-Footprint Full-Waveform Lidar Signals. Ph.D. Thesis, Rochester Institute of Technology, Rochester, NY, USA, 2015. [Google Scholar]
  96. Woodhouse, I.H.; Nichol, C.; Sinclair, P.; Jack, J.; Morsdorf, F.; Malthus, T.J.; Patenaude, G. A multispectral canopy LiDAR demonstrator project. IEEE Geosci. Remote Sens. Lett. 2011, 8, 839–843. [Google Scholar] [CrossRef]
  97. Wei, G.; Shalei, S.; Bo, Z.; Shuo, S.; Faquan, L.; Xuewu, C. Multi-wavelength canopy LiDAR for remote sensing of vegetation: Design and system performance. ISPRS J. Photogramm. Remote Sens. 2012, 69, 1–9. [Google Scholar] [CrossRef]
  98. Thomas, J.J. Terrain Classification Using Multi-Wavelength LiDAR Data. Master’s Thesis, Naval Postgraduate School Monterey United States, Monterey, CA, USA, September 2015. [Google Scholar]
  99. Bunnik, N. The Multispectral Reflectance of Shortwave Radiation by Agricultural Crops in Relation with Their Morphological and Optical Properties. Ph.D. Thesis, Wageningen University, Wageningen, The Netherlands, 1978. [Google Scholar]
Figure 1. Lidar waveforms created by DIRSIG, approximating the configuration for the RAMI-IV study (both single and multiscatter) for the HET17 scene. The left plot is the waveform in terms of photons, while the right plot waveforms are normalized to have an integral of one, as was performed in RAMI-IV.
Figure 2. Normalized lidar waveforms from the RAMI-IV study with the DIRSIG-produced waveforms overlapped. The left plot displays waveforms produced from single scatter contributions, while the right figure includes multiscatter (figure adapted from [83]).
Figure 3. The left figure is the multiscatter contribution percentage as a function of lidar footprint radius and total albedo (reflectance + transmittance); the right figure is the waveform for an albedo of one and a footprint radius of 7.5.
Figure 4. Comparison of monodirectional reflectance distribution function (MRDF) seen from a simulated lidar in DIRSIG, scanning at 5° zenith steps and capturing the backscatter from 0° to 85° zenith angles of a modeled flat plate. The resulting MRDF is compared to the input raw file, defining the bidirectional scattering distribution function (BSDF) properties of the plate, at equivalent data point locations. The comparison was accomplished at 500, 1064, and 1550 nm, as seen in the plots from left-to-right.
Figure 5. Comparison at 550 nm of bidirectional reflectance distribution function (BRDF), top row, and bidirectional transmittance distribution function (BTDF), bottom row, between input BSDF material properties, left column, and a simulated goniometer in DIRSIG, right column. The illumination source is set at 45° with view angles in 30° azimuth steps (0°–360°) and 15° zenith steps (0°–60°) for the goniometer. The white “x” depicts the source angle, while white stars represent the measured locations of the goniometer.
Figure 6. Comparison at 1064 nm of BRDF, top row, and BTDF, bottom row, between input BSDF material properties, left column, and a simulated goniometer in DIRSIG, right column. The illumination source is set at 45° with view angles in 30° azimuth steps (0°–360°) and 15° zenith steps (0°–60°) for the goniometer. The white “x” depicts the source angle, while white stars represent the measured locations of the goniometer.
Figure 7. Leaf angle distribution (LAD) cdfs of some of the more prominent named distributions. Each distribution is defined here by its name, e.g., planophile, and its two variables (a,b) according to the method described by Verhoef [89].
Figure 8. The CPD_after(R1) is the integrated waveform after the top of the voxel, while CPD_after(R2) is the integrated waveform after the bottom of the voxel.
Figure 9. Lidar waveform, seen on the left, and the LADen profiles estimated using the waveform intensity data on the right.
Figure 10. Accuracy of the slope method for finding the extinction coefficient, k. The plot compares the estimated LAI to the "truth" or reference LAI. Red crosses are the LAI estimates with the bias accounted for, while the solid green line shows where the estimates would lie without accounting for the bias (small zoomed inset shown, bottom-right). The blue dashed line is the one-to-one line.
Figure 11. Visualization of the four different leaf BSDF configurations used for the sensitivity studies.
Figure 12. Diagram of the construction of the maple and oak grove. On the left side of the figure is a map with the locations and footprints of the tree models in the scene. The numbers correspond to the tree model index, which is listed in the table on the right side of the figure. Under the table are thumbnail views of each tree model to provide a relative approximation of the shape and size of each tree.
Figure 13. Images of maple and oak grove from directly overhead, (left), and from the side, (right). Red scatter points are displayed on the overhead image at locations where the canopy was interrogated.
Figure 14. Heat map of LAI, left, and mean leaf angle, center, in one meter pixels. Blue scatter points are displayed on the heat maps for the lidar interrogation sites. Right, the cumulative distribution function curves for the 10 interrogation sites are plotted over each other, along with planophile and spherical distribution curves for reference.
Figure 15. Waveform comparison plots for each leaf BSDF type at the zero zenith angle and spherical leaf angle distribution of leaves. Plots are shown for the three beam footprint diameters of 0.1, 0.5, and 5 m (left-to-right), and the three wavelengths of 550, 1064, and 1550 nm (top-to-bottom). The legend symbols for the four leaf BSDF types are “M” for model, “L” for Lambertian, “MO” for model reflectance and no transmittance, and “ML” for model reflectance and Lambertian transmittance. In order to display more detail, the last five meters of range are excluded, thus only displaying the canopy returns.
Figure 16. Bar graphs of the five highest and lowest values (top five and bottom five bars in each plot, respectively) for each leaf BSDF simplification type, left-to-right: Lambertian, model-opaque, and model-Lambertian; each metric, top-to-bottom: percent increase, overlap, and LAI increase. Note, the axes are scaled differently for each graph as the scales between the different leaf types are drastically different for many of the metrics. Labels for the simulations are wavelength (nm), footprint size (m), zenith angle (degree), and LAD type (spherical (Spher), planophile (Plan), and plagiophile (Plagi)).
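The exact definitions of the percent increase and waveform overlap metrics are given earlier in the paper and are not restated in these captions. Purely as an illustration of how such waveform comparisons can be computed, the sketch below uses assumed definitions (total-energy percent change, and shared area of unit-normalized waveforms); it is not the authors' implementation.

```python
# Illustrative metric sketch only: the formulas below are assumptions chosen
# to behave like the "percent increase" and "overlap" quantities in Figure 16,
# not the paper's exact definitions.
import numpy as np

def percent_increase(w_model, w_simplified):
    """Assumed: percent change in total returned energy relative to the
    full-BSDF (model) waveform."""
    e_model, e_simp = np.sum(w_model), np.sum(w_simplified)
    return 100.0 * (e_simp - e_model) / e_model

def waveform_overlap(w_model, w_simplified):
    """Assumed: shared area of the two waveforms after normalizing each to
    unit area (1.0 = identical shapes, 0.0 = disjoint)."""
    a = w_model / np.sum(w_model)
    b = w_simplified / np.sum(w_simplified)
    return float(np.sum(np.minimum(a, b)))
```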
Figure 17. Waveform comparison plots for the maple and oak grove scene for each leaf BSDF type. Plots are shown for the three beam footprint diameters of 0.1, 0.5, and 5 m (left-to-right) and the two wavelengths of 550 and 1064 nm (top-to-bottom). The legend symbols for the four leaf BSDF types are “M” for model, “L” for Lambertian, “MO” for model reflectance and no transmittance, and “ML” for model reflectance and Lambertian transmittance. Only the main canopy range is shown, thereby highlighting any differences.
Figure 18. Box plots of the percent increase, waveform overlap, and LAI percent increase metrics (rows, top-to-bottom) for the 550 nm wavelength (left column) and the 1064 nm wavelength (right column). The plots encompass the ten interrogation sites for each of the three lidar footprints (0.1, 0.5, and 5 m) and three optical properties (Lambertian, L; model-opaque, MO; and model-Lambertian, ML).
Table 1. Digital Imaging and Remote Sensing Image Generation (DIRSIG) lidar settings for radiation transfer model inter-comparison (RAMI-IV) comparison.
Parameter | Value
Altitude | 2000 m
Wavelength | 1064 nm
Laser Spectral Width | 0.0003 μm
Pulse Energy | 100 mJ
Beam Shape | cylinder “Rect”
Beam Divergence | 0.025 rad
Pulse Length | 1 × 10⁻²¹ s
Temporal Pulse Shape | Gaussian
Photon Map (PM) search radius | 1 cm
Gate Range | 1.323 × 10⁻⁵ to 1.335 × 10⁻⁵ s
Bin Size | 4 ns (0.6 m)
Receive Radius | 0.05 m
Detector length | 100 μm square
Focal Length | 0.004 m
Spatial subsampling | 100 × 100
Maximum events in photon map per pulse | 4,000,000
Maximum source bundles per pulse | 8,000,000
Maximum bounces per photon bundle | 10
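As a quick sanity check on the timing values in Table 1, the receiver gate should bracket the two-way travel time of light to a target near the 2000 m platform altitude. The short Python sketch below performs that arithmetic; it illustrates the relationship between altitude and gate range and is not taken from the DIRSIG configuration itself.

```python
# Hypothetical consistency check (not from the paper): the gate range in
# Table 1 should contain the round-trip time of a return from near the
# 2000 m platform altitude.
C = 2.998e8  # speed of light in vacuum (m/s)

def round_trip_time(range_m: float) -> float:
    """Two-way travel time for a return from a surface at range_m meters."""
    return 2.0 * range_m / C

t_ground = round_trip_time(2000.0)          # ~1.334e-5 s
gate_open, gate_close = 1.323e-5, 1.335e-5  # gate range from Table 1 (s)
print(f"ground return at {t_ground:.4e} s, "
      f"inside gate: {gate_open <= t_ground <= gate_close}")
```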
Table 2. DIRSIG lidar settings for RAMI-IV comparison.
Parameter | Value
Altitude | 1000 m
Wavelength | 1064 nm
Laser Spectral Width | 1 × 10⁻⁵ μm
Pulse Energy | 0.2 mJ
Beam Shape | Gaussian
Beam Divergence | varied
Pulse Length | 3 ns
Temporal Pulse Shape | Gaussian
PM search radius | 1 cm
Gate Range | 6.55 × 10⁻⁶ to 6.75 × 10⁻⁵ s
Bin Size | 1 ns
Receive Radius | 0.025 m
Detector length | 250 μm square
Focal Length | varied
Spatial subsampling | 101 × 101
Maximum events in photon map per pulse | 400,000
Maximum source bundles per pulse | 800,000
Maximum bounces per photon bundle | 10 (for multiscatter)
Table 3. List of parameters and associated values investigated for vegetation layer sensitivity study.
Parameter | Values
Wavelength | 550 nm, 1064 nm, 1550 nm
Footprint Diameter | 0.1 m, 0.5 m, 5 m
Zenith View Angle | 0°, 22.5°, 45°
LAD | Spherical, Planophile, Plagiophile
BSDF | Model, Lambertian, Model-Opaque, Model-Lambertian
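Table 3 defines a full factorial of sensitivity-study cases. The sketch below simply enumerates that grid to make the total case count explicit; the variable names are illustrative and not taken from the authors' scripts.

```python
# Minimal sketch of how the sensitivity-study parameters in Table 3 combine;
# this full factorial is an illustration, not the authors' run script.
from itertools import product

wavelengths = [550, 1064, 1550]                 # nm
footprints  = [0.1, 0.5, 5.0]                   # m
zeniths     = [0.0, 22.5, 45.0]                 # degrees
lads        = ["spherical", "planophile", "plagiophile"]
bsdfs       = ["model", "lambertian", "model-opaque", "model-lambertian"]

cases = list(product(wavelengths, footprints, zeniths, lads, bsdfs))
print(len(cases), "simulation configurations")  # 3 * 3 * 3 * 3 * 4 = 324
```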
Table 4. DIRSIG lidar settings for BSDF sensitivity study.
Parameter | Value
Altitude | Varied (1000 m distance to target)
Wavelength | 550, 1064, 1550 nm
Laser Spectral Width | 1 × 10⁻⁵ μm
Pulse Energy | 0.2 mJ
Beam Shape | Gaussian
Beam Divergence | 1 × 10⁻⁴, 5 × 10⁻⁴, 5 × 10⁻³ rad
Pulse Length | 3 ns
Temporal Pulse Shape | Gaussian
PM search radius | 1 cm
Gate Range | 6.55 × 10⁻⁶ to 6.75 × 10⁻⁵ s
Bin Size | 1 ns
Receive Radius | 0.025 m
Detector length | 250 μm square
Focal Length | 1.25, 0.25, 0.025 m
Spatial subsampling | 101 × 101
Maximum events in photon map per pulse | 400,000
Maximum source bundles per pulse | 800,000
Maximum bounces per photon bundle | 10 (for multiscatter)
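Assuming the nominal footprint diameter scales as divergence times range (an assumption, since the paper's exact beam definition is not restated in these tables), the divergence values in Table 4 combined with the fixed 1000 m distance to target reproduce the 0.1, 0.5, and 5 m footprints of Table 3, as the back-of-the-envelope sketch below shows.

```python
# Back-of-the-envelope check (an assumption, not stated in the tables): with
# a fixed 1000 m distance to target, footprint ≈ divergence * range recovers
# the nominal footprint diameters used throughout the sensitivity study.
RANGE_M = 1000.0                   # distance to target (m)
divergences = [1e-4, 5e-4, 5e-3]   # rad, from Table 4

for div in divergences:
    print(f"divergence {div:.0e} rad -> footprint ≈ {div * RANGE_M:.1f} m")
# divergence 1e-04 rad -> footprint ≈ 0.1 m
# divergence 5e-04 rad -> footprint ≈ 0.5 m
# divergence 5e-03 rad -> footprint ≈ 5.0 m
```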
Table 5. List of interrogation locations in x, y coordinates, LAI, and mean leaf angle.
Loc # | X | Y | LAI | Mean Leaf Angle
0 | 15 | 15 | 5.61 | 38.9°
1 | 14.5 | 15 | 4.14 | 41°
2 | 8.52 | 12.43 | 2.58 | 40.8°
3 | 13.45 | 13.03 | 4.09 | 39.2°
4 | 9.04 | 14.92 | 5.31 | 40°
5 | 14.54 | 17.75 | 5.61 | 39.9°
6 | 13.86 | 14.82 | 6.08 | 38.9°
7 | 11.91 | 9.81 | 2.82 | 39.2°
8 | 9.29 | 15.22 | 1.02 | 40.6°
9 | 13.39 | 14.46 | 4.09 | 39.2°
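For context, the planophile and spherical reference curves in Figure 14 follow the standard de Wit leaf angle distribution functions. The sketch below (not code from the paper) computes their cumulative curves and mean leaf angles, between which the roughly 39–41° site means listed in Table 5 fall.

```python
# Minimal sketch using the standard de Wit leaf angle distribution densities
# (spherical and planophile), for comparison with the cumulative curves in
# Figure 14; not code from the paper.
import numpy as np

theta = np.linspace(0.0, np.pi / 2, 10_000)   # leaf inclination angle (rad)

densities = {
    "spherical":  np.sin(theta),                               # f(t) = sin(t)
    "planophile": (2.0 / np.pi) * (1.0 + np.cos(2.0 * theta)),  # f(t) = 2/pi (1 + cos 2t)
}

for name, f in densities.items():
    cdf = np.cumsum(f) / np.sum(f)            # cumulative curve, as in Figure 14
    mean_deg = np.degrees(np.sum(theta * f) / np.sum(f))
    median_deg = np.degrees(theta[np.searchsorted(cdf, 0.5)])
    print(f"{name}: mean ≈ {mean_deg:.1f}°, median ≈ {median_deg:.1f}°")
# Approximate output: spherical mean 57.3°, planophile mean 26.8°
```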
Table 6. List of parameters and associated values for simulations of the maple and oak grove scene.
Parameter | Values
Wavelength | 550 nm, 1064 nm
Footprint Diameter | 0.1 m, 0.5 m, 5 m
Zenith View Angle | 0°
BSDF | Model, Lambertian, Model-Opaque, Model-Lambertian
