Airborne Waveform Lidar Simulator Using the Radiative Transfer of a Laser Pulse

An airborne lidar simulator creates a lidar point cloud from a simulated lidar system, flight parameters, and the terrain digital elevation model (DEM). At the basic level, the lidar simulator computes the range from a lidar system to the surface of a terrain using the geomatics lidar equation. The simple computation effectively assumes that the beam divergence is zero. If the beam spot is meaningfully large due to a large beam divergence combined with a high sensor altitude, then the beam plane with a finite size interacts with a ground target in a realistic and complex manner. The irradiance distribution of a delta-pulse beam plane is defined based on laser pulse radiative transfer. The airborne lidar simulator in this research simulates the interaction between the delta-pulse and a three-dimensional (3D) object and produces a waveform. The waveform is then convolved with a system response function. The lidar simulator also computes the total propagated uncertainty (TPU). All sources of uncertainty associated with the position of the lidar point and the detailed geomatics equations to compute TPU are described. The boresighting error analysis and the 3D accuracy assessment are provided as example applications of the simulator.


Introduction
Airborne lidar point clouds and the derived elevation products are essential geospatial datasets, along with airborne or spaceborne image data. Accordingly, the efforts to ensure high-quality lidar data have become increasingly important [1,2]. Much research has contributed to the quality assurance (QA) of lidar data [3][4][5]. The direct georeferencing geomatics equation is the backbone of lidar accuracy research. An airborne lidar simulator that computes point cloud solutions by the forward modeling of airborne lidar parameters using the geomatics equation is a useful tool in lidar research. Lidar simulation software can simulate the effect of lidar parameters on the point cloud and support various advanced lidar research. An airborne lidar simulator consists of a laser module, a scanner module, a global navigation satellite system / inertial measurement unit (GNSS/IMU) module, the flight parameters, and the environmental parameters. The environmental parameters include the optical property of the propagating medium and the 3D terrain, including 3D objects. The laser module is characterized by the pulse repetition frequency (PRF), the pulse width, the beam divergence angle at the e^-2 level, and the pulse distribution function. The scanner determines the overall scan pattern, such as the linear oscillating pattern from a Galvo mirror scanner and the circular pattern from a rotating prism. The scan frequency is also an important factor in determining the along-track and across-track lidar point distribution. The field of view (FOV) of a scanner, along with the sensor above ground level (AGL) altitude, determines the swath width. The DEM component describes the terrain over which the lidar system scans.
Several airborne lidar simulator software programs were created using mainly direct georeferencing [6][7][8][9]. The simple computation assumes that the beam divergence is effectively zero. Thus, it assumes that the light interaction occurs only at the point where the beam center hits the terrain surface. In case the beam edge touches the target but the beam center evades the target, as shown in Figure 1, the simple georeferencing will not find a target hit. Instead, it will find a point on the background terrain.
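The contrast between the two hit tests can be sketched in a toy one-dimensional form. All names and the geometry below are illustrative assumptions, not part of any simulator described here: a target of a given half-width is centered at zero, and the beam center ray passes at some lateral offset.

```python
def center_hit(offset, target_half_width):
    """Zero-beam-divergence assumption: only the beam-center ray is tested
    against the target, so a near miss of the center is a miss."""
    return abs(offset) <= target_half_width

def footprint_hit(offset, target_half_width, spot_radius):
    """Finite beam spot: a hit is recorded whenever the footprint overlaps
    the target, even if the center ray itself misses."""
    return abs(offset) <= target_half_width + spot_radius
```

For example, with a 1 m wide target (half-width 0.5 m), a beam-center offset of 0.6 m, and a 0.2 m spot radius, the center test reports a miss while the footprint test reports a hit, which is exactly the Figure 1 situation.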
If the beam spot is substantially large, only the waveform approach based on the radiative transfer of a laser pulse can describe the problem properly. An advanced simulator addresses the beam divergence using a simplified Gaussian beam spot model [10].
The approach presented here simulates the lidar scanner using the waveform obtained from the solution of the radiative transfer equation of laser pulse propagation, rather than a semi-analytical beam spot model. The governing lidar waveform equation is given by the following [11][12][13][14][15][16]:

S(t) = Q Σ Ω (c/2n) ∫ E_E(z(t), r) E_R(z(t), r) β(z(t), r) dr. (1)

The general lidar equation models the returned optical power, S(t). Here, Q is the laser pulse energy, Σ is the receiver pupil area, Ω is the solid angle subtended from a target to the receiver, c is the speed of light, and n is the refractive index. The governing equation models the optical return as an integral of the two irradiance distributions, the emitter irradiance, E_E, and the receiver irradiance, E_R, in the presence of the interaction function β. The integration in Equation (1) is performed over a normal cross-section plane at distance z(t) after the pulse propagation time t. The vector r represents the two-dimensional (2D) position on the cross-section plane. As the emitted laser pulse propagates, the cross-section area expands due to the beam divergence and the light is attenuated by the medium, creating weaker emitter irradiance.
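In practice, the integral in Equation (1) is evaluated numerically on a grid over the cross-section plane. A minimal sketch of one waveform sample as a discretized cross-section integral might look as follows; the function name, grid handling, and default constants are assumptions for illustration only:

```python
import numpy as np

def waveform_sample(EE, ER, beta, cell_area, Q=1.0, pupil=1.0, omega=1.0,
                    c=3e8, n=1.0):
    """One waveform sample S(t): a discrete form of the governing equation.

    EE, ER    -- emitter / receiver irradiance sampled on the cross-section grid
    beta      -- interaction function sampled on the same grid
    cell_area -- area of one grid cell (the dr element of the integral)
    """
    return Q * pupil * omega * (c / (2.0 * n)) * np.sum(EE * ER * beta) * cell_area
```

Stepping z(t) along the propagation direction and calling this once per time step yields the full simulated waveform.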
The concept of receiver irradiance may not be intuitive because the receiver does not send out any light. However, the receiver sensitivity distribution can be considered as if it is a radiance distribution. The sensitivity function experiences the same type of expansion and attenuation defined by the field of view. Indeed, the governing lidar equation calls for a different view of how lidar return is detected. In Figure 1, the narrow green beam represents the laser irradiance and the wider blue beam represents the receiver irradiance.
The interaction function, β(z, r), for the first green return arrow represents the lidar return from the ambient medium, the second arrow represents the target reflection, and the third arrow represents the topographic surface reflection (Figure 1). The irradiance distribution of a delta-pulse beam plane provides the chance of evaluating the time-varying interaction along the delta-pulse and the target surface interaction line as the beam propagates downward. Thus, the essential component in the radiative transfer-based waveform approach is how to compute irradiance. The theory and solutions are described in the following sections.

Lidar Waveform Solution by Radiative Transfer Theory
Compared to the simple radiometric solution, the radiative transfer theory of a laser pulse and its numerical solution are quite challenging. Thus, only the essential components of the theory are briefly described here. Refer to References [11][12][13][14][15][16] for the full theories and the numerical solutions. To solve the governing equation of a lidar system, the irradiance solutions in Equation (1) are needed. The irradiance as a function of the 2D position vector, r, on the beam cross section at a given distance, z, is an integration of the radiance distribution over the hemisphere:

E(z, r) = ∫_2π I(z, r, n) dω(n). (2)
Here, the radiance is integrated over infinitesimal solid angle, dω(n), centered on the direction vector n. Thus, we need a radiance distribution function I(r, n), which is obtained as a solution of the laser pulse propagation based on the radiative transfer theory.
n · ∇I(ρ, n) = −c I(ρ, n) + ∫_4π β(n, n′) I(ρ, n′) dω(n′). (3)

This is a 3D radiative transfer equation (RTE) that describes the net radiance balance at a given position and direction. The value ρ is a 3D position vector, c is an attenuation coefficient, and β is the volume scattering function of the medium. Since there is no analytical solution for Equation (3) in 3D, it is reduced to a radiance distribution on a 2D plane perpendicular to the beam direction using the small-angle approximation, because the volume scattering function of the natural air or water medium is strongly forward-oriented.

∂I(z, r, n_t)/∂z + n_t · ∇_t I(z, r, n_t) = −c I(z, r, n_t) + b_s ∫ I(z, r, n_t′) x_s(n_t − n_t′) dn_t′. (4)
Here, n_t is the 2D direction vector defined on the 2D beam cross section, and its norm is small according to the small-angle approximation; b_s is the small-angle forward-scattering coefficient; and the small-angle forward-scattering phase function, x_s, is defined as a function of the difference between the incoming and scattered 2D direction vectors. The radiance solution is derived via an analytical Fourier transform.
Here, (k, p) are the conjugate frequency variables of (r, n_t), and x̂_s is the analytical Fourier transform of x_s. Converting Equation (5) into an inverse Fourier transform and plugging it into Equation (2) yields Equation (6), where a_s is the effective absorption coefficient and a_bs is the effective attenuation coefficient due to forward scattering.

Two things are needed to solve Equation (6): first, the source laser pulse radiance distribution in the Fourier domain, and, second, the effective attenuation coefficient. Assuming a Gaussian beam, the radiance and its Fourier transform are given in Equation (7), where r_0 is the effective radius, defined as the product of half of the beam divergence angle, Θ, and the slant altitude, H_s. The effective attenuation coefficient, a_bs, due to the small-angle forward scattering b_s, is given by Equation (8) using the analytical forward-scattering phase function model, where α is a volume scattering function shape parameter by Dolin [13]. The effective attenuation coefficient, a_bs, characterizes the contribution of small-angle forward scattering to the attenuation of the spatial harmonics of spatial frequency, k, in the initial radiance distribution within a light beam propagated to the distance, z, in the medium.

Finally, using Equations (7) and (8), the irradiance solution of Equation (6) becomes Equation (9), where the effective radius is defined as P = (Θ/2)·(H_s + z/n), J_0 is the 0-th order Bessel function of the first kind, and z(r) is the geometrical distance of the slant-incident beam projected along the coordinate parallel to the direction of pulse propagation. Equation (9) is the final irradiance solution for a point at distance r from the center of a circular plane at distance z for a Gaussian laser pulse. Once the irradiance solution for the receiver is calculated in the same manner, the lidar waveform is formed by applying the two irradiances in Equation (1) and computing the numerical integration.
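As a rough illustration of the behavior of the irradiance solution in the scattering-free limit, the following sketch assumes a Gaussian profile whose effective radius grows linearly with propagation distance, P = (Θ/2)(H_s + z/n), and a single effective attenuation coefficient. This is a simplification of Equation (9), not a full implementation; the growth law, parameter names, and default values are assumptions.

```python
import numpy as np

def gaussian_irradiance(z, r, theta=1e-3, Hs=1000.0, n=1.0, a=0.0):
    """Simplified irradiance of an expanding Gaussian beam plane.

    Scattering-free sketch: the on-plane profile stays Gaussian while the
    effective radius P grows with propagation distance z; 'a' is an assumed
    effective attenuation coefficient. Normalized so the plane carries unit
    energy when a = 0.
    """
    P = 0.5 * theta * (Hs + z / n)              # effective beam radius at z
    return np.exp(-a * z) * np.exp(-(r / P) ** 2) / (np.pi * P ** 2)
```

The normalization can be checked by integrating the profile over the cross-section plane (∫ E · 2πr dr), which returns the attenuation factor e^(−a·z).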
Complete details of the theories can be found in [16].
These equations are fully generic and can be used for laser pulse propagation through any medium. This level of sophistication is critical when the laser pulse propagates through a substantially dense medium, like water for bathymetric lidar. The high absorption and scattering by the natural water medium must be properly handled using these equations. However, if a lidar simulation is only for topographic objects, the equations can be dramatically simplified. Equation (9) becomes a concentric 2D Gaussian defined by the beam spot radius, which is determined by the slant sensor altitude, H_s, and the full beam divergence, Θ, at the e^-2 level. Assuming the instantaneous field of view (IFOV) is significantly larger than the beam divergence, the product of the two irradiances in the governing Equation (1) is still a Gaussian distribution, mainly determined by the beam divergence.
The delta-pulse first touches down at a tiny line segment A (Figure 2). The length of the interaction line segment increases along the arrow direction and then decreases toward the corner. The interaction function, β(z(t), r), in this case is the target surface reflectance. Assuming a homogeneous reflectance property of the target surface, the value is constant over the target surface. The radiative transfer-based irradiance solution, E_E·E_R, is also a 2D circular plane. However, the plane hits the target surface at a non-zero incident angle, making it a slant plane. Thus, the waveform Equation (1) is non-zero only along the interaction line segment between the two position vectors r_1 and r_2, which reduces the 2D integral to a one-dimensional (1D) line integral. The solution can be obtained only numerically because the target lies at an arbitrary relative position and orientation with respect to the beam. The waveform solutions are demonstrated in the Results section.
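The pulse-target interaction can be mimicked numerically by sampling a Gaussian footprint over a raised square target and binning the reflected energy by range, which is equivalent in spirit to accumulating the interaction line contributions as the delta-pulse sweeps the surface. This is a purely illustrative sketch; the grid resolution, target geometry, and normalization are assumptions.

```python
import numpy as np

def footprint_waveform(sensor, aim_xy, spot_sigma, target_h, half_size, nbins=100):
    """Waveform from a Gaussian beam footprint over a raised square target
    (centered at the origin) on flat ground at z = 0.

    The footprint is sampled on a grid; each cell reflects energy
    proportional to the beam irradiance at that cell, and the energy is
    binned by range from the sensor to form the waveform.
    """
    xs = np.linspace(aim_xy[0] - 4 * spot_sigma, aim_xy[0] + 4 * spot_sigma, 201)
    ys = np.linspace(aim_xy[1] - 4 * spot_sigma, aim_xy[1] + 4 * spot_sigma, 201)
    X, Y = np.meshgrid(xs, ys)
    # Surface model: raised square target on flat ground.
    Z = np.where((np.abs(X) <= half_size) & (np.abs(Y) <= half_size), target_h, 0.0)
    rng = np.sqrt((X - sensor[0])**2 + (Y - sensor[1])**2 + (Z - sensor[2])**2)
    # Gaussian irradiance weight about the beam center.
    w = np.exp(-((X - aim_xy[0])**2 + (Y - aim_xy[1])**2) / (2 * spot_sigma**2))
    hist, edges = np.histogram(rng, bins=nbins, weights=w)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist
```

Aiming the beam near a target edge splits the returned energy between the target-top range bin and the ground range bin, reproducing the partial-return behavior discussed for the cube example in the Results.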

Lidar Direct Georeferencing
While the radiative transfer-based waveform approach is the advanced component of the airborne lidar simulator, direct georeferencing is the backbone of the simulator. A direction vector of an emitted laser pulse is first defined in the sensor body frame (SBF); then, multiplying it by the lidar range, ρ, to a point, P, on the surface determines the position vector in the SBF.
Next, the position vector in SBF, X_SBF_P, is converted to the position in the IMU body frame (IBF), X_IBF_P, where X_IBF_O_SBF is the lever-arm vector to the SBF origin in the IBF and R_IBF_SBF is a rotation matrix that transforms X_SBF_P to X_IBF_P, defined using three boresighting angles, ω, ϕ, κ, as follows:

R_IBF_SBF =
[ cos ϕ cos κ    sin ω sin ϕ cos κ − cos ω sin κ    cos ω sin ϕ cos κ + sin ω sin κ ]
[ cos ϕ sin κ    sin ω sin ϕ sin κ + cos ω cos κ    cos ω sin ϕ sin κ − sin ω cos κ ]
[ −sin ϕ         sin ω cos ϕ                        cos ω cos ϕ                    ]

Next, the position vector, X_IBF_P, is transformed into a new position vector, X_LTF_P, in a local tangential frame (LTF). An LTF is defined in reference to a global geodetic frame, which follows the coordinate axes convention of a North-East-Down (NED) reference frame. The orientation of the IBF with respect to the LTF is described by three IMU angles: roll (R), pitch (P), and heading (H).

Appl. Sci. 2019, 9, 2452 6 of 16

Along with the position of the GNSS antenna phase center, the lever-arm distances between the IMU origin and the antenna can be used to calculate the position of O_IBF (which is the same as O_LTF). Using the latitude, φ, longitude, λ, and ellipsoid height, h, of O_LTF, it is possible to form a rotation matrix that transforms an arbitrary position vector in LTF to the position vector in the Earth-Centered Earth-Fixed (ECEF) coordinate frame. In addition to the rotation matrix, the geodetic position (ϕ, λ, h) of O_LTF itself can be transformed into the position in ECEF coordinates, where a is the semi-major axis, e is the eccentricity, and r_N is the radius of curvature in the prime vertical. The position vector can then be solved in ECEF coordinates. Finally, the ECEF coordinates are converted to a map projection, such as the Universal Transverse Mercator (UTM) or a state plane.
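The SBF → IBF → LTF chain of transforms can be sketched as follows. The boresighting matrix mirrors the row-major form given above; the roll-pitch-heading rotation is written in one common NED form, and actual axis conventions vary by system, so treat it as an assumption.

```python
import numpy as np

def rot_boresight(w, p, k):
    """R from SBF to IBF for boresighting angles (omega, phi, kappa)."""
    cw, sw = np.cos(w), np.sin(w)
    cp, sp = np.cos(p), np.sin(p)
    ck, sk = np.cos(k), np.sin(k)
    return np.array([
        [cp * ck, sw * sp * ck - cw * sk, cw * sp * ck + sw * sk],
        [cp * sk, sw * sp * sk + cw * ck, cw * sp * sk - sw * ck],
        [-sp,     sw * cp,               cw * cp],
    ])

def rot_rph(r, p, h):
    """R from IBF to LTF for roll, pitch, heading (one common NED form:
    Rz(heading) @ Ry(pitch) @ Rx(roll); convention is an assumption)."""
    Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(h), -np.sin(h), 0], [np.sin(h), np.cos(h), 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def georeference_ltf(x_sbf, lever_arm, boresight, rph):
    """Laser point in LTF (sketch of the SBF -> IBF -> LTF chain);
    lever_arm is the SBF origin expressed in IBF coordinates."""
    R_bs = rot_boresight(*boresight)
    R_nav = rot_rph(*rph)
    return R_nav @ (R_bs @ np.asarray(x_sbf, float) + np.asarray(lever_arm, float))
```

With all angles zero, the chain reduces to a pure lever-arm translation, which is a quick sanity check for the implementation.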

Total Propagated Uncertainty (TPU)
The TPU in this simulator is calculated using the direct georeferencing equation, Equation (20), which describes the position of a point P(x, y, z) in the LTF coordinate frame. The direct georeferencing equation that calculates the position in LTF indeed includes all the major sources of error, such as boresighting, the scan angle, the GNSS position, and the IMU angles. The TPU is called "total" because multiple error sources are combined, and "propagated" because the errors are propagated through Equation (20). The TPU as a statistical uncertainty is computed based on a linear approximation and by ignoring the covariances between the parameters.
Currently, the assumption of negligible covariances is accepted in the lidar community. As TPU computation is still in its infancy, no major lidar manufacturers (e.g., Teledyne Optech, Hexagon Leica, Riegl) currently provide proprietary direct georeferencing software that computes TPU. The only known exception is the CZMIL bathymetric lidar system, which computes TPU with support from the US Army Corps of Engineers (USACE). The TPU model in this research utilizes 14 parameters; thus, considering the symmetry of the covariance matrix, the number of distinct covariance pairs is 14 × 13/2 = 91. While most covariances are likely to be negligible, potential exceptions could be between the position and orientation parameters. However, even the Applanix IMU systems do not provide covariance specifications due to measurement difficulties, though measuring them is not impossible. Numerous U.S. Geological Survey (USGS) internal lidar accuracy assessments indicate that the uncertainties of airborne lidar data in terms of smooth surface precision [2] are very close to the computed TPU based on negligible covariances, which justifies the assumption.
To formulate the TPU computation equations, Equation (20) is rewritten as Equation (21). The TPU in the x position in LTF is then given by the following equation, derived from Equation (21) (the TPU for the y and z components is computed in the same manner):

σ_x² = Σ_i (∂x/∂p_i)² σ_i², (22)

where p_i denotes the i-th of the 14 parameters. To solve Equation (22), two sets of data are needed. The first is the uncertainty, σ_i, of each parameter, given as an a priori standard deviation. The second is the partial derivative, which is the Jacobian matrix element.
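The first-order propagation of Equation (22) can be evaluated with numerically estimated partial derivatives. The sketch below propagates a priori standard deviations through an arbitrary georeferencing function f(p) → (x, y, z), neglecting covariances as in the text; the central-difference step size and the interface of f are assumptions.

```python
import numpy as np

def tpu(f, params, sigmas, eps=1e-6):
    """First-order TPU: sigma^2 = sum_i (d(x,y,z)/dp_i)^2 * sigma_i^2.

    Partial derivatives are estimated by central differences; covariances
    between parameters are neglected, as assumed in the text.
    Returns the (sigma_x, sigma_y, sigma_z) triple.
    """
    p = np.asarray(params, dtype=float)
    var = np.zeros(3)
    for i in range(p.size):
        dp = np.zeros_like(p)
        dp[i] = eps
        J_i = (np.asarray(f(p + dp)) - np.asarray(f(p - dp))) / (2.0 * eps)
        var += (J_i * sigmas[i]) ** 2
    return np.sqrt(var)
```

Replacing the numerical derivatives with the analytical Jacobian elements of the following section gives the same result faster, but the numerical form is convenient for validating those analytical derivatives.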
The Jacobian matrix elements for the lever-arm displacement and the position vector of the IBF origin are derived first, followed by the Jacobian matrix elements for the boresighting and IMU angles. The analytical derivative solutions in Equations (24)-(26) required at the next stage are straightforward, thus they are not described here. The TPU computed via the equations in this section is the uncertainty associated with the position vector in LTF. This means that, when the TPU associated with the lidar point cloud in a commonly used 2D map projection coordinate system, such as UTM or a state plane, is needed, the additional uncertainty introduced by the remaining coordinate transforms must be included. Those additional procedures convert the point from LTF to the point in the map projection. Since the major sources of error were already handled in the equations so far, only the uncertainty (σ) caused by the projection conversion is added.

Airborne Lidar Simulator
An airborne lidar simulator was developed to implement direct georeferencing, target simulation, and the radiative transfer-based waveform modeling. The simulator consists of three main components. The pulse function is the radiance distribution function (e.g., Gaussian or top-hat) on the cross section of a pulse fired at the pulse repetition frequency (PRF). The typical scanner types are the linear Galvo-mirror, circular, and Palmer scanners. The scanning frequency is an important factor determining the lidar point cloud density in PPM (points/m) or PPSM (points/m²).
The point density is an important simulation condition because the lidar point cloud positional accuracy is constrained by the point density. Descriptively, the along-track linear point density (PPM_al) is determined by the scanner frequency (F_s) and the speed (v), while the across-track density (PPM_ac) is related to the swath width (SW). Although the parameters affect the point density in an intuitive manner, creating an optimal set of parameters is a tricky issue. For instance, if a higher PPSM is desired, increasing the PRF might come as a first option. However, if the PRF is simply increased, the across-track density increases while the along-track density is unchanged, which causes an uneven point distribution. Numerous cases like this can be investigated for an optimal setting. All these parameters need to be carefully adjusted to achieve the desired, evenly distributed point density.
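Under simplified assumptions for a linear scanner (one scan line per scanner cycle, ignoring bidirectional scanning and turnaround effects; the exact formulas are assumptions consistent with the description above), the descriptive densities can be sketched as:

```python
def point_density(prf, fs, v, sw):
    """Descriptive point densities for a linear scanner (simplified sketch).

    prf -- pulse repetition frequency (Hz)
    fs  -- scanner frequency (scan lines per second)
    v   -- platform speed (m/s)
    sw  -- swath width (m)
    """
    ppm_al = fs / v          # along-track: scan lines per metre of flight
    ppm_ac = prf / fs / sw   # across-track: points per metre within one line
    ppsm = prf / (v * sw)    # areal density: points per square metre
    return ppm_al, ppm_ac, ppsm
```

Note that PPSM = PPM_al × PPM_ac, which makes the trade-off discussed above explicit: raising the PRF alone raises only the across-track factor, so F_s and v must be adjusted together to keep the distribution even.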
The airborne lidar simulation starts from determining the total number of pulses, because it is the foundation of the computation flow (Figure 3). Using several related parameters, the SW is determined and then multiplied by another input, the "Length to Width" ratio, which determines the flight distance. The distance divided by the speed gives the flight time. Eventually, the total number of pulses is obtained as the PRF times the flight time. The second step is to create the IMU and GNSS data arrays. Although any arbitrary IMU and GNSS position data can be generated, the practical strategy is to use a real SBET file (or another form of position and orientation data) and sample two sets of data out of a straight north flight segment: (R_1, P_1, H_1) and (R_2, P_2, H_2). The data come at a low temporal resolution (e.g., SBET at 200 Hz). Thus, the time and (R, P, H) data need to be interpolated to match the high-resolution time array (usually hundreds of kHz) of the laser pulses. Several different flight patterns can be selected, including parallel, parallel opposite, cross, and slant cross. Modifications in the heading angles (Figure 4) are needed to accommodate the flight pattern.
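The first step of the computation flow, from swath width to total pulse count, can be sketched under a flat-terrain, nadir-centered FOV assumption (parameter names are illustrative):

```python
import math

def total_pulses(prf, fov_deg, agl, lw_ratio, v):
    """Total number of pulses for one flight line (sketch of the flow:
    swath width -> flight distance -> flight time -> pulse count).

    Assumes flat terrain and a nadir-centered FOV, so SW = 2*AGL*tan(FOV/2).
    """
    sw = 2.0 * agl * math.tan(math.radians(fov_deg) / 2.0)  # swath width (m)
    distance = sw * lw_ratio                                # flight distance (m)
    t = distance / v                                        # flight time (s)
    return int(prf * t)
```

For example, a 60-degree FOV at 1000 m AGL gives roughly a 1155 m swath; with a length-to-width ratio of 2, a speed of 60 m/s, and a 100 kHz PRF, this yields on the order of 3.8 million pulses for the line.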
The point clouds in Figure 4 demonstrate the resultant swaths for two examples (cross pattern with circular scanner and slant-cross pattern with linear scanner).
The GNSS position array is prepared for four preset flight patterns in Figure 5, where the x-axis and y-axis represent ground coordinates in meters. The coordinates are prepared in LTF. The swath boundaries of the two swaths are illustrated in red and white. The GNSS position corresponds to the dashed line located in the center of each swath. This illustration is based on a 30% swath overlap ratio. The next step is to create a direction vector of the propagating pulse in the SBF. The scan angle (θ) array is prepared first using the scanner type, F_s, FOV, and the scan angle offset (∆θ). Then the direction vector in SBF is given for the Galvo-mirror and circular scanners, respectively. The final step is to estimate the range to the DEM surface. Unlike a real, active lidar that measures the range, the simulator cannot measure the distance. The simulator can only estimate the range to the DEM surface by following the computation procedure described in the direct georeferencing section above: starting from the direction vector in SBF → computing a position in SBF by introducing the unknown range ρ → transforming the SBF position into the IBF position and translating by the lever-arm displacement → transforming the position in IBF into the position in LTF using the (R, P, H) angles. The solution for the position where the laser pulse hits the target can be computed by two methods. One is the analytical solution, when a specific geometry of the target yields a forward solution. The other is a generic optimization using the DEM to get the range solution.
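The generic optimization approach to the range solution can be sketched as a bisection along the pulse ray against a DEM lookup. The z-up frame, the unit direction vector, and the DEM-as-callable interface are assumptions; a production simulator would instead interpolate a gridded DEM and guard against multiple surface crossings.

```python
import numpy as np

def range_to_dem(origin, direction, dem_z, rho_max=5000.0, tol=1e-3):
    """Find the range rho at which origin + rho*direction crosses the
    surface z = dem_z(x, y), by bisection on the height above the surface.

    Assumes a z-up frame, a downward-pointing unit direction vector, and a
    single surface crossing within [0, rho_max]. Returns None if the ray
    never reaches the surface within rho_max.
    """
    o = np.asarray(origin, float)
    d = np.asarray(direction, float)

    def height_above(rho):
        p = o + rho * d
        return p[2] - dem_z(p[0], p[1])

    lo, hi = 0.0, rho_max
    if height_above(hi) > 0:      # still above the surface at maximum range
        return None
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if height_above(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a nadir-pointing pulse from 1000 m over flat ground, the recovered range converges to 1000 m, which matches the analytical forward solution for that trivial geometry.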
Once the beam center interaction point is obtained, it is refined into the waveform solution using the radiative transfer-based irradiance distribution described in the above section. Several examples are presented to demonstrate how the waveform-based solution is different compared to the simple georeferencing of the beam center. The example shown in Figure 6 uses a cube and a beam spot size comparable to the cube dimension, which creates a non-negligible interaction effect. The upper panel in Figure 6 illustrates a sequence of interaction lines between the delta-pulse and the cube surface along the arrow direction. As pulse 1 propagates downward and a little toward the right side, the interaction line touches down on the left beam edge first, and then the interaction line makes a transition to the right side to create a full circle interaction area. Thus, the z-coordinate of the lidar point is exactly the altitude of the cube top surface, which is shown in the P1 waveform in Figure 7.
The estimated relative reflected energy of the pulse and the pulse center altitude for each point are as follows: 1.0 (1.0 m) for point 1, 0.08 (1.043 m) for point 2, 0.54 (0.990 m) for point 3, and 0.14 (0.957 m) for point 4. Figure 6. A cube target interacts with an incoming laser pulse at different locations. The 3D point cloud on the right side is colored by the intensity, which represents the relative energy given by the waveform integral in Figure 7. The color scheme follows the typical "rainbow" color table of the lidar point cloud visualization so that the yellowish color represents higher values and the bluish color represents lower values.
Pulse 2 hits the upper-left edge as it is moving in a westerly downward direction. Thus, it hits on the right edge of the beam circle first and makes a transition toward the left. As it hits the small area of the corner, the interaction area is a small wedge giving only 8% of the relative energy, which means the total reflected energy, obtained by integrating the low-intensity waveform, is about 8% compared to the full circle interaction of point 1. Additionally, the computed altitude is 4.3 cm higher than the cube surface because the short interaction occurs while the beam center is still above the surface, before moving down to the cube surface level. The rest of the interpretation can be understood in a similar manner. Point 3 interacts in the upper half circle area, resulting in 54% of relative energy, and the computed altitude is about 1 cm below the surface. The point 4 beam propagates in a westerly downward direction, making the first hit at the bottom edge of the small arc of the circle. The whole interaction area is a small arc area resulting in a relative intensity of only 14%. Considering that the interaction position is the left arc of the circle, the center of the beam is already lower than the top surface. That is why the altitude (z-component) of the computed lidar point position is 95.7 cm, which is lower than the cube top altitude of 1 m. This means the computed lidar point is suspended in air, 4.3 cm lower than the top surface. In fact, the laser pulse reflection occurs exactly on the surface, which is at a 1 m altitude. However, the waveform modeling simulates the pulse-target interaction accurately and reproduces the situation that a real lidar system experiences.
A lidar system detects a waveform peak, but it has no way of knowing on exactly which part of the cross-sectional area of the beam the reflection occurs. Thus, it assumes that the interaction occurs at the beam center. This error arises precisely because the rigorous, radiative-transfer and waveform-based computation is used; it is exactly what must be simulated. As a result, the target has a slightly larger apparent dimension: a target enlargement effect. If simple georeferencing were used, the cube hit position would be on the side wall of the cube, which does not happen in a real lidar system. Of course, if the beam divergence is very small and the sensor AGL is low, the target enlargement effect is minimized accordingly. Finally, the simulator computes the TPU according to the equations described above.
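The partial-footprint interactions above can be sketched numerically. The following is a minimal sketch, not the paper's delta-pulse radiative transfer model: it assumes a uniform (top-hat) circular footprint, models the cube top as the half-plane x ≤ 0, and estimates by Monte Carlo the fraction of the footprint that actually hits the target. The beam radius and center positions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def footprint_overlap(center_xy, radius, n=100_000):
    """Monte Carlo fraction of a circular beam footprint that lands on the
    cube top, modeled here as the half-plane x <= 0 at 1 m altitude."""
    theta = rng.uniform(0, 2 * np.pi, n)
    rad = radius * np.sqrt(rng.uniform(0, 1, n))   # uniform samples over the disk
    x = center_xy[0] + rad * np.cos(theta)
    return np.mean(x <= 0.0)

# Beam centered well inside the cube top: full interaction, relative energy ~1.0
full = footprint_overlap((-1.0, 0.0), radius=0.25)

# Beam center just past the edge: only a small arc of the footprint hits, so
# the reported point (placed at the beam center) hangs in the air off the edge
edge = footprint_overlap((0.1, 0.0), radius=0.25)
print(full, edge)   # ~1.0 and ~0.25
```

Because the simulator reports the return at the beam center regardless of where inside the footprint the reflection occurred, the partial-overlap case reproduces the target enlargement effect described above.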

Results
Several examples of the airborne lidar simulator applications are presented here.

Boresighting Error Effect
The boresighting angles are the angular offsets between the SBF and IBF. A sensor optical model defines the 3D Cartesian coordinate system of the SBF. The coordinate axes of an IMU system are usually printed on the cover as arrows, and the lidar axes are aligned to the IMU axes as closely as possible, but boresighting (fine tuning) is still required. The IMU system bridges the "subjective" sensor coordinate system to the "objective" local real-world coordinate system by providing the (R, P, H) angles. Thus, if there are any misalignment angles, (R, P, H) must be corrected by those misalignment (boresighting) angles. Frequently, those misalignment angles are also called "roll, pitch, and heading"; however, these are reserved terms for the IMU angles. Strictly speaking, a boresighting angle is the angular offset between the SBF and IBF coordinate systems. The three angles (ω, ϕ, κ) are the angular offsets defined by rotation about the x, y, and z axes, respectively. The most commonly used convention is that the x-axis points to the nose of the aircraft, the y-axis points to the right wing, and the z-axis points downward to nadir. Thus, rotation about the x-axis (the axis pointing to the nose) produces the net effect of a body roll, and so forth. Consequently, the ω boresighting angle effectively increases the IMU roll (R) angle by ω, and (ϕ, κ) adjust the IMU (P, H) angles in the same manner.
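The claim that ω folds into the IMU roll angle can be checked numerically. The sketch below assumes one common rotation order (heading, pitch, roll, i.e., Rz·Ry·Rx); under that convention the x-axis boresight offset folds exactly into the roll angle, while ϕ and κ fold into pitch and heading only to first order. The angle values are illustrative.

```python
import numpy as np

def rx(a):  # rotation about x (roll-like)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def ry(a):  # rotation about y (pitch-like)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rz(a):  # rotation about z (heading-like)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def attitude(R, P, H):
    """IMU body-to-local rotation from roll, pitch, heading (one common convention)."""
    return rz(H) @ ry(P) @ rx(R)

R, P, H = np.radians([2.0, -1.0, 45.0])
omega = np.radians(0.05)   # small boresight roll offset

# Applying the omega boresight rotation after the IMU attitude is, in this
# rotation order, identical to increasing the IMU roll angle by omega.
with_bore = attitude(R, P, H) @ rx(omega)
roll_bumped = attitude(R + omega, P, H)
print(np.max(np.abs(with_bore - roll_bumped)))   # ~1e-16, i.e., equal
```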
The data for boresighting correction are obtained by flying multiple swaths in different flight directions. If the correct boresighting angles are found, the areas overlapped by multiple flights align. Thus, the boresighting angles are iteratively optimized until the relative differences between multiple virtual planes become minimal. The airborne lidar simulator can demonstrate this process visually.
The net effect of the boresighting error is illustrated in Figure 8. The point cloud in the middle shows perfect boresighting; thus, the point clouds from the two swaths overlap perfectly. Figure 8a,b demonstrate the effect of an incorrect ω or ϕ angle for two parallel swaths in the same direction, while Figure 8c,d show the effect for two parallel swaths in opposite directions. Figure 8 shows the overestimation case, which means the optimized boresighting angles are greater than the true values.
For an overestimated ω with parallel same-direction swaths, the point clouds from both swaths shift to the left, as shown in Figure 8a, while an underestimated ω sends both swaths to the right. For an overestimated ω with parallel opposite-direction swaths, the point clouds from the two swaths shift in opposite directions (one left and the other right), as shown in Figure 8c. If underestimated, each swath changes direction, but the end result is the same: the swaths are separated, as shown in Figure 8c.
For an overestimated ϕ with parallel same-direction swaths, the point clouds from both swaths shift forward, as shown in Figure 8b, while an underestimated ϕ sends both swaths to the rear. For an overestimated ϕ with parallel opposite-direction swaths, the point clouds from the two swaths shift in opposite directions (one forward and the other backward), as shown in Figure 8d. If underestimated, each swath changes direction, but the end result is the same: the swaths are separated, as shown in Figure 8d.
As demonstrated in Figure 8, the net result of a boresighting error cannot be explained by the boresighting angle error alone. The flight pattern plays a major role in amplifying or lessening the effect of the boresighting angle error, because the relative location of the target differs from one swath to the other. This is the kind of behavior an airborne lidar simulator can reproduce.
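The iterative alignment described above can be illustrated with a toy model. The sketch below is an assumption-laden simplification, not the simulator's method: it models only the cross-track ground shift of a near-nadir return caused by a roll boresight error at a 1 km AGL, with the sign flipping between opposite-direction swaths, and recovers the injected error by a grid search over candidate corrections.

```python
import numpy as np

AGL = 1000.0                      # sensor altitude above ground (m)
true_omega = np.radians(0.02)     # the unknown boresight roll error to recover

def cross_track_shift(omega, heading_sign):
    # small-angle ground displacement of a near-nadir return caused by a roll
    # boresight error; the sign flips with the flight direction
    return heading_sign * AGL * np.tan(omega)

# "Observed" cross-track positions of the same flat target from two
# opposite-direction swaths, displaced by the uncorrected boresight error
obs_a = 0.0 + cross_track_shift(true_omega, +1)
obs_b = 0.0 + cross_track_shift(true_omega, -1)

# Iterative (here: grid) search for the correction that re-aligns the swaths
candidates = np.radians(np.linspace(-0.1, 0.1, 2001))
residual = [(obs_a - cross_track_shift(w, +1)) ** 2
            + (obs_b - cross_track_shift(w, -1)) ** 2 for w in candidates]
best = candidates[int(np.argmin(residual))]
print(np.degrees(best))   # ~0.02 deg, the injected boresight error
```

The key point matches the figure: with same-direction swaths the two shifts coincide and the residual alone cannot expose the error, while opposite-direction swaths separate and make it observable.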

TPU Estimation
Using Equations (21)–(26), the TPU (σx, σy, σz) associated with every lidar point is computed. The variation of the TPU values comes from the partial derivatives, which are related to the magnitudes of the local variables. Consider the first term in TPU Equation (22) as an example. Although the uncertainty σx0 is constant, the Jacobian element ∂x/∂x0 can take a wide range of values depending on the variables used in the computation. Thus, the dynamic range of the TPU is determined by the dynamic range of variables such as the lidar range, the scan angle range, the IMU angle range, and so on. Figure 9 shows all the uncertainty sources and the lidar system parameters. The lidar system parameters on the right are the common input. Figure 9a shows the horizontal TPU and Figure 9b the vertical TPU using the high-precision IMU system with a circular scanner. The horizontal TPU ranges from 14 to 15 cm and the vertical TPU from 4 to 5 cm, much smaller than the horizontal TPU. The overall magnitude of the TPU is determined mainly by the AGL altitude (1 km), with minor contributions from the rest of the system specification. The small spread (1 cm) is due to the IMU angle variation. Figure 9c shows the horizontal TPU and Figure 9d the vertical TPU using the low-precision IMU system with a linear scanner. The horizontal TPU ranges from 37 to 47 cm and the vertical TPU from 1 to 15 cm. The high TPU with the low-precision IMU is obvious compared with the low TPU with the high-precision IMU. Figure 9a,b use a circular scanner that has a constant off-nadir angle. Figure 9c,d, however, use a linear scanner with off-nadir angles between −20° and 20°. Thus, in addition to the raised TPU baseline caused by the low-precision IMU, the large off-nadir scan angle range adds a substantial TPU range. It is reasonable to assume that most of the uncertainties are constant, at least during the flight time.
However, the GNSS positional uncertainties (σx1, σy1, σz1) can be quite dynamic depending on the satellite conditions. This situation was simulated by creating a temporary short spike in (σx1, σy1, σz1). Figure 10 shows the case in which temporarily poor position dilution of precision (PDOP) occurs in one swath while the other swath remains in good GNSS condition. The left panel in Figure 10, with a linear galvo-mirror scanner, shows the quick rise and fall of the GNSS uncertainty, while the right panel, with a circular scanner, shows a different pattern. The TPU provides a quality check of the lidar data: when an abnormal TPU spike area is identified, a decision can be made about recollecting data over that area.
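The first-order TPU propagation can be sketched with a numerical Jacobian. The toy georeferencing function below is a stand-in for the full geomatics equation (the actual Equations (21)–(26) involve many more parameters); the parameter values and uncertainties are illustrative, not the paper's configuration.

```python
import numpy as np

def georef(p):
    """Toy georeferencing: sensor position (x0, z0), range rho, and a combined
    off-nadir + roll angle map to a ground point (x, z)."""
    x0, z0, rho, theta, roll = p
    a = theta + roll
    return np.array([x0 + rho * np.sin(a), z0 - rho * np.cos(a)])

def tpu(p, sigmas, eps=1e-6):
    """First-order propagation: sigma_out^2 = sum_i (d out / d p_i)^2 * sigma_i^2,
    with the Jacobian estimated by central differences."""
    p = np.asarray(p, float)
    J = np.empty((2, p.size))
    for i in range(p.size):
        dp = np.zeros_like(p)
        dp[i] = eps
        J[:, i] = (georef(p + dp) - georef(p - dp)) / (2 * eps)
    return np.sqrt((J ** 2) @ (np.asarray(sigmas) ** 2))

# 1 km AGL, 20 deg off-nadir; GNSS 5 cm, range 3 cm, angles 0.005 deg (illustrative)
p = [0.0, 1000.0, 1000.0 / np.cos(np.radians(20)), np.radians(20), 0.0]
sigmas = [0.05, 0.05, 0.03, np.radians(0.005), np.radians(0.005)]
sx, sz = tpu(p, sigmas)
print(sx, sz)   # horizontal TPU exceeds vertical TPU at this geometry
```

Even this reduced model reproduces the behavior described above: the angular terms are scaled by the long lidar range, so the horizontal TPU dominates and grows with the off-nadir angle.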

Accuracy Assessment of 3D Point Cloud

The lidar simulator is an important research tool for 3D accuracy assessment. For example, consider a parking lot with painted lines on the pavement, as shown in Figure 11a. A common practice in performing a check point survey is to place a GNSS rover on the intersection point of two painted lines. For the 3D accuracy assessment, the intersection position (x, y, z) in the airborne lidar point cloud must be identified. Once identified, the difference from the survey point yields an error vector (∆x, ∆y, ∆z). Using multiple error vectors, a statistical estimate of the 3D accuracy (σx, σy, σz) can be made. However, the position identification from the airborne point cloud is difficult, mainly because of the low point density. Thus, the usual practice is to ignore horizontal accuracy, find the location in the airborne point cloud corresponding to the surveyed horizontal position, and interpolate the z-value at that position. The difference against the surveyed z-value is used to calculate the root mean square error in z (RMSEz). Identifying the intersection point from two crossing lines is challenging. Figure 11b,c illustrate random cases of lidar point cloud intensity over the modeled painted lines with the circular scanner at 9 PPSM, which is the USGS quality level 1 category. The color intensity ranges from yellow (full spot hit), through green and cyan, to blue (minimal hit at the line edge). Intensity is used because the elevations of the painted lines and the background pavement are not distinguishable.
The background dotted lines are the painted lines of 20 cm width. Only the simulator, however, can show the true painted lines. Figure 11b simulates a 2 m long line and Figure 11c simulates a 6 m long line. Depending on the location within a swath, the point density, and various other conditions, some cross lines will be undersampled, e.g., the upper right in Figure 11b. The 6 m line example in Figure 11c demonstrates that the number of points is enough to describe the cross line properly.
An airborne lidar simulator is a useful tool for addressing the required conditions (line length, width, point density, and intensity contrast) of the crossing lines for a 3D accuracy assessment. About 200 crossed painted lines were distributed in the two-swath overlap area. As demonstrated in Figure 11, pinpointing the intersection point directly requires an unrealistically large point density. Thus, directly identifying a conjugate point is not the correct approach. Instead, 3D line regression modeling is needed so that the intersection point can be determined from two regression lines. The difference (∆x, ∆y, ∆z) is computed against the true intersection point. Using a large number of error vectors, a statistical estimate of the 3D accuracy (σx, σy, σz) can be made. The uncertainty plot (Figure 12b) illustrates the uncertainty of the x and y errors (σx, σy) as a function of painted line length. As the length of the painted line increases, the uncertainty becomes smaller. Figure 12a,c represent the (∆x, ∆y) distributions for the 2 m and 6 m lines, with uncertainty σ values of 3 cm and 1.5 cm, respectively. Research results such as Figure 12b obtained from the simulation provide a guideline for the minimum line length needed to meet the uncertainty requirement in the 3D accuracy assessment. For instance, if the required uncertainty associated with locating a conjugate survey point in the airborne point cloud is 2 cm, then the painted line must be longer than 4 m.
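The line-regression approach can be sketched as follows: fit a line to the returns on each painted stripe and intersect the two fits. This is a 2D total-least-squares sketch with synthetic, illustrative noise, not the simulator's 3D regression.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_line(pts):
    """Total least-squares line fit: returns a point on the line and a unit
    direction (the principal component of the point scatter)."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    return c, vt[0]

def intersect(c1, d1, c2, d2):
    # solve c1 + t1*d1 = c2 + t2*d2 for (t1, t2)
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, c2 - c1)
    return c1 + t[0] * d1

# Lidar returns scattered along two 6 m painted lines crossing at the origin
n, noise = 30, 0.03
s = rng.uniform(-3, 3, n)
line_a = np.column_stack([s, np.zeros(n)]) + rng.normal(0, noise, (n, 2))
s = rng.uniform(-3, 3, n)
line_b = np.column_stack([np.zeros(n), s]) + rng.normal(0, noise, (n, 2))

xy = intersect(*fit_line(line_a), *fit_line(line_b))
print(xy)   # close to the true intersection (0, 0)
```

Because the intersection is derived from many points along each stripe, its uncertainty shrinks as the line length (and hence the number of returns) grows, which is the trend shown in Figure 12b.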

Discussion
An airborne lidar simulator was developed using a rigorous radiative transfer theory of a laser pulse. The simulator allows the realistic simulation of a laser pulse with a realistic beam divergence angle. Thus, it can reproduce the various interaction conditions between the laser pulse and a target. The simulator also computes the TPU associated with each lidar point. In addition, the lidar simulator has the capability to perform target detectability research, and the required conditions for the 3D accuracy assessment can be investigated with it, as demonstrated in the example showing the intensity point cloud from the painted lines over the pavement. To the best of the author's knowledge, an airborne lidar simulator combining these rigorous concepts has not been available previously. Further lidar applications research will be performed in the future.
Funding: This research was funded by USGS National Geospatial Program (NGP) 3D Elevation Program (3DEP).