Concentration Fluctuations and Odor Dispersion in Lagrangian Models

In this paper, a review of the Lagrangian stochastic models developed over the last decades for the simulation of concentration fluctuations in dispersing plumes is presented. The main approaches available in the literature are described, and their ability to reproduce the higher order moments of the concentration probability density function is discussed. Then, the Lagrangian approaches for evaluating odor annoyance are presented. It is worth noting that, while Lagrangian stochastic models for mean concentrations are well known and their ability to correctly reproduce observations is well assessed, the approaches for concentration fluctuations are often new and unfamiliar to most of the scientific community.


Introduction
Concerning the study of atmospheric dispersion, much research has been devoted to modeling the mean concentration field of non-reactive substances released in the atmospheric boundary layer (ABL), and modeling systems for mean concentration are routinely applied in air quality monitoring activities. Nevertheless, statistical properties other than the mean of scalar quantities (such as concentration) that are advected and mixed by a turbulent velocity field have practical importance in a large number of applications. Some applications require estimates of second and higher order moments of concentration, and in some cases knowledge of the full one-time, one-point probability density function (pdf) of the concentration is needed. For example, the pdf of concentration may be required in the study of toxic gas effects on humans, the assessment of nuisance due to malodorous materials [1], the flammability of substances [2] and the interaction between turbulence and chemistry. The Lagrangian approach describes the motions of the reactive fluid particles down to the point where the distance between the particles is of the order of the Kolmogorov scale. In most atmospheric flows, this distance is much larger than the molecular scale, where the chemical reactions occur. The dynamics at this scale can be disregarded and, consequently, the dependence on the molecular viscosity and diffusivity neglected [3]. Provided the Reynolds number is large enough (depending on the Schmidt number), we expect that at large enough times Brownian particles will behave like fluid particles and exhibit a Richardson scaling regime followed by diffusive behavior for times larger than the Lagrangian time scale [4]. Lagrangian stochastic models are based on the Langevin equation for the turbulent velocities. This is a stochastic differential equation for a Markovian continuous process.
In order to specify the different terms of the equation, two coefficients reflecting the properties of the turbulent flow must be determined. The first one is obtained according to Kolmogorov's theory of locally isotropic turbulence in the inertial sub-range, and the second one is a solution of the Fokker-Planck equation with a specified pdf of the process, or in other words, the Eulerian pdf of the turbulent velocities [5].
In several countries, odor assessments are based on so-called odor-hours, defined by at least 6 min of perceivable odor concentrations in an hour. Modeling odor hours requires the determination of the 90th percentile of the corresponding cumulative frequency distribution. It can be determined by assuming a pdf that is generally completely defined by the mean concentration C and the concentration fluctuation intensity i = σ_c/C, where σ_c is the concentration standard deviation [6]. Obtaining an estimate of i requires a more sophisticated approach. Many Lagrangian models have been developed for the second- and higher-order moments of the concentration pdf over the last 10 years. Thomson (1990) [7] and Borgas and Sawford (1994) [8] proposed the two-particle model; Pope (1985), Cassiani et al. (2005, 2007) and Leuzzi et al. (2012) [9][10][11][12] the pdf method; while the fluctuating plume model was suggested by Yee et al. (1994) and others [13][14][15][16][17][18][19][20]. Manor (2014) [21] used a Lagrangian particle model to simulate the dispersion of the concentration variance, and Ferrero et al. (2017) [22] suggested a formulation for the variance dissipation time scale to be used in the same model. This parameterization has been used by Oettl and Ferrero (2017) [6] and Ferrero and Oettl (2019) [23] to simulate odor dispersion in field campaigns.
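To make this concrete, a minimal numerical sketch (not any of the cited regulatory models): assuming a gamma-shaped concentration pdf, its two parameters follow from the mean concentration and the intensity i (shape k = 1/i², scale θ = i²·mean, so that the mean and standard deviation are reproduced), and the 90th percentile can be estimated by sampling. The function name and defaults are illustrative only.

```python
import random

def c90_gamma(mean_c, intensity, n=200_000, seed=42):
    """Estimate the 90th percentile of concentration assuming a gamma pdf
    with mean `mean_c` and fluctuation intensity i = sigma_c / mean_c."""
    k = 1.0 / intensity ** 2          # gamma shape parameter
    theta = intensity ** 2 * mean_c   # gamma scale parameter
    rng = random.Random(seed)
    samples = sorted(rng.gammavariate(k, theta) for _ in range(n))
    return samples[int(0.9 * n)]

# Example: mean of 1.0 odor units with i = 1 (exponential limit)
print(c90_gamma(1.0, 1.0))
```

With i = 1 the gamma pdf reduces to an exponential, whose exact 90th percentile is ln(10) ≈ 2.30 times the mean, a convenient sanity check.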
A rigorous Lagrangian approach would require the simultaneous knowledge of position, velocity, and chemical content of all fluid particles in the volume and time interval considered, which would be an impossible task. The two-particle approach [7,8,24] gives the variance and covariance of the concentrations in the range of spatial scales of interest.
Several models have been presented for the estimation of the moments of concentration in a convective boundary layer (CBL) based on a fluctuating plume approach (e.g., [15][16][17][18][20][25][26]). These models can be easily adapted to simulations in a neutral boundary layer. The fluctuating plume approach has some intrinsic limitations, in that it can treat only non-reactive scalars and relies on parameterized in-plume relative concentration fluctuations. A complete theory for relative concentration fluctuations is not available, and experimental measurements are scarce and case specific. As a consequence, to date there is no solid parameterization for this statistic. Pope (1994, 2000) and Heinz (2003) [27][28][29] reviewed various closures and modeling techniques for the viscous stresses and pressure gradient terms needed for modeling the turbulence velocity pdf. These closures are then used to define a set of Lagrangian stochastic differential equations for the evolution of the velocity and position of the fluid particles. The term associated with molecular diffusivity defines the shape of the pdf of concentration. The type of closure used for this term is usually referred to as the micromixing model. A micromixing model describes the evolution of the pdf of concentration using an additional equation for the concentration carried by each particle in the domain. Therefore, the model equations include a set of equations for velocity, position, and concentration of each fluid particle. This separation of the effects of turbulent mixing and molecular processes is typical of a variety of modeling approaches (see, e.g., the short review by [30]).
In this review, we present the main Lagrangian approaches for simulating the higher order moments of the concentration pdf and for evaluating odor concentration. In Section 2, we introduce the theory of the Lagrangian stochastic models. Section 3 is devoted to Lagrangian models for the concentration fluctuations, while Section 4 presents the Lagrangian method for simulating odor dispersion. Conclusions are summarized in Section 5.

Lagrangian Stochastic Models
Lagrangian stochastic models (LSMs) are based on the Langevin equation, which describes the temporal evolution of the velocity of pollutant particles in a turbulent field. The solution of the Langevin equation is a continuous stochastic Markov process for the particle position and velocity. The Langevin equation for the turbulent velocity of a Lagrangian particle can be written as

du_i(t) = a_i(x, u, t) dt + b_ij(x, u, t) dW_j(t)    (1)

and is coupled to the equation for the particle position x(t)

dx_i(t) = u_i(t) dt    (2)

where dW_j is an incremental Wiener process component, Gaussian distributed with zero mean and variance dt:

<dW_i dW_j> = δ_ij dt

where the notation < > represents an ensemble average. The term b_ij(x, u, t) can be derived from the Kolmogorov theory of local isotropy in the inertial sub-range [31]. This theory is based on similarity relations valid in a particular interval of the turbulence spectrum. The energy is transferred from the larger vortices to the smaller ones, down to the smallest scales of the atmospheric turbulence, where it is dissipated by viscosity. Inside this energy cascade there is a part of the spectrum where the vortices are sufficiently small that they are not affected by the anisotropy induced by the larger vortices and are not yet dissipated as heat. This part of the turbulent spectrum is called the inertial sub-range. This interval coincides with the range of temporal scales over which the turbulent velocities can be considered a Markov process.
Defining the structure function of the Lagrangian velocities, in one dimension, as

D(dt) = <[u(t + dt) − u(t)]²>

then, if τ_η ≤ dt ≤ T_L (with τ_η the Kolmogorov time scale and T_L the Lagrangian time scale), the following relationship can be considered [31]:

D(dt) = C_0 ε dt

where ε is the mean rate of turbulent kinetic energy dissipation and C_0 is a universal constant. Substituting du_i(t) as given by the Langevin equation (Equation (1)), averaging, and considering only the terms of order dt leads to

b² = C_0 ε

and

b_ij = (C_0 ε)^(1/2) δ_ij    (7)

An Eulerian description of a continuous Markov process is available through the Fokker-Planck equation. If the position and turbulent velocity can be considered a continuous stochastic Markov process, the Fokker-Planck equation can be used to calculate the coefficient a_i of the Langevin equation for any given pdf [32]:

∂P/∂t + u_i ∂P/∂x_i = −∂(a_i P)/∂u_i + (1/2) ∂²(b_ij b_kj P)/∂u_i ∂u_k
where P(x, u, t) is the Lagrangian pdf of the particles.
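Before moving on, the inertial sub-range relation <[u(t + dt) − u(t)]²> = C_0 ε dt can be checked numerically with a minimal discretized Langevin model. The sketch below assumes stationary, homogeneous turbulence with drift a(u) = −u/T_L and C_0 ε = 2σ_u²/T_L (the standard consistency relation for that special case); function and parameter names are illustrative.

```python
import math
import random

def structure_function_rate(sigma_u=1.0, TL=100.0, dt=0.5,
                            nsteps=2000, npart=500, seed=1):
    """Integrate du = -(u/TL) dt + sqrt(C0*eps) dW for an ensemble and
    return <du^2>/dt, which should approach C0*eps when
    tau_eta << dt << TL (here C0*eps = 2*sigma_u**2/TL = 0.02)."""
    c0eps = 2.0 * sigma_u ** 2 / TL
    b = math.sqrt(c0eps)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(npart):
        u = rng.gauss(0.0, sigma_u)      # sample the Eulerian pdf
        for _ in range(nsteps):
            du = -(u / TL) * dt + b * rng.gauss(0.0, math.sqrt(dt))
            u += du
            total += du * du
    return total / (npart * nsteps * dt)

r = structure_function_rate()
print(r)   # close to C0*eps = 0.02
```

The drift contributes only at order dt², so the one-step mean-square increment is dominated by the Wiener term, as the derivation above requires.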
Atmosphere 2020, 11, 27 4 of 23

In the stationary case, P(x, u, t) does not depend on time and the Fokker-Planck equation reduces to

u_i ∂P/∂x_i = −∂(a_i P)/∂u_i + (C_0 ε/2) ∂²P/∂u_i ∂u_i    (9)

where b_ij is given by Equation (7). Following Thomson (1987) [5], it can be stated that an LSM satisfies the well-mixed condition (if the particles are initially well-mixed in the fluid, they will remain so) if P(x, u) is equal to the Eulerian atmospheric pdf. This is a necessary and sufficient condition. In other words, all moments of P(x, u) must equal the measured or parameterized moments.
In one dimension, the Fokker-Planck equation can be solved and the term a(x, u) calculated, for a given pdf, as [5]

a(x, u) = (1/P(x, u)) [(C_0 ε/2) ∂P(x, u)/∂u + φ(x, u)]

where

∂φ(x, u)/∂u = −∂(u P(x, u))/∂x

In addition, φ(x, u) → 0 for |u| → ∞. LSMs usually are one-dimensional models solving one or two and, in some cases, three Langevin equations, one for each Cartesian direction. The extension to a fully three-dimensional model was dealt with by [8,34,35]. It can be demonstrated that a unique solution of the Fokker-Planck equation, in the three-dimensional case, exists only for homogeneous, isotropic turbulence [8]. The non-uniqueness of the solution is related to the first term on the right-hand side of Equation (9), as this equation can be satisfied by any vector obtained by adding a rotational vector, in u space, to (a_i(x, u) · P(x, u)) [36].
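In the special case of homogeneous turbulence with a Gaussian Eulerian pdf, the solution above reduces to the linear drift a(u) = −C_0 ε u/(2σ_u²), and the well-mixed condition can be verified numerically: an ensemble drawn from the Eulerian pdf must retain its statistics. A minimal sketch with illustrative parameter values (not any specific published model):

```python
import math
import random

def wellmixed_check(sigma_u=1.0, c0eps=0.02, dt=0.5,
                    nsteps=1000, npart=2000, seed=7):
    """Drift from the Fokker-Planck solution for a Gaussian pdf:
    a(u) = -c0eps*u/(2*sigma_u**2). Particles drawn from the Eulerian
    pdf should remain Gaussian with variance sigma_u**2 (well-mixed)."""
    rng = random.Random(seed)
    b = math.sqrt(c0eps)
    us = [rng.gauss(0.0, sigma_u) for _ in range(npart)]
    for _ in range(nsteps):
        us = [u - c0eps * u / (2.0 * sigma_u ** 2) * dt
                + b * rng.gauss(0.0, math.sqrt(dt))
              for u in us]
    mean = sum(us) / npart
    var = sum((u - mean) ** 2 for u in us) / npart
    return mean, var

mean, var = wellmixed_check()
print(mean, var)   # mean near 0, variance near sigma_u**2 = 1
```

Replacing the drift by anything inconsistent with the assumed Eulerian pdf (e.g., doubling it) makes the ensemble variance drift away from σ_u², which is a quick way to exhibit a well-mixed violation.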
The theory of this section is applicable to one-particle models, and these have been widely tested and applied to many different situations characterized by non-homogeneous turbulence and different stability conditions. However, it should be stressed that a one-particle model is only able to describe the absolute dispersion and to predict the mean concentration fields. To simulate the concentration fluctuations, further developments of this model or different approaches should be applied (see Section 3). However, the model described in this section and the related theory form the basis for these developments.

Experimental Results
In the past, numerous field and wind-tunnel experiments have been carried out concerning concentration fluctuations. The focus of the studies has mostly been the intensity of concentration fluctuations, i, but often the shape of the normalized probability density function (pdf) has been analyzed, too. It should be stated that this review has been limited to field campaigns, as laboratory experiments are limited in the sense that the full range of large-scale eddies in the atmosphere and very stable atmospheric stratifications cannot be properly modeled. Hanna (1984) [37] reported that only a few field campaigns were available at the beginning of the 1980s due to the difficulties involved in obtaining and interpreting the data at that time. The earliest experiment quoted by Hanna (1984) [37] was undertaken by Ramsdell and Hinds (1971) [38], who observed concentration fluctuations from 10 to 20 min duration releases of 85Kr from a point source 1 m above ground level. Averaging time was 39 s and arcs of samplers were located at downwind distances of 200 and 800 m. They observed that the intensity of concentration fluctuations i increased towards the edges of the plume. Investigations by Jones (1981) [39] revealed an increase in i very close (<10 m) to a ground-level release of ionized air from a point source with a diameter of 0.01 m. At slightly larger distances (>10 m) the intensity of concentration fluctuations decreased. More than a decade later, Mole and Jones (1994) [40] repeated similar ground-level tracer releases using again ionized air, but with improved technology. The field campaign covered convective as well as stable atmospheric conditions. Regarding the intensity, no clear trend with downwind distance was evident from their data. The authors assumed that the samplers were located around the maximum of the intensity and covered only a small range of distances from the source; therefore, no trend was visible due to the specific experimental layout.
Furthermore, higher values were found during unstable conditions. Lewellen and Sykes (1986) [41], by analyzing LIDAR data gathered from plumes originating from tall (200 m) power-plant stacks, found i to be lowest (from 0.2 to 4) in the center of the plume, with increasing values towards the edges of the plume, therefore confirming the early results of Ramsdell and Hinds (1971) [38]. Moreover, their data supported the usage of a clipped normal distribution for the concentration pdf. One of the main drawbacks of this study, however, was the low sampling frequency available at that time.
Quite a different experimental set-up was used by Mylne and Mason (1991) [42] half a decade later. Tracer was released close to the surface and sampled with a temporal resolution of 0.1 s at distances between 50 m and 1000 m. In contrast to the field experiments described above, no data were available for stable atmospheric conditions. In accordance with previous works, a minimum of the concentration-fluctuation intensity i on the plume centerline and an increase towards the edges was reported. In the alongwind direction, i on the plume centerline was found to decrease rapidly, though it approached a near-constant, non-zero value at longer range. The concentration pdf was shown to be generally well described by the clipped normal form, as was also the case in the study of Lewellen and Sykes (1986) [41]. Mylne (1992) [43] reported that concentration-fluctuation observations (i.e., intensity, pdf) in stable conditions are qualitatively similar to those in near-neutral or slightly unstable conditions, but that there are some significant quantitative differences regarding the time scales, which are much larger due to the effect of horizontal meandering of plumes. Peak-to-mean (p/m) concentrations were also found to be significantly higher in stable conditions (>10) than in near-neutral or convective conditions. A large range of the p/m ratio, from 4 to 99, was also found by Lung et al. (2002) [44] analyzing nine field experiments performed in flat terrain using 85Kr released from a ground-level point source in near-neutral and convective conditions. Furthermore, they established a reasonable linear relationship between p/m and i, whereby the largest observed values for i were around 30. While Lung et al.
(2002) [44] also reported increasing i towards the edges of the plumes and decreasing values at larger distances, they concluded, contrary to the studies mentioned previously, that the Gamma and Weibull pdfs provided a better fit to the data than a log-normal distribution.
Earlier, Yee et al. (1994) [13], by studying data gathered from fast-response concentration measurements in the surroundings of a point-source release near the ground, stated that the concentration pdf depends on the crosswind and downwind location. Close to the source, in the meander-dominated regime of plume development, an exponential shape fits observations best, while farther downwind, where meandering and internal fluctuations contribute almost equally, the pdf on the plume axis is bimodal. Further away from the source, in the turbulent-diffusive regime, where internal fluctuations become dominant, the pdf assessed at the plume centerline becomes unimodal again. The data of Yee et al. (1994) [13] also strongly supported that the intensity of concentration fluctuations exhibits a minimum (~0.6-3.0) at the plume centerline axis, and increases towards the edges of the plume in the horizontal crosswind direction (~10-30).
The Joint Urban 2003 (JU03) experiment conducted in downtown Oklahoma City is still the most comprehensive study of concentration fluctuations in an urban setting. Naturally, clear plume shapes as found in flat terrain without obstacles and homogeneous land use cannot be expected in built-up areas. Therefore, the focus of the study published by Finn et al. (2010) [45] was on possible differences in plume dispersion between daytime and nighttime conditions. However, when classified by fluctuation intensity i, few differences were evident between daytime and nighttime pdfs. The log-normal pdf provided the overall best fit to their data compared to the exponential or clipped-normal pdfs. In agreement with other field experiments performed in less complex environments, a decrease of i was observed with increasing distance from the tracer release. Figure 1 shows an example of observed concentration signals from selected 120 Hz gas analyzers used in the JU03 experiment (IOP 188, release 5, a continuous release; courtesy of the U.S. Defense Threat Reduction Agency). The left panel illustrates the locations of the source (black pentagon) and the gas analyzers (green circles); the wind direction is around 200°, so the selected four analyzers are close to the plume axis. In the right panel, the corresponding concentration signals are plotted, each normalized by its time average. For all four time series it can be observed that the peaks are considerably higher than the mean value (unity). One can also notice that the concentration time series measured closer to the source are characterized by larger gaps of zero concentration, referred to as 'intermittency', which reflects the existence of time intervals when the whole plume lies outside of the detector's measuring volume.
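The quantities discussed in this section (intensity, intermittency, peak-to-mean ratio) can be computed from such a time series directly; the sketch below uses a synthetic intermittent signal purely for illustration (the 30% plume coverage and the exponential in-plume pdf are arbitrary assumptions, not data from any of the campaigns above).

```python
import random

def fluctuation_stats(c, threshold=0.0):
    """Return (unconditional fluctuation intensity i, intermittency
    factor gamma, peak-to-mean ratio) for a concentration time series c;
    gamma is the fraction of samples above threshold (i.e., in-plume)."""
    n = len(c)
    mean = sum(c) / n
    var = sum((x - mean) ** 2 for x in c) / n
    gamma = sum(1 for x in c if x > threshold) / n
    return var ** 0.5 / mean, gamma, max(c) / mean

# Synthetic intermittent signal: the plume covers the sensor 30% of the
# time; in-plume concentrations are exponentially distributed.
rng = random.Random(0)
series = [rng.expovariate(1.0) if rng.random() < 0.3 else 0.0
          for _ in range(50_000)]
i_u, gamma, pm = fluctuation_stats(series)
print(i_u, gamma, pm)
```

For such an on/off exponential signal, the expected unconditional intensity is (2/γ − 1)^(1/2), about 2.4 for γ = 0.3, so intermittency alone can produce the large intensities reported in the field studies.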
More recently, NOAA [46,47] carried out extensive tracer experiments (Sagebrush I, II) in flat and homogeneous terrain in 2013 (convective with low wind speeds, and near-neutral with higher wind speeds) and in 2016 (low wind speeds, convective to stable conditions). In 2013, observed unconditional (including zeros) intensities ranged between 1.4 and 17.6, and in 2016 between 0.6 and 16, while unconditional p/m ratios were found in the range 5.1 to 19.9.
Though the experimental data discussed were gathered under varying environmental conditions (e.g., meteorology, land cover), with different measurement equipment (e.g., analyzers, tracers) and methods (e.g., sampling frequencies and duration, source geometries), one may identify at least two common features that could serve as benchmarks for testing dispersion models for concentration fluctuations and odors: (i) with increasing distance, the intensity of concentration fluctuations is found to decrease in practically all experiments regardless of the environmental conditions and experimental set-up. A few data sets indicated that there might be an increase in the intensity very close to point sources, and at least one study indicated that the intensity does not approach zero even at large distances. (ii) Intensities evaluated along horizontal cross-sections of plumes usually show a minimum at the plume centerline and increasing values towards the edges. Having said this, no universal shape of the concentration pdf could be found so far, as it seems to depend on the sampling location, but it could probably also be locally influenced.

Two-Particle Model
The two-particle approach is probably the pioneering one for modeling concentration second-order statistics within the framework of a Lagrangian stochastic model [48]. The approach essentially stems from Bayesian probability distributions relating source distributions and receptor concentration statistics. The ensemble-averaged concentration can be written as

<C(x, t)> = ∫ dt′ ∫ dx′ P_1(x, t|x′, t′) S(x′, t′)    (13)

with the source rate S(x′, t′). P_1(x, t|x′, t′) is the conditional probability distribution for a particle released from x′ at time t′ to arrive at the detector location x at time t. In practice, this probability distribution is calculated by the Langevin equations, Equations (1) and (2). Equivalently, the simultaneous two-point concentration covariance is

<C(x_1, t) C(x_2, t)> = ∫ dt′_1 ∫ dx′_1 ∫ dt′_2 ∫ dx′_2 P_2(x_1, x_2, t|x′_1, t′_1, x′_2, t′_2) S(x′_1, t′_1) S(x′_2, t′_2)    (14)

where P_2 is the two-particle conditional probability distribution for two particles released at two different times from two different points to arrive simultaneously at the points x_1, x_2. For the coincident limit x_1 = x_2 = x,

<C(x, t)²> = ∫ dt′_1 ∫ dx′_1 ∫ dt′_2 ∫ dx′_2 P_2(x, x, t|x′_1, t′_1, x′_2, t′_2) S(x′_1, t′_1) S(x′_2, t′_2)    (15)

and, with the aid of Equation (13), the concentration variance σ²_c is determined by

σ²_c(x, t) = <C(x, t)²> − <C(x, t)>²

A pair of particles at locations x_1, x_2 at some time t, carried by a turbulent flow, moves with correlated velocities. The velocities are correlated because mutual eddies, with typical dimensions greater than |x_1 − x_2|, will affect the positions of the two particles. The correlation is determined by the structure function describing spatial correlation in the flow. The two-particle probability function P_2 should hence take into account the dependence of the velocity correlation of the two particles upon their mutual separation and, for nonhomogeneous flows, also upon their absolute locations.
It is insightful to consider two limits of Equation (15). The first, far-field limit applies when the separation between the particles exceeds the turbulent mixing length scale, so that the turbulent velocities of the two particles are no longer correlated. In that limit, the two-particle probability can be approximated as the product of two independent single-particle probabilities:

P_2(x, x, t|x′_1, t′_1, x′_2, t′_2) ≈ P_1(x, t|x′_1, t′_1) P_1(x, t|x′_2, t′_2)    (17)
Equation (17), substituted into Equation (15), leads to <C(x, t)²> = <C(x, t)>², and thus the concentration variance is zero. This limit is a proper description of plumes with sizes extending well beyond the turbulent length scales, which indeed exhibit very low concentration fluctuation intensity.
On the other hand, when the particles are close enough to each other that their mutual separation is less than the Kolmogorov scale, µ, which corresponds to the length scale of the smallest eddies, the near-field limit |x_1 − x_2| ≈ µ is considered. Neglecting the separation of particles by molecular diffusion, the two particles effectively move together, so one can write

P_2(x, x, t|x′_1, t′_1, x′_2, t′_2) ≈ P_1(x, t|x′_1, t′_1) δ(x′_1 − x′_2) δ(t′_1 − t′_2)

which, substituted into Equation (15), gives an upper bound for the concentration second moment and, consequently, for the concentration fluctuation intensity [24]. It is important to mention that the computational application of Equation (15) for the calculation of the concentration variance is not as straightforward as the usual calculation of average concentrations using Equation (13). The straightforward approach of utilizing Equation (15) within the framework of a Lagrangian stochastic model (LSM) would imply releasing multiple particle pairs from a localized source and monitoring events where the two particles comprising a pair simultaneously visit the same control volume. Unfortunately, these events are extremely rare in comparison to a single-particle simulation. This issue will be addressed in the following review of different particle pair models.
The first LSM models that accounted for inter-particle correlation took into consideration only the correlation resulting from the common initial conditions of simultaneously emitted particles from a point source. Such a model is Frank Gifford's "random force model" [49]. Although simple and straightforward to implement, this approach fails to completely describe relative dispersion, because the rate of separation growth does not depend on the separation.
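A schematic numerical illustration of a random-force pair model (not Gifford's original formulation; parameter values are arbitrary): two particles start together with a common velocity and thereafter receive independent forcing, so their correlation comes only from the initial condition and the separation growth rate never depends on the separation itself.

```python
import math
import random

def pair_msd(TL=1.0, sigma_u=1.0, dt=0.01, nsteps=400,
             npairs=2000, seed=3):
    """Random-force pair model: common initial velocity, independent
    Langevin forcing afterwards. Returns the mean-square separation
    <Delta^2> at t = 2*TL and t = 4*TL."""
    rng = random.Random(seed)
    b = math.sqrt(2.0 * sigma_u ** 2 / TL)
    sq = math.sqrt(dt)
    totals = {nsteps // 2: 0.0, nsteps: 0.0}
    for _ in range(npairs):
        u1 = u2 = rng.gauss(0.0, sigma_u)   # correlation only via initial condition
        x1 = x2 = 0.0
        for step in range(1, nsteps + 1):
            u1 += -(u1 / TL) * dt + b * rng.gauss(0.0, sq)
            u2 += -(u2 / TL) * dt + b * rng.gauss(0.0, sq)
            x1 += u1 * dt
            x2 += u2 * dt
            if step in totals:
                totals[step] += (x1 - x2) ** 2
    return totals[nsteps // 2] / npairs, totals[nsteps] / npairs

msd_half, msd_full = pair_msd()
print(msd_half, msd_full)
```

The growth rate of <∆²> here depends only on time, not on ∆, which is precisely the limitation noted above: there is no inertial-range ∆^(2/3) scaling of the separating velocity fluctuations.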
Separation dependence of the dynamics was first introduced into an LSM by Durbin (1980) [24] and later by other authors (e.g., [50]). These models independently describe the dynamics of the pair centre of mass, Σ ≡ x_1 + x_2, and of the pair separation, ∆ ≡ x_1 − x_2. A typical set of equations of motion is

dΣ/dt = F_1(∆) U_1 + F_2(∆) U_2
d∆/dt = F_1(∆) U_1 − F_2(∆) U_2

where U_1, U_2 are independent turbulent velocities for a single uncorrelated particle, and the functions F_1, F_2 are designed to rescale U_1 and U_2 such that the velocity fluctuations separating the particles scale like ∆^(2/3). Durbin [24] solved the equations for an infinite area source (one-dimensional), where the dynamics is simulated backwards from the point of interest towards the source in order to reduce the number of particles needed. Kaplan and Dinar (1991) [51] realized that the above approach does not comply with the so-called "two to one" condition. This condition demands that

∫ P_2(x_1, u_1, x_2, u_2, t) dx_2 du_2 = P_1(x_1, u_1, t)

i.e., each of the single particles comprising a pair will exhibit, by itself, single-particle statistics. They hence proposed an alternative approach in which, instead of rescaling the uncorrelated velocity fluctuations, the correlation between particles is taken into account by generating stochastic velocity accelerations from a joint pdf which depends on the pair separation. Another theoretical contribution to the subject came with the introduction of the "well-mixed condition" for particle pairs [7]. This condition demands that the probability distribution function P_2(x_1, u_1, x_2, u_2, t), describing the state of an ensemble of particle pairs at time t, should tend to a physically reasonable steady-state distribution which conforms with the single- and two-point statistics of the turbulence. Thomson solved the resulting model numerically by expressing P_2 as

P_2 = P_Σ P_∆

where P_Σ is the probability for the pair's centre of mass, and P_∆ is the probability of the pair's inter-separation.
For homogeneous turbulence, P_∆ is a function of ∆ only and thus can be numerically evaluated independently of P_Σ. This enables the calculation of Equation (13) with a feasible number of particles.
In spite of the advancement in the development of the particle pair approach, it has not attained operational maturity. This is attributed to a few shortcomings. First, the usage of any of the advanced particle pair methods demands knowledge of the structure function tensor. While for homogeneous or infinite boundary layer flows this is plausible, this is not the case for the more complex flows encountered in actual environmental applications, such as complex topography or canopy flows. Secondly, as explained above, a computational implementation with a feasible number of particles is not straightforward, especially in complex three-dimensional heterogeneous flows. Nevertheless, as will be shown in the following, the work on particle pair models paved the way to more practical models.

Fluctuating Plume Model
As seen in the previous section, the dynamics of a pair of particles in a turbulent flow is different from the dynamics of a single particle. Richardson (1926) [52] showed that the shape of a cloud of contaminant is affected and changed only by eddies with a length scale comparable to the cloud's instantaneous dimension, while larger vortices just advect the cloud. Following Richardson (1926) [52], Gifford (1959) [53] assumed that the absolute dispersion could be divided into a meandering part, related to the movement of the plume barycentre, and a relative diffusion part, related to the internal mixing inside the plume. In Gifford's (1959) [53] fluctuating plume model, concentration fluctuations were produced by the instantaneous bodily movements of the plume, while internal fluctuations were not considered. In two-particle models this would be equivalent to two particles with a very small initial separation, which would likely tend to move as a single particle. Therefore, Gifford's (1959) [53] model is expected to perform well close to the source. However, as the plume (or the two particles) is swept around by the turbulent eddies, it would also change shape, and the changes in the concentration fluctuations would depend on eddies with sizes comparable to the plume's instantaneous dimension or smaller. Yee and Wilson (2000) [14] took into account the effect of in-plume fluctuations by assuming a functional form (a gamma function) for the concentration pdf relative to the plume barycentre. Luhar et al. (2000) [15] proposed an LSM for simulating the centroid vertical position, thus going beyond the vertical homogeneity and Gaussian pdf of turbulence assumed by both Gifford (1959) [53] and Yee and Wilson (2000) [14]. Following Luhar et al.
(2000) [15], Franzese (2003) [16] developed a fluctuating plume model in which the Lagrangian equations governing the centroid motions are evaluated by filtering out, with a low-pass filter, the energy fluctuations whose wavelengths are larger than the cloud's instantaneous dimensions. After the barycentre pdf is evaluated by the LSM, the absolute concentration field is obtained by assuming horizontally homogeneous turbulence and then parameterizing the relative concentration pdf, i.e., parameterizing the plume dispersion relative to the instantaneous plume centroid. Both the models of Luhar et al. (2000) [15] and Franzese (2003) [16] were specifically developed for a convective boundary layer. Mortarini et al. (2009) [17] extended their approach to dispersion in a simulated plant canopy. The model presented here is the one described in Mortarini et al. (2009) [17].
In a fixed reference frame, the concentration moments are defined as

\overline{c^n}(x, y, z) = \int_0^\infty c^n \, p(c; x, y, z) \, dc

where c is the instantaneous concentration and p(c; x, y, z) is the one-time, one-point pdf of the concentration. Gifford's (1959) [53] assumption consists of writing the concentration pdf as

p(c; x, y, z) = \int \int p_{cr}(c; x, y, z, y_m, z_m) \, p_m(x, y_m, z_m) \, dy_m \, dz_m

where p_m(x, y_m, z_m) is the pdf of the plume centroid, (y_m, z_m), at a distance x from the source, while p_{cr}(c; x, y, z, y_m, z_m) represents the concentration pdf relative to (y_m, z_m). Hence, p_{cr} is conditional on the centroid location: it is the concentration pdf in the frame of reference moving with the plume barycentre. Since

\int_0^\infty c^n \, p_{cr}(c; x, y, z, y_m, z_m) \, dc = \overline{c_r^n}(x, y, z, y_m, z_m)    (24)

it follows that

\overline{c^n}(x, y, z) = \int \int \overline{c_r^n}(x, y, z, y_m, z_m) \, p_m(x, y_m, z_m) \, dy_m \, dz_m    (25)

where \overline{c_r^n} represents the concentration moments relative to the centroid at a distance x from the source. Equation (25) summarizes the fluctuating plume models: it states that at any distance x from the source the absolute concentration field can be evaluated by integrating the concentration field relative to the plume centroid over the pdf of the centroid position (y_m, z_m). The evaluation of the absolute concentration field requires the separate determination of p_m(x, y_m, z_m) and \overline{c_r^n}(x, y, z, y_m, z_m). It is worth noticing that Equation (25) implies that, in principle, the fluctuating plume model can evaluate all the moments of the concentration distribution, and not only the first two as two-particle models do.
Assuming that the plume meandering in the crosswind direction is independent of the meandering in the vertical direction, p_m(x, y_m, z_m) can be factorized as

p_m(x, y_m, z_m) = p_{ym}(x, y_m) \, p_{zm}(x, z_m)    (26)

If horizontal homogeneity is assumed, the barycentre pdf in the crosswind direction, p_{ym}(x, y_m), is Gaussian [14,53]. To take into account the vertical inhomogeneities of the velocity field, the pdf of the vertical centroid position, p_{zm}(x, z_m), can be evaluated using the following stochastic differential equations

dw_m(t) = a_m(t, w_m, z_m) \, dt + b(t, z_m) \, dW_t
dz_m(t) = w_m(t) \, dt    (27)

where w_m is the vertical velocity of the centroid, a_m represents the deterministic acceleration, b is the diffusion coefficient, and dW_t are the increments of a Wiener process with zero mean and variance equal to dt. The term \overline{c_r^n}(x, y, z, y_m, z_m) in Equation (25) is parameterized assuming a Gamma pdf for the concentration relative to the plume barycentre. However, its parameterization is quite complex and its details are outside the scope of the present manuscript. For further information, the reader is referred to [15-18].
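As a minimal illustration, the centroid Equations (27) can be integrated with an Euler-Maruyama scheme. The sketch below assumes vertically homogeneous Gaussian turbulence, so that a_m = -w_m/T_m and b = sigma_m sqrt(2/T_m); sigma_m, T_m and all other numbers are placeholder assumptions, not the filtered parameterization of Franzese (2003) [16].

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler-Maruyama integration of the centroid Equations (27) for a vertically
# homogeneous layer; all numerical values are illustrative placeholders.
n_plumes = 10_000
sigma_m, T_m = 0.3, 50.0      # centroid velocity std [m/s] and time scale [s]
dt, n_steps = 0.5, 400
h = 500.0                     # layer depth [m]

z_m = np.full(n_plumes, 100.0)                 # release height of the centroid
w_m = rng.normal(0.0, sigma_m, n_plumes)

for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_plumes)        # Wiener increments
    w_m += -w_m / T_m * dt + sigma_m * np.sqrt(2.0 / T_m) * dW
    z_m += w_m * dt
    # perfect reflection at the ground and at the layer top
    low, high = z_m < 0.0, z_m > h
    z_m[low] = -z_m[low]
    z_m[high] = 2.0 * h - z_m[high]
    w_m[low | high] *= -1.0

# p_zm at this travel time is estimated from the histogram of z_m, and the
# absolute concentration moments then follow from Equation (25)
```

An ensemble of such centroid trajectories directly provides the p_zm needed in the factorization (26).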

Pdf Micro-Mixing Model
In a LSM, the trajectory associated with every single particle is equivalent to an independent realization of a single marked molecule in a turbulent flow. It therefore follows that a large enough set of simulated trajectories connecting any two arbitrarily chosen points A and B represents all the different ways of getting from A to B.
If a point particle emitted at point A represents an air parcel with some known pdf of a physical property (concentration, temperature, etc.), which is altered during its travel to B, and if this property is influenced by the processes the parcel experiences along its trajectory, then calculating the possible trajectories from A to B enables determining the pdf of the physical property at B. This is, in a nutshell, the idea behind pdf models implemented by LSM [54].
Specifically, for concentration fluctuations, the physical process which changes the concentration of an air parcel along its trajectory is the turbulent mixing with its environment [10,55]. As mixing acts to reduce concentration fluctuations, it is natural to model the process as a first-order relaxation towards the local mean concentration with some time constant. This approach, termed 'interaction by exchange with the mean' (IEM), adds to the set of LSM Equations (1) and (2) the equation

dc/dt = -(c - \overline{c})/t_m    (28)

where c is the particle concentration, \overline{c} is the local mean concentration, and t_m is a time scale representing the local rate of mixing, which needs to be parameterized (see Section 3.7). In practice, applying the model demands that the computational domain be densely populated with particles throughout the simulation. All particles are initialized with c = 0 and, in the course of the simulation, all particles passing through the source are assigned a prescribed concentration. The simulation is run until statistical convergence is attained. The concentration pdf at any given location is estimated from the individual concentration values carried by the different particles visiting the control volume around that location during the simulation. Higher-order statistics are directly derived from the pdf.
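The scheme just described can be sketched in a few lines. The 1-D example below moves particles with a Langevin step, marks those passing through a source region with unit concentration, and applies the IEM relaxation (28) towards the cell mean; the domain, time scales and source location are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D sketch of the IEM scheme: every particle carries a concentration that
# relaxes towards the mean of its grid cell (Equation (28)).
n_p, n_cells = 20_000, 50
L = 100.0                         # domain size [m]
sigma_w, T_L = 0.5, 10.0          # velocity std [m/s], Lagrangian time scale [s]
t_m, dt, n_steps = 5.0, 0.1, 300  # mixing time scale, time step, steps
edges = np.linspace(0.0, L, n_cells + 1)

z = rng.uniform(0.0, L, n_p)          # domain densely populated with particles
w = rng.normal(0.0, sigma_w, n_p)
c = np.zeros(n_p)                     # all particles start with c = 0

for _ in range(n_steps):
    # Langevin step for the velocity, then advection of the position
    w += -w / T_L * dt + sigma_w * np.sqrt(2.0 * dt / T_L) * rng.normal(size=n_p)
    z = np.clip(z + w * dt, 0.0, L)
    # particles passing through the source region acquire unit concentration
    c[(z > 45.0) & (z < 55.0)] = 1.0
    # IEM mixing: first-order relaxation towards the local (cell) mean
    cell = np.clip(np.digitize(z, edges) - 1, 0, n_cells - 1)
    counts = np.maximum(np.bincount(cell, minlength=n_cells), 1)
    mean_c = np.bincount(cell, weights=c, minlength=n_cells) / counts
    c += -(c - mean_c[cell]) / t_m * dt

# the spread of particle values within a cell estimates the concentration variance
mean_c = np.bincount(cell, weights=c, minlength=n_cells) / counts
var_c = np.bincount(cell, weights=(c - mean_c[cell]) ** 2, minlength=n_cells) / counts
```

The histogram of the particle values in a cell approximates the concentration pdf there, from which higher-order statistics follow directly.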
The IEM method has been applied in various applications [9,56], including concentration fluctuations in rural and urban environments [57]. However, a few shortcomings have been reported. A basic requirement of a mixing model is that it must not influence the mean concentration. The IEM violates this requirement, creating spurious fluctuations of the average concentration, especially in the vicinity of strong mean concentration gradients. As explained by Sawford (2004) [55], this occurs because, on average, particles moving down a concentration gradient carry higher concentrations than particles moving in the up-gradient direction. Mixing both populations towards the same mean, as IEM does, spuriously reduces the flux, altering the average concentration. Additionally, Sawford (2004) [55] reported that IEM does not model plume meandering, and therefore underestimates the concentration variance in the near and mid field, where the concentration fluctuations are dominated by plume meandering.
As a remedy to the above shortcomings, the interaction by exchange with the conditional mean (IECM) extension was suggested, which differs from IEM in the representation of mixing. The concentration equation applied in IECM is

dc/dt = -(c - \overline{c|u})/t_m    (29)

where \overline{c|u} is the conditional mean concentration, i.e., the mean concentration computed considering only particles possessing the same velocity as the current particle. Limiting mixing to particles with the same velocity is reasonable because, in actual mixing, an air parcel mixes with its immediate surrounding fluid, which has a similar instantaneous velocity. Sawford (2004) [55] also shows that IECM models correctly evaluate the concentration flux and, as a result, properly recover the concentration fluctuations near the source. A considerable downside of the IECM approach, preventing it from attaining operational maturity, is the computational difficulty of its implementation. Obtaining statistical convergence for every different velocity at every spatial location requires a very large number of particles. This has led some authors to seek ways of accelerating the model. Cassiani et al. (2005) [10] used a dynamical grid, expanding with time to contain the dispersing plume. Nesting, as well as particle splitting and merging algorithms, were also applied (Cassiani et al., 2007) [11]. Postma et al. (2011) [58] performed the calculation in two steps: a first one to calculate \overline{c(x)|u} on an Eulerian grid, and a second one in which particles, emitted sequentially instead of simultaneously, interact with the precomputed conditional averages.
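The only structural change with respect to IEM is the conditioning of the mixing mean on velocity. A minimal sketch, binning particles jointly in space and velocity (the binning itself is an assumption of this sketch, one of several possible discretizations of the conditioning):

```python
import numpy as np

def iecm_mixing_step(c, z, w, z_edges, w_edges, t_m, dt):
    """One IECM mixing step (a sketch): each particle relaxes towards the mean
    concentration of the particles sharing both its spatial cell and its
    velocity bin, i.e., towards <c|u> rather than the unconditional mean."""
    n_z, n_w = len(z_edges) - 1, len(w_edges) - 1
    cell = np.clip(np.digitize(z, z_edges) - 1, 0, n_z - 1)
    wbin = np.clip(np.digitize(w, w_edges) - 1, 0, n_w - 1)
    key = cell * n_w + wbin                      # joint (cell, velocity) index
    counts = np.maximum(np.bincount(key, minlength=n_z * n_w), 1)
    cond_mean = np.bincount(key, weights=c, minlength=n_z * n_w) / counts
    return c - (c - cond_mean[key]) / t_m * dt
```

Because the relaxation is linear, the step leaves every conditional mean unchanged; the price is that statistical convergence must now be reached in every velocity bin of every cell, which explains the large particle numbers mentioned above.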

Volumetric Particle Approach
Another step of single-particle mixing models towards practical applications was made with the introduction of the volumetric particle approach (VPA). Differing from pdf methods, which demand a computational domain fully populated with particles, VPA enables the calculation of the concentration second moment with particles released from the source only, so the computational requirements remain modest and comparable with classical mean-concentration calculations.
The VPA method was suggested independently by Cassiani (2013) [59] and by Kaplan (2014) [60]. While the former developed the method as an extension of the IEM/IECM methods, the latter presented it as stemming from the two-particle probability formulation. It is insightful to present both lines of reasoning.
Cassiani (2013) [59] introduced a volume V_p ≡ M_p/c_p (hence the method's name) associated with each particle, where M_p is the particle mass, which is conserved, and c_p is the particle concentration, which is governed, as in IEM, by Equation (28). The average concentration in a grid cell labeled j with volume V_j is

\overline{c}_j = \sum_{p \in j} (V_p/V_j) \, c_p    (30)

Recognizing the term V_p/V_j as the probability that particle p is associated with concentration c_p, the second moment of the concentration can be written as

\overline{c^2}_j = \sum_{p \in j} (V_p/V_j) \, c_p^2    (31)

The implementation of Equation (31) is straightforward. Alongside the calculation of the average concentration at cell j using Equation (30), the second moment at the cell is determined using Equation (31). Finally, the c_p values are updated, and the particles are moved according to the LSM equations of motion.
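The two cell-wise sums (30) and (31) amount to a weighted binning of the particle values; a compact sketch:

```python
import numpy as np

def vpa_moments(cell, c_p, m_p, v_cell, n_cells):
    """Cell-wise first and second concentration moments from volumetric
    particles (Equations (30) and (31)): V_p = M_p / c_p is the
    particle-associated volume and V_p / V_j the probability that particle p
    contributes concentration c_p to cell j."""
    v_p = m_p / c_p                       # particle-associated volume
    weight = v_p / v_cell                 # probability weight within the cell
    mean = np.bincount(cell, weights=weight * c_p, minlength=n_cells)
    second = np.bincount(cell, weights=weight * c_p ** 2, minlength=n_cells)
    return mean, second
```

Note that weight * c_p = M_p / V_j, so the first moment reduces to the total particle mass in the cell divided by the cell volume, i.e., the usual box-counting mean concentration.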
Kaplan's [60] starting point is the relation between the two-particle probability function and the concentration second moment, Equation (15), which, written for an instantaneous source S(x, t) = S(x)δ(t), gives

\overline{C(x, t)^2} = \int \int S(x_1) S(x_2) P_2(x, t | x_1, x_2, 0, 0) \, dx_1 \, dx_2    (32)

In order to express Equation (32) in terms of the easily evaluated P_1, instead of P_2, the equation is rearranged to obtain

\overline{C(x, t)^2} = \int S(x_1) P_1(x, t | x_1, 0) \left[ \int S(x_2) \frac{P_2(x, t | x_1, x_2, 0, 0)}{P_1(x, t | x_1, 0)} \, dx_2 \right] dx_1    (33)

where the term P_2(x, t | x_1, x_2, 0, 0)/P_1(x, t | x_1, 0) in the inner integral is recognized as the conditional probability that a particle released from x_2 at t = 0 arrives at x at time t, subject to the condition that another particle, simultaneously released from x_1, also reaches x at time t. Next, we define

C_2(x, x_1, t) = \int S(x_2) \frac{P_2(x, t | x_1, x_2, 0, 0)}{P_1(x, t | x_1, 0)} \, dx_2

as the concentration attributed to particles reaching x at time t simultaneously with particles released from x_1. Equation (33) is now written as

\overline{C(x, t)^2} = \int S(x_1) P_1(x, t | x_1, 0) \, C_2(x, x_1, t) \, dx_1

To model C_2, the far-field limit of the two-particle probability function, Equation (17), is used. For large distances from the source, C_2(x, x_1, t) → \overline{C(x, t)}, and \overline{C(x, t)^2} → \overline{C(x, t)}^2. C_2 is hence modeled as tending towards the local average concentration with some time constant t_m. The equations of motion and the implementation are exactly as in Cassiani (2013) [59], except that the particle-associated concentration c_p, which emerges in Cassiani (2013) [59] as a result of the particle-associated volume, is interpreted by Kaplan (2014) [60] in terms of C_2, which is related to two-particle statistics.
It is worth noting that the two interpretations are complementary, and also shed light on the physical meaning of the particle-associated volume V_p introduced by Cassiani (2013) [59]. Let us consider the limit of a point source. Keeping each particle mass constant implies that V_p ∼ c_p^{-1}. C_2(x, t) is defined as the concentration associated with events where particles visit x simultaneously with their pair counterparts. If we regard V_p as an instantaneous puff volume, it is clear why the smaller V_p gets (thus increasing c_p), the more probable it is for two particles to reach x simultaneously.
The VPA model has been successfully evaluated against wind tunnel experimental data of dispersion in a neutral boundary layer [59,60], canopy turbulence [59] and decaying grid turbulence [60]. It has also been used for environmental analysis involving reacting pollutants [61].
Unlike the IEM/IECM models, the VPA model cannot be used directly to calculate concentration moments higher than the second. However, the first two moments fully define a gamma distribution function, which describes the distribution of instantaneous concentrations over a wide range of conditions [62,63]. This agrees with observed scaling relations between the first two moments and the higher ones [19]. Marro et al. (2018) [64] took advantage of these results and proposed the so-called VPΓ model, which utilizes the VPA model for the calculation of the first two moments and estimates the third and fourth moments using the gamma-distribution hypothesis. The model was successfully compared against the IECM approach, which directly yields the whole concentration pdf.
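Under the gamma hypothesis the higher moments follow in closed form from the first two, as in the sketch below (standard raw moments of a gamma pdf; the original VPΓ parameterization details are not reproduced here):

```python
def gamma_higher_moments(mean, var):
    """Third and fourth raw moments of a gamma-distributed concentration,
    reconstructed from the first two moments (the idea behind the VP-Gamma
    model; a sketch, not the original implementation)."""
    k = mean ** 2 / var            # gamma shape parameter
    theta = var / mean             # gamma scale parameter
    # raw moments of a gamma pdf: E[c^n] = theta**n * k (k+1) ... (k+n-1)
    m3 = theta ** 3 * k * (k + 1) * (k + 2)
    m4 = theta ** 4 * k * (k + 1) * (k + 2) * (k + 3)
    return m3, m4
```

For example, a fluctuation intensity of one (var = mean**2) gives k = 1, the exponential distribution, for which the third and fourth moments are 6 and 24 times the corresponding powers of the mean.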

Parameterizations
In many problems related to tracer dispersion in a turbulent flow, the cross-covariance term between the concentration fluctuations has to be modeled or parameterized. This is the case, for example, in the simulation of chemical reactions on a time scale shorter than the typical equilibrium scale. The covariance of two compounds participating in the reaction should be accounted for [65]. The contribution of this term is often expressed through the 'segregation' coefficient α, that is

α = \overline{c_A' c_B'} / (\overline{c_A} \, \overline{c_B})

where primes indicate the concentration fluctuations of the chemical compounds A and B. To calculate this quantity, the second-order moments of the concentration distribution are necessary. As already discussed herein, single-particle LSM, which are based on independent particle trajectories, cannot provide higher-order moments. In order to overcome this shortcoming, Alessandrini and Ferrero (2009) [66-68] adopted a simple parameterization based on wind tunnel data. It is worth noting that many chemical models do not account for the segregation term, considering the chemical reactions as always occurring in equilibrium conditions. This hypothesis is valid only when large time and space scales are considered, but not in short-term simulations, where the segregation effect, due to turbulent mixing, cannot be neglected, as shown in Alessandrini et al. (2012) [69]. Because the term \overline{c_A' c_B'} is generally negative, considering only the product of the mean values and neglecting the segregation results in an overestimation of the mean reaction rate.
Consequently, the reaction proceeds too fast. In other words, turbulent mixing at smaller scales is not accounted for unless the term \overline{c_A' c_B'} is considered. Alessandrini and Ferrero (2009) [66] proposed a parameterization for the segregation coefficient, obtained by performing a best fit of the experimental data of Brown and Bilger (1998) [70] (Equation (38)), expressed in terms of the downwind distance x, the stoichiometric distance x_s, which is the axial centerline location where the initial concentration of the reactant A emitted by the source has been diluted to the initial background concentration of B (which remains constant outside the plume), and the Damköhler number N_D, which represents the ratio between the time scales of turbulence and chemical reaction.
As a matter of fact, if the time scale of a chemical process is large compared with that of the turbulence, i.e., for a very low Damköhler number, the turbulent concentration fluctuations are damped and hence do not contribute to the reaction rate. Nevertheless, in many cases in which the scales are small, as in urban pollution and near the source, the effect of segregation cannot be neglected [65]; thus, in single-particle stochastic models the segregation coefficient should be parameterized in some way. Aguirre et al. (2006) [71] introduced a parameterization that depends on the local concentration gradient and the diffusion coefficient. However, in practical applications, the calculation of the local gradient at each time step would be very time consuming. Vilá-Guerau de Arellano et al. (2004) [72] introduced the moments method, which involves a closure problem.
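The role of the segregation coefficient in the mean reaction rate can be made explicit with a one-line sketch (k_r is a hypothetical rate constant; the decomposition itself is just the definition of α given above):

```python
def effective_rate(k_r, mean_a, mean_b, alpha):
    """Mean rate of the second-order reaction A + B -> products, including
    segregation:  <c_A c_B> = <c_A><c_B>(1 + alpha), with
    alpha = <c_A' c_B'> / (<c_A><c_B>).  For initially unmixed reactants alpha
    is typically negative, so the well-mixed assumption (alpha = 0) makes the
    simulated reaction proceed too fast."""
    return k_r * mean_a * mean_b * (1.0 + alpha)
```

With α = -0.5, for instance, the effective rate is half the well-mixed value, which is why neglecting segregation in short-term, near-source simulations overestimates the conversion.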

Virtual Variance Sources
The approach of virtual variance sources (VVS), suggested by Manor (2014) [21], is based on the steady-state governing equation for the concentration variance

U_k \frac{\partial \overline{c'^2}}{\partial x_k} + \frac{\partial \overline{u_k c'^2}}{\partial x_k} = -2 \overline{u_k c'} \frac{\partial C}{\partial x_k} - 2 \nu \overline{\frac{\partial c'}{\partial x_k} \frac{\partial c'}{\partial x_k}}

where C is the average concentration, c' is a fluctuation, U_k and u_k are, respectively, the k-th components of the mean and turbulent velocities, and ν is the kinematic viscosity. The first two terms on the left are the advection and turbulent diffusion terms, which have a complete analogue in the equation for a passive scalar. The third term is a production term, which, after applying the K-gradient hypothesis, can be written as

2 K_k \left( \frac{\partial C}{\partial x_k} \right)^2    (40)

The fourth term is a dissipation term, which expresses the smoothing of spatio-temporal heterogeneities by molecular viscosity. This term is commonly modeled by a first-order decay, i.e.,

\overline{c'^2} / t_d    (41)

where t_d is a dissipation time scale (see Section 3.8).
In practice, the average concentration field C(x, t) is calculated first. Then, a variance source term Q(x, t) is determined and used to release particles representing concentration variance.
These particles are governed by the same advection-diffusion dynamics as the 'ordinary' particles used to model the average concentration, except that they decay according to Equation (41). The spatial distribution of Q(x, t) essentially dictates a very large number of variance sources, which may pose a considerable computational load. However, as verified by Manor (2014) [21], for a localized source a small fraction (about 3%) of the potential sources is enough to produce almost all (97%) of the total effect. Thus, only these sources are identified and used.
The VVS approach was tested by Manor (2014) [21] against the JU2003 database. Ferrero et al. (2017) [22] used the approach with an improved parameterization for t_d and compared their results to the wind tunnel observations of Fackrell and Robins (1982) [73]. Oettl and Ferrero (2017) [6] used a simplified version of Equations (40) and (41), neglecting the advection and turbulent diffusion terms (see also Section 4.3). For steady-state conditions this results in a simple expression relating the average concentration to the concentration variance,

\overline{c'^2} = 2 \, t_d \, K_k \left( \frac{\partial C}{\partial x_k} \right)^2    (42)

Being strictly algebraic, Equation (42) can be usefully applied regardless of the specific dispersion model.

Variance Dissipation Time Scale
Unlike the mean concentration, the concentration fluctuations are not conserved along the flow; hence the dissipation of variance must be modeled. In analogy with the Eulerian Equation (41), the concentration-variance dissipation, to be introduced in the Lagrangian equation in order to determine the decrease of the concentration variance at each timestep, can be evaluated through the dissipation time scale t_d.
In the LSM the variance dissipation is accounted for at each time step ∆t through an update of the form

\overline{c'^2}(t + \Delta t) = \overline{c'^2}(t) \, \exp(-\Delta t / t_d)    (43)

The determination of this time scale is the key point in simulating the concentration-variance dispersion. A first attempt to predict t_d was made by Lewellen (1977) [74], who calculated this parameter through the plume standard deviations. However, Sykes et al. (1984) [75] observed that the variance dissipation is controlled by eddies with length scales of the order of the plume size. They also suggested that, while the outer scale rapidly becomes independent of the source, as also shown by Fackrell and Robins (1982) [73], the total concentration variance depends on the inner scale λ_c. As a matter of fact, the plume can be divided into inner and outer scales: the outer scale corresponds to the scale over which the plume meanders, while the inner scale is the relative spread of the plume. As a consequence, Sykes et al. (1984) [75] prescribed an inner scale from which they determined the dissipation time scale t_d. In particular, they showed, as also stated by Sawford (1982) [76], that λ_c should initially be proportional to the time t and then to t^{3/2}. However, in these works the time scale is obtained from both a length scale and a velocity scale, and the velocity scale itself depends on the length scale. Galperin (1986) [77] showed that the dissipation time scale grows linearly with time; if the dissipation time scale is regarded as the lifetime of the fluctuations, it should be smaller close to the source, where the intense mixing causes rapid development and dissipation of the fluctuations. Proportionality between t_d and the Lagrangian integral time scale was proposed by Bèguier et al. (1978) [78] and Warhaft and Lumley (1978) [79], and used by Andronopoulos et al. (2002) [80] and Milliez and Carissimo (2008) [81]. A similar relationship can be found in Yee et al. (2009) [63]. A proportionality coefficient equal to 22 was suggested by Manor (2014) [21].
It is worth noticing that Manor (2014) [21] parameterized t_d as proportional to the local vertical Lagrangian time scale at the particle position. Ferrero et al. (2017) [22] proposed a function for the dissipation time scale t_d (Equation (44)) in which T^s_{Lw} is the value of the vertical component of the Lagrangian time scale at the source height, t_* = z_i/U (z_i is the boundary-layer depth and U is the free-stream velocity), α_1 and α_2 are constants, d_s is the source diameter, and h_s is the source height. Since it has been demonstrated [22] that the concentration-variance dispersion observed by Fackrell and Robins (1982) [73] depends on both source size and height, a dependency on these two parameters was introduced in the parameterization for t_d. As t → 0 (i.e., for times just after emission), Equation (44) tends to a constant that depends on the Lagrangian time scale and on the ratio between source diameter and height. As a matter of fact, the dimension of the plume close to the source is of the order of the source size, and the eddies that dissipate the concentration variance should have the same size. The second term in Equation (44) is needed to avoid t_d = 0 at t = 0, which would yield \overline{c'^2}(t + \Delta t) = 0, whereas the fluctuations are in fact largest close to the source. Naturally, the mean concentration field has the largest gradients near the source and, as a consequence, the concentration fluctuations are highest there. Subsequently, the dissipation time scale becomes proportional to the dispersion time.
It can be noted that the parameterization is similar to the expression derived in the Appendix of Sykes et al. (1984) [75], except that the source-size ratio has a 1/3 power rather than 2/3. Inspection of Equation (44) indicates that initially, for t_d ≪ t, the fluctuation variance rapidly decreases along the particle trajectory, while for longer times, as a consequence of t_d depending on t, dissipation ceases when t_d ≫ t. This is reasonable, as the fluctuations become smaller. It is worth noting that the source of the concentration variance depends on the gradient of the mean concentration. For this reason, the largest source terms are found close to a point source, where the gradients of the mean concentration are highest.
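The per-step dissipation update and the qualitative behaviour of a time-growing dissipation scale can be sketched as follows; the constants and the placement of the 1/3 exponent in t_d_growing are placeholders, not the fitted values of Ferrero et al. (2017) [22].

```python
import numpy as np

def dissipate(var_c, dt, t_d):
    """Variance-dissipation update applied at every Lagrangian time step, in
    the exponential form consistent with Equation (43)."""
    return var_c * np.exp(-dt / t_d)

def t_d_growing(t, t_lw_s, ds_over_hs, a1=1.0, a2=1.0):
    """Hypothetical dissipation time scale with the qualitative behaviour of
    Equation (44): a nonzero, source-dependent value at t = 0, followed by
    linear growth with travel time.  a1, a2 and the exponent placement are
    illustrative assumptions."""
    return a1 * t + a2 * t_lw_s * ds_over_hs ** (1.0 / 3.0)
```

Near the source the small t_d makes the exponential factor small (fast dissipation); at long travel times t_d grows with t, the exponent tends to zero, and dissipation effectively ceases.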

Odor Dispersion
The use of general-purpose steady-state Gaussian models for predicting odor exposure levels in the vicinity of an industrial site has long been considered accepted practice in many countries around the world. This practice has lately been questioned in several European countries (e.g., Germany, Italy, Austria, Switzerland) due to the widely known shortcomings of steady-state plume models in accurately assessing dispersion under a range of 'complex' conditions (e.g., topography, coastal flows, calm and stable conditions, cold flows, heterogeneous land use). In such circumstances, there is a real danger that odor impact risk is either under- or overestimated, which has a substantial influence on the development of pragmatic, cost-efficient odor mitigation management. Moreover, odor nuisance is known to be closely linked to short-term odor-concentration peaks, as these may reach levels well above the recognition threshold, causing immediate annoyance. Short-term concentration peaks of the order of a few seconds (i.e., a single breath) require models for the concentration fluctuations, as outlined in the previous sections. In the following, a brief review of models for assessing odor impact, currently in use for regulatory purposes in different countries, is given.

Fluctuating Plume Models
In order to assess odor impact, knowledge about concentration fluctuations is needed. Thus, only Lagrangian models able to provide these quantities should be selected. As seen in Section 3.3, Lagrangian fluctuating plume models (FPMs) can satisfy this requirement. Ferrero et al. (2013) [18] showed how FPMs can simulate a chemically reactive plume. However, up to now no Lagrangian FPM has been used for regulatory purposes to predict odor concentrations. On the contrary, Gaussian FPMs have already been tested and used for this purpose (see for instance [82,83]).

Lagrangian Stochastic Particle Models
LSM have been used to calculate direction-dependent separation distances for avoiding odor annoyance by Piringer et al. (2016) [84]. The relevant short-term peak odor concentrations are calculated with a stability-dependent peak-to-mean algorithm. It was found that the Lagrangian model used by the authors better reproduces the physical processes and generally calculates larger separation distances than the Gaussian model applied in their study. For worst-case calculations, necessary when environmental impact assessment studies are carried out, the use of a Lagrangian model is therefore to be preferred.
The peak-to-mean concept, also explained in Piringer et al. (2014) [85], is based on a relationship by Smith (1973) [86], where the peak-to-mean factor ψ_0 is given by

\psi_0 = C_p / C_m = (t_m / t_p)^a    (45)

with the mean concentration C_m calculated for an integration time t_m (1800 s) and the peak concentration C_p for an integration time t_p (5 s). The exponent a depends on atmospheric stability. Generally, Equation (45) gives relatively large values of ψ_0. Following Mylne (1992) [42], it is assumed that, due to turbulent mixing, the peak-to-mean factor is reduced with increasing distance from the source. The peak-to-mean factor in Equation (45) is therefore modified by an exponentially decaying function of the quantity T/t_L, where T = x/u is the travel time over the distance x with mean wind speed u, and t_L is a measure of the Lagrangian time scale [42].
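A compact sketch of this algorithm is given below. The power law is Equation (45); the specific exponential decay towards 1 and the values of a and t_L are illustrative assumptions, not the exact attenuation used in [42,84,85].

```python
import numpy as np

def peak_to_mean(t_m=1800.0, t_p=5.0, a=0.5, travel_time=0.0, t_L=60.0):
    """Smith (1973) peak-to-mean factor psi_0 = (t_m / t_p)**a, attenuated
    with travel time following the idea of Mylne (1992) that turbulent
    mixing reduces the factor away from the source (decay form assumed)."""
    psi0 = (t_m / t_p) ** a
    return 1.0 + (psi0 - 1.0) * np.exp(-travel_time / t_L)
```

At the source the full factor (t_m/t_p)**a applies, while far downwind the ratio tends to 1, i.e., the 5-s peak approaches the 30-min mean.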
In contrast, Janicke and Janicke (2004) [87] suggested using a constant factor for the peak-to-mean ratio. In their work, they did not only consider the concentration fluctuations themselves, but also took into account the probability P_0(c) that qualified panel members recognize a certain type of odor as a function of its concentration. This enters the definition of the so-called "odor hour" in the German guideline VDI 3788 (2015) [88] through a functional κ of the concentration pdf (Equation (46)), where f(c) is the probability density function of odor concentrations at some observational point over an hourly interval. An odor hour is defined by κ ≥ 0.9, i.e., in 10% of the time odor will be detected by the qualified panelists. Janicke and Janicke (2004) [87] and Oettl et al. (2018) [89] demonstrated that, for an assumed log-normal distribution for P_0(c) (Equation (47), where erf is the error function, c is the odor concentration, c_OT is the odor concentration detected by 50% of qualified panel members, and α is a scale parameter) and α > 1, an almost constant peak-to-mean ratio of about 4 is obtained, practically independent of the shape of f(c). The value of α can be determined by means of dynamic olfactometry. Oettl et al. (2018) [89] analyzed more than 1000 datasets covering a wide range of odor types and found a median α of around 0.6. In this case, the shape of f(c) becomes important and needs to be taken into account in odor assessments. A second example of the application of an LSM to odor dispersion was recently proposed by Ferrero and Oettl (2019) [23]. This approach can be defined as fully Lagrangian, since both the mean and the variance of the concentration are simulated through particle trajectories. In their work, the authors propose and evaluate a one-particle LSM for the assessment of the concentration-fluctuation variance which, by providing the fluctuation intensity, can be used to predict odor hours.
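The coupling between the detection probability and the concentration pdf can be sketched as follows. The log-normal detection curve below is an assumed form centred on the threshold c_OT (the exact functional used in [87,89] may differ), and averaging it over sampled instantaneous concentrations approximates the integral over f(c).

```python
import numpy as np
from math import erf, log, sqrt

def detection_probability(c, c_ot=1.0, alpha=0.6):
    """Probability that a qualified panelist recognizes the odor at
    concentration c, assuming a log-normal detection curve around the
    threshold c_ot (a sketch; c_ot and alpha values are illustrative)."""
    if c <= 0.0:
        return 0.0
    return 0.5 * (1.0 + erf(log(c / c_ot) / (sqrt(2.0) * alpha)))

def recognized_fraction(samples, c_ot=1.0, alpha=0.6):
    """Fraction of time the odor is recognized, averaging the detection
    probability over sampled instantaneous concentrations, i.e., over f(c)."""
    return float(np.mean([detection_probability(s, c_ot, alpha) for s in samples]))
```

With small α the curve is steep, so the outcome is dominated by how often instantaneous peaks exceed c_OT, which is why the shape of f(c) matters when α is well below 1.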
The concentration variance in the Lagrangian particle model evolves according to the same Langevin equation used for the simulation of the turbulent dispersion, which provides the mean concentration field. As already outlined in Section 3.7, the main challenge is to provide a suitable parameterization for the concentration-variance dissipation time scale. From the mean-concentration and concentration-variance fields, it is possible to calculate the fluctuation intensity, i.e., the ratio between the concentration standard deviation and the mean concentration. The fluctuation intensity, introduced in the concentration pdf, allows analytically calculating the 90th percentile, which is the key parameter needed to assess an odor hour.
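As an example of this last step, the sketch below fits a two-parameter Weibull distribution (one common choice for odor-concentration pdfs, see also Section 4.3) to the simulated mean and variance by moment matching and returns the 90th percentile; the 'slightly modified' pdf of the original works is not reproduced.

```python
from math import gamma, log

def weibull_p90(mean_c, var_c):
    """90th percentile of a two-parameter Weibull fitted by moment matching
    to the simulated mean and variance (a sketch of the post-processing used
    to flag odor hours)."""
    i2 = var_c / mean_c ** 2                      # squared fluctuation intensity
    f = lambda k: gamma(1 + 2 / k) / gamma(1 + 1 / k) ** 2 - 1 - i2
    lo, hi = 0.1, 50.0                            # bisection on the shape k;
    for _ in range(80):                           # f decreases as k increases
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    lam = mean_c / gamma(1 + 1 / k)               # scale parameter
    return lam * (-log(0.1)) ** (1 / k)           # inverse Weibull cdf at 0.9
```

For a fluctuation intensity of one the fit reduces to the exponential distribution, whose 90th percentile is ln(10) times the mean, i.e., roughly 2.3 times the mean concentration.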

Hybrid Lagrangian-Eulerian Approach
Oettl and Ferrero (2017) [6] suggested a hybrid approach in which the mean concentration is calculated with a LSM, while the concentration variance is estimated on an Eulerian grid, neglecting the advection and diffusion terms in the time-dependent governing equation for the concentration variance:

\frac{d \overline{c'^2}}{dt} = 2 K_k \left( \frac{\partial C}{\partial x_k} \right)^2 - \frac{\overline{c'^2}}{t_d}    (48)

One of the main advantages of using Equation (48) is that it can be computed in post-processing mode and is thus independent of the dispersion model applied for calculating the mean concentration field. The variance dissipation is currently computed using the relationship t_d = 2T_{L3}. For more discussion on proper parameterizations, the reader is referred to Section 3.8. The simulated concentration variances are used in combination with a slightly modified two-parameter Weibull pdf to obtain the 90th percentile of the cumulative frequency distribution of odor-concentration fluctuations, which is required for computing an odor hour. Figure 2 displays modeled concentration-fluctuation intensities using Equation (48). Although Equation (48) represents a strongly simplified version of the governing equation for the concentration variance, some important features evident from observations (see Section 3.1) are captured: first, towards the edge of the plume the intensities increase and, second, after a slight increase close to the source, intensities slowly decrease with distance from the source along the plume centerline.
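The post-processing character of Equation (48) is easy to demonstrate: given any mean-concentration field, the variance can be obtained by a pointwise time integration, as in the 1-D sketch below (scalar K and t_d assumed for simplicity).

```python
import numpy as np

def variance_post_process(C, dx, K, t_d, dt, n_steps):
    """Post-processing integration of the simplified variance Equation (48):
    K-gradient production from the mean-concentration field balanced by
    first-order dissipation; advection and turbulent diffusion of the
    variance are neglected (1-D sketch)."""
    production = 2.0 * K * np.gradient(C, dx) ** 2
    var = np.zeros_like(C)
    for _ in range(n_steps):                     # explicit Euler in time
        var += (production - var / t_d) * dt
    return var
```

In the long-time limit the result tends pointwise to t_d times the production term, i.e., exactly the algebraic steady-state relation (42).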

Puff Model
A different Lagrangian approach is the puff model, in which, differently from particle models, the plume is composed of separate puffs that follow the mean wind and are dispersed independently. Using the best-known and most widely used puff model, CALPUFF [90], considered in the US EPA guideline an alternative to the Gaussian model capable of simulating complex conditions more effectively, Murguia et al. (2014) [91] evaluated how predictions obtained with different types of meteorological input data compare for odor assessment purposes at a complex study site, and whether the use of any of the meteorological data sets offers any advantage in gaining a better understanding of odor exposure and impact risk. The area where the maximum hourly average ground-level concentration is greater than 3 ou_E/m^3 for more than 2% of the hours in the year defines the maximum extent at which the CALPUFF model predicts a reasonable cause of odor annoyance; the 98th percentile of 3 ou_E/m^3 is the applicable odor impact criterion. De Melo et al. [92] compared AERMOD and CALPUFF with wind tunnel data simulating odor dispersion around a pig farm building complex. The results show that the concentrations predicted by AERMOD are, in general, higher than those predicted by CALPUFF, especially regarding the maximum mean concentrations observed in the near field. Ranzato et al. [93] assessed the olfactory nuisance caused by an anaerobic treatment plant for municipal solid waste by means of two alternative techniques: the field inspection procedure and the atmospheric dispersion model CALPUFF.

Conclusions
This paper reviewed the open questions and challenges posed by the Lagrangian modeling of the processes controlling the advection and diffusion of concentration fluctuations, as well as odor dispersion. In contrast to models for the mean concentration, Lagrangian models for fluctuation dispersion are still at the research stage, although the number of works on this matter is large and the range of methods is very rich. Our capability of monitoring and predicting odor dispersion still suffers from significant limitations and, in spite of a number of interesting works, deserves more in-depth investigation.
Author Contributions: All the authors contributed to the writing, original draft preparation, and review and editing. All authors have read and agreed to the published version of the manuscript.