1. Introduction
Climate change represents an urgent and potentially irreversible threat to human societies and the planet. The rapid increase in global temperature is predominantly caused by human activities, which have increased the amount of greenhouse gases (GHGs) in the atmosphere [1]. Reducing methane (CH4) emissions is the fastest opportunity to mitigate GHG emissions, because CH4 is the second largest contributor to global warming after carbon dioxide (CO2) [2]. Although the atmospheric concentration of CH4 is lower than that of CO2, the global warming potential of CH4 is approximately 80 times that of CO2 over a 20-year period and 25 times over a 100-year period, which means it is more efficient than CO2 at trapping infrared radiation emitted from the earth’s surface [3]. The increase in atmospheric CH4 concentration is caused mainly by human activities, which account for more than 60% of total CH4 emissions globally [4]. The main emission sources include industry, agriculture, and waste management activities. In Canada, CH4 emissions accounted for 14% of total GHG emissions in 2020, largely from fugitive sources in oil and natural gas systems (35%), agriculture (30%), and landfills (27%) [1]. A similar trend can be seen in the United States. To mitigate CH4 emissions, it is essential to accurately monitor the emissions from these sources so that the effectiveness of various emission reduction measures can be assessed, emission hotspots can be identified for targeted treatment, and accurate emission inventories can be developed.
There is still considerable uncertainty over the amount of fugitive CH4 emissions. A report shows that the Canadian upstream oil and gas CH4 inventory is underestimated by a factor of approximately 1.5 [5]. The main issue is that most of the emissions come from area sources or unknown leaks spread over large areas, which have long been a special challenge for atmospheric monitoring [6]. The quantification difficulties result mainly from their large-scale and non-homogeneous characteristics [7]. First, the source may differ in composition from area to area, leading to spatial and temporal variability across the entire area source [8]. Second, emission rates of sources such as landfills are often influenced by environmental factors, such as atmospheric temperature, humidity, and wind conditions, causing them to change continuously over time with obvious daily and seasonal variabilities [9,10]. Third, some area sources are not flat and have complicated terrain. All these factors lead to large uncertainty in emission quantification. Therefore, to accurately quantify area emissions, a measurement method must provide high spatial and temporal resolutions as well as the ability for continuous and automatic monitoring.
Conventional methods show limitations in dealing with fugitive area sources. The flux chamber and eddy covariance methods collect data at a point, rendering them unrepresentative of non-homogeneous area sources [10,11]. The flux gradient method is based on the turbulent diffusivity and the vertical gradient of gas concentration; it requires a neutral-stability atmosphere, steady-state conditions, and a uniform emission source [7]. The integrated horizontal flux method is a micrometeorological mass balance method that measures wind speed and gas concentration at different heights around the edges of the source [12]; the equipment deployment is too complicated for practical applications. The tracer ratio method has been used to estimate GHG emissions from landfills [13]. It can provide reasonable results when equipped with mobile sensors; however, it is difficult to implement and labor-intensive, making it unsuitable for long-term continuous monitoring. A practical mass balance method based on optical remote sensing (ORS) is solar occultation flux (SOF). It uses a mobile platform to obtain vertical column concentrations along roads around the source, but it requires sunlight and accessible roads, and its temporal resolution is very low. Another ORS-based mass balance method is vertical radial plume mapping (VRPM) [14]. It derives the concentration distribution on a vertical plane downwind of the source using the multi-path ORS technique and computed tomography (CT) [15]. This method can achieve real-time and continuous monitoring; however, it is difficult to deploy the mirrors on the vertical plane, and the measurement is also affected by the wind direction.
A more flexible method is inverse-dispersion modeling, which back-calculates the emission rate on the basis of concentration observations at locations downwind of the source [16]. The method simplifies the concentration measurements compared with other approaches; it can use data obtained from point, line, or mobile sensors and is well suited for continuous, in situ measurement of emissions from large-scale area sources. However, it assumes idealized airflow, and the source should be homogeneous and separable from other possible sources [17]. Another issue is that the result is heavily affected by the wind direction, which determines the direction of the gas plume; the sensors must capture the main gas plume. The method also requires the distance between the sensor and the source (the fetch) to be large enough to avoid overestimation [18], but the concentration decreases with increasing fetch. A further issue is that it may not be feasible to deploy the sensor at the required fetch due to terrain, land usage, or other environmental limitations.
None of these methods can derive the emission distribution of an area source. To obtain a two-dimensional (2-D) emission distribution, possible approaches include sensor networks, mobile platforms, and multi-path ORS techniques. Multiple fixed-point sensors can form a sensor network to obtain concentrations at different locations. Multiple point concentrations can also be obtained by a multi-point survey using sensors mounted on a moving platform [19,20]. The point concentrations can then be interpolated to derive the concentration distribution. Network methods usually use multiple low-cost point sensors or monitoring stations, and they can provide only limited spatial or temporal resolutions [21]. Common mobile platforms are vehicles, aircraft, and satellites. They are used at large spatial scales, including the facility to site scale (<1–10 km²), regional scale (10–1000 km²), and global scale (100–1000 km²) [22]. Their limitations relate to spatial and temporal resolution, the lowest detection limit, and automatic, continuous monitoring [23]. Long-path ORS sensors can be mounted on a scanner to measure multiple path-integrated concentrations (PICs) over a surface [24]. This approach has been reported to identify point leak sources and quantify the leak strength by using an atmospheric dispersion model or a statistical method with a star path geometry over a synthetic area greater than 4 km² [25,26].
A concentration distribution with high spatial resolution can be derived by using a CT technique [27]. ORS coupled with CT provides a powerful tool for sensitive mapping of air pollutants throughout kilometer-sized areas in real time [28]. In addition, this technique is non-intrusive, so interference with the site under investigation is minimal compared with other techniques. The sensors can be deployed around the boundary of the site, making the technique especially useful for measurements at sites that are difficult to access. However, ORS-CT is usually applied to measure the concentration distribution; it has not been used to obtain the emission distribution in the literature. This situation may result from the difficulty of deploying a multi-path ORS system and the uncertainty of atmospheric dispersion models. On the basis of the requirement to quantify fugitive area emissions and the review of current measurement techniques, this research proposes a combined method using multi-path ORS, inverse-dispersion modeling, and CT technologies. By adapting the inverse-dispersion modeling technique into the CT reconstruction, both the emission distribution and the concentration distribution can be derived. Compared with a single-path inverse-dispersion method, the combined method can also improve accuracy by eliminating the influence of varying wind directions and obtaining an optimal result.
2.3. Inverse-Dispersion Modeling
The inverse-dispersion technique uses atmospheric dispersion models to calculate the theoretical relationship between a source emission rate and a downwind concentration [16,31]. Assuming the downwind concentration C and the background concentration Cb are measured, the relationship between the concentration and the source emission rate Q is determined by an atmospheric dispersion model. The emission rate can then be inferred on the basis of the model-predicted concentration-to-emission-rate ratio as:

Q = \frac{C - C_b}{(C/Q)_{\mathrm{model}}}

where (C/Q)model is the model-predicted relationship. This equation is the basis of the inverse-dispersion technique. It requires only a single C measurement, with flexibility in the choice of the measurement location.
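As a minimal illustration of this back-calculation (not the authors' implementation), the following Python sketch assumes the model-predicted ratio has already been obtained from a dispersion model run with a unit source; the function name and the example numbers are hypothetical.

```python
def infer_emission_rate(c_obs, c_bg, ratio_model):
    """Back-calculate the emission rate Q from a downwind concentration.

    c_obs       : measured downwind concentration (e.g., mg/m^3)
    c_bg        : measured background concentration (same units)
    ratio_model : model-predicted (C/Q) ratio at the same sensor location,
                  obtained by running the dispersion model with a unit source
    """
    return (c_obs - c_bg) / ratio_model

# Example: 0.45 mg/m^3 measured over a 0.05 mg/m^3 background with a
# simulated ratio of 2.0 (mg/m^3)/(g/s) gives Q = 0.2 g/s.
print(infer_emission_rate(0.45, 0.05, 2.0))
```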
The key to the inverse-dispersion technique is an accurate and easy-to-use dispersion model to calculate this relationship. The simple Gaussian plume model is not suitable under the on-site conditions considered here because of the lack of calibrated dispersion parameters for short-range (<100 m) dispersion [32,33]. A Lagrangian stochastic (LS) dispersion model is used instead, which can represent near-field dispersion by explicitly incorporating turbulent velocity statistics in the trajectories of tracer particles [34]. The LS model simulates the dispersion process by releasing a large number of tracer particles and tracing the evolution of each particle's velocity and location through a Markov process described by the Langevin equation [35]:

dx_i = u_i\,dt, \qquad du_i = a_i\,dt + b_{i,j}\,dW_j(t)

where i and j take values of 1, 2, and 3 to represent the three components of the Cartesian coordinates; xi and ui are the components of the particle’s position and velocity vectors (in the along-wind, crosswind, and vertical directions), respectively; dWj(t) is an incremental Wiener process, which is Gaussian with zero mean and variance dt; and ai and bi,j are coefficients.
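A minimal sketch of how such trajectories might be advanced is given below, assuming stationary, homogeneous Gaussian turbulence so that the drift coefficient reduces to a_i = −u_i/T_L and the diffusion coefficient to b_i = sqrt(2σ_i²/T_L); the height-dependent coefficients of [34] and the detailed ground-reflection treatment are simplified, and all parameter values and names are illustrative.

```python
import numpy as np

def simulate_trajectories(n_particles=10_000, n_steps=600, dt=0.05,
                          U=2.5, sigma=(0.8, 0.6, 0.4), T_L=5.0,
                          source=(0.0, 0.0, 0.3), seed=0):
    """Advance tracer particles with a simplified Langevin model.

    Assumes stationary, homogeneous Gaussian turbulence, so the coefficients
    reduce to a_i = -u_i/T_L and b_i = sqrt(2*sigma_i**2/T_L); the model in
    the paper allows the vertical statistics to vary with height.
    """
    rng = np.random.default_rng(seed)
    sigma = np.asarray(sigma, dtype=float)
    x = np.tile(np.asarray(source, dtype=float), (n_particles, 1))  # positions
    u = rng.normal(0.0, sigma, size=(n_particles, 3))               # velocity fluctuations
    b = np.sqrt(2.0 * sigma**2 / T_L)

    history = []
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=(n_particles, 3))    # Wiener increments
        u += -(u / T_L) * dt + b * dW            # du_i = a_i dt + b_i dW_i
        x += (u + np.array([U, 0.0, 0.0])) * dt  # dx_i = (mean wind + u_i') dt
        x[:, 2] = np.abs(x[:, 2])                # perfect reflection at the ground
        history.append(x.copy())
    return history

# Particle positions (or touchdowns) are then converted into the (C/Q)
# ratios required by the inverse-dispersion calculation.
trajectories = simulate_trajectories(n_particles=1000, n_steps=100)
```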
For three-dimensional (3-D), nonstationary, inhomogeneous diffusion in Gaussian turbulence, a simplified solution for the coefficients was given in [34] under the following assumptions: a stationary, horizontally homogeneous atmosphere; an average vertical velocity W = 0 and an average crosswind velocity V = 0; the friction velocity u* and the horizontal velocity fluctuation variances (σu² and σv²) constant with height; and the vertical velocity variance σw² allowed to be height-dependent. The detailed formulas of the solutions can be found in [34].
The input parameters include the sensor information (laser path configuration), source information (source geometry and strength), atmospheric conditions (mean wind speed and direction, atmospheric temperature, and pressure), and the surface layer model described by Monin–Obukhov similarity theory (MOST). MOST needs the surface roughness length (z0), the Monin–Obukhov length (L), and the friction velocity u*. Using the 3-D sonic anemometer, one can derive these parameters, as well as the mean variance and covariance data of wind and temperature, from the raw wind and temperature data [34].
2.4. Tomographic Reconstruction of Emission Distribution
In the literature, the ORS-CT technique is usually used to retrieve a 2-D concentration map over an area under investigation. A pixel-based reconstruction algorithm is typically adopted for ORS-CT applications because the number of paths is limited compared with medical CT. It divides the area into multiple grid pixels and assigns a value to each pixel. The path integral is approximated by the sum of the products of the pixel values and the lengths of the path within those pixels. A system of linear equations can then be set up for multiple paths, and the inverse problem involves finding the optimal set of pixel concentrations. Traditional pixel-based algorithms include the algebraic reconstruction technique (ART), non-negative least squares (NNLS), and expectation–maximization (EM) [27,36,37]. These algorithms are suitable for rapid CT reconstruction, but they produce maps with poor spatial resolution owing to the requirement that the pixel number must not exceed the beam number; otherwise, they suffer from the indeterminacy associated with substantially under-determined systems [30]. This problem can be solved by adding smoothness constraints to the inverse problem. One pixel-based smooth algorithm is the low third derivative (LTD) algorithm, which sets the third derivative at each pixel to zero, resulting in a new over-determined system of linear equations [38]. In our previous work, a new smooth algorithm of minimal curvature (MC) was developed on the basis of the variational interpolation technique; it requires only approximately 65% of the computation time required by the LTD algorithm [39].
In this research, the ORS-CT technique is used for the first time to derive the 2-D emission distribution. To achieve this goal, we adapt the LS dispersion model into the CT reconstruction process. A smooth CT algorithm is a key part of the method; on the basis of our previous work, the MC algorithm is used for the CT inversion. The process of calculating the emission distribution is described in the following steps.
- (1)
High-resolution grid division is used. The site is divided into N = m × n pixels (21 × 21 in this application), as shown in Figure 3. As a result, the number of pixels is much larger than the number of paths (3 × 3).
- (2)
Establish the relationship between the PICs and the pixel concentrations. The measured PIC is defined as follows:

b_i = \sum_{j=1}^{N} L_{ij} c_j

where i and j are the path and pixel indices, respectively, bi is the PIC of the i-th beam, cj is the average concentration of the j-th pixel, and Lij is the length of the i-th beam passing through the j-th pixel. A system of linear equations can be set up for all paths as follows:

\mathbf{b} = \mathbf{L}\mathbf{c}

where L is the kernel matrix that incorporates the specific path geometry, c is the unknown concentration vector of the pixels, and b is the vector of measured PIC data.
- (3)
Establish the relationship between the emission rates and the concentrations. The emission map is also divided into high-resolution pixels; for simplicity, we use the same division as that for the concentration pixels. The pixel concentration cj is defined as follows:

c_j = \sum_{k=1}^{N} D_{jk} q_k

where Djk is a coefficient defined as the concentration at the j-th pixel due to the k-th emission pixel with unit strength, calculated by using an atmospheric dispersion model, and qk is the emission rate of the k-th pixel. For all the concentration pixels, a system of linear equations is derived as follows:

\mathbf{c} = \mathbf{D}\mathbf{q}

where D is the coefficient matrix calculated by the dispersion model. Thus, we can derive the relationship between the emission rates and the PICs as follows:

\mathbf{b} = \mathbf{L}\mathbf{D}\mathbf{q} = \mathbf{F}\mathbf{q}

where F = LD is the kernel matrix for the reconstruction of the emission rates.
- (4)
Use the MC algorithm to introduce additional constraints at each pixel, achieving smooth regularization by using the variational interpolation technique [39]. The idea of the MC algorithm is to minimize a seminorm defined as the total squared curvature of the underlying concentration distribution. If we use (m, n) to index the pixel located at the m-th row and n-th column, the discrete total squared curvature is:

J = \sum_{m}\sum_{n} I_{m,n}^{2}\,(\Delta d)^{2}

where c is the concentration at a pixel, Δd is the pixel length, and Im,n is the curvature at the (m, n) pixel, which is approximated as:

I_{m,n} \approx \frac{c_{m+1,n} + c_{m-1,n} + c_{m,n+1} + c_{m,n-1} - 4c_{m,n}}{(\Delta d)^{2}}

where cm,n denotes the concentration at the pixel located at the m-th row and n-th column. The additional prior equation at the (m, n) pixel can be derived by minimizing the seminorm with respect to cm,n [39]:

I_{m+1,n} + I_{m-1,n} + I_{m,n+1} + I_{m,n-1} - 4I_{m,n} = 0
There is one prior equation at each pixel. The reconstruction then becomes a regularized minimization problem:

\min_{\mathbf{q} \ge 0}\;\left( \left\| \mathbf{F}\mathbf{q} - \mathbf{b} \right\|_{2}^{2} + \left\| \mathbf{M}\mathbf{q} \right\|_{2}^{2} \right)

where ||·||2 denotes the Euclidean norm and the regularization operator M is a matrix defined by the prior equations. The resulting constrained system of linear equations, formed by stacking the path equations and the prior equations, is over-determined.
- (5)
The over-determined system of linear equations is solved by the NNLS optimization algorithm to generate the emission rates [36]. A simplified code sketch of the full reconstruction pipeline is given after these steps.
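To make the above steps concrete, the following sketch assembles the pieces for a small synthetic problem. The path-length kernel L is computed by densely sampling points along each beam, the dispersion matrix D is filled by a crude plume-like stand-in (a real implementation would use LS-model C/Q ratios), the smoothness operator M is a discrete-Laplacian proxy rather than the minimal-curvature prior of [39], and the stacked system is solved with SciPy's NNLS. The grid size, path geometry, and all numbers are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import nnls

def path_length_matrix(paths, m, n, d, n_samples=2000):
    """Approximate L_ij (length of beam i inside pixel j) by dense sampling."""
    L = np.zeros((len(paths), m * n))
    for i, (p0, p1) in enumerate(paths):
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        length = np.linalg.norm(p1 - p0)
        pts = p0 + np.linspace(0.0, 1.0, n_samples)[:, None] * (p1 - p0)
        col = np.clip((pts[:, 0] // d).astype(int), 0, m - 1)   # x -> column
        row = np.clip((pts[:, 1] // d).astype(int), 0, n - 1)   # y -> row
        np.add.at(L[i], row * m + col, length / n_samples)
    return L

def dispersion_matrix(m, n, d, plume_dir_deg=225.0, sigma_y=3.0):
    """Stand-in for D_jk: concentration at pixel j per unit emission of pixel k.
    A real implementation would fill this with LS-model (C/Q) ratios."""
    xs = (np.arange(m * n) % m + 0.5) * d
    ys = (np.arange(m * n) // m + 0.5) * d
    a = np.deg2rad(plume_dir_deg)
    ex, ey = np.cos(a), np.sin(a)                 # transport direction (illustrative)
    D = np.zeros((m * n, m * n))
    for k in range(m * n):
        dx, dy = xs - xs[k], ys - ys[k]
        down = dx * ex + dy * ey                  # downwind distance from pixel k
        cross = -dx * ey + dy * ex                # crosswind offset
        mask = down > 0
        D[mask, k] = np.exp(-cross[mask] ** 2 / (2 * sigma_y ** 2)) / (down[mask] + d)
    return D

def smoothness_operator(m, n):
    """Discrete-Laplacian prior, one equation per pixel (a simplified proxy
    for the minimal-curvature prior of the MC algorithm)."""
    rows = []
    for r in range(n):
        for c in range(m):
            row = np.zeros(m * n)
            nbrs = [rr * m + cc for rr, cc in
                    [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
                    if 0 <= rr < n and 0 <= cc < m]
            row[r * m + c] = -len(nbrs)
            row[nbrs] = 1.0
            rows.append(row)
    return np.array(rows)

# Synthetic setup: 40 m x 40 m site, 21 x 21 pixels, nine beams from one corner.
m = n = 21
d = 40.0 / m
targets = [(40, 7), (40, 20), (40, 33), (33, 40), (20, 40), (7, 40),
           (40, 40), (13, 40), (40, 13)]
paths = [((0.0, 0.0), t) for t in targets]

L = path_length_matrix(paths, m, n, d)
D = dispersion_matrix(m, n, d)
F = L @ D                                    # PIC response to pixel emission rates
M = smoothness_operator(m, n)

q_true = np.zeros(m * n)
q_true[10 * m + 10] = 0.2                    # one emitting pixel (g/s) at the center
b = F @ q_true                               # synthetic PIC data

A = np.vstack([F, 0.1 * M])                  # stacked, over-determined system
rhs = np.concatenate([b, np.zeros(M.shape[0])])
q_hat, _ = nnls(A, rhs)                      # non-negative least-squares solve
print("recovered total emission (g/s):", q_hat.sum())
```

In the actual method, D would be filled with LS-model C/Q ratios for each of the emission pixels, and M would be built from the minimal-curvature prior equations rather than the plain Laplacian used here.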
3. Results and Discussions
3.1. The Multi-Path Scanning TDL System
On the basis of the new combined method, a multi-path scanning ORS system was developed (Figure 4).
Open-path Fourier-transform infrared (OP-FTIR) and open-path tunable diode laser absorption spectroscopy (OP-TDLAS) are two commonly used ORS techniques for gas detection. The OP-FTIR technique has the advantage of measuring multiple gas components simultaneously. The OP-TDLAS technique can measure only one gas component per laser beam at a fixed wavelength; however, it is less expensive and has a faster response (one second per sample) than the OP-FTIR technique (tens of seconds per sample). In addition, it requires a smaller reflection mirror because the diameter of the laser beam is much smaller than that of the infrared light beam used by OP-FTIR. The system is designed to be portable and easily deployed in the field. For CH4 detection, we use an OP-TDL analyzer (GasFinder3, Boreal Laser Inc., Edmonton, Canada) to save costs, reduce size and weight, and increase the data rate. It has a path length range of 0.5 to 750 m, a precision of 2% RSD, and a detection limit of 1.2 ppm·m.
The TDL analyzer emits a laser beam toward a mirror. By receiving the reflected light, a path-integrated concentration (PIC) is measured on the basis of the absorption of light at the characteristic absorption wavelength of the target gas. The analyzer is installed on a fast, accurate, and durable pan-tilt scanner (PTU-48E, Teledyne FLIR LLC, Wilsonville, OR, USA), which is mounted on a tripod. The speed of the scanner is 50°/s with a position resolution of 0.003°. Multiple PICs are measured by scanning the analyzer to target different mirrors sequentially and periodically.
On the basis of the horizontal radial plume mapping (HRPM) path configuration, the area source is divided into multiple grids, and a retroreflector is installed in each grid. Depending on the distance between the reflector and the analyzer, different types of reflectors are used. For a distance of less than 50 m, only a single corner cube retroreflector is needed, which can be installed on a metal pole; this allows the equipment to be easily transported and set up in the field. For a distance of more than 50 m, more corner cube retroreflectors are needed to increase the reflection area of the light beam.
Meteorological information, including wind speed, wind direction, atmospheric temperature, and atmospheric pressure, is obtained through a 3-D sonic anemometer (Young 81000, Campbell Scientific Inc., Logan, UT, USA) with a wind speed range of 0 to 40 m/s, a wind speed accuracy of ±0.05 m/s, a wind direction range of 0.0 to 359.9°, and a wind direction accuracy of ±2°. For field applications, the system can be powered by a 12 VDC battery if 120–240 VAC power is not available. The length of each laser path is measured by a range finder (Scout DX 1000, Bushnell Corporation, Overland Park, KS, USA).
The system software running on a laptop includes a feedback scanning control module, a data acquisition module, an air dispersion model module, and a CT reconstruction module. The scanning logic is controlled automatically by the feedback control module, which communicates with both the TDL analyzer and the scanner through serial communication protocols over a wired cable connection or wireless transmitters and receivers. The scanning control module records the location information of the mirrors and sends commands to the scanner to change its pan and tilt positions. Meanwhile, it also receives the laser intensity information as feedback to ensure the laser is aimed at a mirror. For continuous monitoring applications, automatic alignment of the laser is needed to correct accumulated positioning errors or other interferences that may change the locations of the laser or mirrors.
The scanner stops at each path for 10 s, during which the PIC data of that path are acquired and recorded by the data acquisition module; ambient temperature, pressure, and wind data are acquired and recorded at the same time. After each scan cycle, the PICs from all the paths are available, and the CT reconstruction algorithm is executed to generate a 2-D concentration distribution on the measurement plane for that scan period. After a longer period (e.g., 10 min), the PIC and wind data are averaged, and the CT reconstruction algorithm coupled with the atmospheric dispersion model is executed to generate a 2-D emission distribution over the averaging duration.
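The scan cycle described above could be organized roughly as follows. The device interfaces (serial commands to the GasFinder3 analyzer and the PTU-48E scanner) are replaced by stub functions, so this is only a sketch of the control flow under stated assumptions, not the actual system software; all names and values are illustrative.

```python
import time

# --- stub hardware interfaces (the real system talks to the devices over serial links) ---
def aim_scanner(pan_deg, tilt_deg):
    """Would send a pan/tilt position command to the scanner."""
    pass

def read_pic():
    """Would read one averaged PIC (ppm·m) from the TDL analyzer."""
    return 25.0

def read_met():
    """Would read wind, temperature, and pressure data."""
    return {"ws": 2.4, "wd": 280.0, "T": 293.0, "P": 101.3}

MIRRORS = {i: (10.0 * i, -2.0) for i in range(1, 10)}  # pan/tilt angles per mirror (illustrative)
DWELL_S = 10                                           # dwell time on each path (s)

def scan_cycle():
    """One full cycle: visit every mirror, dwell 10 s, record PIC and met data."""
    records = []
    for mirror_id, (pan, tilt) in MIRRORS.items():
        aim_scanner(pan, tilt)
        time.sleep(DWELL_S)                            # wait while the PIC is averaged
        records.append({"mirror": mirror_id, "pic": read_pic(), "met": read_met()})
    # After each cycle, the nine PICs feed the CT reconstruction for a concentration
    # map; after ~10 min, averaged PICs and wind data feed the dispersion-coupled
    # reconstruction for an emission map.
    return records
```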
3.2. Controlled-Release Experiments of CH4
To evaluate the system and the method, controlled-release experiments of CH4 were conducted in Calgary, Canada, in September 2021. The site was an open, flat area of 40 m × 40 m, which was divided into 3 × 3 grids. Nine retroreflectors were distributed, one in each grid, at a height of 0.8 m according to the HRPM beam configuration. The area sources were simulated by 1/2” PVC pipes. Two releases were conducted: the first used one area source of 2.2 m × 2.2 m; the second used two area sources of 1 m × 2.2 m separated by a distance of 12 m. The equipment is shown in Figure 5.
CH4 gas with a purity of 99.97% was released from a high-pressure gas cylinder. The gas flow was controlled by a regulator, and the flow rate was monitored by a rotameter. The gas cylinder was connected to the simulated area sources. CH4 gas was emitted from small outlets with a diameter of 0.5 mm, uniformly distributed along the pipes at a spacing of 0.5 m. The gas cylinder was weighed before and after each release period. The 3-D wind speeds and the atmospheric temperature were measured by the sonic anemometer at a data frequency of 32 Hz and a height of 2 m. A computer program controlled the scanner to target the laser beam at the reflectors sequentially and periodically, staying at each retroreflector for 10 s for data acquisition. The background concentration was measured before each release. The path configurations are shown in Figure 6. Some parameters and the measured PICs are shown in Table 1.
3.3. Calculation of Emission Distribution
The distribution of emission rates of the area sources is calculated according to the method described in Section 2.4. The area source is divided into 21 × 21 grids, and the same grid division is used for the concentration map. The concentration of each grid is assumed to be uniform and is represented by a point sensor located at the center of the grid. Therefore, there are 441 small area sources and 441 point sensors. For the LS dispersion model, 50,000 particles were released. The derived emission distributions for Releases One and Two are interpolated and shown in Figure 7a,b.
We can see that the emission sources are successfully located. The distances between the real and predicted source centers for Releases One and Two are 1.8 m and 1.7 m, respectively, which are approximately equal to the length of one grid (1.9 m). The total emission rates can be derived by integrating the emission maps. The results are 0.193 g/s for Release One and 0.194 g/s for Release Two. Compared with the real emission rates, the error is 2% for Release One and 3% for Release Two. These results illustrate that both the source locations and the emission rates are very accurate.
As a comparison, the concentration maps predicted by the NNLS and MC algorithms are also shown. From Figure 7c,d, we can see that the non-smooth NNLS algorithm gives incorrect results for both the source number and the source locations. In Figure 7e,f, the smooth MC algorithm predicts the correct number of sources; however, there are large errors in the source locations. In addition, neither algorithm can predict the correct size of the sources.
3.4. Comparison with Single-Path Results
The traditional inverse-dispersion method uses a single path to calculate the emission rate of the area source, in which case the location and geometry of the source must be known and the emission rate must be uniform over the source. Therefore, this method cannot derive the emission rate in real applications where this information is unknown. For the controlled-release experiments, however, we can use this method because both the source information and the emission rates are known.
The emission rates were calculated on the basis of each observed PIC by using the LS dispersion model with the same dispersion parameters as those used for the CT reconstruction. For Release One, the derived emission rates are 0.52 g/s, 0.17 g/s, 0.11 g/s, 0.39 g/s, and 0.15 g/s for Paths 2, 3, 4, 5, and 7, respectively. For Release Two, the derived emission rates are 0.38 g/s, 0.24 g/s, 0.24 g/s, 0.21 g/s, 0.22 g/s, and 0.18 g/s for Paths 3, 4, 6, 7, 8, and 9, respectively. Data from the other paths are not valid because only a very small part of the path, or no part at all, passes through the gas plume.
Emission rates from the multi-path and single-path methods are shown in Figure 8. We can see the following results. First, the emission rate is overestimated when the path is very close to the emission source; this is a known issue of using LS models near the source [18]. This trend is shown by Paths 4 and 5 in Release One and Path 2 in Release Two, with errors of 173%, 105%, and 90%, respectively. Second, the emission rates tend to be underestimated as the path moves farther from the source. This may result from the increasing uncertainty in the observation as the fetch becomes large, owing to low concentrations and nonideal dispersion. Third, the multi-path approach provides a better result than the single-path method because all data observations on the site are used to derive an optimal result. To conclude, the multi-path scanning approach can not only derive the source distribution but also obtain a more accurate emission rate compared with the single-path method.
3.5. Uncertainty Analysis
The measurement uncertainty of the hybrid method is affected by many factors: (1) the accuracy of the PIC and wind data, which is determined by the equipment; (2) the performance of the CT reconstruction, which is affected by the path configuration, the underlying distribution, and the reconstruction algorithm; and (3) the accuracy of the LS dispersion model, which is affected by the terrain and atmospheric conditions. Among these factors, the equipment performance is fixed. The performance of the MC algorithm has been evaluated in [39]; in this study, it is also affected by the dispersion model because the outputs of the dispersion model determine the kernel matrix used in the reconstruction. Therefore, the performance of the LS dispersion model largely determines the overall performance of the hybrid method. However, the model performance is difficult to evaluate because of the varying site environments and continuously changing atmospheric conditions.
To analyze the overall uncertainty of the hybrid method, we studied the performance of the method over a long measurement period. The period was chosen such that the atmospheric conditions satisfy the assumptions of the LS dispersion model, and the wind data were filtered using the rule that the friction velocity must be larger than 0.15 m/s to ensure the data are valid [34]. As a result, the period for the Release One configuration lasts from 14:30 to 15:00, and the period for the Release Two configuration lasts from 15:36 to 16:10. The wind speeds and wind directions for the Release One and Release Two periods are shown in Figure 9.
In the Release One period, the mean and standard deviation of the wind speed are 2.41 m/s and 1.07 m/s, and the mean and standard deviation of the wind direction are 282.02° and 44.18°, respectively. In the Release Two period, the mean and standard deviation of the wind speed are 2.51 m/s and 1.08 m/s, and the mean and standard deviation of the wind direction are 293.42° and 30.12°, respectively. The wind direction variation is larger in Release One than in Release Two.
The emission rate is calculated on the basis of 15 min averaged data. Multiple emission rates are calculated by using a moving-window averaging method with a window length of 15 min and a step of 2 min. The calculated mean emission rate of the Release One period is 0.198 g/s, with a standard deviation of 0.039 g/s; the emission rate errors range from −18.5% to 47.0%, with a mean error of 4.2%. The calculated mean emission rate of the Release Two period is 0.215 g/s, with a standard deviation of 0.021 g/s; the emission rate errors range from −6.9% to 28.3%, with a mean error of 7.5%. The mean errors are thus less than 10% in both release periods. Although the mean error is smaller in the Release One period than in the Release Two period, it has a larger variation. From Figure 9b, we can see that this large variation is caused mainly by the large variation of the wind direction. To conclude, the hybrid method shows good long-term performance even when the atmospheric conditions are not ideal.
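A small sketch of this moving-window scheme is shown below, assuming the per-window retrieval (the dispersion-coupled CT reconstruction) is available as a callable; the function names and the dummy retrieval are illustrative only.

```python
import numpy as np

WINDOW_S, STEP_S = 15 * 60, 2 * 60   # 15-min window advanced in 2-min steps

def moving_window_rates(t, pics, winds, retrieve):
    """Apply a per-window retrieval over a sliding 15-min window.

    t        : sample times in seconds (1-D array)
    pics     : PIC samples aligned with t (1-D, or paths x time)
    winds    : wind/met samples aligned with t
    retrieve : callable(pic_window, wind_window) -> emission rate (g/s)
    """
    rates, start = [], t[0]
    while start + WINDOW_S <= t[-1]:
        sel = (t >= start) & (t < start + WINDOW_S)
        rates.append(retrieve(pics[..., sel], winds[..., sel]))
        start += STEP_S
    return np.array(rates)

# Illustrative use with synthetic 10-s samples and a dummy retrieval.
t = np.arange(0, 35 * 60, 10.0)
pics = 20.0 + np.random.default_rng(3).normal(0.0, 1.0, t.size)
winds = np.full(t.size, 2.5)
rates = moving_window_rates(t, pics, winds, lambda p, w: 0.01 * p.mean())
print(rates.mean(), rates.std())
```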
4. Conclusions
High spatial and temporal resolutions are required to accurately quantify emissions from fugitive area sources. The single-path inverse-dispersion method is a flexible way to quantify the emission rate of an area source; however, it cannot derive the emission distribution, and it requires the path to be located downwind with sufficient fetch and to pass through the main gas plume, which is affected by the wind direction. To improve the resolution and performance, this paper develops a new hybrid method based on multi-path ORS, inverse-dispersion modeling, and CT techniques to derive the emission map and emission rate of an area source.
The underlying techniques are important for the successful application of the new method. The OP-TDL ORS technique ensures the fast response of the measurement. The LS dispersion model plays a key role in accurately representing the air dispersion near the source. The use of a smooth tomographic algorithm ensures the accuracy of the CT reconstruction. The number of beams greatly affects the performance of the CT reconstruction. To reduce the cost and deployment complexity, the non-overlapping HRPM configuration with only one TDL analyzer was used. If mirrors are difficult to install inside the area source, an overlapping beam configuration with two or more analyzers can be used.
The hybrid method was evaluated through two controlled-release experiments of CH4, showing errors of only 2% and 3% relative to the real values. The results demonstrate the effectiveness and advantages of the method. Compared with the single-path approach, it can not only derive the source distribution but also obtain more accurate emission rates.
The method was tested on flat terrain. In practice, the site environment can be more complicated: the terrain may not be flat, and there may be obstacles such as buildings and trees. These complicated conditions mainly affect the applicability of the atmospheric dispersion model. One possible solution is to improve the dispersion model to support complex terrain. Furthermore, more advanced techniques, such as computational fluid dynamics (CFD), can be used to calculate the wind field at sites with obstacles. Finally, optimization techniques can also be used to tune the input parameters of the atmospheric dispersion model.