Large-Eddy Simulations with an Immersed Boundary Method: Pollutant Dispersion over Urban Terrain

In urban canopies, the variability of pollution may be influenced by the presence of surface heterogeneities such as orography and buildings. Using the Meso-NH model enhanced with an immersed boundary method (IBM) to accurately represent the impact of the 3D shape of buildings on the flow, large-eddy simulations are performed over the city of Toulouse (France) of the dispersion of a plume following a plant explosion on 21 September 2001. The event is characterized by a large quantity of nitrogen dioxide released in a vertical column after the explosion and quickly dispersed by a moderate wind prevailing in the lower atmospheric layers. Assuming a passive pollutant, the model develops a realistic plume dispersion. A sensitivity analysis of the plume spread to the advection scheme is presented. The limited population exposure to pollution produced by the model appears in good agreement with previous health studies. Beyond this case, the IBM is a promising way to represent flow interaction with buildings and orography in atmospheric models for urban applications.


Introduction
Atmospheric pollution is an important societal and ecological challenge. The expansion of urban areas and their associated anthropogenic activities raises many questions concerning the health of urban populations. Among anthropogenic emissions, industry constitutes an important source that can be intensified by accidental releases. Predicting their dispersion requires meteorological models that consider the physics of the boundary layer, such as urban heat islands and urban flows, with an accuracy high enough to include microscale urban effects due to buildings. The numerical approaches used in meteorological models are progressing, and modelling the air flow in large-eddy simulations (LES) over urban canopies is currently possible, as demonstrated by comparisons with experimental investigations for the mock urban setting test (MUST) [1,2], over Tokyo districts [3], or by LES of fog formation in an airport area [4]. Due to the structured nature of buildings and cities, the implementation of an immersed boundary method (IBM; [5]) is an interesting alternative to the unstructured grids classically used in computational fluid dynamics. It also complements other approaches representing obstacle effects, such as the drag-force parametrization used in LES [6]. One of the IBM techniques, the ghost-cell technique, was first developed in an ocean model by Tseng and Ferziger [7] to simulate a geophysical flow over a three-dimensional Gaussian bump. It was implemented in the WRF atmospheric model [8,9], showing promising results. This method can also be an alternative to represent complex terrain by reducing numerical limitations over steep slopes [10]. The IBM has been adapted to the anelastic condition by [11] in the Meso-NH model [12] and validated with analytical cases and the MUST field campaign. Hereafter, the IBM adapted to the Meso-NH model is called MNH-IBM.
The objective of the present study is to show the potential of this method for urban applications, by applying MNH-IBM in real conditions, corresponding to a brief and intense pollutant episode caused by a plant explosion over a city. The accident corresponds to a major explosion at the Azote de France (AZF) fertilizer plant located in the suburbs of Toulouse (France) on 21 September 2001, causing the destruction of the entire industrial complex. The plume dispersion of nitrogen dioxide (NO2) generated just after the AZF explosion, once the emission cloud formed and reached its maximum altitude, has received considerable attention. Despite a lack of usable data defining the source term and measured concentrations during the dispersion, this study aims to simulate this dispersion realistically with MNH-IBM, considering the NO2 as a passive tracer due to the very short dispersion time involved. Population exposure maps are drawn and compared to previous health impact studies. The study does not attempt an ensemble approach, but acts as a demonstrator to show the potential of the IBM over urban areas.
The study is organized as follows: The MNH-IBM code is presented in Section 2 and the AZF case in Section 3. The adaptations of MNH-IBM to the physical AZF case are described in Section 4. Section 5 deals with the dynamical modelling results, including the sensitivity analysis to the wind advection scheme and the inlet turbulence. Based on an estimated value of the initial pollutant concentration following the explosion, population exposure maps are produced and conclusions considering different health thresholds are drawn in Section 6. In addition, the appendices describe how the districts of a city and their associated buildings are embedded in the numerical model.

The Immersed Boundary Method in Meso-NH
Meso-NH [12,13] is a mesoscale meteorological research model initially developed by the Centre National de Recherches Météorologiques and the Laboratoire d'Aérologie (LA, CNRS-UPS), resolving the unsteady 3D Euler equations under the anelastic hypothesis. It is a fast and highly parallel code able to run on most high-performance computing platforms. Several parameterisations are available for radiation, turbulence, convection, microphysics, chemistry, and aerosols. Except for the resolved buildings, the influence of the ground properties is modelled using the SURFEX externalized surface scheme [14], including land, town, sea, and inland water schemes.
The IBM accounts for the fluid-solid interactions and the spatially resolved buildings [11]. Within Meso-NH, customized Cartesian grid methods have been developed for finite-volume discretization, e.g., the cut-cell technique (CCT) [15], and for finite-difference discretization, e.g., the ghost-cell technique (GCT) [7]. These methods introduce forcing in the Meso-NH conservation equations in the vicinity of embedded solid surfaces such as buildings. Both GCT and CCT are used in MNH-IBM. GCT corrects explicit-in-time schemes (e.g., advection and diffusion) and computes the prognostic variables (velocity, temperature, and subgrid turbulence kinetic energy) in the embedded solid volume to satisfy boundary conditions such as Dirichlet, Neumann, or wall laws [16]. To solve the pressure problem [17], incompressibility is ensured via a CCT alteration of the left-hand side of the Poisson equation and via an adaptation of the residual conjugate gradient iterative algorithm. The subgrid turbulence scheme [18] is modified at the mixing and dissipative scales near an immersed fluid-solid interface (the von Kármán limitation). [11] validated the IBM on academic cases and an idealized urban-like environment. For further information on the reference version of MNH-IBM, readers are referred to this paper.
The fluid-solid interface is defined by a continuous level-set function (LSF) φ [19]. The LSF models a non-moving interface and is not time-dependent. |φ(x, y, z)| gives the minimum distance to the interface, and its sign differentiates the virtual embedded solid volume from the fluid volume. In [11], the LSF was generated for geometric objects whose surfaces were analytically known. The recovery of the LSF in the present study is based on a discrete function describing a real city (i.e., the heights and locations of buildings) and introduces additional errors. The discrete and complicated characteristics of the topographic database represent a new development in MNH-IBM. The associated work is split into three parts: A method to convert real field data into an LSF (Appendix A), a technique to smooth the LSF discretization errors (Appendix B), and several tests and validations of this new implementation using idealized obstacles such as cylinders or cubes (Appendix C).
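As an illustration of this conversion, a crude LSF can be derived from a discrete building-height map by combining vertical and horizontal distances to the fluid-solid interface. The sketch below is purely illustrative (hypothetical 4 × 4 grid, brute-force distance search, sign convention φ > 0 in the fluid); the actual MNH-IBM procedure is described in the Appendices.

```python
# Minimal sketch (not the Meso-NH code): build a crude level-set function
# phi from a discrete building-height map. Assumed convention here:
# phi > 0 in the fluid, phi < 0 inside the solid, |phi| ~ distance to
# the nearest fluid-solid interface.
import math

def level_set(heights, dx, z_levels):
    """heights[j][i]: roof height per column (0 = no building)."""
    ny, nx = len(heights), len(heights[0])
    phi = [[[0.0] * nx for _ in range(ny)] for _ in z_levels]
    for k, z in enumerate(z_levels):
        for j in range(ny):
            for i in range(nx):
                inside = z < heights[j][i]
                # Vertical distance to the roof of this column.
                d_roof = abs(z - heights[j][i])
                # Horizontal distance to the nearest column of the opposite
                # state (brute force; a distance transform would be used in
                # practice).
                d_hor = math.inf
                for jj in range(ny):
                    for ii in range(nx):
                        if (z < heights[jj][ii]) != inside:
                            d = math.hypot((ii - i) * dx, (jj - j) * dx)
                            d_hor = min(d_hor, d)
                dist = min(d_roof, d_hor)
                phi[k][j][i] = -dist if inside else dist
    return phi

# Tiny example: one 10 m building block in the corner of a 4 x 4 map.
H = [[10.0, 10.0, 0.0, 0.0],
     [10.0, 10.0, 0.0, 0.0],
     [0.0, 0.0, 0.0, 0.0],
     [0.0, 0.0, 0.0, 0.0]]
phi = level_set(H, dx=2.0, z_levels=[1.0, 5.0, 11.0])
```

In practice, the brute-force search would be replaced by a distance transform, and the smoothing step of Appendix B would be applied afterwards.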

The AZF Case
On 21 September 2001, several dozen tons of ammonium nitrate granulates exploded at the AZF/Grande Paroisse plant, located in the southern part of Toulouse (Figure 1a). The blast originated in hangar 221, where the ammonium nitrate granulates were stored; the hangar was destroyed by the explosion. The official assessments of the damage were significant: The explosion caused 31 deaths and more than 2000 injured. The highest number of casualties occurred at the plant and in the vicinity of the factory due to the detonation. Most of the casualties were individuals living or working within a few kilometres of the plant. Hearing problems and psychological distress were other human health consequences associated with the tragedy. The blast had a magnitude of 3.4 on the Richter scale, and the explosion was heard 80 km away [20]. Around the plant, about 700 km², including swimming pools, gymnasiums, concert halls, and a high school, were destroyed. The blast scene was characterized by a crater with a diameter of approximately 100 m. An analysis of the explosion damage (e.g., seismic activity, structural damage, and broken windows) pointed to a detonation comparable to 20-40 tons of trinitrotoluene (TNT). From this, the quantity of detonated ammonium nitrate was estimated to be 40-80 tons [21]. The plume created by the explosion contained several chemical components, some of which are considered toxic, with concentrations that exceeded exposure thresholds. The official reports [20] provide a non-exhaustive list of components present in the plume, including ammonia (NH3), nitric acid (HNO3), nitrogen dioxide (NO2), nitrous oxide (N2O), chlorine (Cl2), asbestos particles, nitrates, and nitrites (NOx). The impact on humans due to the presence of such chemical elements in the air after the explosion was estimated to be low, considering the observations of medical doctors and the data collected by the few available measurement instruments.
However, Cassadou et al. [20] mentioned several transient episodes of respiratory difficulties, and eye irritation experienced by persons living near the blast site. These symptoms were associated with the presence of NO 2 in the plume as characterized by its orange colour [22,23].
The explosion occurred around 0815 UTC at (43.5672 N, 1.42743 E). The meteorological conditions were a clear sky and a moderate wind blowing from the south-east near the ground, progressively veering to the south a few hundred metres above the ground. This wind veering induced corrosive deposition on aircraft in the northern part of Toulouse due to the gravitational settling of the heaviest particles injected into the atmosphere. Several districts of the city were affected by the toxic plume. The presence of NO2 was confirmed via concentration peaks measured at several instrumented sites of the Observatoire Régional de l'Air en Midi-Pyrénées (ORAMIP, http://oramip.atmo-midipyrenees.org). In particular, the ORAMIP station located at the Maurice Jacquier primary school (approximately 1 km north-west of the plant) miraculously resisted the shock wave and detected a strong increase in the NO2 concentration, with a maximum value approaching 0.4 mg m−3 (∼200 ppb) within 15 min after the explosion. Within 1 h after the detonation, the furthest station, in Colomiers (a city in the plume path, approximately 10 km from the plant), observed a NO2 concentration of 0.06 mg m−3 (∼30 ppb).

Presentation of the AZF Case Simulation
The local orography. Figure 1a provides an aerial view of the main districts affected by the fumes (bounded by a grey contour) and the factory (black circle). The elevation of the land at the boundary ranges from 140 m to 160 m. The 'Pech-David' hills are located upstream of the factory, east of the Garonne River. The characteristic height of the hills is h_h = [90:120] m greater than that of the surrounding areas (Figure 1b). Apart from these hills upstream of the AZF plant, the non-built urban area is nearly flat.
The city. The geometric information concerning the studied districts was provided by the IGN-BDTOPO database (IGN-BDTOPO: http://professionnels.ign.fr/bdtopo) and is illustrated in Figure 2a. This database provides the roof heights of the urban structures. The vectors normal to the roofs (resp. walls) are considered to be vertical (resp. horizontal). About 2700 buildings are documented in the database. Most of the buildings have a height of 5-10 m (Figure 2a), buildings 10-20 m high are common, while the highest and sparsest buildings have heights between 30 m and 60 m.

The MNH-IBM grid. The modelling area covers 5 × 3.2 km² (Figure 1). We use a Cartesian grid, isotropic for z lower than 100 m, with a spatial resolution ∆x = ∆y = ∆z = 2 m. Above 100 m, the mesh size increases gradually in the vertical direction with a stretching factor of 1.05. The flow features (e.g., horseshoe vortices, flow separations/reattachments) around buildings lower than 15 m could be considered vertically under-resolved at this resolution, but the objective here is to simulate the overall impact of the entire city and the friction effect, not the specific flow in each canyon. The ceiling of the numerical domain is 0.8 km. The effect of the upstream hills, in addition to the buildings, is modelled by the IBM, as the terrain is flat in the coordinate system. Note that the IBM adaptation to terrain-following coordinates will constitute future work. The discrete and complicated characteristics of the topographic database require a specific methodology to build a continuous interface (see the Appendices). In the present study, the chosen computational grid dictates that the smaller houses with heights less than 5 m are not modelled and that the narrowest streets are merged with the surrounding buildings. Figure 2b illustrates the generated LSF related to the fluid-solid interface.
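The vertical grid described above can be sketched as follows (an assumed construction: uniform 2 m spacing up to 100 m, then each spacing multiplied by 1.05 until the 0.8 km ceiling is reached; the exact Meso-NH grid generation may differ):

```python
# Sketch of the vertical grid: isotropic 2 m spacing below z_iso = 100 m,
# then a geometric stretching of 1.05 up to the 800 m domain top.
def vertical_levels(dz0=2.0, z_iso=100.0, stretch=1.05, z_top=800.0):
    z, dz, levels = 0.0, dz0, [0.0]
    while z < z_top:
        if z >= z_iso:          # above the isotropic layer, stretch the mesh
            dz *= stretch
        z += dz
        levels.append(z)
    return levels

levels = vertical_levels()
```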
The atmospheric initial conditions. To generate the large-scale meteorological conditions on 21 September 2001 for the LES, a first mesoscale run is conducted with Meso-NH, without IBM, over the 2 h 15 min prior to the explosion with interactive nested grids (three coarse resolutions: 2.5 km, 500 m, and 100 m), forced by the ECMWF operational analyses. Then, the LES is run in a single domain with MNH-IBM over a short period (20 min), allowing the large-scale fields to be assumed steady, using the 100 m resolution fields of the nested run at 0815 UTC as initial fields. The morning conditions were sunny and cloud-free, with a water vapour mixing ratio of about 10−2 kg kg−1. The vertical initial profiles of the LES produced by the preliminary run at the AZF location are shown in Figure 3. The first layer above the ground presents a quasi-constant potential temperature and is capped by an inversion and the residual boundary layer, resulting in a boundary layer (BL) height of approximately 200 m. The radiative heating at the ground, which forces non-zero values of the surface sensible heat flux, is still low at the time of detonation, limiting the convective mixing. In this context, the MNH-IBM LES is assumed to be dry under a near-neutral stability condition.
A south-east wind dominates near the ground (Figure 3b). Up to the top of the BL, the wind rapidly increases, approaching a mean horizontal velocity of 8 m s−1. The wind direction turns towards the south at z ∼ 400 m with a ∼4 m s−1 speed. The zonal component in the free atmosphere dominates the meridional component (not shown here). This meteorological situation is frequently observed in Toulouse. The associated current at the lowest altitudes is called the Autan wind and often induces moderate to severe wind speeds.

The incoming turbulence. Since the atmospheric turbulence is not contained in the steady large-scale forcing, the near-neutral stability condition implies inlet turbulence in the BL governed by the surface friction. This is particularly true if large obstacles exist upstream of the studied zone. In the present case, the districts of Toulouse affected by the emissions are in the wake of a hilly region (the height of these hills, h_h, is one order of magnitude greater than that of most of the buildings). The factory is located 10 h_h from the hills. At the industrial plant location, the thickness of the turbulent layer is estimated to be 4-5 h_h (the turbulent intensity strongly decreases above this height after the spin-up time). Therefore, it is realistic to assume that the turbulent layer downstream of the hills is predominantly governed by their presence. For these reasons, and without IGN-BDTOPO information upstream of the study area, the hills (with a surface idealized by an elliptic shape approaching the real shape) and the upstream buildings (surface idealized by a cosine function with a ∼10 m−1 wave number) are modelled by the IBM to generate realistic incoming turbulence. This is performed during a preliminary LES with IBM and without release, to establish a turbulent state. During this run, the enstrophy, spatially integrated over the studied zone, increased up to a quasi-stationary value corresponding to a saturation state.
This characteristic saturation time was about 15 min. The four minutes following this saturation state provided four initial turbulent states, called the R1 to R4 initial conditions, representing the chaotic nature of the atmospheric turbulence.
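The saturation diagnostic mentioned above can be illustrated by a short sketch of the spatially integrated enstrophy E = ∫|∇ × u|² dV (illustrative only; the array ordering (z, y, x) and the isotropic spacing are assumptions, not the Meso-NH internals):

```python
# Sketch of the spin-up diagnostic: spatially integrated enstrophy of the
# resolved velocity field, monitored in time until it reaches a
# quasi-stationary (saturated) value.
import numpy as np

def integrated_enstrophy(u, v, w, d):
    """u, v, w: 3D velocity arrays ordered (z, y, x) on a grid of spacing d."""
    # Components of the vorticity omega = curl(u).
    dwdy, dvdz = np.gradient(w, d, axis=1), np.gradient(v, d, axis=0)
    dudz, dwdx = np.gradient(u, d, axis=0), np.gradient(w, d, axis=2)
    dvdx, dudy = np.gradient(v, d, axis=2), np.gradient(u, d, axis=1)
    om_x, om_y, om_z = dwdy - dvdz, dudz - dwdx, dvdx - dudy
    # Volume integral of |omega|^2 over the studied zone.
    return float(np.sum(om_x**2 + om_y**2 + om_z**2) * d**3)
```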
The numerical modelling. Far from the immersed interface (i.e., at grid points farther than one grid mesh from the interface in the fluid region), the wind advection scheme is either the fifth-order weighted essentially non-oscillatory (WENO) scheme associated with a third-order (five-stage) explicit Runge-Kutta (ERK) scheme [24], or the fourth-order centred scheme associated with a fourth-order ERK scheme and a numerical diffusion. Near the immersed interface (i.e., at grid points closer than one grid mesh to the interface), the wind transport scheme is either the third-order WENO scheme or the second-order centred scheme. Hence, the association of both schemes (far/near) presents a hybrid feature [11]. The subgrid turbulence kinetic energy (sTKE), potential temperature, and tracer are transported using a piecewise parabolic method (PPM) [25] associated with a forward-in-time algorithm. Large-scale fields (Figure 3) are imposed everywhere at the initialization. The top, bottom, and immersed interfaces are considered to be non-permeable. A homogeneous Neumann condition is applied at the top of the numerical domain to the tangential part of the velocity field (free-slip) and to scalars. Open wave-radiation formulations [26] are employed at the lateral boundaries.
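The far/near hybrid strategy can be summarized by a small selection rule (a sketch of the logic only, not the Meso-NH implementation; the one-mesh criterion and the level-set sign convention follow the definitions above):

```python
# Sketch of the hybrid far/near scheme selection: points farther than one
# grid mesh from the immersed interface use the high-order scheme (W5 or
# C4); points closer use the low-order one (W3 or C2); points inside the
# solid are handled by the ghost cells.
def select_scheme(phi, dx, far="W5", near="W3"):
    """phi: level-set value at the grid point (> 0 in the fluid)."""
    if phi <= 0.0:
        return None              # solid region: ghost-cell treatment
    return far if phi > dx else near
```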
The friction coefficients for the ground between buildings are computed by SURFEX using a roughness length z 0 = 0.1 m, as SURFEX deals with the unresolved urban obstacles, while the 3D shapes of buildings are explicitly resolved by the IBM. Additionally, a specific wall law based on a centimetric roughness length models the friction at the immersed interfaces. The surface sensible and latent heat fluxes are set to zero.
The Courant number never exceeds unity, with a time step of less than 0.2 s. Most of the performed simulations cover the dispersion of the plume during the 15 min following the explosion. The simulations were executed on the OCCIGEN-CINES cluster on 2000 cores. The post-processing (e.g., spatial integration) is performed in the area delimited by the grey contour in Figure 1.
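The quoted time step is consistent with a simple CFL estimate (a sketch assuming a maximum wind speed of 10 m s−1, the value simulated around the buildings, and the 2 m grid):

```python
# CFL sketch: the largest stable time step for a Courant number of 1,
# given the grid spacing and the maximum expected wind speed.
def max_stable_dt(u_max, dx, courant_max=1.0):
    return courant_max * dx / u_max

dt = max_stable_dt(u_max=10.0, dx=2.0)   # largest admissible time step (s)
```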
The NO2 plume. Only limited information is available concerning the characteristics of the plume released after the explosion over the industrial site. Most people who observed its formation described the plume as an orange column with an O(10²) m diameter and an O(10³) m height. The order of magnitude of the diameter is consistent with the crater size. Cassadou et al. [20] mentioned similar plume scales in its initial stage. From these descriptions, the plume is initialized as a vertical cylinder with a radius of 100 m and a height of 800 m (the vertical size of the numerical domain) at the AZF location. The cylinder numerically presents a homogeneous NO2 concentration C_ini in its core (the initial concentration in the plume is experimentally unknown). The pollutant is considered a passive gas (no chemical transformation). Indeed, at the time of the explosion, the NO2 within the plume can be photodissociated on a time scale of about 15 min, resulting in NO production that eventually reacts with O3 to re-form NO2. Given the rather short duration of the simulations, the net NO production is rather limited and is neglected at this stage. We also assume a zero background concentration. These assumptions and their impact are discussed in Section 6.
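The initial tracer field can be sketched as follows (hypothetical grid sizes and arbitrary concentration units; cell centres inside the 100 m radius receive C_ini):

```python
# Sketch of the plume initialization: a uniform concentration c_ini inside
# a vertical cylinder of radius 100 m spanning the whole domain depth,
# zero elsewhere.
import numpy as np

def init_plume(nx, ny, nz, dx, x0, y0, radius, c_ini):
    # Cell-centre coordinates on an isotropic horizontal grid of spacing dx.
    x = (np.arange(nx) + 0.5) * dx
    y = (np.arange(ny) + 0.5) * dx
    yy, xx = np.meshgrid(y, x, indexing="ij")
    inside = (xx - x0) ** 2 + (yy - y0) ** 2 <= radius ** 2
    c = np.zeros((nz, ny, nx))
    c[:, inside] = c_ini        # same horizontal mask at every level
    return c

c = init_plume(nx=200, ny=200, nz=10, dx=2.0, x0=200.0, y0=200.0,
               radius=100.0, c_ini=1.0)
```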
The measurement stations. The experimental data available to characterize the evolution of the pollutant plume are also limited. Only a few measurements from ORAMIP are available; they are discussed in Barthélémy et al. [21] and Cassadou et al. [20]. Four stations equipped with nitrogen dioxide passive samplers were within the plume path or in its vicinity. Three stations are located in Toulouse: the 'Maurice Jacquier' school (43.5756 N, 1.41843 E), referred to here as 'Jacquier', 'Pargaminières Street', and 'Metz Street'. The Jacquier Station is located 1.5 km north-west of the industrial site (the black cross in Figure 1). The two other urban stations are located in the town centre, 4 km north of the plant, and do not appear in Figure 1. The last station is located in Colomiers, 10 km north-west of Toulouse. Between 0100 and 0400 UTC on the day of the explosion, the four stations showed a usual nocturnal urban pollution, with a NO2 concentration between 10 µg·m−3 and 60 µg·m−3. After 0400 UTC, the NO2 concentration increased ([20:100] µg·m−3) due to the urban traffic during this period.
At 0815 UTC, the Jacquier Station measured a normal NO2 concentration of [40:60] µg·m−3, and the value climbed to 388 µg·m−3 at 0830 UTC. None of the other stations recorded such significant peaks. In this paper, results are first discussed in terms of the ratio to the initial concentration (Section 5). An estimation of the population exposure is then conducted assuming an initial concentration (Section 6).

Simulations of the AZF Case
As previously mentioned, the mesoscale nested domains were run with time evolution, and the conditions at 0815 UTC were then used to run the LES with MNH-IBM under stationary conditions during 20 min, with four initial turbulent conditions, noted R1 to R4. The instantaneous release of the accident at 0815 UTC is introduced in these LES to study the plume with different numerical schemes. The advection schemes are indeed a key component of LES, as most of the eddies are resolved. Furthermore, in pollution studies, the implicit numerical diffusion impacts the spatial spread of the plume. Therefore, LESs are conducted using either the WENO (Wn) or centered (Cn) wind advection scheme, where n is the spatial order. Different combinations of numerical schemes can be used far from and near the immersed interface. Three of them have been tested: W5/W3, C4/C2, and C4/W3 (far/near).
To physically simulate such an accident in a wind tunnel, a number of instantaneous releases (typically 100) would be performed and then averaged to provide reliable statistics. Here, no such ensemble averaging has been constructed, due to the computational cost: Depending on the numerical scheme, between 50,000 and 250,000 h of computation are necessary for each LES. Therefore, a dozen stationary LESs were performed, considering the different initial turbulent states and numerical schemes.
The characteristic height commonly used in air quality studies is 2 m, corresponding to a human size. In a city, this value gives information near the ground but does not represent the building scale (h_b = 10 m is the characteristic height for the Toulouse case). To describe the surface layer near the ground, in the vicinity of the buildings, a vertical Gaussian integration f̃(x, y) of a field f is defined such that:

f̃(x, y) = ∫ f(x, y, z) g(x, y, z) dz / ∫ g(x, y, z) dz, (1)

where g(x, y, z) = (1 − P(x, y, z)) e^−(z/h_b)², and P is a building presence function (P = 1 in a solid region and P = 0 elsewhere). Additionally, f̃ = 0 if P(z < h_b) = 1. Equation (1) weights f predominantly via f(z < h_b), and especially via f(z → 0), as the levels near the ground constitute the area of interest. Table 1 presents the adopted symbol convention. Section 5.1 discusses the sensitivity of these dynamical tests, and Section 5.2 focuses on the dispersive effects.
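A discrete version of this weighting can be sketched column by column (an illustrative implementation assuming uniform vertical spacing, so the plain weighted sum cancels the spacing in the ratio; the quadrature actually used in the post-processing is not specified here):

```python
# Sketch of the Gaussian vertical weighting of Equation (1):
# weighted average of f over z with g = (1 - P) * exp(-(z / h_b)^2),
# returning 0 where the column is entirely solid below h_b.
import math

def gaussian_column_average(f_col, p_col, z_col, h_b=10.0):
    """f_col, p_col, z_col: values of f, P and z along one vertical column."""
    num = den = 0.0
    for f, p, z in zip(f_col, p_col, z_col):
        g = (1.0 - p) * math.exp(-((z / h_b) ** 2))
        num += f * g
        den += g
    return num / den if den > 0.0 else 0.0
```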

Sensitivity to Different Numerical Schemes
The 15-min averages of the Gaussian integration of the resolved wind, wind fluctuations, and subgrid turbulence kinetic energy (sTKE) are presented in Figure 4. The left (resp. right) part shows the results of the C4/C2 (resp. W5/W3) advection scheme. The time-averaged resolved wind (Figure 4a,b) shows values close to 5 m s −1 in the areas with low building density, such as near the origin of detonation where the destroyed AZF buildings are not taken into account. The wind in the areas with high building density is often less than 2 m s −1 . Some wind accelerations around buildings and in streets are sparsely detected and are associated with a maximum velocity of approximately 10 m s −1 . The C4/C2 results are in good agreement with the W5/W3 ones. However, the deceleration of the wind due to the effective urban roughness is enhanced by the WENO scheme. Time-averaged wind fluctuations (Figure 4c,d) appear to be sensitive to the advection scheme in the regions of flow acceleration. The mean value of the wind fluctuations is about 1 m s −1 . Some differences between the advection schemes are found in the sTKE results (Figure 4e,f). The production of sTKE with W5/W3 is much lower than that with C4/C2. Several regions of large sTKE values produced by C4/C2 disappear with W5/W3. This means that the implicit diffusion of the WENO scheme limits the explicit diffusion produced by the turbulence scheme.
The Discrete Fourier Transform (DFT) is applied to the zonal and vertical resolved wind components in the BL (Figure 5a,b) and above (Figure 5c,d) to obtain the energy spectra. The hybrid scheme combinations are plotted in each graph (colour code), and a line type is associated with each Rm run. The sensitivity study is therefore based on a dozen simulations. Except at the smallest scales, the spectral density values are similar between the horizontal and vertical wind components, indicating a near-isotropic turbulent state. All the schemes match the −5/3 slope well at wavelengths larger than the effective resolution (i.e., the scale at which the model departs from the −5/3 slope, according to Skamarock [27]). But the sensitivity to the numerical scheme acting far from the immersed interface exhibits significant differences in terms of effective resolution: C4 presents an effective resolution around 4∆x, in agreement with previous studies [12,28], while it is coarser with W5, around 10∆x (even coarser than in Lunet et al. [24]), due to the implicit diffusion. Concerning the scheme used near the immersed interface, C4/C2 and C4/W3 are very similar, except in the BL (Figure 5-top) and near the cut-cell scale, where an energy increase can be found with C2. This increase is related to an energy accumulation generated at the immersed interfaces, due to too low a numerical diffusion. A higher numerical diffusion would be more appropriate to filter the 2∆x numerical mode. Nevertheless, this increase remains weak and does not spread to the largest scales. In the same way, a similar energy increase is observed above the BL due to too low an artificial diffusion associated with C4. The amplitude of the high wave number modes above the BL is two orders of magnitude weaker than in the BL, showing more energy in the BL due to the resolved eddies generated by the canopy.
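The spectral diagnostic can be sketched as follows (an assumed form: a 1D FFT along x averaged over the other directions, with an illustrative normalization; the exact DFT processing of the paper is not detailed):

```python
# Sketch of the energy-spectrum diagnostic: 1D spectrum of a wind
# component along x, averaged over y and z. The effective resolution is
# then read as the scale where the spectrum departs from the -5/3 slope.
import numpy as np

def energy_spectrum(u, dx):
    """u: 3D array (z, y, x); returns wavenumbers k and mean spectral density."""
    nx = u.shape[-1]
    uh = np.fft.rfft(u, axis=-1)
    # Spectral density averaged over the y and z directions
    # (illustrative normalization).
    e = np.mean(np.abs(uh) ** 2, axis=(0, 1)) * dx / nx
    k = np.fft.rfftfreq(nx, d=dx) * 2.0 * np.pi
    return k, e
```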
The energy spectra have shown a strong sensitivity to the wind transport scheme used far from the fluid-solid interface in terms of effective resolution, which is coarser with W5 than with C4. Considering the scheme used in the vicinity of the obstacles, this sensitivity is strongly reduced. However, the WENO schemes have been developed to better represent the solution in the presence of high gradients, as in complex shock-obstacle interactions with IBM [29]. Therefore, both the WENO and centred schemes will be considered in the following study. The dispersion of the plume is illustrated in Figure 6 at t = (2; 4; 6) min. The turbulent intensity near the ground is large (Figure 5) due to the nature of the incoming flow, the friction, the flow around the buildings, and the vortex shedding in the building wakes. As a consequence, the maximum value of the scalar concentration rapidly decreases and the plume spreads. Figure 6-bottom shows the persistence of around 10% of the initial concentration 6 min after the explosion (i.e., the emission).

Nitrogen Dioxide Dispersion
At this time, the areas affected by the plume extend at least 2 (resp. 0.5) km in the direction parallel (resp. perpendicular) to the plume track. In order to quantify the amount of pollutant reaching the Jacquier Station normalized by the initial concentration, Figure 7a shows the temporal evolution of the dimensionless concentration ratio, C_jaq/C_ini, computed at this location for different model integrations. Three minutes after the beginning of the release, all simulations detect a plume occurrence at Jacquier. The [3:5] min period systematically exhibits a concentration peak. The range of the maximum simulated values at the Jacquier Station, C_jaq^max, is [0.1:0.5] C_ini. The difference between these two concentrations highlights the plume dispersion and/or the distance of the station from the moving plume core. The simulated concentration falls below 10% of the initial concentration after 5 min and continues to decrease thereafter. Figure 7b shows the (C_jaq; C_jaq^max) diagram non-dimensionalized by C_ini, where the time-averaged C_jaq is in the range [0.015:0.030] C_ini depending on the run. C_jaq^max using W5 is lower than the maximum value obtained using C4 due to the diffusive behaviour of the WENO scheme. But the time-averaged value and the global tracer amount transported in the vicinity of the Jacquier Station are weakly dependent on the advection scheme type.
Two parameters affect the health impact of the inhaled NO2 pollutant: The concentration value and the exposure time [20,23]. The local exposure time to a concentration higher than selected values is computed during the simulation. Four risk maps (for the C4/C2-R3 simulation) are illustrated in Figure 8, showing the exposure times to concentrations exceeding 20%, 10%, 4%, and 2% of the initial concentration in panels (a), (b), (c), and (d), respectively. The exposure time to C_lev/C_ini higher than 20% never exceeds 1 min and is confined to the vicinity of the AZF plant. The exposure times for the highest C_lev/C_ini values are slightly lower with W5 far from the immersed interface than with C4, and the width of the plume is larger by ∼10% for C_lev/C_ini equal to 0.01 at distances between the AZF plant and the Jacquier Station.
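The exposure-time maps can be reproduced in principle by accumulating, at each model time step, the time during which the near-ground concentration exceeds a threshold (a sketch with hypothetical field names):

```python
# Sketch of the exposure-time diagnostic: for each surface cell, add dt to
# the accumulated exposure whenever the near-ground concentration exceeds
# the threshold level c_lev. Called once per model time step.
import numpy as np

def accumulate_exposure(exposure, c_surface, c_lev, dt):
    """exposure, c_surface: 2D arrays (y, x); dt: model time step (s)."""
    exposure += np.where(c_surface > c_lev, dt, 0.0)
    return exposure
```

Repeating this over a simulation and contouring the result per threshold yields one risk map per concentration level.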
This study confirms that the wind advection scheme used far from the immersed interface with the IBM impacts the spread of the plumes, in agreement with the energy spectra. Conversely, the scheme used near the immersed boundary has almost no impact on the spectral energy distribution (the deviation between W3 and C2 is only observed at the small scales), and WENO schemes can be recommended to model the obstacle interactions.

Discussion on Population Exposure
The simulations make it possible to explore the population exposure in the AZF case. This first requires an estimation of the NO2 concentration at the initial state, considering different assumptions.

The Initial Plume Structure
As described previously, the NO2 concentration at the initial state of the simulation is assumed to be uniform in a cylinder with a 100 m radius and a height of 800 m. This simple assumption directly impacts the evolution of the pollutant distribution and requires some examination. Indeed, the rare pictures of the plume shortly after the explosion (Internet sources) show a relatively homogeneous ochre colour due to the presence of NO2. However, observers of the plume just after the explosion reported an increase in the colour intensity with altitude. This potential NO2 stratification could be related to a temperature increase near the ground at the time of detonation. The associated buoyancy effects could have induced a convective ascent transporting most of the NO2 to higher altitudes and contributing to variations in the opacity.
Nevertheless, retaining the assumption of a uniform distribution, three methods can be proposed to estimate the initial concentration as follows: • Barthélémy et al. [21] mentioned an explosion of O(10^5-10^6) kg of ammonium nitrate. The maximum produced NO2 mass can be evaluated by assuming that oxidation was complete and that all the nitrogen atoms formed nitrogen dioxide. Assuming [4:8] dozen tons of detonated ammonium nitrate [20], a plume with a volume of V_0 = 5×10^7 m^3 and a uniform NO2 distribution results in a mean concentration approaching [1:2] g m^-3.

• Considering the ORAMIP measurement at the Jacquier Station, the passive sampler used provides a NO2 value averaged over 15 min, which gives a dose estimate. Knowing the experimental value of C_jaq^exp = 350 µg m^-3 (the difference between the concentrations observed at Jacquier at 0830 and 0815 UTC), considering the plume dilution to be well modelled by MNH-IBM, and following the numerical results for C_jaq (Figure 7b), the presumed initial NO2 concentration is estimated in the range [10:30] mg m^-3, with a most likely value of C_ini ∼ 20 mg m^-3 ∼ 10 ppm (ratio defined by the mole fraction). The range of the NO2 estimate obtained with complete oxidation is inconsistent with the other ones. While the assumption of a uniform vertical distribution is a substantial simplification, C_ini ∼ 20 mg m^-3 ∼ 10 ppm seems realistic near the ground, and this value will therefore be used in the following.
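The mass-balance estimate of the first method can be checked with simple stoichiometry. The sketch below assumes complete oxidation of ammonium nitrate (NH4NO3, 80 g mol^-1, two nitrogen atoms) with every nitrogen atom ending up as NO2 (46 g mol^-1), diluted in V_0 = 5×10^7 m^3; "[4:8] dozen tons" is read here as 48 to 96 tonnes.

```python
# Back-of-the-envelope check of the complete-oxidation estimate.
M_AN, M_NO2 = 80.0, 46.0   # molar masses of NH4NO3 and NO2, g/mol
V0 = 5e7                   # plume volume, m^3

def mean_conc(m_an_kg):
    """Mean NO2 concentration in g/m^3 for a given detonated AN mass."""
    m_no2 = m_an_kg * 2 * M_NO2 / M_AN   # 2 mol NO2 per mol NH4NO3
    return m_no2 / V0 * 1e3              # kg/m^3 -> g/m^3

c_low, c_high = mean_conc(48e3), mean_conc(96e3)
print(f"[{c_low:.1f}:{c_high:.1f}] g/m3")   # -> [1.1:2.2] g/m3
```

The result reproduces the [1:2] g m^-3 range quoted in the text.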

The Population Exposure
Considering this initial concentration, C_jaq^max is estimated to be in the [2:10] mg m^-3 range (Figure 7a). The risk maps (Figure 8) indicate that about 0.02 km^2 was exposed to a concentration of 4 mg m^-3 during 1-2 min, and 0.2 km^2 to 0.4 mg m^-3 during 5-10 min. A near-linear dependence appears between the area covered by the C_lev/C_ini concentration and (C_lev/C_ini)^-1 (not shown here).
Considering the central axis of the plume, the evolution of the simulated pollutant concentration depending on the exposure time is shown in Figure 9. The perception (0.2 ppm) and reversibility (5 ppm) thresholds proposed by Tissot and Lafon [23] are also given, while the irreversibility threshold of 50 ppm is out of the range of simulated values. The perception threshold corresponds to the concentration leading to olfactory sensory detection. The reversibility one indicates when reversible effects (e.g., respiratory problems, nasal irritations, headaches, and asthenia) are encountered by the exposed population. In the immediate neighbourhood of the plant (a few hundred metres), the population was exposed to reversible effects (Figure 8a). Beyond that distance, the exposure is limited to an olfactory sensation (Figure 8d). In any case, no one should have experienced irreversible effects. These results are in agreement with the observations of doctors and firemen [21] and support the choice of C_ini near the ground. To give an example of further investigation, the concentration level as a function of the exposure time can be fitted by the law C_lev ∼ C_ini e^{-(1/2) t*^{2/3}}, where the dimensionless time is t* = u_p t / h_p, with u_p = 5 m s^-1 the characteristic horizontal plume velocity and h_p = 100 m the horizontal scale of the plume (Figure 9). This formula gives an insight: When the velocity decreases, the diameter increases and the exposure time is longer.
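The fitted decay law can be inverted to estimate how long the plume axis stays above a given threshold. This is a sketch under the text's values (u_p = 5 m s^-1, h_p = 100 m, C_ini ∼ 10 ppm); the function name is illustrative.

```python
import math

# Invert C_lev ~ C_ini * exp(-(1/2) * t*^(2/3)), with t* = u_p * t / h_p.
def time_above(c_lev_ppm, c_ini_ppm=10.0, u_p=5.0, h_p=100.0):
    """Time (s) before the axis concentration drops below c_lev_ppm."""
    r = c_lev_ppm / c_ini_ppm
    t_star = (-2.0 * math.log(r)) ** 1.5   # dimensionless time
    return t_star * h_p / u_p              # back to seconds

print(time_above(5.0))   # reversibility threshold (5 ppm): ~0.5 min
print(time_above(0.2))   # perception threshold (0.2 ppm): ~7 min
```

With these numbers, reversible effects would only be felt during the first half-minute near the axis, while the odour persists for several minutes, consistent with the exposure discussion above.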

Conclusions and Perspectives
This study reported numerical simulations of a pollutant episode in an urban environment linked to an explosion at a plant producing ammonium compounds. The selected case referred to the nitrogen dioxide emissions dispersed by the Autan wind after the dramatic explosion of the AZF factory that affected Toulouse in 2001.
The atmospheric flow has been simulated by the Meso-NH code coupled with SURFEX and including an immersed boundary method to account for buildings. A level-set function characterized the topography, with an algorithm converting the BDTOPO-IGN database. Using the meteorological conditions that prevailed on the day of the explosion, a dozen large-eddy simulations were performed to conduct a preliminary sensitivity study on some dynamical parameters.
The results showed the ability of the numerical code to create a consistent energy spectrum in the surface boundary layer and in the lowest part of the atmospheric boundary layer. The impact of the wind advection scheme on the dynamics was also studied. The energy spectrum highlighted a more diffusive behaviour of the fifth-order WENO advection scheme at the microscales and the relatively better behaviour of the fourth-order centred scheme coupled with an artificial hyper-viscosity term, for which the diffusive amplitude has to be optimized. It pointed out that the wind advection scheme used far from the immersed interface impacted the spread of the plume, and may constitute a source of tuning in dispersion modelling, as well as a source of uncertainty for ensemble simulations. Conversely, the scheme used near the immersed boundary had almost no impact on the kinetic energy spectra, and the WENO scheme can be recommended as it has been designed for obstacle interactions inducing sharp gradients.
Experimental measurements collected near the plant were used to estimate the released pollutant concentration under some assumptions. Assuming a passive pollutant, the numerical results allowed a risk map to be drawn showing the exposure time to concentrations of this potentially toxic gas. In reference to the thresholds concerning nitrogen dioxide adopted by French authorities (Quality threshold: 0.04 mg m^-3; Information threshold: 0.2 mg m^-3 during 1 h; Alert threshold: 0.4 mg m^-3 during 3 h), the pollution episode due to the AZF case appeared to have been too brief to reach any of these thresholds, in good agreement with the conclusions of Cassadou et al. [20].
However, the first objective of this study was not to bring accurate answers to the AZF accident, but to show the potential of the immersed boundary method for urban applications. The method implemented in Meso-NH appears to be suitable to conduct preventative risk studies of pollution episodes within the near environment of industrial plants or airports, strongly affected by the impact of buildings.
In terms of perspective concerning the AZF case, the remaining uncertainties due to the crude description of the vertical distribution of the nitrogen dioxide concentration would need to be reduced to predict the absolute concentrations. The lack of usable experimental data is very challenging: The knowledge of the temperature increase due to the explosion would be a first input required to model the buoyancy effects and to improve the source term. A numerical approach could also be to directly model the explosion dynamics and the chemical reactions; however, this is a real scientific challenge (due to the multiple scales, compressibility hypothesis, reactive species, and code coupling). The focus here is on nitrogen dioxide, but the risk maps can be applied to other potentially dangerous and reactive species.
To physically simulate an environmental accident, and to properly approach the chaotic character of the atmosphere, a number of instantaneous releases (typically 100) could be performed and then averaged to provide reliable statistics. However, the construction of an ensemble with LES is very expensive. Combining LES with a reduced-order model could be a solution to limit the computational cost and increase the ensemble size.

obstacles. The red (resp. green) lines indicate the solid (resp. fluid) region at the ground elevation. The obstacle in Figure A1a is assumed to be spatially and horizontally resolved. Figure A1b,c illustrate two unresolved cases. Depending on |SSI|, the process classification is summarized in Table A1. The spatial scales of the structure (SSI > 0^+) and the spacing between the structures (SSI < 0^-) are estimated as indicated by the ± symbol. SSI_l (resp. SSI_s) corresponds to the Surface State Index of the studied node (resp. surrounding nodes).
• P1: If |SSI_l| = 3, no process is expected (Figure A1a).
• P2: If |SSI_l| = 2 and |SSI_s| < 3, the 'increasing' process is activated, corresponding to a transition from the case presented in Figure A1b to that presented in Figure A1a.
• P3: If |SSI_l| = 1 and |SSI_s| < 2, the 'filtering' process is activated. The obstacle shown in Figure A1c disappears.
• P4: The proximity of the obstacles (or spacings) can induce a 'merging' process, testing SSI_l = ±1 and SSI_s = ∓1 (opposite sign). This is performed once in all grid directions and a second time only in the diagonal directions to limit the 'stair-step shaped edges' effect (Figure A1d-f). Figure A1d shows a small obstacle in the domain centre and another obstacle in the top-left corner. The 'increasing' process (P2) is activated and the obstacle in the centre is extended. The first step of the 'merging' process (P4) affects the median height of the two obstacles for cells detected at the blue points (Figure A1e). The second 'merging' pass softens the border of the merged blue region (Figure A1f).
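The P1-P4 classification can be sketched as a simple decision function. This is a hedged reading of the enumeration above, not the model's code: Table A1 is not reproduced here, and the ordering of the tests (merging checked before filtering, since its condition is more specific) is an assumption.

```python
# Hypothetical sketch of the process selection from the Surface State
# Index of the studied node (ssi_l) and of a surrounding node (ssi_s).
def select_process(ssi_l, ssi_s):
    if abs(ssi_l) == 3:
        return "P1: none"          # structure already resolved
    if abs(ssi_l) == 2 and abs(ssi_s) < 3:
        return "P2: increasing"    # Figure A1b -> Figure A1a transition
    # Merging tested before filtering: its condition (opposite signs)
    # is a strict subset of the filtering one. This ordering is assumed.
    if abs(ssi_l) == 1 and ssi_s == -ssi_l:
        return "P4: merging"
    if abs(ssi_l) == 1 and abs(ssi_s) < 2:
        return "P3: filtering"     # the obstacle of Figure A1c disappears
    return "no process"
```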
Note that the previously standardized procedure applied to the SSF is recalled during the SSI processes (only one node type is shown here). The constructed SSF(x, y, n = 4) is then used to generate φ(x, y, z, n = 7) (seven points are defined in each cell, Appendix B). The first step of the φ-generation is the φ-initialization without obstacles, only taking into account the ground (i.e., the vertical stratification of the function). The second step computes the |φ| distance to the vertical walls, which affects the minimal distance to the ground or the vertical walls. The third step computes the minimal distance to the roofs, indicating the φ-sign: if the point (x, y, z, n) presents a non-zero value of SSF(x, y, n) = h, φ(x, y, z < h) > 0; otherwise, φ(x, y, z < h) < 0. The standardized procedure applied to the φ-sign eliminates some of the spurious interface effects. Finally, the smoothing technique is performed on the φ-function (Appendix B).
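The first and third steps of the φ-generation can be sketched from a 2D building-height map. This is a minimal illustration only: the wall-distance step, the multi-point-per-cell evaluation, and the sign-standardization are omitted, the function name is hypothetical, and the sign convention used here (φ > 0 in the fluid, φ < 0 inside obstacles) may be the opposite of the model's; only the magnitude pattern matters.

```python
import numpy as np

def phi_from_ssf(ssf, z_levels):
    """Rough phi on horizontal levels z_levels from building heights ssf(x, y).

    phi > 0 in the fluid, phi < 0 inside obstacles (convention assumed)."""
    phi = np.empty((len(z_levels),) + ssf.shape)
    for k, z in enumerate(z_levels):
        phi[k] = z                           # step 1: ground stratification
        inside = z < ssf                     # below roof level: solid region
        phi[k][inside] = -(ssf[inside] - z)  # step 3: distance to roof, negative
    return phi

# Usage: one 20 m building on a flat 8x8 height map, three levels.
ssf = np.zeros((8, 8)); ssf[3:5, 3:5] = 20.0
phi = phi_from_ssf(ssf, z_levels=[5.0, 15.0, 25.0])
```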

Appendix B. Smoothing Technique and LevelSet Function
A fluid-solid interface characterized by a discrete signal introduces some numerical errors into the computation of the LevelSet Function (LSF, φ). Figure A2a illustrates this via two cases. The mesh point in the bottom-left part of the figure has a correct |φ|-value due to the interface distance defined by the green arrow. In contrast, the point in the top-right part of the figure computes a too large minimum |φ|-value with respect to the interface location (the black dashed line). The first method to limit such discretization errors is to estimate φ at several points per mesh cell: At the mass point denoted P, at the velocity points denoted U/V/W, and at the vorticity points denoted A/B/C. The high number of points per cell improves the mathematical operations, leading to a better approximation of the surface characteristics (i.e., the vector n normal to the interface and the local curvature, κ). Moreover, a φ-artificial diffusion is implemented to preserve the |∇φ| → 1 property, e.g., through a redistancing procedure. Note that this property is not intrinsically compatible with interfaces presenting shape singularities, even if a redistancing step is performed.
Defining a smooth weighting, Q, φ_i (i = [P/U/V/W/A/B/C]) is spread such that:

φ_i ← (1 − Q) φ_i + (Q/6) Σ_{j ∈ [E/W/S/NO/F/B]} φ_j, (A1)

where [E/W/S/NO/F/B] are the neighbouring points. An iterative procedure applying Equation (A1) is activated everywhere, with N the iteration number. Q and N are two unknown control parameters. A parametric study of the φ-generation depending on the resolution ratio (Appendix C), for various objects, showed a relative error that evolved as φ/φ_theo ∼ 1 + O(e^{−|φ|/∆X}). This behaviour is briefly illustrated in Figure A2b, showing the φ-error exponentially decreasing with the interface distance (cylinder case). It motivates the Q-formulation, e.g., Q = e^{−Q_2 |φ|/∆X}. The dimensionless constant Q_2 is closely related to N. The first parameter acts as a smoothing strength and the second one as a smoothing propagation. Note that several attempts were made to introduce a convergence criterion based on the |∇φ| → 1 property; however, the results were not found to be relevant.
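The iterative smoothing can be sketched as follows. Since Equation (A1) is not reproduced verbatim in the text, the neighbour-averaging form below is an assumption consistent with the description ("spread" with weighting Q over the neighbouring points); the example is 2D with 4 neighbours and periodic wrapping, whereas the model works in 3D with 6 neighbours.

```python
import numpy as np

def smooth_phi(phi, dx, q2=1.0, n_iter=16):
    """Iteratively blend phi with its neighbour mean, with weight
    Q = exp(-q2 * |phi| / dx): strong smoothing near the interface
    (|phi| small), negligible far from it."""
    for _ in range(n_iter):
        q = np.exp(-q2 * np.abs(phi) / dx)
        nb = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                     + np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        phi = (1.0 - q) * q * 0 + (1.0 - q) * phi + q * nb  # (1-Q)phi + Q*mean
    return phi

# Usage: damp noise on a near-interface field (values close to zero,
# so Q is close to 1 and the averaging acts strongly).
rng = np.random.default_rng(0)
noisy = 0.1 * rng.standard_normal((16, 16))
out = smooth_phi(noisy, dx=1.0, q2=1.0, n_iter=8)
```

The roughness of the field decreases with the number of iterations, mirroring the role of N as a smoothing propagation parameter.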
The bell case (Appendix C) has been thoroughly investigated (via grid-resolution and resolution-ratio parametric studies) for two Agnesi shapes presenting large or small percent slopes. This investigation examines the sensitivity of the smoothing formulation to Q_2. Too low (Q_2; N) values do not sufficiently correct the spurious effects; too high (Q_2; N) values cause the solution to diverge. (Q_2 ∼ O(1); N ∼ O(10)) fits the expected interface well.

Appendix C. Geometric Properties and LevelSet Function
The accuracy of the interface recovery is primarily affected by the fineness of the resolution of the field data and that of the numerical domain (Appendix A). This section attempts to show the impact of these discretizations and that of the smoothing technique (Appendix B) on the modelling of ideal obstacles via the φ-function (LSF).
The fineness of the resolution, ∆L, of some discrete functions (the known field data) is studied here by varying the regular Cartesian grid (the ∆X resolution). Three bodies are considered: A disk or infinite cylinder (2D case, R is the radius), a cube (3D case, L is the side length), and an Agnesi hill (3D case, H is the height). The grid resolution is chosen to satisfactorily model the flow around the solid body [11]: R = 24∆X, L = 16∆X, and H = 24∆X. In addition to the resolution study, the benefits of the φ-smoothing are discussed.
The cylinder case. The disk perimeter is discretized by 2πR/∆L points. The studied resolution ratios are ∆L/∆X ∈ [1 : 1/16]π, with an illustration of the π case in Figure A3a. |φ(x, y)| is constructed using the procedures described in Appendix A. Most of the numerical errors are found in the vicinity of the interface. Figure A3b-d plot four dozen φ-values (|φ| < R with respect to the scale of Figure A3a) and highlight the spurious effects at the interface. The maximum relative error on the φ = 0 location for ∆L/∆X = π (resp. π/4) is about 4% (resp. 1%). Figure A3. (a) Illustration of the ∆L/∆X = π resolution ratio, where ∆X is the mesh space step (green symbols) and ∆L is the interface space step (red symbols). The φ-contours are shown without smoothing for ∆L/∆X = [(b) π; (c) π/2; (d) π/4] (cylinder case).
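The cylinder test can be reproduced in a few lines: sample the perimeter with 2πR/∆L points, take |φ| as the minimum distance from each grid point to the samples, and compare with the exact distance |r − R|. This is an illustrative reconstruction under the stated parameters (R = 24∆X), not the paper's code.

```python
import numpy as np

R, dX = 1.0, 1.0 / 24                      # R = 24*dX as in the text
x = np.arange(-1.5, 1.5, dX)
X, Y = np.meshgrid(x, x)

def phi_error(ratio):
    """Max |phi| error on the grid for interface sampling step dL = ratio*dX."""
    n_pts = int(round(2 * np.pi * R / (ratio * dX)))
    th = np.linspace(0.0, 2 * np.pi, n_pts, endpoint=False)
    px, py = R * np.cos(th), R * np.sin(th)
    d = np.sqrt((X[..., None] - px) ** 2 + (Y[..., None] - py) ** 2)
    phi = d.min(axis=-1)                   # unsigned distance to the samples
    exact = np.abs(np.hypot(X, Y) - R)     # exact distance to the circle
    return np.max(np.abs(phi - exact))

e_coarse, e_fine = phi_error(np.pi), phi_error(np.pi / 4)
print(e_coarse, e_fine)
```

As in Figure A3, the error concentrates near the interface and shrinks as the sampling is refined: the coarse ratio ∆L/∆X = π gives a distinctly larger error than π/4.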
The numerical interface curvature, κ = −∇·n, is compared to the theoretical one (κ_theo = R^-1) and plotted in Figure A4a for ∆L/∆X ∈ [1 : 1/16]π. The nature of the interface can be perceived numerically either as concave or convex, which can be problematic during a CFD simulation (a strong error compromises the incompressibility ∇·(ρ_f V) = 0, where V is the velocity field and ρ_f is the fluid density). The κ-results become consistent for ∆L/∆X ≲ π/4. Figure A4b shows the impact of the smoothing technique (Appendix B) on the local curvature, fixing the resolution ratio to ∆L/∆X = π/8 and varying the iteration number N of the smoothing technique. Without smoothing (N = 0), the κ-relative error is ∼20%. With N = 16, the relative error is decreased by a factor of O(10). Note that the case of the sphere has been investigated and demonstrated similar results. For the cylinder case, X is the projection of the interface location on an arbitrary axis of the R radius. For the cube case, R is the radial distance of the interface to the centre of the cube with side length L.
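The curvature diagnostic can be checked on the exact signed distance to a cylinder, φ = r − R, for which the normal n = ∇φ/|∇φ| has divergence 1/r, so |κ| = |∇·n| should approach κ_theo = R^-1 at the interface. A finite-difference sketch (the sign convention of κ = −∇·n and the sampling band of 2∆X around the interface are assumptions of the example):

```python
import numpy as np

R, dX = 1.0, 1.0 / 24
x = np.arange(-2.0, 2.0, dX)
X, Y = np.meshgrid(x, x)
phi = np.hypot(X, Y) - R                   # exact signed distance to the circle

gy, gx = np.gradient(phi, dX)              # d/dy along axis 0, d/dx along axis 1
norm = np.hypot(gx, gy) + 1e-12
nx_, ny_ = gx / norm, gy / norm            # unit normal n = grad(phi)/|grad(phi)|
kappa = -(np.gradient(nx_, dX, axis=1) + np.gradient(ny_, dX, axis=0))

# Sample |kappa| in a thin band around r = R and compare with 1/R.
mask = np.abs(np.hypot(X, Y) - R) < 2 * dX
k_mean = np.abs(kappa[mask]).mean()
print(k_mean)   # close to kappa_theo = 1.0
```

With the exact distance field the recovered curvature is within a few percent of R^-1; the spurious concave/convex perception discussed above only appears when φ itself carries interface-sampling errors.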
The cube case. The orientations of the cubic body (L is the side length) and the Cartesian grid influence the accuracy of the perceived geometric properties of the cube. A cube fully aligned with the grid at a resolution ratio ∆L/∆X = 1/2 yields the theoretical solution for φ (the factor 1/2 comes from the use of a staggered grid). However, the numerical errors significantly increase when the parallelepipeds are not aligned with the grid. The most extreme case studied here corresponds to a cube whose faces present an angle of 45 degrees to those of the mesh (Figure A5a). The range of the resolution ratios is ∆L/∆X ∈ [2 : 1/8]. Figure A4c plots the curvature (nondimensionalized by 2/L) depending on the resolution ratio (colour code) and the location of the interface points (defined by the distance to the cube centre, written R, and nondimensionalized by L/2). 2R·L^-1 ∼ 1 corresponds to the centre of the cube surfaces and 2R·L^-1 ∼ √2 (resp. √3) corresponds to the edges (resp. corners) of the cube. The local curvature at the cube interface is theoretically zero at all points except at singularities (edges and corners), where |κ| → ∞. Figure A4c demonstrates that the planar characteristic of the cube surfaces is clearly not respected for ∆L/∆X ≳ 1/2 with N = 0 smoothing (the black, red, and green points). Some computed curvature values are as large as those of a sphere with a radius of L/2. The results become acceptable for ∆L/∆X ≲ 1/4 (the blue and orange points in Figure A4c), even if the singularities have an influence for 2R·L^-1 < √2. The anisotropic nature of the cubic body is not compatible with the isotropic smoothing procedure. Indeed, the corners and edges are rounded and the expected shape is not preserved. Both changes are illustrated in Figure A5b-e, fixing the resolution ratio to ∆L/∆X = 1/2 for several iteration numbers N (from right to left: N = 64 in blue, N = 32 in green, N = 16 in red, and N = 0 in grey).
The disappearance of the rough surface as N increases is visible; however, an undesired decrease of the cube volume appears with the smoothing technique. As N increases, the peak κ-value decreases; it is reduced by a factor of ∼4 for N = 64. In the same way, a finer grid resolution (L up to O(20)∆X) limits the spread and confines the smoothing action to the corner and ridge zones.
The bell case. Figure A6 shows the geometrical properties of an interface discretized with ∆L/∆X = 1. From left to right are: (a) the altitude z(φ = 0); (b,c) the components of the vector normal to the interface, n = ∇φ/|∇φ|; and (d) the local curvature, κ = −∇·n. The black contours are the analytical solutions. The numerical solutions are plotted in red without (resp. with) the smoothing technique in the middle (resp. bottom) panels of Figure A6. Without the smoothing technique, the maximum relative errors E(·) are E(z) = 3%, E(n_x) = 160%, and E(n_z) = 10%. With the smoothing technique and N = 16, the results are E(z) = 0.5%, E(n_x) = 5%, and E(n_z) = 1%. To summarize, the coupling of the interpolating and smoothing procedures appears to be the most viable method to properly estimate complex and discrete surfaces. Associated with a ∆L/∆X ∼ 1/2 resolution ratio due to the staggered grid, N ∼ O(10) iterations of the smoothing technique remain relevant and appear to be a good solution compared to the too expensive computation times (or the non-accessibility) of higher resolution ratios.