1. Introduction
Turbulent flows, common in nature and technology, are still challenging from the standpoint of physical modelling and computation [
1]. Further complexity arises because turbulent flows often feature additional variables, such as the chemical composition or the concentration of contaminants and admixtures. These are called scalar variables and are governed by their own transport equations that express, basically, the mass balance of species/admixtures in the flow, to be solved jointly with the fluid dynamics [
2]. Additionally, the temperature field can sometimes be treated this way, through an appropriate energy balance. When the scalar field is non-uniform, either due to initial conditions or when inlets are present, a mixing process takes place [
3,
4]; it consists of a non-trivial interplay between advection by turbulence and molecular diffusion. An overview of such processes is the main aim of the present work.
Environmental flows with scalars include ocean dynamics with temperature and salinity gradients, dispersion of contaminants in water bodies or in the atmosphere [
5,
6,
7], air pollution in urban areas [
8,
9], cloud microphysics [
10,
11,
12] and numerical weather prediction and climate models [
13,
14]. On the industrial side, mixing is of utmost importance for various operations and devices in the area of chemical and process engineering [
15]. Among the processes intensely studied because of ecological, economic and safety concerns is combustion, including diffusion flames and the environmentally friendly concept of flameless oxidation [
16,
17].
Although turbulent flows are governed by a well-known system of macroscopic conservation equations, i.e., the Navier–Stokes (N–S) equations for Newtonian fluids, the detailed solution in terms of the relevant hydrodynamic variables with no simplifying assumptions (unsteady, fully 3D flow) remains computationally costly or even unfeasible for many practical problems involving complex geometry or a wide range of eddying motions. However, this method, called direct numerical simulation (DNS), is a precious tool to gain fundamental knowledge about turbulence, including the mixing process, and to validate closure proposals; the pioneering studies on fully resolved scalar transport in turbulent flows date back to before the year 2000 [
18,
19,
20,
21,
22]. The DNS, although limited to simple geometries, is particularly useful in studying turbulent reactive flows. This is because a sound description of chemical reactions involves length scales ranging from the system size down to the smallest ones. Those dissipative eddies, of the order of the Kolmogorov length scale, are resolved in DNS but have to be modelled in other methods. Thus, there is still ample room for the development and use of numerical models within the scope of a simplified description of turbulence. They represent an alternative to the DNS and provide a partial (approximate) description of the flow. Two such approaches are the statistical closures, usually formulated in terms of the Reynolds-averaged Navier–Stokes (RANS) equations, and the large eddy simulation (LES), where the subfilter scales are not resolved. Let us note that hybrid RANS-LES models with zonal coupling have undergone a rapid development, in particular for cases when both wall-bounded turbulence (statistically resolved for computational efficiency) and large vortical structures controlling some flow aspects (resolved by LES) are of importance [
23].
Generally speaking, the statistical approach is usually applied to phenomena (processes) for which it is either too expensive or of no interest to gather detailed information about the system under scrutiny. Admittedly, a remarkable exception is quantum physics, where the statistical description is the only one possible. Additionally, in turbulence, randomness can be purposefully introduced, even though it is not explicitly present in the governing flow equations. As an alternative to the more established Eulerian moment closures (RANS), where averaged equations representing conservation laws are considered, turbulent flow can be described with the use of the one-point probability density for the instantaneous velocities of notional fluid elements and for additional scalars. This is the starting point of the probability density function (PDF) method. As discussed in the following, molecular diffusion persists down to the smallest flow scales (and even becomes dominant there), so the need to model the micro-mixing applies both to the statistical turbulence models and to the LES.
The present review is meant to provide an introduction to the issue of mixing, in particular in turbulent flows, considering some recent developments in the field and addressing both the physics and the modelling, though with no particular emphasis on combustion. An account of existing closures for reacting flows and their ramifications for various combustion regimes would require a comprehensive review of its own, beyond the expertise of the authors. For recent work on turbulent combustion models, including the LES and PDF methods, interested readers may refer to [
24,
25,
26,
27]. Other specific topics are mixing in non-Newtonian fluids (as in polymer processing), mixing in plasmas [
28], dispersed solid–fluid flows (e.g., blending in the food industry) and macromixing of components featuring interphase surfaces, i.e., liquid–liquid or gas–liquid systems [
29], which often aims at maximising the interfacial area. Although of importance in various applications, the latter processes involve other physical phenomena, such as surface tension (rather than molecular diffusion); they are left out of the present overview.
The paper is organised as follows. In
Section 2, we provide a short account of the problem of mixing in turbulent flows, its physical aspects and practical importance. The notion and classification of additional scalar variables in turbulent flows are introduced. Turbulent flows with scalars are considered in
Section 3 where the closure problem is presented together with the available models. We introduce the probability density function approach, accompanied by the trajectory (Lagrangian) point of view. In particular, we identify the molecular transport term pertaining to mass diffusion or heat conductivity. It is this so-called micro-mixing term that causes fundamental difficulties in one-point modelling of turbulent flows. This is discussed in a separate section (
Section 4) because of the role of micro-mixing both in the statistical turbulence modelling and in the LES. A few mixing models are presented, with more details on the bounded Langevin model; the problem of scalar mixing in isotropic turbulence serves as an illustration. In
Section 5, we address the filtered density function (FDF) approach to scalar variables in the context of large-eddy simulations. A short summary is made in
Section 6: some emerging trends are reported, as prompted by recent progress in computing technology, and some suggestions are put forward for possible further developments in the modelling of turbulent flows with scalars.
2. Mixing Processes
In a fluid flow, and in a turbulent flow in particular, one may be interested in the evolution of additional variables such as the chemical composition, the concentration of contaminants or admixtures, the temperature field, etc., called scalars for brevity.
In this section, we briefly recall some facts about the mixing processes in the flow. To fix our attention, the starting point is a generic scalar transport equation; the corresponding scalar PDF equation in a turbulent flow will be introduced in
Section 3. The evolution equation for a scalar variable $\phi$ with a generic molecular transport coefficient $\Gamma$ (the diffusivity of mass $D$ or the thermal diffusivity $\alpha$) has the structure of an advection–diffusion equation, possibly also containing a source term $S(\phi)$. The scalar equation is written in the Cartesian tensor notation (the summation over repeated dummy indices is implied) as
$$\frac{\partial \phi}{\partial t} + U_j \frac{\partial \phi}{\partial x_j} = \frac{\partial}{\partial x_j}\left(\Gamma \frac{\partial \phi}{\partial x_j}\right) + S(\phi), \qquad (1)$$
where $U_j$ is the fluid velocity field. In general, scalar variables are classified as passive if they do not influence the flow dynamics, or active if they do. An example of an active scalar is a considerable temperature difference that affects the fluid density and/or viscosity. Then, the scalars can be treated as conserved (inert) or reactive, depending on whether a source term (e.g., due to chemical reactions) occurs in their transport equations. In a general multiscalar case, a vector quantity $\boldsymbol{\phi} = (\phi_1, \ldots, \phi_N)$ appears in Equation (1) and the source term $S(\boldsymbol{\phi})$ may involve, e.g., the chemical composition of reactive species and the temperature.
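As a minimal numerical illustration of Equation (1), the sketch below integrates a one-dimensional advection–diffusion equation for a passive, conserved scalar (no source term) with an explicit finite-difference scheme; the domain, velocity, diffusivity and grid parameters are illustrative assumptions rather than values taken from any specific study.

```python
import numpy as np

# Minimal 1D sketch of Equation (1) for a passive, conserved scalar:
#   d(phi)/dt + U d(phi)/dx = Gamma d2(phi)/dx2   (no source term S)
# Explicit upwind (advection) / central (diffusion) finite differences
# on a periodic domain; all parameter values are illustrative assumptions.

L = 1.0            # domain length
N = 200            # number of grid points
dx = L / N
U = 1.0            # constant advection velocity
Gamma = 1e-3       # molecular diffusivity
dt = 0.4 * min(dx / abs(U), 0.5 * dx**2 / Gamma)  # stability-limited step

x = np.linspace(0.0, L, N, endpoint=False)
phi = np.where(np.abs(x - 0.5 * L) < 0.1 * L, 1.0, 0.0)  # initial scalar slab

def step(phi):
    phi_p = np.roll(phi, -1)   # periodic right neighbour
    phi_m = np.roll(phi, 1)    # periodic left neighbour
    adv = -U * (phi - phi_m) / dx                       # upwind for U > 0
    dif = Gamma * (phi_p - 2.0 * phi + phi_m) / dx**2   # central diffusion
    return phi + dt * (adv + dif)

for _ in range(2000):
    phi = step(phi)

# The scalar variance decays as the slab is advected and smeared by diffusion.
print("scalar variance:", phi.var())
```

In a turbulent flow, $U_j$ is of course a fluctuating three-dimensional field, and it is precisely this advection by a wide range of eddies that makes the closure of scalar transport non-trivial, as discussed in Section 3.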
The largest length scales of the scalar field are usually determined by the initial conditions (e.g., two layers, or slabs, of width $L$ and different values of $\phi$, thus featuring a so-called scalar interface) or the boundary conditions (e.g., two inlets to the flow domain, characterised by a length scale $L$). Then, in the case of pure molecular diffusion of the scalar (no advection), the process is governed by a parabolic-type partial differential equation. The corresponding time scale is readily found from dimensional analysis: $\tau_D \sim L^2/\Gamma$, where $L$ is a characteristic scale (e.g., the width of a slab/lamella). The local uniformity of the scalar field means that it is homogeneous at the molecular level, or well micro-mixed. In contrast, the field can be uniform on a larger scale, i.e., well macro-mixed, which is the case for the often-used benchmark: a binary scalar distribution in homogeneous turbulence.
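To appreciate how slow purely molecular mixing is, consider an order-of-magnitude estimate (the values of $D$ and $L$ below are illustrative assumptions): for a dye in water, $D \approx 10^{-9}\,\mathrm{m^2/s}$, so a slab of width $L = 1\,\mathrm{cm}$ homogenises by diffusion alone on a time scale
$$\tau_D \sim \frac{L^2}{D} = \frac{(10^{-2}\,\mathrm{m})^2}{10^{-9}\,\mathrm{m^2/s}} = 10^{5}\,\mathrm{s} \approx 28\ \mathrm{h},$$
which is why advection is needed to generate small scales on which diffusion can act quickly.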
In various situations of process engineering, one is interested in efficient mixing, i.e., in obtaining a well-micromixed system in as short a time as possible. As molecular diffusion is generally a slow process, except perhaps in microfluidics, a standard practice is to enhance it by engaging the advection term of Equation (
1). The general idea has been to promote the occurrence of smaller length scales and to increase the area of scalar interfaces. Basically, the mixing enhancement can be passive or active. In the former case, one designs a specific geometry for the purpose. In a laminar flow, the task is usually more demanding; see a study on serpentine micromixers [
30]. As for active mixers, an external energy source is built into the microfluidic system in terms of the electric and magnetic fields or ultrasonic vibrations. The role of such actuators is to produce flow perturbations to make the mixing process more efficient. For recent general reviews on active and passive micromixers in microfluidics, often called lab-on-a-chip devices, see [
31,
32].
Additionally, fractal concepts have found their way into mixing enhancement. A canonical case is grid turbulence: fractal grids have been studied experimentally and found to increase the mixing efficiency [
33], and the scalar diffusion [
34]. Concerning the case of active micromixers, the use of impellers with fractal-like blades has been examined in [
35]. Based on their DNS results, the authors report that the power consumption of an impeller with fractally-designed blading is reduced with respect to the standard layout, while a considerably shorter mixing time is observed.
Back to passive mixers, an interesting endeavour has recently been reported in [
36] where the specific geometry of flow obstacles or deflectors has been conceived through the adjoint optimisation method, resulting in a fairly complex structure (resembling a porous medium) to promote the occurrence of flow separation zones and enhance the mixing process. Unfortunately, a typical drawback of passive methods featuring extra obstacles or walls is the resulting increased pressure loss.
Often-used configurations to reduce the scalar length scale at the flow inlet are the annular jets with co-flowing streams, such as those of fuel and oxidant, possibly also of hot co-flow, typical of non-premixed combustion [
16,
17]. Additionally, strong gradients of tangential velocity in the shear layers promote the growth of scalar interfaces due to the Kelvin–Helmholtz instability. Alternatively, a system of many inlet nozzles may be conceived to reduce the inlet length scale of the scalar. An ingenious configuration of this type has been found to result in more efficient mixing than the T-junction system of inlets [
37].
Assuming that the Schmidt number $Sc = \nu/D$ is unity, for scalar structures of the Kolmogorov scale $\eta$ one readily finds that $\tau_D \sim \eta^2/D = (\nu/\varepsilon)^{1/2} = \tau_\eta$; i.e., the mixing time scale is equal to the Kolmogorov time scale, which is expected. For Schmidt numbers larger than 1, occurring in some liquids, the smallest length scales of the scalar field in a turbulent flow, called the Batchelor scale $\eta_B$, are smaller than the dissipative eddies: $\eta_B < \eta$. This means, in particular, that for a well-resolved mixing process under such conditions, the DNS mesh should be correspondingly finer.
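For reference, the standard scale estimates invoked here read (with $\nu$ the kinematic viscosity and $\varepsilon$ the mean dissipation rate):
$$\eta = \left(\frac{\nu^3}{\varepsilon}\right)^{1/4}, \qquad \tau_\eta = \left(\frac{\nu}{\varepsilon}\right)^{1/2}, \qquad \eta_B = \eta\, Sc^{-1/2} \quad (Sc \ge 1).$$
As an illustrative case, for a liquid with $Sc = 10^3$ one gets $\eta_B \approx \eta/32$, so resolving the scalar field requires a mesh roughly thirty times finer in each direction than one resolving the velocity field alone.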
For a simple experiment using household appliances, a qualitative picture of the mixing process is shown in
Figure 1. An initially arranged system consists of three concentric spots of coloured liquids; the blobs get stirred by a rotating central disk (an electric toothbrush is used for the purpose). As time elapses, thinner and thinner structures appear, advected and deformed by the flow, and finally disappear due to the prevailing action of molecular diffusion. This very observation seems to be in line with the recent and comprehensive review of Villermaux [
38]. He describes a lamellar representation of the mixing problem where a mixture is seen as a set of stretched sheets (in 3D) or lamellae (in 2D). In that paper, the differences between stirring, blending and mixing are clearly explained: in stirring and blending (or macromixing), the length scales of the scalar field decrease through stretching and folding with no substantial diffusion, whereas mixing is perceived as “stretching-enhanced diffusion”. Additionally, a caveat is formulated in [
38] that “the sequential vision of the process (stirring, then diffusion) is fundamentally incorrect”, since some molecular diffusion occurs already at larger scales of the scalar field. Another consequence of the scalar interfaces being stretched and folded by the flow, aside from the occurrence of smaller-scale structures, is the increase of scalar gradients [
39]. This happens in a way analogous to the increase of local velocity gradients in turbulence; those gradients are ultimately destroyed by the action of viscosity.
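The lamellar picture also admits a simple order-of-magnitude estimate (a sketch under standard assumptions of the stretching-enhanced-diffusion argument, not a result quoted from the works cited above): a lamella of initial thickness $s_0$ stretched at a constant rate $\gamma$ thins as $s(t) = s_0 e^{-\gamma t}$, and molecular diffusion takes over once the diffusion time across the lamella, $s^2/\Gamma$, becomes comparable to the stretching time $1/\gamma$, which gives
$$t_{\mathrm{mix}} \approx \frac{1}{2\gamma} \ln\!\left(\frac{\gamma s_0^2}{\Gamma}\right);$$
the mixing time thus grows only logarithmically with the Péclet number $\gamma s_0^2/\Gamma$, quantifying how stretching accelerates diffusion.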
To reiterate the classification of scalar variables: Dimotakis [
3] distinguishes among three levels of mixing: (i) passive scalars, (ii) dynamically active ones with mixing coupled to the dynamics and (iii) mixing that alters the fluid composition and/or density (as occurs in chemically reactive flows). To category (i), i.e., scalars that do not affect the flow dynamics, belong the concentration of a tracer species or a small temperature excess that does not trigger natural convection. Category (ii) includes buoyancy-affected flows, for example due to temperature or salinity gradients in the ocean, unstable stratification, the breaking of internal waves or the Rayleigh–Taylor instability. That paper also presents considerations on scalar dissipation and mixing as processes interdependent across the full spectrum of turbulent flow scales. A nontrivial model that mimics scalar evolution at different scales was advanced by Kraichnan [
40]. Therein, the velocity in Equation (
1) was replaced by a stochastic Gaussian field with a prescribed two-point correlation tensor and with time correlation that decays infinitely rapidly. Although the advecting flow was idealised, that model, exactly solvable, was able to correctly predict scalar mixing at small scales, known for their complex, intermittent behaviour; see also an insightful review paper by Warhaft [
41].
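A toy numerical illustration of this random-advection idea is sketched below (a one-dimensional Lagrangian caricature under simplifying assumptions, not the exactly solvable Kraichnan model itself): particles are advected by a spatially smooth Gaussian velocity field that is redrawn independently at every time step, so that its time correlation is effectively delta-like, on top of Brownian molecular diffusion.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1D Lagrangian caricature of white-in-time (Kraichnan-like) random advection.
# At each step, every particle is displaced by a Gaussian velocity field that
# is smooth in space but redrawn independently (delta-like time correlation),
# plus a Brownian increment representing molecular diffusion.
# All parameter values are illustrative assumptions.

n_particles = 5000
n_steps = 1000
dt = 1e-3
D = 1e-4                   # molecular diffusivity
u_rms = 1.0                # rms of the random velocity
l_c = 0.1                  # spatial correlation length of the velocity field
n_modes = 32               # random Fourier modes used to synthesise the field

x = np.zeros(n_particles)  # all particles start at the origin (a point blob)

for _ in range(n_steps):
    # synthesise a smooth Gaussian random velocity field for this step
    k = rng.normal(scale=1.0 / l_c, size=n_modes)
    a = rng.normal(size=n_modes)
    b = rng.normal(size=n_modes)
    u = (u_rms / np.sqrt(n_modes)) * (
        a @ np.cos(np.outer(k, x)) + b @ np.sin(np.outer(k, x))
    )
    # advection by the short-correlated field + molecular diffusion
    x += u * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=n_particles)

# The blob spreads diffusively, with an effective (eddy) diffusivity larger
# than the molecular one, in line with the short-correlation-time picture.
print("position variance:", x.var(), "; 2*D*t (molecular only):", 2 * D * n_steps * dt)
```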
In [
4], the very concept of mixing is addressed, with its irreversibility and possible emergence of complexity. An interesting perspective on turbulent mixing is offered by Sreenivasan [
42]. He analyses the dependencies of the mixing state on the Schmidt and Reynolds numbers, and focuses on the scalar spectra resulting from a large-scale stirring. In [
43], DNS findings relevant to fine-scale scalar mixing and dissipation are analysed. The authors also study the multifractal properties of scalar dissipation, which are related to another fascinating topic in turbulence theory, i.e., fractals (see [
44]), as some features of turbulence may be represented this way for further modelling [
45,
46].
6. Conclusions and Perspectives
In this paper, we recalled the salient features of turbulent mixing and presented the modelling approaches for scalar variables with an emphasis on the Lagrangian PDF and FDF formulations. Despite a number of closures proposed to date, the micro-mixing models for scalars still represent a challenge. In particular, a further pursuit of models for multiscalar mixing is of importance for reactive flows. The issue of scalar mixing is also relevant for flows in near-wall regions where molecular transport effects become dominant. A persistent difficulty is a physically sound estimation of the ratio of the scalar to dynamical time scales (all too often taken as a constant) in various turbulent flows. The zonal coupling of the PDF/FDF approaches to an Eulerian code remains another direction of research. The local coupling should be performed according to a well-determined criterion to provide more detailed flow information in regions of interest. Arguably, with such a coupling, the full value of the PDF method could manifest itself in practical applications to physically complex flows in non-trivial geometries.
Apart from the scalar PDF/FDF open issues, there are a number of recent ideas beyond the topics discussed in the present overview. In particular, new trends in algorithm development are prompted by the recent advances in computing technology. In [
27], progress in LES of reacting flows is addressed together with some emerging trends in computational combustion. According to the authors, uncertainties in simulation need to be estimated for practical use of LES. Then, on an even more practical note, the authors observe that a new generation of approaches is needed to explore and utilise the vast amount of data collected during the operation of combustors. These are tentatively called a “heterogeneous, data-driven environment”, enabling better efficiency, lower emissions, operational safety and reduced maintenance. Some information on data-driven modelling is provided in that paper. A related (and increasingly fashionable) topic is deep learning, in its various possible applications. As far as the presumed PDF models for LES are concerned, DNS evidence is used to develop training data and different algorithms are evaluated in [
102]. Another paper, directly related to turbulent mixing, is [
103]. Using deep learning ideas, the authors propose a framework to deduce models from the experimental measurements of the PDF.
For several years now, high-performance computing (HPC) has been increasingly implemented using graphics processing units (GPUs) for massively parallel code execution. In parallel (nomen omen), a paradigm shift can be noticed: the numerical methods to solve a particular problem are purposefully chosen, and further developed, to better utilise the capabilities and specific architecture of the hardware. In this respect, the statement of John Argyris (the title of his famous 1965 lecture), “The computer shapes the theory”, is more pertinent than ever. In other words, the available hardware may, and does, influence the choice of the numerical solution procedure or even of the physical models. As an example from general CFD, consider computational methods (otherwise quite mature) for incompressible flow. The weakly compressible (WC) approach to otherwise divergence-free velocity fields has become a method of interest because it avoids the elliptic Poisson solver for the pressure correction required in truly incompressible formulations. The main reason is that numerical schemes that are explicit in time and local in space can be tremendously efficient when run on GPUs (even despite the restrictions on the allowable time step), as recently demonstrated for a wall-bounded turbulent flow case [
104]. The Lagrangian PDF approach readily lends itself to parallel computing as well. An example of massively parallel, GPU-accelerated computation of turbulent mixing is described in [
105]. A DNS of mixing at high Schmidt numbers is reported: different grid resolutions are used, as the scalar fluctuations persist down to scales smaller than the Kolmogorov scale $\eta$. Consequently, the Fourier pseudospectral method for the velocity has been used together with a compact finite difference scheme for the scalar. The core of the paper is quite technical, focusing on HPC issues with GPU-specific programming, but the overall message of [
105] is clear: the software developments successfully accompany the progress on the hardware side, resulting in highly efficient simulations. Another example of GPU-accelerated computation for an environmental application is the paper of Kristóf and Papp [
9]. They performed an LES study of dispersion, referred to as a virtual (numerical) wind tunnel. The urban boundary layer (UBL) was simulated, and the inlet boundary conditions were suitably chosen to mimic the mean velocity and turbulence intensity profiles of the UBL. The GPU-based simulation was aimed at estimating the urban air quality and at studying possible options to mitigate high-pollution conditions.
A technological breakthrough is looming on the horizon: the advent of quantum computers. Before it becomes reality, quantum algorithms (QA) must be tested and run on classical computers. To the best of the authors’ knowledge, the works in the CFD domain are still rare. A recent example is [
106] (see also Supplementary Information to that paper, available online) where a QA is conceived to solve a simplified N–S equation in a transonic nozzle. The results agree well with the reference solution and a significant speed-up with respect to classical algorithms is reported to be possible. Then, definitely relevant for the topic of the present overview, [
107] deals with turbulent mixing: a QA is advocated that accelerates the computation of the transported PDF and its moments. The computational efficiency is illustrated with the binary scalar mixing problem using a family of C-D micromixing models. The work is rooted in the recent interest in QAs for speeding up Monte Carlo techniques.
As the bottom line of this overview: there has been significant progress to date, ranging from the understanding of the physical aspects of the turbulent mixing process, through fully resolved simulations able to provide a wealth of data to feed models of practical applicability, up to impressive full-fledged computations of advanced systems such as combustion devices. Despite this enormous progress, there is still room for improvement concerning physically sound and computationally efficient micromixing closures, in particular for multiple scalars, and efficient hybrid formulations, such as coupled LES/FDF approaches. Additionally, a vigilant eye should be kept on new, emerging software and hardware opportunities, such as machine learning, GPU acceleration and possibly quantum computing.