Development and Validation of a Machine Learned Turbulence Model
Abstract
1. Introduction
 (1) Machine learning has primarily been used for RANS model augmentation, where either the turbulence production is adjusted or nonlinear stress components are added to the linear eddy-viscosity term. Limited effort has been made to develop standalone models, except for some recent efforts focusing on modeling of subgrid stresses for LES.
 (2) Studies have used a wide range of input flow features for machine-learned model training. There is a consensus that combining flow variables into physically relevant flow features is desirable, as this helps incorporate physics into machine learning. The use of a large number of input features has been found to be helpful to some extent, as it allows output features to be uniquely identified in different flow regimes. However, additional features introduce additional sources of inaccuracy; for example, unsmooth solutions have been reported due to inaccurate calculation of input features involving higher-order formulations of the derivative terms.
 (3) Machine-learned models have been applied in both a-priori and a-posteriori tests for both unseen geometries and unseen flow conditions, including Reynolds-number extrapolation. The models in general perform well for unseen flow conditions, but issues have been reported for unseen geometries. Overall, machine-learned models are most accurate when the test flow has a similar complexity to the training flow.
 (4) Studies have reported issues during training due to overlap of the output features in different flow regimes. This has been tackled by using more input features, as discussed above, and by segregating the flow domain into regions with similar flow characteristics, such as near-wall, wake, and far-field regions, and training a separate model in each region.
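The region-segregation idea in point (4) can be sketched as a simple labeling step applied before training; the thresholds below follow the sublayer/buffer/log-layer split used for the channel-flow database later in this paper (y⁺ < 6, 6 ≤ y⁺ ≤ 40, y⁺ > 40), and the function name is a hypothetical illustration rather than the authors' implementation.

```python
import numpy as np

def wall_region(y_plus):
    """Tag each sample by wall-normal region so that a separate ML model
    can be trained per region. Thresholds follow the sublayer/buffer/log
    split used for the channel-flow database (y+ < 6, 6-40, > 40)."""
    y_plus = np.asarray(y_plus, dtype=float)
    return np.where(y_plus < 6, "sublayer",
                    np.where(y_plus <= 40, "buffer", "log"))

# Example: one sample from each region of a channel-flow profile
labels = wall_region([1.0, 20.0, 300.0])
```

Each labeled subset would then be passed to its own regression model, avoiding the output-feature overlap between regimes described above.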
2. Machine Learning Approach
3. Test Cases and Database for Model Training
3.1. Plane Channel Flow
3.1.1. Governing Equation
3.1.2. DNS/LES Database
3.1.3. ML Model Training and Refinement Using A-Priori Tests
 ML1: No weighting
 ML2: Weighted using curvature of the profiles
 ML3: ${\tau}_{uv}$ levels were expanded to separate out the curves
 ML4: Not weighted for ${\tau}_{uv}\le {10}^{-3}$ but weighted for ${\tau}_{uv}>{10}^{-3}$
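The ML4 variant above can be sketched as a per-sample weight array applied during training; the threshold follows the list above, while the boost factor and function name are hypothetical choices for illustration, since the exact weights are not specified here.

```python
import numpy as np

def ml4_weights(tau_uv, threshold=1e-3, boost=10.0):
    """Sketch of the ML4 scheme: samples with small turbulent shear
    stress are left unweighted, while larger-stress samples are
    up-weighted so that overlapping profiles separate during training.
    The boost value is a hypothetical choice."""
    tau_uv = np.asarray(tau_uv, dtype=float)
    weights = np.ones_like(tau_uv)
    weights[np.abs(tau_uv) > threshold] = boost
    return weights

# Example: one small-stress and two larger-stress samples
w = ml4_weights([5e-4, 2e-3, 8e-3])
```

These weights would multiply each sample's contribution to the training loss, analogous to the `sample_weight` mechanism in common regression libraries.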
3.2. Oscillating Plane Channel Flow
3.2.1. Governing Equation
3.2.2. DNS Database
3.2.3. ML Model Training Using A-Priori Tests
4. A-Posteriori Tests of the ML Model
4.1. Steady Plane Channel Flow
4.2. Oscillating Plane Channel Flow
5. Conclusions and Future Work
 The choice of input parameters. Feature engineering was used to find the optimal input features for neural network training. It was identified that grouping flow variables into a problem-relevant parameter improves the accuracy of the model. For example, a model trained using a Reynolds number based on the local flow velocity and wall distance is more accurate than a model trained using a Reynolds number based on the global flow. Furthermore, higher-order functions of an input variable, such as the square of the rate-of-strain alongside the rate-of-strain itself, do not help improve the accuracy of the map. However, they may be used as weighting functions to reduce the overlap in the datasets; and
 How the database is weighted to minimize the overlap between the datasets. This requires a trial-and-error approach to arrive at an appropriate weighting function. A better way to improve the accuracy of the regression surface is to include physical constraints in the loss function during training, which is referred to as the physics-informed machine learning (PIML) approach. However, it is not straightforward to incorporate physical constraints during training due to issues in the calculation of the derivatives, such as temporal derivatives for unsteady problems. Data clustering is also identified as a useful tool to improve the accuracy of the machine-learned model and reduce computational cost, as it avoids skewness of the model towards a dominant flow feature.
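The PIML idea mentioned above amounts to adding a penalty on a physics residual to the usual data-mismatch loss; the following is a minimal sketch under the assumption of a precomputed residual (e.g. a momentum-balance error evaluated from the predictions), with the weighting `lam` and the function name chosen for illustration.

```python
import numpy as np

def piml_loss(pred, target, physics_residual, lam=0.1):
    """Data-mismatch loss augmented with a penalty on a physics
    residual. lam balances the two terms and is a hypothetical choice;
    in practice it is tuned during training."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    physics_residual = np.asarray(physics_residual, dtype=float)
    data_term = np.mean((pred - target) ** 2)
    physics_term = np.mean(physics_residual ** 2)
    return data_term + lam * physics_term

# Example: two samples with a known residual
loss = piml_loss([1.0, 2.0], [1.0, 1.0], [0.5, -0.5], lam=0.1)
```

The difficulty noted in the text is that forming `physics_residual` for unsteady problems requires derivatives (notably temporal ones) that are hard to evaluate accurately from the network's inputs.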
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Conflicts of Interest
Nomenclature
Symbols  
Q  Second invariant of the rate-of-strain tensor 
u  Local flow velocity vector 
u′  Turbulent velocity fluctuation vector 
${U}_{c}$  Freestream (global) velocity (or centerline channel velocity) 
H  Half channel height 
u_{τ}  Friction velocity 
p  Pressure 
k  Turbulent kinetic energy 
ε  Dissipation 
ν  Kinematic molecular viscosity 
ν_{T}  Turbulent eddy viscosity 
d  Distance from the wall 
ω  Specific dissipation, ε/k 
y^{+}  Wall distance normalized by friction velocity, d${u}_{\tau}/\nu $ 
τ_{w}  Wall shear stress 
∇  Gradient operator 
S  Rateofstrain tensor 
Ω  Rotation tensor 
|S|  Magnitude of rate-of-strain tensor, $\sqrt{2\mathit{S}:\mathit{S}}$ 
|Ω|  Magnitude of rotation tensor, $\sqrt{2\mathit{\Omega}:\mathit{\Omega}}$ 
P  Production of turbulent kinetic energy 
Re  Reynolds number based on global flow variables, such as ${U}_{c}$ and geometry length 
Re_{t}  Turbulent Re based on distance from the wall, $\sqrt{k}d/\nu $ 
Re_{d}  Re based on distance from wall, d${U}_{c}$/ν 
Re_{l}  Reynolds number based on the local velocity and distance from the wall, $ud/\nu $ 
$\mathit{\tau}$  Shear stress tensor 
τ_{uv}  Turbulent shear stress component in xy plane, $\overline{{u}^{\prime}{v}^{\prime}}$ 
ML  Machine Learning 
$\cdot$  Dot product 
$:$  Double dot product 
Terminology  
Database  Curated DNS/LES datasets for ML training 
Response surface  Output from ML training 
Input features  Input flow variables for ML 
Output features  Flow variable for which the response surface is generated 
Query inputs  Input variables to query the response surface 
Query output  Output variables obtained from the query of the response surface 
Unseen case (flow)  Geometry (or flow condition) not used during ML training 
A-priori test  ML model is applied as a post-processing step 
A-posteriori test  ML model is coupled with the CFD solver and its prediction is used during runtime 
Appendix A
Reference  Response Surface  Turbulence Model  Input Features  Training Flows  Validation Case  Comments 

Parish and Duraisamy [9]  TKE production multiplier: $\beta(\eta_i)$ (Neural network)  k-ω RANS: $\frac{Dk}{Dt}=\beta \left(y\right){\nu}_{T}{\left(\frac{\partial \overline{u}}{\partial y}\right)}^{2}-{\alpha}^{*}k\omega +\frac{\partial}{\partial y}\left[\left(\nu +{\sigma}_{k}{\nu}_{T}\right)\frac{\partial k}{\partial y}\right]$  4 features: $\eta_i = \frac{\left|S\right|k}{\epsilon},\frac{\sqrt{k}d}{\nu},\frac{P}{\epsilon},{y}^{+}$  DNS: Plane channel flow, Re_{τ} = 180, 550, 950, 4200  Plane channel flow, Re_{τ} = 2000 

Singh et al. [10]  Turbulence production multiplier: $\beta(\eta_i)$ (Neural network)  Spalart–Allmaras RANS: $\frac{D\tilde{\nu}}{Dt}=\beta \left(x\right)\underbrace{P\left(\tilde{\nu},U\right)}_{\text{Production}}-\underbrace{D\left(\tilde{\nu},U\right)}_{\text{Destruction}}+\underbrace{T\left(\tilde{\nu},U\right)}_{\text{Transport}}$  5 features: $\eta_i = \left|\Omega \right|,\chi =\frac{{\nu}_{T}}{\nu},\frac{\left|S\right|}{\left|\Omega \right|},\frac{\tau}{{\tau}_{w}},\frac{P}{\epsilon}$  Experiment: lift coefficient (C_{L}) and surface pressure (C_{P}) for wind turbine airfoils S805, S809, S814, Re = 10^{6}, 2 × 10^{6}, 3 × 10^{6}  C_{L} and C_{P} for S809, α = 14°, Re = 2 × 10^{6}  
He et al. [11]  Adjoint equations for solution error; β distribution to minimize the error.  Velocities  Experiment: at several cross-sections in the flow.  Cylinder flow, Re = 2 × 10^{4}; Round jet, Re = 6000; Hump flow, Re = 9.4 × 10^{5}; Wall-mounted cube, Re = 10^{5} 
 
Yang and Xiao [12]  Transition timescale correction multiplier: β (Random forest, Neural network)  Transition model timescale for the first mode: $\tilde{\tau}_{nt1}=\beta {\tau}_{nt1}$  d, streamline curvature, $\frac{{d}^{2}\left|\mathsf{\Omega}\right|}{\left|U\right|}$, Q, ∇p  DNS: NLF(1)-0416 airfoil, α = 0° and 4°  NLF(1)-0416 airfoil, α = 2° and 6°; NACA 0012, α = 3° 

Ling et al. [13]  Coefficients of nonlinear stress terms: ${g}^{n}\left({\lambda}_{1},{\lambda}_{2},{\lambda}_{3},{\lambda}_{4},{\lambda}_{5}\right)$ (Deep neural network)  $\mathit{\sigma}={{\displaystyle \sum}}_{n=1}^{10}{g}^{n}{\mathit{T}}^{\left(n\right)}$; ${\mathit{T}}^{\left(n\right)}=\mathit{S}$; $\mathit{S}\cdot\mathit{\Omega}-\mathit{\Omega}\cdot\mathit{S}$; ${\left(\mathit{S}\cdot\mathit{S}\right)}^{*}$; ${\left(\mathit{\Omega}\cdot\mathit{\Omega}\right)}^{*}$; $\mathit{\Omega}\cdot\mathit{S}\cdot\mathit{S}-\mathit{S}\cdot\mathit{S}\cdot\mathit{\Omega}$; ${\left(\mathit{\Omega}\cdot\mathit{\Omega}\cdot\mathit{S}+\mathit{S}\cdot\mathit{\Omega}\cdot\mathit{\Omega}\right)}^{*}$; $\mathit{\Omega}\cdot\mathit{S}\cdot\mathit{\Omega}\cdot\mathit{\Omega}-\mathit{\Omega}\cdot\mathit{\Omega}\cdot\mathit{S}\cdot\mathit{\Omega}$; $\mathit{S}\cdot\mathit{\Omega}\cdot\mathit{S}\cdot\mathit{S}-\mathit{S}\cdot\mathit{S}\cdot\mathit{\Omega}\cdot\mathit{S}$; ${\left(\mathit{S}\cdot\mathit{S}\cdot\mathit{\Omega}\cdot\mathit{\Omega}+\mathit{\Omega}\cdot\mathit{\Omega}\cdot\mathit{S}\cdot\mathit{S}\right)}^{*}$; $\mathit{\Omega}\cdot\mathit{S}\cdot\mathit{S}\cdot\mathit{\Omega}\cdot\mathit{\Omega}-\mathit{\Omega}\cdot\mathit{\Omega}\cdot\mathit{S}\cdot\mathit{S}\cdot\mathit{\Omega}$. ^{*}Anisotropic component  Invariants of S and Ω: ${\lambda}_{1}=\mathit{S}:\mathit{S}$; ${\lambda}_{2}=\mathit{\Omega}:\mathit{\Omega}$; ${\lambda}_{3}=\left(\mathit{S}\cdot\mathit{S}\right):\mathit{S}$; ${\lambda}_{4}=\left(\mathit{\Omega}\cdot\mathit{\Omega}\right):\mathit{S}$; ${\lambda}_{5}=\left(\mathit{\Omega}\cdot\mathit{\Omega}\right):\left(\mathit{S}\cdot\mathit{S}\right)$  DNS/LES: Duct flow, Re = 3500; Channel flow, Re_{τ} = 590; normal (Re = 5000) and inclined (Re = 3000) jet in crossflow; Square cylinder, Re = 2000; Converging-diverging channel, Re_{τ} = 600  Duct flow, Re = 2000; Wavy channel, Re = 6850 

Wang et al. [14]  Stress prediction error: ${f}_{n}=\Delta \tau \left({q}_{n}\right)$; Δτ = τ^{RANS} − τ^{DNS/LES} (Random forest regression)  k-ε RANS model: τ^{RANS} + Δτ(${q}_{n}$)  10 features ${q}_{n}$: Q, k, $\sqrt{k}d/\nu $, $\left(\mathit{u}\cdot\nabla \right)p$, $\left(\mathit{u}\cdot\nabla \right)k$, k/ε, $\nabla p\cdot\nabla p$, $\mathit{u}\mathit{u}:\mathit{u}$, streamline curvature, etc.  DNS: Duct flow, Re = 2200, 2600 and 2900  Duct flow, Re = 3500 

DNS: Periodic hill, Re = 1400, 5600  Periodic hill, Re = 10,595  
DNS: Wavy channel, Re = 360; LES: Curved backward-facing step, Re = 13,200  
Wu et al. [15]  DNS: Periodic hill, Re = 1400, 2800, 5600; Curved backward-facing step, Re = 13,200; Converging-diverging channel, Re = 11,300; Backward-facing step, Re = 4900; Wavy channel, Re = 360  Periodic hill, Re = 10,595  
Wang et al. [16]  kω RANS model: τ^{RANS} + Δτ(${q}_{n}$)  47 features ${q}^{\left(n\right)}$ based on combinations of S, Ω, ∇k and ∇T  DNS: Flat-plate boundary layer for Ma = 2.5, 6 and 7.8, Re_{τ} ≈ 400  Flat-plate boundary layer, Ma = 8 
 
Wu et al. [17]  Eddy viscosity for linear stress: ${\nu}_{T}$; Nonlinear anisotropic stress: ${\mathit{b}}^{\perp}$ (Random forest regression)  $\mathit{\sigma}={\nu}_{T}\mathit{S}+{\mathit{b}}^{\perp}$  S, Ω, ∇k, ∇p, $\frac{\sqrt{k}d}{\nu}$, k, k/ε  DNS and RANS: Duct flow, Re = 2200; LES and RANS: Periodic hill, Re = 5600  Duct flow, Re = 3500, 1.25 × 10^{5}; Periodic hill (shallower), Re = 5600 

Yin et al. [18]  Stress prediction error: ${f}_{n}=\Delta \tau \left({q}_{n}\right)$; Δτ = τ^{RANS} − τ^{DNS}(Neural Network)  kω RANS model: τ^{RANS} + Δτ(${q}_{n}$)  47 features ${q}^{\left(n\right)}$ based on combination of S, Ω, ∇p and ∇k  DNS: Periodic hill with different steepness, L = 3.858 α +5.142, α = 0.8, 1.2, Re = 5600  Periodic hill, α = 0.5, 1, 1.5 (Re = 5600) 

Yang et al. [19]  τ_{w} = f(η_{i}) (Feedforward neural network)  Wall modeling for Lagrangian dynamic Smagorinsky (LES) model  η_{i}: wall-parallel velocity, d, grid aspect ratio and ∇p  DNS: Channel flow, Re_{τ} = 1000  Channel flow, Re = 1000 to 10^{10} 

Weatheritt and Sandberg [20]  Analytic function of anisotropic stress coefficients: β_{i} (Symbolic regression)  σ = −2ν_{T}S + 2k$\mathit{a}$; ${\nu}_{T}$ from k-ω model; $\mathit{a}={\beta}_{1}\left({I}_{1},{I}_{2}\right)\mathit{S}+{\beta}_{2}\left({I}_{1},{I}_{2}\right)\left\{\mathit{S}\cdot\mathit{\Omega}-\mathit{\Omega}\cdot\mathit{S}\right\}+{\beta}_{3}\left({I}_{1},{I}_{2}\right){\left(\mathit{S}\cdot\mathit{S}\right)}^{*}$  ${I}_{1}=\mathit{S}:\mathit{S}$; ${I}_{2}=\mathit{\Omega}:\mathit{\Omega}$  Hybrid RANS/LES: Duct flows (Re = 10^{4}, Ar = 3.3), and diffuser flow (Re = 10^{4}, Ar = 1)  Duct (Re = 10^{4}, Ar = 3.3 to 1), and diffuser (Re = 5000, 10^{4}, Ar = 1.7) 

Jiang et al. [21]  Model coefficients: C_{μ}, b_{mn}, c_{mn}, d_{mn} (Deep neural network)  RANS: $\mathit{\sigma}=2{\mathrm{C}}_{\mu}\frac{{k}^{2}}{\epsilon}\mathit{S}+{b}_{mn}\frac{{k}^{3}}{{\epsilon}^{2}}{\left(\mathit{S}\cdot\mathit{S}\right)}^{*}+{c}_{mn}\frac{{k}^{3}}{{\epsilon}^{2}}\left\{\mathit{S}\cdot\mathit{\Omega}-\mathit{\Omega}\cdot\mathit{S}\right\}+{d}_{mn}\frac{{k}^{3}}{{\epsilon}^{2}}{\left(\mathit{\Omega}\cdot\mathit{\Omega}\right)}^{*}$  $\frac{\left|S\right|k}{\epsilon}$  DNS: Plane channel flow, Re_{τ} = 1000, 1990, 2020, 4100  Plane channel flow, Re_{τ} = 650, 1000, 5200 

Xie et al. [22]  Model coefficients C_{1} and C_{2} for mixed SGS model (Neural networks)  LES, subgrid stresses and heat flux: $\mathit{\tau}={C}_{1}{\Delta}^{2}\left|S\right|\mathit{S}+{C}_{2}{\Delta}^{2}\left(\nabla \mathit{u}\cdot\nabla {\mathit{u}}^{T}\right)$  Vorticity magnitude, velocity divergence, $\left|\nabla \mathit{u}\cdot\nabla {\mathit{u}}^{T}\right|$, S, ∇T  DNS: Compressible isotropic turbulence, Re_{λ} = 260, Ma = 0.4, 0.6, 0.8, 1.02  Compressible isotropic turbulence, coarser grids 


Schmelzer et al. [23]  Analytic formulation of anisotropic stress (Symbolic regression)  RANS: $\mathit{\sigma}$ = f (S, $\mathit{S}\cdot\mathit{\Omega}-\mathit{\Omega}\cdot\mathit{S}$, ${\left(\mathit{S}\cdot\mathit{S}\right)}^{*}$, $\mathit{S}:\mathit{S}$, $\mathit{\Omega}:\mathit{\Omega}$)  S, Ω, k, τ  DNS: Periodic hill, Re = 1.1 × 10^{4}; Converging-diverging channel, Re = 1.26 × 10^{4}; Curved backward-facing step, Re = 1.37 × 10^{4}  Periodic hill, Re = 3.7 × 10^{4} 

Fang et al. [24]  Shear stress: τ_{uv} (Deep neural network)  RANS, τ_{uv}  du/dy, Re_{τ}, near-wall van Driest damping, spatial non-locality  DNS: Channel flow, Re_{τ} = 550, 1000, 2000, 5200  Channel flow (unseen data) 

Zhu et al. [25]  Turbulent eddy viscosity: ν_{T} (Radial basis function neural network)  RANS: τ = ν_{T}S  U, ρ, d, d^{2}Ω, velocity direction, vorticity, entropy, strain rate  SA RANS: NACA0012, α = 0°, 10°, 15°, Ma = 0.15, Re = 3 × 10^{6}; RAE2822, α = 2.8°, Ma = 0.73 and 0.75, Re = 6.2–6.5 × 10^{6}  Airfoil flows at different α 

King et al. [26]  Stress tensor τ  LES, subgrid stresses τ  U, p, filter width, resolved rateofstrain  DNS: Isotropic and sheared turbulence  Isotropic and sheared turbulence on coarse grids 

Gamahara and Hattori [27]  Stress tensor τ (Feedforward neural network)  LES, subgrid stresses τ  Different sets: S, d; S, Ω, d; ∇u, d; ∇u  DNS: Channel flow, Re_{τ} = 180, 400, 600 and 800  Channel flows at unseen flow conditions. 

Zhou et al. [28]  ∇u, Δ (filter width)  DNS: Isotropic decaying turbulence, Re_{λ} = 129 and 302  Isotropic decaying turbulence, Re_{λ} = 205 
 
Yuan et al. [29]  Stress tensor τ (Deconvolutional neural network)  LES, subgrid stresses τ  Filtered velocity  DNS: Isotropic decaying turbulence, Re_{λ} = 252  Isotropic decaying turbulence, Re_{λ} = 252 

Maulik et al. [30]  Subgrid term π (Artificial neural network)  LES, Subgrid term π  Vorticity, streamfunction, rateofstrain, vorticity gradient  DNS: Decaying 2D turbulence, Re = 3.2 × 10^{4}, 6.4 × 10^{4}  Decaying 2D turbulence 

Case  Reference  Flow Conditions  #Points  Distribution of Data Points  

Re_{τ}  Re_{c}  Sublayer, y^{+} < 6  Buffer Layer, 6 ≤ y^{+} ≤ 40  Log-Layer, y^{+} > 40  
Channel Flow (DNS)  
1  Iwamoto et al. [39]  109.4  1918  65  13  22  29 
2  191.8  3345.5  65  6  19  39  
3  150.18  2681.082  73  13  21  38  
4  297.9  5788.15  193  24  40  128  
5  395.76  7988.02  257  28  45  183  
6  642.54  13,843.3  193  16  27  149  
7  Alamo and Jimenez [40]  186.34  3406.97  49  7  13  28 
8  Moser et al. [41]  180.56  3298.5  96  17  25  53 
9  392.24  7896.97  129  14  23  91  
10  587.19  12,485.42  129  11  19  98  
11  Lee and Moser [42]  541.232  11,365.96  192  8  41  142 
12  Alamo and Jimenez [40]  546.74  11,476.1  129  12  19  97 
13  933.96  20,962.51  193  13  22  157  
14  Abe et al. [43]  1016.36  23,433.9  224  15  30  179 
15  Lee and Moser [42]  997.4  22,534.1  256  20  28  207 
16  1990.64  48,563.2  384  21  30  332  
17  Hoyas and Jimenez [44]  2004.3  48,683.87  317  8  17  291 
18  Bernardini et al. [45]  994.7  22,292.1  192  13  22  157 
19  2017.4  48,621.8  384  19  30  335  
20  4072.6  105,702.4  512  18  28  466  
21  Lee and Moser [42]  5180.73  137,679.2  768  13  32  722 
Flat-plate (DNS)  
Schlatter and Örlü [46]  Re_{θ}  Re_{τ}  #Points  Sublayer, y^{+} < 6  Buffer layer, 6 ≤ y^{+} ≤ 40  Log-layer, y^{+} > 40  
22  670  252.2550  513  13  19  481  
23  1000  359.3794  13  20  480  
24  1410  492.2115  13  20  480  
25  2000  671.1240  513  13  21  479  
26  3030  974.1849  14  21  478  
27  3270  1043.4272  14  21  478  
28  3630  1145.1699  14  21  478  
29  3970  1244.7742  14  22  477  
30  4060  1271.5350  14  22  478  
31  Jimenez et al. [47]  1100  445.4685  345  10  19  316 
32  1551  577.7820  10  20  315  
33  1968  690.4122  10  21  314  
34  Sillero et al. [48]  4000  1306.9373  535  10  19  506 
35  4060  1271.5350  14  22  499  
36  4500  1437.0660  10  19  506  
37  5000  1571.1952  14  19  502  
38  6000  1847.6544  10  19  502  
39  6500  1989.4720  10  19  502  
Flat-plate (LES)  
Re_{θ}  Re_{τ}  #Points  Sublayer, y^{+} < 6  Buffer layer, 6 ≤ y^{+} ≤ 40  Log-layer, y^{+} > 40  
40  Schlatter et al. [49]  670  257.1964  385  10  14  361 
41  1000  359.5164  9  14  362  
42  1410  491.7486  10  15  360  
43  2150  721.5341  10  14  361  
44  2560  839.5576  10  16  359  
45  3660  1162.2723  11  16  358  
46  4100  1286.7014  11  16  358  
47  EitelAmor et al. [50]  5000  1367.3586  512  10  15  487 
48  6000  1561.062  10  15  487  
49  7000  1750.5198  10  16  486  
50  8000  1937.3113  10  16  486  
51  9000  2118.0861  10  16  486  
52  10,000  2299.2119  10  16  486  
53  11,000  2478.9901  10  18  486  
Total  19,919  670 (3.4%)  1134 (5.7%)  18,115 (90.9%) 
Flow Parameters  High Frequency  Med. Frequency  Low Frequency 

Baseline flow Re_{τ,0}  350  
Baseline flow Re_{c,0}  7250  
Baseline flow centerline velocity U_{c}  1  
Half channel height H  1  
Kinematic viscosity ν  1.38 × 10^{−4}  
Domain size  3π × 2 × π  
Grid  192 × 129 × 192  
Baseline flow $\partial {P}_{0}/\partial x$  ${u}_{\tau ,0}^{2}$ = 0.002331  
Density ρ  1  
Baseline flow u_{τ,0}  0.048276  
α  200  50  8 
$\alpha \,d{P}_{0}/dx$  0.4662  0.11655  0.01865 
Nondimensional pulse frequency ${\omega}^{+}=\omega /{u}_{\tau ,0}^{2}$  0.04  0.01  0.0016 
Pulse frequency ω  0.67565  0.16891  0.02703 
Boundary layer thickness ${l}_{s}=\sqrt{2\nu /\omega}$  0.2021  0.4042  1.0106 
${l}_{s}{}^{+}=\sqrt{2{u}_{\tau ,0}^{2}/\nu \omega}=\sqrt{2/{\omega}^{+}}$  7.071  14.142  35.355 
$R{e}_{s}={U}_{o}\sqrt{2/\omega \nu}$  100  200  500 
$R{e}_{s}/{l}_{s}{}^{+}={U}_{o}/{u}_{\tau}$  $10\sqrt{2}$  
${U}_{o}/{U}_{c}$  0.03296  
Time step size (Δt)  0.0002325  0.00093  0.000969 
Timesteps per period (2π/ωΔt)  40,000  40,000  240,000 
Pressure pulse  $\frac{dP}{dx}=\frac{d{P}_{0}}{dx}\left[1+\alpha \mathrm{cos}\left(\omega t\right)\right]$ 
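As a consistency check, the derived pulse parameters in the table above follow directly from the listed baseline values; a minimal sketch for the low-frequency case (variable names are illustrative):

```python
import math

# Baseline values from the table above
nu = 1.38e-4          # kinematic viscosity
u_tau0 = 0.048276     # baseline friction velocity
omega_plus = 0.0016   # nondimensional pulse frequency (low-frequency case)

# Dimensional pulse frequency: omega = omega+ * u_tau0^2 / nu
omega = omega_plus * u_tau0 ** 2 / nu

# Stokes-layer thickness in wall units: l_s+ = sqrt(2 / omega+)
l_s_plus = math.sqrt(2.0 / omega_plus)
```

Evaluating these expressions reproduces the table's ω = 0.02703 and l_s⁺ = 35.355 for the low-frequency case, confirming that the nondimensionalization is internally consistent.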
Appendix B. Simplification of the Navier-Stokes Equations
References
 Brunton, S.L.; Noack, B.R.; Koumoutsakos, P. Machine Learning for Fluid Mechanics. Annu. Rev. Fluid Mech. 2020, 52, 477–508.
 Duraisamy, K.; Iaccarino, G.; Xiao, H. Turbulence Modeling in the Age of Data. Annu. Rev. Fluid Mech. 2019, 51, 357–377.
 Milano, M.; Koumoutsakos, P. Neural network modeling for near wall turbulent flow. J. Comput. Phys. 2002, 182, 1–26.
 Hocevar, M.; Sirok, B.; Grabec, I. A turbulent wake estimation using radial basis function neural networks. Flow Turbul. Combust. 2005, 74, 291–308.
 Jin, X.W.; Cheng, P.; Chen, W.L.; Li, H. Prediction model of velocity field around circular cylinder over various Reynolds numbers by fusion convolutional neural networks based on pressure on the cylinder. Phys. Fluids 2018, 30, 047105.
 Obiols-Sales, O.; Vishnu, A.; Chandramowlishwaran, A. CFDNet: A Deep Learning-Based Accelerator for Fluid Simulations. In Proceedings of the 34th ACM International Conference on Supercomputing, Barcelona, Spain, 29 June–2 July 2020.
 Edeling, W.N.; Cinnella, P.; Dwight, R.P.; Bijl, H. Bayesian estimates of parameter variability in the k-ε turbulence model. J. Comput. Phys. 2014, 258, 73–94.
 Ling, J.; Templeton, J. Evaluation of machine learning algorithms for prediction of regions of high Reynolds averaged Navier Stokes uncertainty. Phys. Fluids 2015, 27, 085103.
 Parish, E.J.; Duraisamy, K. A paradigm for data-driven predictive modeling using field inversion and machine learning. J. Comput. Phys. 2016, 305, 758–774.
 Singh, A.P.; Medida, S.; Duraisamy, K. Machine-learning-augmented predictive modeling of turbulent separated flows over airfoils. AIAA J. 2017, 55, 2215–2227.
 He, C.X.; Liu, Y.; Gan, L. A data assimilation model for turbulent flows using continuous adjoint formulation. Phys. Fluids 2018, 30, 105108.
 Yang, M.; Xiao, Z. Improving the k-ω-γ-A_{r} transition model by the field inversion and machine learning framework. Phys. Fluids 2020, 32, 064101.
 Ling, J.; Kurzawski, A.; Templeton, J. Reynolds averaged turbulence modelling using deep neural networks with embedded invariance. J. Fluid Mech. 2016, 807, 155–166.
 Wang, J.X.; Wu, J.L.; Xiao, H. Physics-informed machine learning approach for reconstructing Reynolds stress modeling discrepancies based on DNS data. Phys. Rev. Fluids 2017, 2, 034603.
 Wu, J.L.; Wang, J.X.; Xiao, H.; Ling, J. A priori assessment of prediction confidence for data-driven turbulence modeling. Flow Turbul. Combust. 2017, 99, 25–46.
 Wang, J.X.; Huang, J.; Duan, L.; Xiao, H. Prediction of Reynolds stresses in high-Mach-number turbulent boundary layers using physics-informed machine learning. Theor. Comput. Fluid Dyn. 2019, 33, 1–19.
 Wu, J.L.; Xiao, H.; Paterson, E. Physics-Informed Machine Learning Approach for Augmenting Turbulence Models: A Comprehensive Framework. Phys. Rev. Fluids 2018, 3, 074602.
 Yin, Y.; Yang, P.; Zhang, Y.; Chen, H.; Fu, S. Feature selection and processing of turbulence modeling based on an artificial neural network. Phys. Fluids 2020, 32, 105117.
 Yang, X.I.A.; Zafar, S.; Wang, J.X.; Xiao, H. Predictive large-eddy-simulation wall modeling via physics-informed neural networks. Phys. Rev. Fluids 2019, 4, 034602.
 Weatheritt, J.; Sandberg, R.D. The development of algebraic stress models using a novel evolutionary algorithm. Int. J. Heat Fluid Flow 2017, 68, 298–318.
 Jiang, C.; Mi, J.; Laima, S.; Li, H. A Novel Algebraic Stress Model with Machine-Learning-Assisted Parameterization. Energies 2020, 13, 258.
 Xie, C.; Wang, J.; Li, H.; Wan, M.; Chen, S. Artificial neural network mixed model for large eddy simulation of compressible isotropic turbulence. Phys. Fluids 2019, 31, 085112.
 Schmelzer, M.; Dwight, R.P.; Cinnella, P. Discovery of algebraic Reynolds stress models using sparse symbolic regression. Flow Turbul. Combust. 2020, 104, 579–603.
 Fang, R.; Sondak, D.; Protopapas, P.; Succi, S. Neural network models for the anisotropic Reynolds stress tensor in turbulent channel flow. J. Turbul. 2020, 21, 9–10.
 Zhu, L.; Zhang, W.; Kou, J.; Liu, Y. Machine learning methods for turbulence modeling in subsonic flows around airfoils. Phys. Fluids 2019, 31, 015105.
 King, R.N.; Hamlington, P.E.; Dahm, W.J.A. Autonomic closure for turbulence simulations. Phys. Rev. E 2016, 93, 031301.
 Gamahara, M.; Hattori, Y. Searching for turbulence models by artificial neural network. Phys. Rev. Fluids 2017, 2, 054604.
 Zhou, G.; He, G.; Wang, S.; Jin, G. Subgrid-scale model for large-eddy simulation of isotropic turbulent flows using an artificial neural network. Comput. Fluids 2019, 195, 104319.
 Yuan, Z.; Xie, C.; Wang, J. Deconvolutional artificial neural network models for large eddy simulation of turbulence. Phys. Fluids 2020, 32, 115106.
 Maulik, R.; San, O.; Rasheed, A.; Vedula, P. Subgrid modelling for two-dimensional turbulence using neural networks. J. Fluid Mech. 2019, 858, 122–144.
 Kutz, J.N. Deep learning in fluid dynamics. J. Fluid Mech. 2017, 814, 1–4.
 Bhushan, S.; Burgreen, G.W.; Martinez, D.; Brewer, W. Machine Learning for Turbulence Modeling and Predictions. In Proceedings of the ASME 2020 Fluids Engineering Division Summer Meeting FEDSM2020, Orlando, FL, USA, 12–16 July 2020.
 Bhushan, S.; Burgreen, G.W.; Bowman, J.; Dettwiller, I.; Brewer, W. Predictions of Steady and Unsteady Flows using Machine Learned Surrogate Models. In Proceedings of the 2020 IEEE/ACM Workshop on Machine Learning in High Performance Computing Environments (MLHPC) and Workshop on Artificial Intelligence and Machine Learning for Scientific Applications (AI4S), Supercomputing 2020, Online Conference, 12 November 2020.
 LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
 Lee, M.; Kim, H.; Joe, H.; Kim, H.G. Multi-channel PINN: Investigating scalable and transferable neural networks for drug discovery. J. Cheminform. 2019, 11, 46.
 Kingma, D.; Ba, J. Adam: A method for stochastic optimization. arXiv 2015, arXiv:1412.6980.
 Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707.
 Warsi, Z.U.A. Fluid Dynamics: Theoretical and Computational Approaches; CRC Press: Boca Raton, FL, USA, 1998.
 Iwamoto, K.; Kasagi, N.; Suzuki, Y. Direct Numerical Simulation of Turbulent Channel Flow at Re_{τ} = 2320. In Proceedings of the 6th Symposium on Smart Control of Turbulence, Tokyo, Japan, 6–9 March 2005.
 del Alamo, J.C.; Jimenez, J.; Zandonade, P.; Moser, R.D. Scaling of the Energy Spectra of Turbulent Channels. J. Fluid Mech. 2004, 500, 135–144.
 Moser, R.D.; Kim, J.; Mansour, N.N. Direct Numerical Simulation of Turbulent Channel Flow Up to Re_{τ} = 590. Phys. Fluids 1999, 11, 943–945.
 Lee, M.; Moser, R.D. Direct numerical simulation of turbulent channel flow up to Re_{τ} ≈ 5200. J. Fluid Mech. 2015, 774, 395–415.
 Abe, H.; Kawamura, H.; Matsuo, Y. Surface heat-flux fluctuations in a turbulent channel flow up to Re_{τ} = 1020 with Pr = 0.025 and 0.71. Int. J. Heat Fluid Flow 2004, 25, 404–419.
 Hoyas, S.; Jimenez, J. Scaling of the Velocity Fluctuations in Turbulent Channels up to Re_{τ} = 2003. Phys. Fluids 2006, 18, 011702.
 Bernardini, M.; Pirozzoli, S.; Orlandi, P. Velocity statistics in turbulent channel flow up to Re_{τ} = 4000. J. Fluid Mech. 2014, 742, 171–191.
 Schlatter, P.; Örlü, R. Assessment of direct numerical simulation data of turbulent boundary layers. J. Fluid Mech. 2010, 659, 116–126.
 Jimenez, J.; Hoyas, S.; Simens, M.P.; Mizuno, Y. Turbulent boundary layers and channels at moderate Reynolds numbers. J. Fluid Mech. 2010, 657, 335–360.
 Sillero, J.A.; Jimenez, J.; Moser, R.D. One-point statistics for turbulent wall-bounded flows at Reynolds numbers up to δ^{+} ≈ 2000. Phys. Fluids 2013, 25, 105102.
 Schlatter, P.; Li, Q.; Brethouwer, G.; Johansson, A.V.; Henningson, D.S. Simulations of spatially evolving turbulent boundary layers up to Re_{θ} = 4300. Int. J. Heat Fluid Flow 2010, 31, 251–261.
 Eitel-Amor, G.; Örlü, R.; Schlatter, P. Simulation and validation of a spatially evolving turbulent boundary layer up to Re_{θ} = 8300. Int. J. Heat Fluid Flow 2014, 47, 57–69.
 Martinez, D.; Brewer, W.; Strelzoff, A.; Wilson, A.; Wade, D. Rotorcraft virtual sensors via deep regression. J. Parallel Distrib. Comput. 2020, 135, 114–126.
 Rolnick, D.; Tegmark, M. The power of deeper networks for expressing natural functions. arXiv 2018, arXiv:1705.05502.
 Scotti, A.; Piomelli, U. Numerical Simulation of Pulsating Turbulent Channel Flow. Phys. Fluids 2001, 13, 1367–1384.
 Bhushan, S.; Walters, D.K. Development of a Parallel Pseudo-Spectral Solver Using the Influence Matrix Method and Application to Boundary Layer Transition. Eng. Appl. Comput. Fluid Mech. 2014, 8, 158–177.
 Bhushan, S.; Muthu, S. Performance and Error Assessment of Parallel Pseudo-Spectral Methods for Direct Numerical Simulations. Eng. Appl. Comput. Fluid Mech. 2019, 13, 763–781.
 Jamal, T.; Bhushan, S.; Walters, D.K. Numerical Simulation of Non-Stationary Turbulent Flows using Double Exponential Dynamic Time Filtering Technique. In Proceedings of the ASME 2020 Fluids Engineering Division Summer Meeting FEDSM 2020, Orlando, FL, USA, 12–16 July 2020.
 Muthu, S.; Bhushan, S. Temporal Direct Numerical Simulation for Flat-Plate Boundary Layer Bypass Transition. J. Turbul. 2020, 21, 311–354.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Bhushan, S.; Burgreen, G.W.; Brewer, W.; Dettwiller, I.D. Development and Validation of a Machine Learned Turbulence Model. Energies 2021, 14, 1465. https://doi.org/10.3390/en14051465