Uncertainty propagation through a point model for steady-state two-phase pipe flow

Uncertainty propagation is used to quantify the uncertainty in model predictions in the presence of uncertain input variables. In this study, we analyze a steady-state point model for two-phase gas-liquid flow. We present prediction intervals for holdup and pressure drop that are obtained from knowledge of the measurement error in the variables provided to the model. The analysis also uncovers which variables the predictions are most sensitive to. Sensitivity indices and prediction intervals are calculated by two different methods, Monte Carlo and polynomial chaos. The methods give similar prediction intervals, and they agree that the predictions are most sensitive to the pipe diameter and the liquid viscosity. However, the Monte Carlo simulations are more robust and require less computational time. The model predictions are also compared to experiments while accounting for uncertainty; the holdup predictions are accurate, but there is bias in the pressure drop estimates.


Introduction
Multiphase flow models are used in a range of applications, such as petroleum transport, nuclear energy and biomechanics. Accuracy in the model output is required for the models to be useful decision support tools. Consequently, there is rapid development in methods for quantifying the uncertainty in these models.
Lee and Chen [1] compared several types of uncertainty propagation methods, including Monte Carlo methods (MC), polynomial chaos expansions (PC), full-factorial numerical integration (FFNI) and univariate dimension reduction (UDR). They explain the relative strengths of each method, and one conclusion is that PC is the most viable in comparison to FFNI and UDR when input distributions are normal but output distributions are not. This is the situation in our analysis. Later, Cremaschi et al. [2] discussed applications of the methods from Lee and Chen [1] to multiphase flows. In the short term, they asked that vendors of multiphase simulators implement tools for propagating uncertainty and for reporting sensitivities and prediction intervals. They also requested that the simulators incorporate uncertainty in closure laws and that researchers accompany experimental data with uncertainty estimates. In the long term, Cremaschi et al. [2] recommended producing scale-up data to improve extrapolation in multiphase models. A case study is presented.

Flow model
The flow is categorized as either stratified, bubbly or slug flow. All cases are covered by a unit-cell model, as introduced in Dukler and Hubbard [17]. The holdup is the weighted average

H = s·H_s + (1 − s)·H_l, (1)

where s is the slug fraction, H_s is the slug zone holdup and H_l is the bubble zone holdup. Slug flow is illustrated in Figure 1. The model allows for gas bubbles in the slug. Note that s ≤ 0 corresponds to stratified flow and s ≥ 1 to bubbly flow; in the weighting, s is then limited to 0 or 1, respectively. The estimation procedure consists of two main steps: deciding the flow regime and computing the holdup conditional on that regime. In general, we need the conditional holdups in order to decide on the regime.
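As a minimal sketch of this unit-cell weighting (function and variable names are ours, not from the model code), with the slug fraction clamped as described:

```python
def unit_cell_holdup(s, H_s, H_l):
    """Average liquid holdup of the unit cell: H = s*H_s + (1 - s)*H_l.

    The slug fraction s is clamped to [0, 1], so s <= 0 collapses to
    stratified flow (holdup H_l) and s >= 1 to bubbly flow (holdup H_s).
    """
    s = min(max(s, 0.0), 1.0)
    return s * H_s + (1.0 - s) * H_l
```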

Slug zone
The slug zone holdup is computed first. We apply Gregory et al. [18] for low liquid viscosities and Kora et al. [19] for higher ones. It is convenient to introduce average superficial velocities, defined as

U_p = ṁ_p/(ρ_p·A), A = πD²/4, (2)

where the index p refers to the phase, with g for gas and l for liquid, ṁ_p is the mass rate and A is the pipe cross-section area. The slug zone holdup is

H_s = 1/(1 + 0.05·{U_m}^1.39)   if µ_l < 0.02,
H_s = 1                         if µ_l ≥ 0.02 and k ≤ 0.15,
H_s = 1.012·e^(−0.085k)         if µ_l ≥ 0.02 and 0.15 < k < 1.5,
H_s = 0.9473·e^(−0.041k)        if µ_l ≥ 0.02 and k ≥ 1.5, (3)

where k = U_m^1.2·D^−0.9·g^−0.7·µ_l^0.2·ρ_l^0.5·(ρ_l − ρ_g)^−0.7 and U_m = U_g + U_l is the mix velocity, with its numerical value denoted {U_m}. Using the slug zone holdup, we can easily check for bubbly flow: the holdup for bubbly flow is simply H = H_s, since the requirement for bubbly flow corresponds to a slug fraction greater than 1. In order to find the average holdup in slug flow, we need the bubble nose velocity, which is also empirical. Smith et al. [12] used a modified version of the function proposed in Bendiksen [20]. The bubble nose velocity is assumed linear in the mix velocity, with an intercept determined by the experiments conducted in Jeyachandra et al. [21]. The slope C_0 is interpolated between a laminar value and a turbulent value, as proposed in Nuland [22], with some additional restrictions; details are given in Section 2.1.5. The laminar and turbulent values involve the slug friction factor f_s^t defined later, and the turbulent value is as reported in Hinze [23]. Using this approach, the bubble nose velocity is

U_b = C_0·U_m + F·U_0,

where U_0 = cos θ·sqrt(D·g·(ρ_l − ρ_g)/ρ_l) and F = 0.53·exp(−13.7·D^−0.89·(g·ρ_l)^−0.33·(ρ_l − ρ_g)^−0.23·µ_l^0.46·σ^0.1).
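The slug zone holdup correlation, Equation (3), can be sketched in Python as follows (function and argument names are ours; SI units are assumed, so the viscosity threshold is 0.02 Pa s):

```python
import math

def slug_zone_holdup(U_m, D, mu_l, rho_l, rho_g, g=9.81):
    """Slug zone holdup, Eq. (3): Gregory et al. for low liquid
    viscosity, Kora et al. for mu_l >= 0.02 Pa s."""
    if mu_l < 0.02:
        return 1.0 / (1.0 + 0.05 * U_m**1.39)
    # Dimensionless group used by the high-viscosity branches.
    k = (U_m**1.2 * D**-0.9 * g**-0.7 * mu_l**0.2
         * rho_l**0.5 * (rho_l - rho_g)**-0.7)
    if k <= 0.15:
        return 1.0
    if k < 1.5:
        return 1.012 * math.exp(-0.085 * k)
    return 0.9473 * math.exp(-0.041 * k)
```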

Bubble zone
Let H_g = 1 − H_l be the gas fraction in the bubble zone. The liquid holdup in the bubble zone is the solution to the combined momentum balance for the two phases, that is, the H_l for which the wall and interface friction forces balance the pressure and gravity forces in both phases (Equation (7)). The perimeters S_p and S_i are defined in Figure 1, and the friction factors f_p and f_i are defined below. The superficial velocities u_p must be chosen according to the flow regime: they equal the average superficial velocities in Equation (2) for stratified flow, while they are functions of the bubble zone holdup for slug flow. Furthermore, the interface friction factor f_i is a modified version of the expression proposed in Andreussi and Persen [24], in which the line fraction h is approximated from the holdup, and we impose a minimum of f_i0 = f_g(ε = 0). The friction factors f_i0, f_g, f_l and f_s are interpolated between laminar and turbulent values, as described in Section 2.1.5. We use the Hagen-Poiseuille and Haaland formulas found in White [25],

f^l = 64/Re, (11a)
1/sqrt(f^t) = −1.8·log10[(ε/(3.7·D))^1.11 + 6.9/Re], (11b)

where the Reynolds number is based on the mix density ρ_m^p = (1 − H_p)·ρ_g + H_p·ρ_l, with p equal to l or s for the bubble zone or the slug zone, respectively. The comparative study in Brkić and Praks [26] suggests a more accurate and computationally efficient approximation of the Colebrook turbulent friction factor than Equation (11b). However, model tuning is not the main objective of this work; instead, we use the Haaland approximation to allow comparison to Smith et al. [12]. The friction factor is a small contributor to the computational cost of the point model, so it is not essential to find the most efficient approximation.
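The two friction formulas, in the Darcy convention as given in White (function names are ours; the laminar/turbulent interpolation of Section 2.1.5 is omitted here):

```python
import math

def friction_laminar(Re):
    """Hagen-Poiseuille (Darcy) friction factor for laminar flow."""
    return 64.0 / Re

def friction_haaland(Re, eps, D):
    """Haaland's explicit approximation to the Colebrook turbulent
    friction factor, with relative roughness eps/D."""
    inv_sqrt_f = -1.8 * math.log10((eps / D / 3.7)**1.11 + 6.9 / Re)
    return inv_sqrt_f**-2
```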

Slug fraction
The fraction of the unit cell covered by the slug is called the slug fraction, and it may be computed from the zone holdups and velocities (Equation (14)). Figure 2 shows a flowchart of the steady-state point-model solution procedure. The first step is to compute the slug zone holdup from Equation (3). Next, we determine the correct flow regime and average liquid holdup. Finally, the pressure drop is computed.

Pressure gradient
The pressure gradient is the weighted average

dp/dx = s·(dp/dx)_s + (1 − s)·(dp/dx)_b,

where the slug zone and bubble zone pressure gradients follow from the wall friction in each zone, with H_g and H_l given by the solution to Equation (7).

Interpolation by Reynolds number
Several dimensionless quantities g in the model are computed as g^l for laminar cases and as g^t for turbulent cases. By interpolation, we ensure that g(Re_p) is continuous, also in the transition from laminar to turbulent. Let the laminar region be Re_p < a, the transitional region a < Re_p < b and the turbulent region b < Re_p. A natural interpolation is

g = (1 − w)·g^l + w·g^t, w = (Re_p − a)/(b − a),

with w limited to the interval [0, 1]. We chose a = 1700 and b = 4000 for the friction factors in Equation (11), except for f_s. For f_s and the slopes in Equation (5), we use b = 3000 and a such that f^l(Re = a) = f^t(Re = a). However, using the weights w will not produce a smooth function g; the derivative of g with respect to Re is discontinuous at a and b. Thus far, we have outlined the model given by Smith et al. [12], but we suggest replacing the weights by

w′ = sin²(πw/2).

These weights make the derivative of g continuous as well. The change is demonstrated in Figure 3. The histograms in the left panel show the distribution of holdup estimates obtained by perturbing a certain set of inputs with a Reynolds number close to 3000: blue gives the holdup estimates using the original model with weights w, while orange gives the estimates obtained with the new weights w′. The right panel shows C_0(Re) in the transition from laminar to turbulent Reynolds numbers. We prefer the modified model because the distribution of estimates under small input perturbations is smoother; the distribution of pressure drop is nearly unchanged.
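The two weighting schemes can be sketched as follows (a minimal illustration with our own function name; the defaults are the friction-factor thresholds a = 1700 and b = 4000):

```python
import math

def blend(Re, g_lam, g_turb, a=1700.0, b=4000.0, smooth=True):
    """Interpolate a quantity g between its laminar and turbulent
    values over the transitional region a < Re < b.

    The linear weight w gives a continuous g but a kink in dg/dRe at
    a and b; the weight sin^2(pi*w/2) also makes the derivative
    continuous at both ends of the transition.
    """
    w = min(1.0, max(0.0, (Re - a) / (b - a)))
    if smooth:
        w = math.sin(math.pi * w / 2.0)**2
    return (1.0 - w) * g_lam + w * g_turb
```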

Uncertainty quantification
Section 2.1 describes how we can predict holdup or pressure drop from measured inputs. In this section, we explain how to compute the effect of measurement error in the input variables on the predictions. First, we discuss the measurement error in each input and output. The uncertainties are taken from Table 2 in Smith et al. [12] and follow-up discussions with the laboratory staff. Additional details regarding the uncertainty estimates can be found in Khaledi et al. [27]. The uncertainties should be understood as defined by the Guide to the Expression of Uncertainty in Measurement [28] and are quantified as one standard deviation. Below, we summarize the discussion of the measurement error in each input variable; we refer to Section 2.3 in Smith et al. [12] for details on the uncertainty in the output measurements.
Table 2. Uncertainty in the model variables given as one standard deviation.

Mass rate. The devices used to measure the mass rates have uncertainties relative to the measured value. The uncertainties are reported as 0.05 % of the measured value for liquid and 0.4 % of the measured value for gas.
Viscosity. The gas viscosity is found from reference data; the values for various gases are in the range 1·10⁻⁵ Pa s to 2·10⁻⁵ Pa s, and the viscosity depends only slightly on pressure and temperature. The uncertainty in the reference data is quoted as 2 %, and it is reasonable to use this value for the uncertainty in gas viscosity. Liquid viscosity is difficult to measure under relevant conditions and is significantly affected by temperature. If the viscosity of a hydrocarbon fluid is measured, a typical uncertainty is 3 % of the reading. The viscosity of a single-compound fluid such as water can be obtained from reference data; the uncertainty in water reference data in the relevant range is 0.5 %.
Density. For well-known gas compositions, the gas density can be calculated accurately from reference data. Alternatively, the density can be measured by weighing. In both cases, the uncertainty will typically be 0.2 kg/m³. This value also includes the effect of various degrees of saturation with vapors from the liquid present in the loop. The liquid density can be measured using Coriolis meters, and a reasonable uncertainty in such measurements is 1 kg/m³.
Pipe diameter. If the pipe diameter is obtained from the nominal diameter, the production tolerance must be used to infer the uncertainty. Typically, such an analysis will yield an uncertainty in diameter of 1 %. If the diameter is measured by filling experiments, an uncertainty in diameter of 0.2 % can be obtained. Note that the pipe diameter enters into many calculated quantities, often raised to a high power. This includes the superficial velocities and the hydraulic roughness. In the evaluation of the uncertainty in these quantities, the contribution from the uncertainty in the pipe diameter is not included. The contribution from error in pipe diameter is unique because it is the same for all experiments carried out in one particular test section.
Hydraulic roughness. The hydraulic roughness is inferred from single-phase liquid flow experiments, and the uncertainty in the roughness in the current case is 1 µm.
Surface tension. Surface tension is a parameter that can only be measured off-line. The actual value of the surface tension in situ is hardly known due to contamination and dynamic effects. The uncertainty in surface tension is set to 30 %.
Pipe inclination. The uncertainty in pipe inclination is estimated based on how the pipes are mounted. By inspection of the setup, we believe that the pipe can deviate 6 mm in the vertical direction over a section of 30 m. This corresponds to an uncertainty in the pipe inclination of 2 · 10 −4 rad.

Uncertainty propagation
Uncertainty propagation is a term for how the measurement error in each input is propagated through the model; for instance, whether the measurement error in the mass rates results in uncertainty in the estimated holdup. We can write the model as

Y = y(Z),

where Y is either holdup or pressure drop. Our model is represented as a function y, which takes the vector Z of random variables as input. The uncertainties in Z are propagated through the model y to produce a new random variable Y.
We can simulate the effect of measurement error by changing the inputs slightly and observing the change in the output. Repeating this many times yields a distribution for the output. The change in input represents the measurement error. We sample the measurement errors based on the uncertainties presented in the previous section and Table 2. We assume independent measurement errors from normal distributions with the standard deviations given in the table. All variables except the pipe inclination are truncated at zero.

Input sampling
The measurement error is sufficiently well simulated without true randomness. Instead, we use a classical quasi-random (low-discrepancy) sequence, denoted {z^(j)}, j = 1, …, n, where n is the sample size. The error in the estimated statistics then decays like 1/n, while the rate is only 1/√n for truly random sampling. Furthermore, the quasi-random sequence covers the input space almost uniformly, while a random sequence may have clusters and holes. Quasi-random normal samples of the inputs are generated by applying a copula to the sequence, that is, a transformation of uniform sequences to the target distribution. A dependency between the measurement errors in the inputs could easily be simulated by using a different copula.
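A sketch of such input sampling with SciPy's quasi-Monte Carlo module (a scrambled Halton sequence mapped through the normal inverse CDF; the function name, defaults and the placement of the zero-truncation are our assumptions):

```python
import numpy as np
from scipy.stats import norm, qmc

def sample_inputs(means, sds, n, seed=0):
    """Quasi-random samples of independent normal measurement errors,
    one column per input variable."""
    means, sds = np.asarray(means, float), np.asarray(sds, float)
    u = qmc.Halton(d=len(means), scramble=True, seed=seed).random(n)
    u = np.clip(u, 1e-12, 1.0 - 1e-12)   # guard against ppf(0) = -inf
    z = means + sds * norm.ppf(u)
    return np.maximum(z, 0.0)  # truncate at zero (the paper exempts pipe inclination)
```

Replacing `qmc.Halton` with `qmc.Sobol` gives another low-discrepancy sequence with the same interface.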

Statistics
The uncertainty analysis can be summarized by a few key figures. We have a good overview of the propagated uncertainty if we know the mean E[Y], the variance Var[Y] and the quantiles y_0.025, y_0.05, y_0.95 and y_0.975. Furthermore, we can list the contribution to Var[Y] from each input. If the input Z_i contributes much to Var[Y], we have much to gain from reducing the measurement error in Z_i. The reduction in Var[Y], if we could eliminate the measurement error in Z_i, is equal to

Var[E[Y | Z_i]],

and the relative reduction in output uncertainty is

S_i = Var[E[Y | Z_i]] / Var[Y],

which is known as the first-order sensitivity index proposed in Sobol [29]. The same article defines the total sensitivity index S_Ti, which also includes the interaction effects with other variables. The total index is the relative remaining output variance when we fix all inputs but Z_i, that is,

S_Ti = E[Var[Y | Z_~i]] / Var[Y],

where Z_~i are all inputs except Z_i. To summarize across experiments, we use the variance-weighted average

AS_i = Σ_c S_i^c · Var[Y_c] / Σ_c Var[Y_c],

where S_i^c is the first-order sensitivity index of input Z_i for case c, and Var[Y_c] is the output variance for case c.

Monte Carlo methods
Monte Carlo methods treat the model as a black box. We get estimates for the sensitivities by computing changes in the output for systematic changes to the input. This is done by dividing the sample into two parts: let the first half of the sample be the matrix A and the second half B.
Denote by A_B^(i) the matrix equal to A but with column i taken from B. The preferred estimators for the mean output and the output variance are the sample average and the unbiased sample variance. We use estimators for the sensitivity indices based on the best practices discussed in Saltelli et al. [30]. Since the work in Sobol [29], improvements have been proposed in Saltelli [31] and Sobol et al. [32], further improvements for the first-order indices are suggested in Saltelli et al. [30], and the total indices are estimated as proposed in Jansen [33]. The estimators are

Ŝ_i = [(2/n)·Σ_j y(B_j)·(y(A_B,j^(i)) − y(A_j))] / V̂,
Ŝ_Ti = [(1/n)·Σ_j (y(A_j) − y(A_B,j^(i)))²] / V̂,

where the sums run over the rows j = 1, …, n/2, V̂ is the sample variance of the outputs, and the quantiles are estimated from F_n, the empirical distribution of the outputs y(z^(j)), where 1 − α is the confidence level. The number of model evaluations with ten inputs is 5n for the matrices A_B^(i) and n/2 each for A and B. Thus, a total of 6n evaluations is required.
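These estimators can be sketched on a toy problem (our own minimal implementation; the linear model Y = 2·Z1 + Z2 with standard normal inputs has exact indices S = S_T = (0.8, 0.2) and serves as a check):

```python
import numpy as np

def sobol_indices(model, d, n_half, rng):
    """First-order (Saltelli et al. 2010) and total (Jansen 1999)
    sensitivity estimators from half-samples A and B of n_half rows."""
    A = rng.standard_normal((n_half, d))
    B = rng.standard_normal((n_half, d))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]), ddof=1)
    S, ST = np.empty(d), np.empty(d)
    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]                  # A with column i from B
        yAB = model(AB)
        S[i] = np.mean(yB * (yAB - yA)) / var
        ST[i] = 0.5 * np.mean((yA - yAB) ** 2) / var
    return S, ST

# Toy check: Y = 2*Z1 + Z2, so Var[Y] = 5 and S = ST = (0.8, 0.2).
rng = np.random.default_rng(0)
S, ST = sobol_indices(lambda Z: 2 * Z[:, 0] + Z[:, 1], 2, 20000, rng)
```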

Polynomial chaos
When the model y is not in a simple explicit form, directly computing the distribution of y(Z) is not feasible. However, we can first approximate the model by a simplified version, namely a polynomial expansion. This is known as the generalized polynomial chaos (gPC) expansion; an introduction to gPC is found in the book by Xiu [34]. Let the polynomial expansion be

y_p(Z) = Σ_{j=1}^{p} a_j·Φ_j(Z), (22)

where the a_j are coefficients found by regression and the Φ_j(Z) are orthonormal polynomials constructed by a three-term recursion. Orthonormality is not required but simplifies the estimators. We terminate the recursion when it reaches the desired polynomial order. A high polynomial order corresponds to a close approximation, but note that the number of polynomials p = (10 + order)!/(10!·order!) grows fast with the order. Next, we draw an input sample of size n as described in Section 2.2.3 and form the differences in output between the model and the expansion, ε_j = y(z^(j)) − y_p(z^(j)).
Ordinary least squares on these differences provides estimates â_1, …, â_p. Inserting these into Equation (22) gives an explicit representation of the flow model. Furthermore, estimates for the statistics in Section 2.2.4 are

Ẽ[Y] = â_1,  Ṽar[Y] = Σ_{j=2}^{p} â_j²,  S̃_i = Σ_{j∈A_i} â_j² / Ṽar[Y],

where A_i is the set of polynomials depending solely on z_i, and the quantiles are read from F_p, the empirical distribution of Y_p for ten thousand Monte Carlo samples, where 1 − α is the confidence level.
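A minimal polynomial chaos regression in NumPy (our own construction, not the Chaospy implementation used in this work): two standard normal inputs and the toy model y = z1 + z2², whose exact mean and variance are 1 and 3.

```python
import math
import numpy as np

def hermite_basis(z, order):
    """Orthonormal probabilists' Hermite polynomials phi_0..phi_order
    at z: the three-term recursion He_{n+1} = z*He_n - n*He_{n-1},
    normalized by sqrt(n!)."""
    He = [np.ones_like(z), z.copy()]
    for n in range(1, order):
        He.append(z * He[n] - n * He[n - 1])
    return [He[n] / math.sqrt(math.factorial(n)) for n in range(order + 1)]

# Toy model y(z1, z2) = z1 + z2**2 with Z ~ N(0, I).
rng = np.random.default_rng(1)
Z = rng.standard_normal((200, 2))
y = Z[:, 0] + Z[:, 1] ** 2

p1 = hermite_basis(Z[:, 0], 2)
p2 = hermite_basis(Z[:, 1], 2)
# Total-order-2 tensor basis: constant, z1-only, z2-only, interaction.
Phi = np.column_stack([p1[0], p1[1], p1[2], p2[1], p2[2], p1[1] * p2[1]])
a, *_ = np.linalg.lstsq(Phi, y, rcond=None)

mean = a[0]              # E[Y] is the constant coefficient
var = np.sum(a[1:] ** 2)  # Var[Y] from orthonormality of the basis
```

Because the toy model lies in the span of the basis, the regression recovers the statistics exactly; for the ten-input flow model, the same construction yields the estimators above.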

Simulations
The Monte Carlo simulations are initialized at 6000 samples and expanded by 30 % in each iteration until the estimates of Equation (21) converge. For both MC and PC, we define convergence as a change from the previous iteration of less than 0.01 for the sensitivity indices and a relative change of less than 0.01 for the mean, the variance and the quantiles.
For polynomial chaos, we first use order two and increase the order until the estimates of Equation (24) converge. For each order, we increase the sample size repeatedly by p + 1 until the fit on a test set no longer improves. The test set consists of 6006 combinations of inputs, and we deem the fit satisfactory when the mean absolute deviation of the fitted output between consecutive iterations changes by less than 20 %. This indicates that we have enough evaluations of the model for an accurate polynomial approximation.
The quasi-random sampling is most efficient if we first construct a large sample matrix and evaluate the point model on an incrementally larger subset when required. For PC, we construct a sample of size ten times the number of polynomials in the three-term recursion of order five. A sample size of 300,000 seems to suffice for the MC method.

Results
The input variables to the pipe flow model are listed in Table 1. From these variables, the point model predicts the liquid holdup (volume fraction) in the pipe and the pressure drop per meter. The input variables are taken from 240 gas-liquid experiments in a horizontal pipe at the SINTEF Multiphase Flow Laboratory. We compare the measured holdup and pressure drop with the results from the fluid model. The presented approach is implemented in Python 3.6, and the uncertainty analysis is based on the Python module Chaospy presented in Feinberg and Langtangen [35]. The uncertainty in each experiment is computed with Monte Carlo (MC) simulations and polynomial chaos (PC) expansions; details on the uncertainty methods are given in Section 2.2. Figure 4 shows the estimated average sensitivity for the holdup predictions. The sensitivities quantify how sensitive the holdup predictions are to each input variable; in other words, the contribution of the uncertainty in each input to the uncertainty in the holdup predictions. The estimated total sensitivity index and the estimated first-order index never differ by more than 0.02, so we use first-order indices in the plots and refer to them simply as sensitivities. In the left panel, we see the averages weighted by the variance in each experiment; the right panel gives the plain averages with standard errors. The combined effect of the liquid viscosity, the pipe diameter and the gas density accounts for ninety percent of the uncertainty in the holdup predictions. We have removed 7 of the 240 cases from the results because the polynomial chaos expansions for pressure drop do not converge with polynomial order. The criterion for convergence is that the estimates for the sensitivities, the output mean and the output variance change by less than 0.01 from one order to the next (relative change for the mean and the variance), and it must be met at order 5 at the latest.
Table 3 contains information about the pressure drop statistics for the seven cases that do not converge. The holdup statistics actually change by less than the threshold of 0.01, but we still exclude these results because we treat the pressure drop and the holdup as a joint variable in the simulations. For each case, we show the variable with the largest change from order 4 to 5 and the values of that variable for orders 3, 4 and 5. All seven cases are on the border between two regimes, meaning that the model changes regime depending on the sampled measurement error. The regimes assigned by the flow model are listed in the last column of the table.
Table 3. Seven cases where the polynomial chaos expansions for pressure drop do not converge with polynomial order. The first column denotes the variable with the maximal change in the last iteration (absolute change for the sensitivities and relative change for the mean and the variance). The values of that variable for polynomial orders 3-5 are given in the next columns, and the last column gives the regime assigned by the flow model.

Figure 5 provides a more refined view of the holdup sensitivities. Each panel gives the histogram of the sensitivity to one input, based on the 233 cases. We plot separate histograms for slug flow (blue) and stratified flow (red), and we compare MC (solid lines) to PC (dotted). There are only small deviations between the two methods. Half of the input variables have sensitivities consistently under 2 %. The holdup is moderately sensitive to the gas mass rate and the gas density, while it is highly sensitive to the pipe diameter and the liquid viscosity in some cases, but not in all.
We compare predicted holdup to measured holdup in Figure 6. Only cases with converging prediction intervals from both MC and PC are included. In the left panel, each experiment is drawn as a cross: the horizontal part represents the measurement error, and the (much smaller) vertical part represents the prediction uncertainty. More precisely, the crosses show the measured holdup with two standard deviations either way, together with the predicted mean and interval from the MC simulations. The standard deviation of the holdup measurements is set to 0.03. The diagonal line is where measurements and predictions are equal, and cases where the uncertainty box does not cover this line are highlighted. All cases with over-predicted holdup are observed slug flow; conversely, the under-predicted cases are stratified. The 90 % intervals do not cover the observations in 20 (10.3 %) out of 194 cases, while the 95 % intervals miss in 12 (11.4 %) out of 105 cases.
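The miss rates can be checked with a few lines (a sketch; the array names are ours):

```python
import numpy as np

def interval_miss_rate(measured, lower, upper):
    """Fraction of cases whose prediction interval fails to cover the
    measured value, e.g. 20/194 = 10.3 % for the 90 % intervals."""
    measured, lower, upper = map(np.asarray, (measured, lower, upper))
    miss = (measured < lower) | (measured > upper)
    return miss.mean()
```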
The right panels of Figure 6 show the relative difference between the PC predictions and the MC predictions. The mean holdup (solid line) is very similar, the upper quantile (dashed) is slightly larger and the lower quantile (dash-dotted) slightly smaller with PC. Thus, the PC predictions are overall similar to those of MC, but the intervals are wider.

Pressure drop
As for holdup, we summarize the pressure drop results in terms of sensitivity and prediction. The averaged sensitivities are given in Figure 7. The weighted average is similar to the plain average. The uncertainty in the diameter measurement is responsible for 90 percent of the uncertainty in the pressure drop predictions. The liquid viscosity also contributes, and in some low-variance cases, so does the pipe inclination. Figure 8 shows the sensitivities by regime. The distributions of sensitivities are similar for stratified flow and slug flow.

Computational cost
The computational cost of the Monte Carlo simulations comes mainly from evaluating the model many times. The polynomial chaos expansion requires fewer model evaluations but also involves large regressions to obtain the expansion coefficients. In Figure 10, the computation time (left) and the number of model evaluations (right) required for convergence are compared between MC and PC. The colors represent the criteria used for convergence: blue is convergence in the sensitivities only, while orange and green give the cost if we also require convergence of the prediction intervals at confidence level 90 % or 95 %, respectively. All cases converge for MC, while some do not for PC; the number of cases without convergence is given as an entry on the right-hand side. Notice the different scales on the axes for MC and PC.
The computation time is obtained from timing Python scripts on the Norwegian HPC infrastructure. We run one MC script and one PC script for each of the 240 cases and each of the three convergence criteria. We terminate each uncertainty analysis at convergence or after 15 hours. Thus, the maximum total computation time is 2 · 240 · 3 · 15 h = 900 days. However, because we can run hundreds of scripts in parallel and many cases finish early, the results are available after one day. We have also implemented the option of parallel model evaluations within each script but chose serial evaluation for this comparison.
(b) Polynomial chaos. Figure 10. Frequency histogram of computational cost across 240 experiments. Each color is one type of simulation criterion, namely only convergence in sensitivities (blue) or also 90 % prediction intervals (orange) or 95 % (green). The number of cases that did not converge is given as an entry on the right.

Discussion
The sensitivity estimates are similar using Monte Carlo or polynomial chaos. The averages AS_i give the clearest picture, as they measure how much uncertainty each input brings to the output estimates across all cases. From the first panels of Figures 4 and 7, we see that the pipe diameter and the liquid viscosity are important for both pressure drop and holdup, while the gas mass rate, gas density and pipe inclination matter only for the holdup estimates. We can use the sensitivity indices to reduce the uncertainty in the output estimates efficiently: the focus should be on reducing the measurement error of the most sensitive variables, in this case the pipe diameter and the liquid viscosity. Keep in mind that the sensitivity indices are not general but depend on the flow conditions; see Smith et al. [12] for a description of the experiments. The results cannot be extrapolated directly to different experiments. A new analysis is then required, but the methods described in Section 2 may be applied.
Also note that because the first-order and total indices are similar, there are no decisive interaction effects between the input uncertainties. Furthermore, we cannot conclude that the flow regime is important for the sensitivity estimates.
The sensitivities are similar across regimes. However, the moments of the polynomial chaos expansions do not converge with order for some cases on the boundary between regimes. Two cases are on the bubbly/slug boundary, and neither converges with PC. Among 24 cases on the slug/stratified boundary, five do not converge with PC. Thus, the current implementation of the PC expansions is unreliable on the regime boundaries. The expansions do not capture the behavior of the flow model well there because the model is not smooth on the boundaries. Adding higher-order terms would help the expansion resemble the non-smooth behavior, but this is not immediately possible due to the computational expense. It is possible to construct high-order approximations with low complexity by applying variable selection, but that approach is not applied here.
We have also explored the technical details of the uncertainty computations, specifically the performance of the Monte Carlo simulations compared to that of the polynomial chaos methods. For the fluid model in question, we clearly prefer MC because it provides uncertainty estimates for all cases, and it does so in the least amount of time. In contrast, PC fails in several cases and has a larger computation cost. The strength of PC lies in the low number of required model evaluations; compared to MC, polynomial chaos would likely perform better if each evaluation of the fluid model required more time.
We have compared the measured holdup with the predicted holdup, accounting for uncertainty in both, and applied the equivalent comparison to the pressure drop. The holdup predictions match the measured values well. We observe that the over-predicted cases are slug flow and the under-predicted cases are stratified. The predictions of the pressure drop are less accurate, with under-prediction for small values and over-prediction for large pressure drops. There is a clear bias in the estimates, which suggests that some physics is not captured by the model. The authors of Smith et al. [36] pointed to the over-prediction of the slug velocity coefficient C_0; the claim is supported by follow-up experiments, which they discuss in their Section 3.4.
In summary, estimates for sensitivities and output predictions using MC are similar to those of PC, and the pipe diameter and the liquid viscosity have the largest sensitivity indices. The Monte Carlo method is preferred because it is more robust and requires less time. This conclusion applies to the flow model used and the implementation of each uncertainty method. The uncertainty analysis also provides evidence that holdup predictions are accurate, while pressure drop predictions are biased.

Future research
We have seen that half of the input variables contribute less than 2 % to the output uncertainty in all cases. For polynomial chaos, it is possible to construct the polynomial approximation by prioritizing the important variables; one idea is to iteratively introduce higher-order polynomials in the significant variables. This way, we can reach a sufficiently high polynomial order without introducing too many regressors.
Applying the methods of Hoyer et al. [5] to create probability distributions for the closure laws would make the uncertainty analysis of the flow model more complete. Currently, the closure laws are treated as known. We believe it is possible to tune the closure-law distributions by comparing the output predictions with measurements. An applicable tuning method is minimum continuous ranked probability score (CRPS) estimation, as demonstrated in Gneiting et al. [37].
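For a normal predictive distribution, the CRPS has a closed form (Gneiting and Raftery), which such a tuning procedure would average over experiments and minimize; a minimal sketch:

```python
import math

def crps_normal(mu, sigma, x):
    """Closed-form CRPS of a normal predictive distribution
    N(mu, sigma^2) evaluated at the observation x."""
    z = (x - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf
                    - 1.0 / math.sqrt(math.pi))
```

A smaller average CRPS rewards predictive distributions that are both sharp and well calibrated, which is the property one would want when tuning closure-law distributions against measurements.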