Article

Estimation of Expectations and Variance Components in Two-Level Nested Simulation Experiments

by
David Fernando Muñoz
Department of Industrial and Operations Engineering, Instituto Tecnológico Autónomo de México, Rio Hondo 1, Mexico City 01080, Mexico
AppliedMath 2023, 3(3), 582-600; https://doi.org/10.3390/appliedmath3030031
Submission received: 7 May 2023 / Revised: 28 July 2023 / Accepted: 1 August 2023 / Published: 7 August 2023
(This article belongs to the Special Issue Trends in Simulation and Its Applications)

Abstract

When there is uncertainty in the value of the parameters of the input random components of a stochastic simulation model, two-level nested simulation algorithms are used to estimate the expectation of performance variables of interest. In the outer level of the algorithm, $n$ observations are generated for the parameters, and in the inner level, $m$ observations of the simulation model are generated with the parameters fixed at the values generated in the outer level. In this article, we consider the case in which the observations at both levels of the algorithm are independent and show how the variance of the observations can be decomposed into the sum of a parametric variance and a stochastic variance. Next, we derive central limit theorems that allow us to compute asymptotic confidence intervals to assess the accuracy of the simulation-based estimators for the point forecast and the variance components. Under this framework, we derive analytical expressions for the point forecast and the variance components of a Bayesian model to forecast sporadic demand, and we use these expressions to illustrate the validity of our theoretical results by performing simulation experiments with this forecast model. We found that, given a fixed total number of observations $nm$, the choice of only one replication in the inner level ($m = 1$) is recommended to obtain a more accurate estimator for the expectation of a performance variable.

1. Introduction and Notation

Simulation is usually regarded as a powerful tool for producing forecasts, evaluating risk (see, e.g., [1]), and animating and illustrating a system’s performance over time (see, e.g., [2]). When a component of a simulation model has a certain degree of uncertainty, it is said to be a random component, and it is modeled by using a probability distribution and/or a stochastic process that is sampled throughout the simulation run to produce a stochastic simulation. A random component typically depends on the value of certain parameters; we denote by θ a particular value for the vector of parameters of the random components of a stochastic simulation, and Θ denotes a random vector that corresponds to the parameter values when there is uncertainty in the values of these parameters.
Following the notation of [1], the output of a stochastic (dynamic) simulation can be regarded as a stochastic process $\{Y(s): s \geq 0; \Theta\}$, where $Y(s)$ is a random vector (of arbitrary dimension d) representing the state of the simulation at time $s \geq 0$. The term transient simulation applies to a dynamic simulation that has a well-defined termination time, so the output of a transient simulation can be viewed as a stochastic process $\{Y(s): 0 \leq s \leq T; \Theta\}$, where T is a stopping time (which may be deterministic); see, e.g., [3] for a definition of stopping time.
A performance variable W in a transient simulation is a real-valued random variable (r.v.) that depends on the simulation output up to time T, i.e., $W = f(Y(s), 0 \leq s \leq T; \Theta)$, and the expectation of a performance variable W is a performance measure that we usually estimate through experimentation with the simulation model. When there is no uncertainty in the parameters of the random components, the accepted technique for estimating a performance measure in transient simulation is the method of independent replications. This method consists of running experiments with the simulation model to produce n replications $W_1, W_2, \ldots, W_n$ that can be regarded as independent and identically distributed (i.i.d.) random variables (see Figure 1).
In the method of independent replications, a point estimator for the expectation $\alpha = E[W_1]$ is the average $\hat{\alpha}(n) = \sum_{i=1}^{n} W_i / n$. If $E[|W_1|] < \infty$, it follows from the classical Law of Large Numbers (LLN) that $\hat{\alpha}(n)$ is consistent, i.e., it satisfies $\hat{\alpha}(n) \Rightarrow \alpha$ as $n \to \infty$ (where $\Rightarrow$ denotes weak convergence of random variables); see, e.g., [3] for a proof. Consistency ensures that the estimator approaches the parameter as the number of replications n increases, and an asymptotic confidence interval (ACI) for the parameter is often used to evaluate the accuracy of the simulation-based estimator. Typically, a Central Limit Theorem (CLT) for the estimator is used to derive the expression for an ACI (see, for example, Chapter 3 of [4]). For the case of the expectation $\alpha$ in the algorithm of Figure 1, if $E[W_1^2] < \infty$, the classical CLT implies that
$$\frac{\sqrt{n}\,(\hat{\alpha}(n) - \alpha)}{\sigma} \Rightarrow N(0,1),$$
as $n \to \infty$, where $\sigma^2 = E[(W_1 - \alpha)^2]$, and $N(0,1)$ denotes an r.v. distributed as normal with a mean of 0 and a variance of 1. Then, if $E[W_1^2] < \infty$, it follows from (1) and Slutsky's Theorem (see Appendix A) that
$$\frac{\sqrt{n}\,(\hat{\alpha}(n) - \alpha)}{\hat{\sigma}(n)} \Rightarrow N(0,1),$$
as $n \to \infty$, where $\hat{\sigma}(n)$ denotes the sample standard deviation, i.e., $\hat{\sigma}^2(n) = \sum_{i=1}^{n}(W_i - \hat{\alpha}(n))^2/(n-1)$. This CLT implies that
$$\lim_{n \to \infty} P\big[\,|\hat{\alpha}(n) - \alpha| \leq z_{\beta}\,\hat{\sigma}(n)/\sqrt{n}\,\big] = 1 - \beta,$$
for $0 < \beta < 1$, where $z_{\beta}$ is the $(1 - \beta/2)$-quantile of a $N(0,1)$, so the CLT of Equation (2) is sufficient to establish a $100(1-\beta)\%$ ACI for $\alpha$ with the following halfwidth:
$$HW_{\alpha} = z_{\beta}\,\hat{\sigma}(n)/\sqrt{n}.$$
The standard measurement used in simulation software (e.g., Simio; see [2]) to evaluate the accuracy of $\hat{\alpha}(n)$ as an estimator of the expectation $\alpha$ is a halfwidth of this form.
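To make the preceding notation concrete, the following sketch (in Python, which is not used in the original study and is chosen here only for illustration) estimates $\alpha$ and a 90% ACI halfwidth by independent replications for a hypothetical performance variable; the function names and the toy demand model are assumptions, not part of the paper.

```python
import numpy as np

def independent_replications(simulate, n, rng):
    """Estimate alpha = E[W] from n i.i.d. replications and return the
    point estimate together with a 90% ACI halfwidth (beta = 0.10)."""
    w = np.array([simulate(rng) for _ in range(n)], dtype=float)
    alpha_hat = w.mean()
    s = w.std(ddof=1)          # sample standard deviation sigma_hat(n)
    z_beta = 1.645             # (1 - beta/2)-quantile of N(0, 1) for beta = 0.10
    return alpha_hat, z_beta * s / np.sqrt(n)

# Hypothetical performance variable: total demand over T time units of a
# Poisson arrival stream with a known (fixed) rate theta0; purely illustrative.
def simulate_total_demand(rng, theta0=2.0, T=15.0):
    n_arrivals = rng.poisson(theta0 * T)
    return rng.integers(1, 6, size=n_arrivals).sum()   # each customer orders 1..5 units

rng = np.random.default_rng(1)
print(independent_replications(simulate_total_demand, n=1000, rng=rng))
```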
The parameters of the random input components of a simulation model are typically estimated from real-data observations (denoted by a real-valued vector x), in contrast to an estimation of output performance measures that uses observations from simulation experiments. While the majority of applications covered in the related literature assume that there is no uncertainty in the value of input parameters, the uncertainty can be significant when these parameters are estimated by using small amounts of data. In these situations, Bayesian statistics can be used to incorporate this uncertainty in the output analysis of simulation experiments via the use of a posterior distribution p ( θ | x ) . A two-level nested simulation algorithm (see, e.g., [5,6,7]) is a methodology that is currently proposed for the analysis of simulation experiments under parameter uncertainty. In the outer level, we simulate n observations for the parameters from a posterior distribution p ( θ | x ) , while in the inner level, we simulate m observations for the performance variable with the parameters fixed at the value θ generated in the outer level. In this paper, we focus on the case where the observations at both levels of the algorithm are independent (as illustrated in Figure 2). We first show how the variance of a simulated observation can be decomposed into parametric and stochastic variance components, and then we obtain CLTs for the estimator of the point forecast and the estimators of the variance components. Our CLTs allow us to compute an ACI for each estimator. Our results are validated through experiments with a forecast model for sporadic demand reported in [8]. The main theoretical results reported in this paper were first stated in [9] (although the proofs were omitted), and in this paper, we provide the missing proofs, a more comprehensive literature review, and a more complete set of experiments with different values for the parameters of our experiments.
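A minimal sketch of the two-level scheme of Figure 2 is given below, assuming that a posterior sampler and a conditional simulator are available as user-supplied functions (both names are hypothetical placeholders).

```python
import numpy as np

def two_level_nested(sample_posterior, simulate_given_theta, n, m, rng):
    """Outer level: draw n parameter values theta_1, ..., theta_n from the
    posterior p(theta | x).  Inner level: for each theta_i, draw m
    conditionally independent observations W_i1, ..., W_im."""
    W = np.empty((n, m))
    for i in range(n):
        theta_i = sample_posterior(rng)                     # outer-level draw
        for j in range(m):
            W[i, j] = simulate_given_theta(theta_i, rng)    # inner-level draws
    return W                                                # n-by-m array of observations
```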
The existing literature on quantifying the impact of uncertainty on the input components of a stochastic simulation is very extensive; detailed reviews can be found, e.g., in [10,11,12] and the references therein. However, in order to situate our results within the framework of the bibliography related to the input analysis of simulation experiments, next, we present a brief discussion of the different approaches that have been proposed on this topic.
According to Barton et al. [13], input analysis in the simulation literature has been addressed essentially in two ways: sensitivity analysis and the characterization of the impact of input uncertainty to provide an ACI (to evaluate the accuracy of a point estimator) that explicitly considers this uncertainty. A sensitivity analysis is performed by running simulation experiments under different distributions for the random components and/or different parameters in order to investigate and describe the changes in the main performance measures of the simulation experiments (see [14,15] for early proposals). Although formalization of this approach was initially proposed by using techniques (e.g., design of experiments and/or regression; see [16]) that were previously proposed to analyze real-world experiments (see, e.g., [17]), some other techniques were proposed for the special purpose of simulation; for example, Freimer and Schruben [18] discussed methods for the design of experiments to search for the sample size of input data that ensured that the difference in the results of two simulation experiments was dominated by the stochastic variance (induced by the simulation experiments) so that the parametric variance (induced by input uncertainty) was not significant for decision making. As pointed out by Bruce Schmeiser in his discussion in [19], sensitivity analysis has a wide range of applications, since it can handle model uncertainty, as well as situations where no real-world data exist; however, a significant drawback is that it does not provide a statistical characterization of input uncertainty. This characterization can be achieved through the construction of an ACI that explicitly considers input uncertainty based on sample data.
According to several authors (e.g., [10,13]), for the construction of an ACI that explicitly considers the impact of input uncertainty, there have been essentially three approaches: the delta method, resampling, and Bayesian methods. Let us denote by $\eta(\theta)$ the expectation of $W_1$ (of Figure 1) as a function of $\theta$, and let $\hat{\theta}_r$ be an estimator for parameter $\theta$ (where r is the sample size of real-world observations); in this notation, the main idea of the delta method is to consider a Taylor series expansion for $\eta(\hat{\theta}_r)$ around $\theta$ to investigate the convergence properties (as $r \to \infty$) of $\eta(\hat{\theta}_r)$ as an estimator of $\eta(\theta)$. Cheng and Holland ([20,21,22]) introduced the use of the delta method to propose the construction of confidence intervals for the expectation of a performance variable of a stochastic simulation under uncertainty in the parameters of a proposed (known) parametric family of distributions for a random component. In [20,22], the authors did not formally prove the asymptotic validity of their proposed confidence intervals but justified them by appealing to the asymptotic normality (as $r \to \infty$) of the estimator $\hat{\theta}_r$ (which is the case, under regularity conditions, when $\hat{\theta}_r$ is the maximum likelihood estimator). In a later publication ([23]), Cheng and Holland provided a proof of the asymptotic validity of their confidence intervals based on the delta method under regularity conditions as $r \to \infty$ and $n \to \infty$. In [20,23], the authors also proposed the construction of asymptotic confidence intervals under parameter uncertainty based on a resampling technique known as parametric bootstrapping; this proposal basically consists of using the algorithm of Figure 1 but sampling the values for parameter $\theta$ from the likelihood evaluated at the maximum likelihood estimator. Some other proposals for the construction of an ACI are based on resampling from the empirical distribution of real-data observations (see, e.g., [13,24,25]). A drawback of the confidence intervals based on the delta method and resampling is that their asymptotic validity (see the Theorem of [23] and Theorem 1 of [13]) requires the sample size of real observations to be large ($r \to \infty$), a condition that is likely to hold for big data, in which case parameter uncertainty may not be significant. Another drawback of techniques based on the delta method and parametric bootstrapping is that parameter $\theta$ is assumed to be deterministic (although unknown), so that, at some point, the value of $\theta$ is replaced by $\hat{\theta}$; this is one reason why these methods are called frequentist in the statistics community. On the contrary, under a Bayesian approach, a parameter is regarded as a random variable $\Theta$, and the uncertainty about $\Theta$ is assessed through a posterior distribution $p(\theta|x)$ that explicitly incorporates available information from sample data x.
Bayesian methods have solid theoretical foundations (see, e.g., [26]), and they have been proposed to assess not only parameter uncertainty, but also model uncertainty (see [27]). Bayesian methods for input analysis in simulation experiments were introduced by Chick in [28], and since then, there has been a fair number of publications on Bayesian methods for input simulation analysis (see, e.g., [7,29,30,31]). Bayesian methods require the specification of a prior distribution on the input parameters of the simulation model, and some users object to this requirement; however, there is a well-developed theory on objective priors (see, e.g., [32]), and some authors (e.g., [10,33]) consider that this requirement is actually a strength of the approach. Another strength of Bayesian methods for input analysis in stochastic simulation is that some Bayesian methods have been developed to construct an ACI for parameters that are not the expectation—for example, the variance and quantiles for a consistent estimation of a credible interval for the performance variable W (see [34]). It is worth mentioning that some methods need the extra assumption that the simulation output satisfies a meta-model in order to justify the asymptotic validity of their proposed confidence intervals (e.g., [13,35]); as we will see, this extra assumption is not required to establish an ACI for the expectation and variance components of two-level simulation experiments in a Bayesian framework.
The organization of this article is as follows. After this introduction, our proposed methodologies for the computation of an ACI for the point forecast and the variance components in a two-level simulation experiment are then described, and the mathematical results that support the validity of the proposed ACIs are stated (the corresponding proofs are provided in Appendix A). In the next section, we illustrate our notation by obtaining the analytical solutions for the point forecast and the variance components of a Bayesian model to forecast sporadic demand. This solution is used in the next section to illustrate and support, through simulation experiments, the validity of the ACIs proposed in this article. In the final section, we summarize our findings and suggestions for future research.

2. Theoretical Results

To identify the parameters that we wish to estimate by using the two-level nested algorithm in Figure 2, we denote $\mu(\Theta) \stackrel{\mathrm{def}}{=} E[W_{11}|\Theta]$ and $\sigma^2(\Theta) \stackrel{\mathrm{def}}{=} E[W_{11}^2|\Theta] - \mu^2(\Theta)$. In this notation, the point forecast is the expectation $\alpha = E[\mu(\Theta)]$, and the variance of each $W_{ij}$ is:
$$V[W_{ij}] \stackrel{\mathrm{def}}{=} E[W_{ij}^2] - E[W_{ij}]^2 = E\big[E[W_{ij}^2|\Theta] - \mu(\Theta)^2\big] + E[\mu(\Theta)^2] - E[\mu(\Theta)]^2 = \sigma_S^2 + \sigma_P^2,$$
for $i = 1, \ldots, n$; $j = 1, \ldots, m$, where $\sigma_P^2 = V[\mu(\Theta)] \stackrel{\mathrm{def}}{=} E[\mu(\Theta)^2] - E[\mu(\Theta)]^2$ and $\sigma_S^2 = E[\sigma^2(\Theta)]$, where $\sigma^2(\Theta)$ was previously defined. We mention that, in the relevant literature, $\sigma_S^2$ is usually referred to as the stochastic variance, and $\sigma_P^2$ is usually referred to as the parametric variance.
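As a quick numerical sanity check of the decomposition $V[W_{ij}] = \sigma_S^2 + \sigma_P^2$, the following sketch uses a hypothetical normal-normal model (not the forecast model of this paper) in which both variance components are known in closed form.

```python
import numpy as np

# Hypothetical normal-normal example: Theta ~ N(mu0, tau^2) plays the role of the
# posterior and W | Theta ~ N(Theta, s^2), so mu(Theta) = Theta, sigma^2(Theta) = s^2,
# and therefore sigma_P^2 = tau^2 and sigma_S^2 = s^2.
rng = np.random.default_rng(0)
mu0, tau, s = 10.0, 2.0, 3.0
theta = rng.normal(mu0, tau, size=1_000_000)   # outer-level draws
w = rng.normal(theta, s)                       # one inner-level draw per theta
print(w.var(), tau**2 + s**2)                  # both values are close to 13.0
```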

2.1. Point Estimators

In this paper, we are interested in both the estimation of the point forecast $\alpha = E[\mu(\Theta)]$ and the estimation of the variance components, defined in (3), of every observation generated in the algorithm in Figure 2; thus, we first consider the natural point estimators
$$\hat{\alpha}(n) = \frac{1}{n}\sum_{i=1}^{n}\hat{\alpha}_i, \quad \hat{\sigma}_T^2(n) = \frac{1}{n-1}\sum_{i=1}^{n}\big(\hat{\alpha}_i - \hat{\alpha}(n)\big)^2, \quad \hat{\sigma}_S^2(n) = \frac{1}{n}\sum_{i=1}^{n}S_i^2,$$
where $\hat{\alpha}_i = m^{-1}\sum_{j=1}^{m}W_{ij}$ and $S_i^2 = (m-1)^{-1}\sum_{j=1}^{m}(W_{ij} - \hat{\alpha}_i)^2$, $i = 1, \ldots, n$. Note that the $\hat{\alpha}_i$s are i.i.d. with expectation $E[\hat{\alpha}_1] = \alpha$ and variance
$$\sigma_T^2 \stackrel{\mathrm{def}}{=} E[(\hat{\alpha}_1 - \alpha)^2] = m^{-2}\big(m\,E[(W_{11} - \alpha)^2] + m(m-1)\,E[(W_{11} - \alpha)(W_{12} - \alpha)]\big) = m^{-1}(\sigma_S^2 + \sigma_P^2) + m^{-1}(m-1)\sigma_P^2 = m^{-1}\sigma_S^2 + \sigma_P^2.$$
In addition, note that the $S_i^2$ values are i.i.d. with expectation $E[S_1^2] = \sigma_S^2$, $i = 1, \ldots, n$. Thus, the next proposition follows from the classical LLN.
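The estimators in (4) can be computed directly from the $n \times m$ array of observations produced by the algorithm in Figure 2; the sketch below is one possible implementation (the function name is an assumption).

```python
import numpy as np

def point_estimators(W):
    """Compute the estimators in (4) from the n-by-m array W returned by the
    two-level algorithm: alpha_hat(n), sigma_T^2-hat(n), and sigma_S^2-hat(n)."""
    n, m = W.shape
    alpha_i = W.mean(axis=1)              # inner-level averages alpha_hat_i
    alpha_hat = alpha_i.mean()
    sigma_T2_hat = alpha_i.var(ddof=1)    # sample variance of the alpha_hat_i
    S2_i = W.var(axis=1, ddof=1) if m >= 2 else None
    sigma_S2_hat = S2_i.mean() if m >= 2 else None   # requires m >= 2
    return alpha_hat, sigma_T2_hat, sigma_S2_hat
```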
Proposition 1.
If $m \geq 1$ and $E[W_{11}^2] < \infty$, then $\hat{\alpha}(n)$ and $\hat{\sigma}_T^2(n)$ are unbiased and consistent (as $n \to \infty$) estimators for $\alpha$ and $\sigma_T^2$ (as defined in (5)), respectively.
Furthermore, if $m \geq 2$ and $E[W_{11}^2] < \infty$, then $\hat{\sigma}_S^2(n)$ is an unbiased and consistent (as $n \to \infty$) estimator for $\sigma_S^2$ (as defined in (3)).

2.2. Accuracy of the Point Estimators

As we stated in Proposition 1, under mild assumptions, the point estimators proposed in (4) are consistent and, thus, converge to the corresponding parameter value (as $n \to \infty$). However, to determine the degree of accuracy of these estimators, we must establish a CLT for each estimator to derive a valid expression for the corresponding ACI. Note that both $\hat{\alpha}(n)$ and $\hat{\sigma}_S^2(n)$ are averages of i.i.d. observations; thus, the next proposition follows from the classical CLT for i.i.d. observations.
Proposition 2.
If $m \geq 1$ and $E[W_{11}^2] < \infty$, then
$$\frac{\sqrt{n}\,(\hat{\alpha}(n) - \alpha)}{\sigma_T} \Rightarrow N(0,1),$$
as $n \to \infty$.
Furthermore, if $m \geq 2$ and $E[W_{11}^4] < \infty$, then
$$\frac{\sqrt{n}\,(\hat{\sigma}_S^2(n) - \sigma_S^2)}{\sqrt{V_S}} \Rightarrow N(0,1),$$
as $n \to \infty$, where $\sigma_S^2$ is defined in (3), $\sigma_T^2$ is defined in (5), $\hat{\alpha}(n)$ and $\hat{\sigma}_S^2(n)$ are defined in (4), and $V_S = E[(S_1^2 - \sigma_S^2)^2]$, where $S_1^2$ is defined in (4).
Since we have consistent estimators for $\sigma_T^2$ and $V_S$ (under mild assumptions), the next corollary follows from Propositions 1 and 2 and Slutsky's Theorem; the details of the proof are given in Appendix A.
Corollary 1.
Under the same notation and assumptions as in Proposition 2, for $m \geq 1$, we have
$$\frac{\sqrt{n}\,(\hat{\alpha}(n) - \alpha)}{\sqrt{\hat{\sigma}_T^2(n)}} \Rightarrow N(0,1),$$
as $n \to \infty$, and for $m \geq 2$, we have
$$\frac{\sqrt{n}\,(\hat{\sigma}_S^2(n) - \sigma_S^2)}{\sqrt{\hat{V}_S(n)}} \Rightarrow N(0,1),$$
as $n \to \infty$, where $\hat{\sigma}_S^2(n)$ and $\hat{\sigma}_T^2(n)$ are defined in (4), and
$$\hat{V}_S(n) = \frac{1}{n-1}\sum_{i=1}^{n}\big(S_i^2 - \bar{S}^2\big)^2, \quad \bar{S}^2 = \frac{1}{n}\sum_{i=1}^{n}S_i^2.$$
In order to obtain a CLT for $\hat{\sigma}_T^2(n)$, note that this estimator is the sample variance of i.i.d. observations; thus, we can use the following lemma, whose proof (based on the delta method; see, e.g., Proposition 2 of [36]) is provided in Appendix A.
Lemma 1.
If $X_1, X_2, \ldots$ is a sequence of i.i.d. random variables with $E[X_1^4] < \infty$, then
$$\frac{\sqrt{n}\,(S^2(n) - \sigma_1^2)}{\sigma_2} \Rightarrow N(0,1),$$
as $n \to \infty$, where $\sigma_1^2 = \mu_2 - \mu_1^2$, $\sigma_2^2 = 8\mu_1^2\mu_2 - 4\mu_1^4 - 4\mu_1\mu_3 + \mu_4 - \mu_2^2$, $\mu_k = E[X_1^k]$, $k = 1, 2, 3, 4$; $S^2(n) = (n-1)^{-1}\sum_{i=1}^{n}(X_i - \hat{\mu}_1)^2$, and $\hat{\mu}_1 = n^{-1}\sum_{i=1}^{n}X_i$.
Note that, for $k = 1, 2, 3, 4$, $\hat{\mu}_k = n^{-1}\sum_{i=1}^{n}X_i^k$ is an unbiased and consistent estimator for $\mu_k$, so the next corollary follows directly from Lemma 1.
Corollary 2.
Under the same assumptions as those in Lemma 1, we have
$$\frac{\sqrt{n}\,(S^2(n) - \sigma_1^2)}{\sqrt{\hat{\sigma}_2^2(n)}} \Rightarrow N(0,1),$$
as $n \to \infty$, where $\hat{\sigma}_2^2(n) = 8\hat{\mu}_1^2\hat{\mu}_2 - 4\hat{\mu}_1^4 - 4\hat{\mu}_1\hat{\mu}_3 + \hat{\mu}_4 - \hat{\mu}_2^2$ and $\hat{\mu}_k = n^{-1}\sum_{i=1}^{n}X_i^k$.
Since $\hat{\sigma}_T^2(n)$ is the sample variance of $\hat{\alpha}_i$, $i = 1, \ldots, n$, the next corollary follows directly from Lemma 1.
Corollary 3.
If $m \geq 1$ and $E[W_{11}^4] < \infty$, then
$$\frac{\sqrt{n}\,(\hat{\sigma}_T^2(n) - \sigma_T^2)}{\sqrt{\hat{V}_T(n)}} \Rightarrow N(0,1),$$
as $n \to \infty$, where $\hat{V}_T(n) = 8\bar{\alpha}_1^2\bar{\alpha}_2 - 4\bar{\alpha}_1^4 - 4\bar{\alpha}_1\bar{\alpha}_3 + \bar{\alpha}_4 - \bar{\alpha}_2^2$ and $\bar{\alpha}_k = n^{-1}\sum_{i=1}^{n}\hat{\alpha}_i^k$, $k = 1, 2, 3, 4$.
Corollaries 1, 2, and 3 are the CLTs required to establish an ACI for the point forecast $\alpha$, the stochastic variance $\sigma_S^2$, and the variance $\sigma_T^2 = m^{-1}\sigma_S^2 + \sigma_P^2$, respectively. According to these corollaries, for $0 < \beta < 1$, the $100(1-\beta)\%$ ACIs are centered at the corresponding point estimator and have halfwidths given by:
$$HW_{\alpha} = z_{\beta}\sqrt{\frac{\hat{\sigma}_T^2(n)}{n}}, \quad HW_{\sigma_S^2} = z_{\beta}\sqrt{\frac{\hat{V}_S(n)}{n}}, \quad \text{and} \quad HW_{\sigma_T^2} = z_{\beta}\sqrt{\frac{\hat{V}_T(n)}{n}},$$
for $\alpha$, $\sigma_S^2$, and $\sigma_T^2$, respectively, where $\hat{\sigma}_T^2(n)$ is defined in (4), and $\hat{V}_S(n)$ and $\hat{V}_T(n)$ are defined in Corollary 1 and Corollary 3, respectively.
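A sketch of how the halfwidths in (6) could be computed from the simulated observations is given below; $\hat{V}_S(n)$ and $\hat{V}_T(n)$ follow Corollary 1 and Corollary 3, and $z_\beta = 1.645$ corresponds to the 90% ACIs used later in the experiments (the function name is an assumption).

```python
import numpy as np

def aci_halfwidths(W, z_beta=1.645):
    """Halfwidths of Equation (6) from an n-by-m array W (m >= 2 is needed
    for the sigma_S^2 interval); z_beta = 1.645 gives 90% ACIs."""
    n, m = W.shape
    alpha_i = W.mean(axis=1)
    S2_i = W.var(axis=1, ddof=1)
    sigma_T2_hat = alpha_i.var(ddof=1)
    V_S = S2_i.var(ddof=1)                                  # V_S-hat(n) of Corollary 1
    a1, a2, a3, a4 = (np.mean(alpha_i**k) for k in (1, 2, 3, 4))
    V_T = 8*a1**2*a2 - 4*a1**4 - 4*a1*a3 + a4 - a2**2       # V_T-hat(n) of Corollary 3
    return (z_beta*np.sqrt(sigma_T2_hat/n),   # HW_alpha
            z_beta*np.sqrt(V_S/n),            # HW_sigma_S^2
            z_beta*np.sqrt(V_T/n))            # HW_sigma_T^2
```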
As we have seen, by using the algorithm in Figure 2, we can build valid ACIs for the parameters of interest in this article. A relevant question, however, is how to distribute the computing effort between the two loops to obtain more accurate point estimators; that is, given a budget $k = nm$, which value of m provides the most accurate estimators? For the case of the estimation of the point forecast $\alpha$, we can answer this question, as we explain below. Since the point estimator $\hat{\alpha}(n)$ is an average of i.i.d. random variables, for fixed values of $k = nm$, it follows from Equation (5) that the variance of $\hat{\alpha}(n)$ is given by
$$n^{-1}\sigma_T^2 = k^{-1}\big(\sigma_S^2 + m\,\sigma_P^2\big),$$
which takes its minimal value when $m = 1$, suggesting that the point estimator $\hat{\alpha}(n)$ defined in (4) is more accurate when m is smaller (i.e., when it takes the value of 1). Note, however, that a small value of m is convenient from the point of view of running time (for a fixed budget $k = nm$) only when the computing time needed to generate a random variate from $p(\theta|x)$ in the algorithm in Figure 2 is negligible compared to the computing time needed to generate $W_{ij}$, as is the case in most real applications.
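As a small worked illustration of this allocation rule (with hypothetical variance components, not those of the forecast model), consider $\sigma_S^2 = 4$, $\sigma_P^2 = 1$, and a budget $k = nm = 1000$:
$$\frac{\sigma_T^2}{n} = \frac{\sigma_S^2 + m\,\sigma_P^2}{k} =
\begin{cases}
(4 + 1\cdot 1)/1000 = 0.005, & m = 1,\ n = 1000,\\
(4 + 5\cdot 1)/1000 = 0.009, & m = 5,\ n = 200,
\end{cases}$$
so, with this budget, $m = 1$ gives the smaller variance for $\hat{\alpha}(n)$.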
Note that the CLTs stated in Corollaries 1, 2, and 3 were obtained for a fixed value of m as $n \to \infty$ in the algorithm in Figure 2, which means that the accuracy of the corresponding point estimator increases as the number of observations in the outer level increases while m remains fixed. An interesting result is that we can also obtain a CLT for the point forecast $\alpha$ if we allow m to increase with n, as we state in the following proposition (a proof using the Lindeberg–Feller theorem is provided in Appendix A).
Proposition 3.
Given $0 < p \leq 1$, if $m = \lfloor n^{-1+1/p} \rfloor$ and $E[W_{11}^2] < \infty$, then
$$\frac{\sqrt{n}\,(\hat{\alpha}(n) - \alpha)}{\sqrt{\sigma_T^2}} \Rightarrow N(0,1),$$
as $n \to \infty$, where $\sigma_T^2$ is defined in (5) and, for any real number s, $\lfloor s \rfloor$ denotes the integer part of s.
Note that the last proposition implies that the ACI defined in Equation (6) for the point forecast $\alpha$ is also valid under the assumptions of Proposition 3. If, once again, we fix the total number of iterations in the algorithm in Figure 2 at $k = nm$, so that $n \approx k^p$ and $m \approx k^{1-p}$, as in Proposition 3, it follows from Equation (5) that the variance of $\hat{\alpha}(n)$ is $n^{-1}\sigma_T^2 \approx k^{-p}\big(k^{-(1-p)}\sigma_S^2 + \sigma_P^2\big) = k^{-1}\sigma_S^2 + k^{-p}\sigma_P^2$ for every $0 < p \leq 1$. In this case, for a fixed value of k, $n^{-1}\sigma_T^2$ reaches its minimum value when $p = 1$, that is, when $n = k$ and $m = 1$. However, note that we need $m \geq 2$ in order to estimate $\sigma_S^2$. In Section 4, we report some empirical results that confirm our theoretical results. It is worth mentioning that the case of $n = k$ and $m = 1$ has been reported in the literature as the posterior sampling algorithm (see, e.g., [34,37]).
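The budget split implied by Proposition 3 can be computed with a small helper like the one below (a hypothetical utility, not part of the original experiments); $p = 1$ recovers posterior sampling ($m = 1$), while $p = 2/3$ and $p = 1/2$ correspond to the choices compared in Section 4.

```python
def split_budget(k, p):
    """Hypothetical helper: split a total budget k = n*m into roughly n ~ k**p
    outer-level draws and m ~ k**(1-p) inner-level draws (p = 1 gives m = 1,
    i.e., posterior sampling)."""
    m = max(1, round(k ** (1.0 - p)))
    n = k // m
    return n, m

print(split_budget(10_000, 1.0), split_budget(10_000, 2/3), split_budget(10_000, 0.5))
```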

3. A Forecast Model for Inventory Management with an Analytical Solution

The following model was proposed in [8] to forecast sporadic demand by incorporating data on times between arrivals and customer demand, and uncertainty in the model parameters was incorporated by using a Bayesian approach. For this model, we will show analytical expressions for the performance measures defined in Section 2. These expressions are used in Section 4 to provide empirical evidence of the validity of the ACIs proposed in Section 2. This application example will also illustrate the notation that we used in the previous sections.
The arrivals of customers who enter a store to buy a certain item follow a Poisson process. There is uncertainty in the value of the arrival rate $\Theta_0$, although we assume that, given $\Theta_0 = \theta_0$, the times between customer arrivals are i.i.d. with exponential density:
$$f(y|\theta_0) = \begin{cases} \theta_0 e^{-\theta_0 y}, & y > 0, \\ 0, & \text{otherwise}, \end{cases}$$
where $\theta_0 \in S_{00} = (0, \infty)$. We assume that any client can order j units of this item with probability $\Theta_{1j}$, $j = 1, \ldots, q$, $q \geq 2$. Let $\Theta_1 = (\Theta_{11}, \ldots, \Theta_{1(q-1)})$ and $\Theta_{1q} = 1 - \sum_{j=1}^{q-1}\Theta_{1j}$; then, $\Theta = (\Theta_0, \Theta_1)$ is the vector of parameters, and $S_0 = S_{00} \times S_{01}$ is the parameter space, where $S_{01} = \{(\theta_{11}, \ldots, \theta_{1(q-1)}): \sum_{j=1}^{q-1}\theta_{1j} \leq 1;\ \theta_{1j} \geq 0,\ j = 1, \ldots, q-1\}$.
We are interested in forecasting the total demand for this item over a period of length T:
$$D = \begin{cases} \sum_{i=1}^{N(T)} U_i, & N(T) > 0, \\ 0, & \text{otherwise}, \end{cases}$$
where, for any $s \geq 0$, $N(s)$ is the number of customers who came to buy the item during the interval $[0, s]$, and $U_1, U_2, \ldots$ are the individual demands, which are assumed to be conditionally independent given $\Theta$. The vector of real-data observations is denoted by $x = (v, u)$ and consists of i.i.d. observations $v = (v_1, \ldots, v_r)$, $u = (u_1, \ldots, u_r)$ of past customers, where $v_i$ is the interarrival time between customer i and customer $(i-1)$, and $u_i$ is the number of units ordered by client i. By taking Jeffreys' non-informative prior as the prior density for $\Theta$, we obtain the posterior density (see [8] for details) $p(\theta|x) = p(\theta_0|v)\,p(\theta_1|u)$, where $\theta = (\theta_0, \theta_1)$, and
$$p(\theta_0|v) = \frac{\theta_0^{\,r-1}\big(\sum_{i=1}^{r}v_i\big)^{r} e^{-\theta_0\sum_{i=1}^{r}v_i}}{(r-1)!}, \quad p(\theta_1|u) = \frac{\big(1 - \sum_{j=1}^{q-1}\theta_{1j}\big)^{c_q - 1/2}\,\prod_{j=1}^{q-1}\theta_{1j}^{\,c_j - 1/2}}{B(c_1 + 1/2, \ldots, c_q + 1/2)},$$
where $c_j = \sum_{i=1}^{r} I[u_i = j]$, and $B(a_1, \ldots, a_q) = \prod_{j=1}^{q}\Gamma(a_j)/\Gamma\big(\sum_{j=1}^{q}a_j\big)$ for $a_1, \ldots, a_q > 0$. With this notation, we can show that (see [1] for details)
$$\alpha = E[T\Theta_0]\sum_{j=1}^{q} j\,p_j,$$
$$\sigma_P^2 = \frac{E[T^2\Theta_0^2]}{q_0 + 1}\sum_{j=1}^{q} j^2 p_j + \frac{E[T\Theta_0]^2\big[(q_0/r) - 1\big]}{q_0 + 1}\Big(\sum_{j=1}^{q} j\,p_j\Big)^2,$$
$$\sigma_S^2 = E[T\Theta_0]\sum_{j=1}^{q} j^2 p_j,$$
where $E[T\Theta_0] = T r\big(\sum_{i=1}^{r}v_i\big)^{-1}$, $E[T^2\Theta_0^2] = T^2 r(1+r)\big(\sum_{i=1}^{r}v_i\big)^{-2}$, $p_j = q_j/q_0$, $q_j = c_j + 1/2$, $j = 1, \ldots, q$, $q_0 = \sum_{j=1}^{q}q_j$, and $c_j$ is defined in (10).
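Under these formulas, the outer level of the algorithm in Figure 2 reduces to sampling $\theta_0$ from a gamma distribution with shape r and rate $\sum_{i=1}^{r}v_i$ and $\theta_1$ from a Dirichlet distribution with parameters $c_j + 1/2$, while the inner level simulates the compound Poisson demand D. The sketch below illustrates both steps; the function names are assumptions.

```python
import numpy as np

def sample_posterior(v, c, rng):
    """Draw theta = (theta0, theta1) from the posterior above: theta0 | v is
    gamma with shape r and rate sum(v); theta1 | u is Dirichlet(c_j + 1/2)."""
    r = len(v)
    theta0 = rng.gamma(shape=r, scale=1.0 / np.sum(v))
    theta1 = rng.dirichlet(np.asarray(c, dtype=float) + 0.5)
    return theta0, theta1

def simulate_demand_given_theta(theta, T, rng):
    """Inner-level draw of the total demand D over [0, T] for a fixed theta."""
    theta0, theta1 = theta
    n_customers = rng.poisson(theta0 * T)
    if n_customers == 0:
        return 0
    units = rng.choice(np.arange(1, len(theta1) + 1), size=n_customers, p=theta1)
    return units.sum()
```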

4. Empirical Results

To validate the ACIs proposed in (6), we conducted some experiments with the Bayesian model of Section 3 to illustrate the estimation of $\alpha$, $\sigma_S^2$, and $\sigma_T^2$. We considered the values $T = 15$, $r = 20$, $\sum_{i=1}^{r}v_i = 10$, $q = 5$, $c_1 = 5$, $c_2 = 3$, $c_3 = 2$, $c_4 = 3$, and $c_5 = 7$. With these data, the point forecast is $\alpha \approx 95.333$, and the variance components are $\sigma_S^2 \approx 380.667$ and $\sigma_P^2 \approx 558.598$. Note that $\sigma_S^2 < \sigma_P^2$ in this case, so we also ran the same experiments with $T = 5$, for which $\alpha \approx 31.778$, $\sigma_S^2 \approx 126.889$, and $\sigma_P^2 \approx 62.066$, so that $\sigma_S^2 > \sigma_P^2$ in the latter case. The empirical results that we report below illustrate a typical behavior that we would expect to observe with any other feasible dataset.
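The point forecast and variance components above can be reproduced by evaluating the closed-form expressions of Section 3 with the stated data; the sketch below does this for $T = 15$ (changing T rescales $\sigma_S^2$ linearly and $\sigma_P^2$ quadratically).

```python
import numpy as np

# Closed-form point forecast and variance components of Section 3, evaluated
# for the data of the first experiment (T = 15, r = 20, sum(v_i) = 10).
T, r, sum_v = 15.0, 20, 10.0
c = np.array([5, 3, 2, 3, 7], dtype=float)
j = np.arange(1, len(c) + 1)
q = c + 0.5
q0 = q.sum()                          # 22.5
p = q / q0
E_Ttheta0 = T * r / sum_v             # E[T * Theta_0]
E_T2theta02 = T**2 * r * (r + 1) / sum_v**2

alpha = E_Ttheta0 * np.sum(j * p)                 # ~95.333
sigma_S2 = E_Ttheta0 * np.sum(j**2 * p)           # ~380.667
sigma_P2 = (E_T2theta02 / (q0 + 1)) * np.sum(j**2 * p) \
           + (E_Ttheta0**2 * (q0 / r - 1) / (q0 + 1)) * np.sum(j * p)**2
print(alpha, sigma_S2, sigma_P2)                  # sigma_P2 ~558.6 with these data
```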
In each of the estimation experiments carried out for this research, we considered 1000 independent replications of the algorithm in Figure 2 for different numbers of observations in the outer loop (n) and in the inner loop (m); in each replication, we computed the point estimators for $\alpha$, $\sigma_S^2$, and $\sigma_T^2$, as well as the corresponding halfwidths of 90% ACIs according to Equation (6). Because we are estimating parameters whose values we know a priori, we can report (for a given n and m) the empirical coverage (i.e., the proportion of independent replications in which the corresponding ACI covers the true value of the parameter), the average and the standard deviation of the halfwidths, and the square root of the empirical mean squared error defined by
$$RMSE = \sqrt{\frac{1}{n_0}\sum_{i=1}^{n_0}\big(\hat{\theta}_i - \theta\big)^2},$$
where $\hat{\theta}_i$ is the value obtained in replication i for the point estimation of a parameter $\theta$, $i = 1, 2, \ldots, n_0$ (we set the number of replications to $n_0 = 1000$).
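A sketch of the bookkeeping used in these experiments (empirical coverage, average halfwidth, and RMSE over the $n_0$ replications) could look as follows; it assumes that the point estimates and halfwidths of each replication have already been collected into arrays (the function name is an assumption).

```python
import numpy as np

def coverage_and_rmse(true_value, estimates, halfwidths):
    """Empirical coverage, average halfwidth, and RMSE over the n0 independent
    replications (estimates and halfwidths are arrays of length n0)."""
    estimates = np.asarray(estimates, dtype=float)
    halfwidths = np.asarray(halfwidths, dtype=float)
    covered = np.abs(estimates - true_value) <= halfwidths
    rmse = np.sqrt(np.mean((estimates - true_value) ** 2))
    return covered.mean(), halfwidths.mean(), rmse
```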
In the first set of experiments, we considered $nm = 240$, $2400$, and $24{,}000$ and $m = 1, 2, 3, 4, 5$ for each value of $nm$ in order to compare the effect of increasing the number of observations in the inner loop for a given value of $nm$. The main results of this set of experiments are summarized in Figure 3, Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8. Note that $m = 1$ is not considered in Figure 5 and Figure 6 because $m \geq 2$ is required to construct an ACI for the stochastic variance $\sigma_S^2$.
In Figure 3, we illustrate the performance measures for the quality of the estimation procedure that we obtained for the estimation of the point forecast $\alpha$ when $T = 15$. As we can observe in Figure 3, the coverages are acceptable (very close to the nominal value of 0.9, even for $nm = 240$). These results validate the ACI defined in (6) for the point forecast $\alpha$. We also observe in Figure 3 that the RMSE, average halfwidth, and standard deviation of the halfwidths improve (decrease) as the number of observations in the outer loop (n) increases, as suggested by Corollary 1. Note also in Figure 3 that a smaller value of m provides smaller RMSEs, average halfwidths, and standard deviations of the halfwidths, thus validating our suggestion that m should be as small as possible to improve the accuracy in the estimation of $\alpha$. In Figure 4, we illustrate the corresponding results for $T = 5$, where we can observe that the same conclusions mentioned for $T = 15$ hold; the main difference from the previous case is that the RMSE, average halfwidths, and standard deviation of the halfwidths are smaller, which is consistent with the fact that the point forecast $\alpha$ is smaller than in the case in which $T = 15$. Note that, both in the case of Figure 3 and in the case of Figure 4, the RMSE, average halfwidth, and average standard deviation of the halfwidths seem to be a linear function of m.
In Figure 5, we illustrate the performance measures for the quality of the estimation procedure that we obtained for the estimation of the stochastic variance $\sigma_S^2$ when $T = 15$. As we can observe in Figure 5, the coverages are acceptable (very close to the nominal value of 0.9, even for $nm = 240$). These results validate the ACI defined in (6) for the stochastic variance $\sigma_S^2$. We also observe in Figure 5 that the RMSE, average halfwidth, and standard deviation of the halfwidths improve (decrease) as the number of observations in the outer level (n) increases, as suggested by Corollary 2. However, contrary to what we observed for the estimation of $\alpha$, a larger value of m provides smaller RMSEs, average halfwidths, and standard deviations of the halfwidths, suggesting that, for a fixed value of $nm$, the quality of the estimation of the stochastic variance $\sigma_S^2$ improves as the number of observations in the inner loop (m) increases, although the values are very close for $nm = 2400$ and $24{,}000$. In Figure 6, we illustrate the corresponding results for $T = 5$, where we can observe that the same conclusions as those mentioned for $T = 15$ hold; the main difference from the previous instance is that the RMSE, average halfwidths, and standard deviation of the halfwidths are now smaller, which is consistent with the fact that the point forecast $\alpha$ is smaller than in the case $T = 15$. Contrary to what we observed for the estimation of $\alpha$, note that, both in the case of Figure 5 and in the case of Figure 6, the RMSE, average halfwidth, and average standard deviation of the halfwidths do not seem to be a linear function of m. We emphasize that the case in which $m = 1$ is not considered in Figure 5 and Figure 6 because $\sigma_S^2$ cannot be estimated when $m = 1$.
For the estimation of the total variance $\sigma_T^2$ (illustrated in Figure 7 for $T = 15$ and in Figure 8 for $T = 5$), we obtained results for the quality of the estimation that were similar to those for the estimation of the point forecast $\alpha$, except that larger values of n were required to obtain reliable coverages. As we can observe in Figure 7 and Figure 8, the coverages are acceptable (very close to the nominal value of 0.9) only for $nm = 2400$ and $24{,}000$. These results validate the ACI defined in (6) for the total variance $\sigma_T^2$. We can also observe in Figure 7 and Figure 8 that the RMSE, average halfwidth, and standard deviation of the halfwidths improve (decrease) as the number of observations in the outer loop (n) increases, as suggested by Corollary 3. Note also in Figure 7 that a smaller value of m provides smaller RMSEs, average halfwidths, and standard deviations of the halfwidths. However, for the case in which $T = 5$ (illustrated in Figure 8), where $\sigma_S^2 > \sigma_P^2$, we observe that the RMSE and the average halfwidth decrease from $m = 1$ to $m = 2$ and then increase, showing that the best value of m for the estimation of $\sigma_T^2$ depends on the value of the ratio $\sigma_S^2/\sigma_P^2$.
In a second set of experiments, we considered $nm = 100$, $1000$, and $10{,}000$, with $m = 1$, $m \approx (nm)^{1/3}$, and $m \approx (nm)^{1/2}$ for each value of $nm$, to compare the quality of the estimation procedure using the value of m that we suggested as appropriate for the estimation of the point forecast $\alpha$ with other choices of p and to illustrate the validity of Proposition 3.
Note that $m \approx (nm)^{1/3}$ is equivalent to $p = 2/3$ in Proposition 3, and $m \approx (nm)^{1/2}$ corresponds to $p = 1/2$. Note also that $m \approx (nm)^{1/3}$ corresponds to the value of m suggested in [5], which is a good option for the case of biased estimation in the inner loop of the algorithm in Figure 2. The results of this set of experiments are summarized in Figure 9, Figure 10, Figure 11 and Figure 12. Note that we do not consider the estimation of the stochastic variance $\sigma_S^2$ in this set of experiments because $m \geq 2$ is required to construct an ACI for $\sigma_S^2$. Note also that, writing $a = 100$, $b = 1000$, and $c = 10{,}000$, we have $a^{1/3} \approx 5$, $c^{1/3} \approx 20$, and $b^{1/2} \approx 32$, and we use the same color (red) for $m = a^{1/3}$, $b^{1/3}$, and $c^{1/3}$, as well as the same color (yellow) for $m = a^{1/2}$, $b^{1/2}$, and $c^{1/2}$, to report our results in Figure 9, Figure 10, Figure 11 and Figure 12.
In Figure 9 and Figure 10, we illustrate the performance measures for the quality of the estimation procedure that we obtained for the estimation of the point forecast $\alpha$ in our second set of experiments. As we can observe in Figure 9 and Figure 10, the coverages are acceptable (very close to the nominal value of 0.9, even for $n = 100$). These results validate the ACI defined in (6) for the point forecast $\alpha$ as well as the ACI suggested by Proposition 3. We can also observe in Figure 9 and Figure 10 that the RMSE, average halfwidth, and standard deviation of the halfwidths are worse than those for $m = 1$ when $m \approx (nm)^{1/3}$, and they are even worse when $m \approx (nm)^{1/2}$, thus confirming the result of Proposition 3 that, for a fixed number of simulated observations $k = nm$, a smaller value of m produces a better point estimator for $\alpha$.
Finally, in Figure 11 and Figure 12, we show our results for the second set of experiments and the estimation of the total variance σ T 2 .
In Figure 11 and Figure 12, we found results similar to those for the estimation of the point forecast $\alpha$: the coverages were very good (even for $n = 100$), and all measures of the quality of the point estimation (RMSE, average and standard deviation of the halfwidths) were worse than those for $m = 1$ when $m \approx (nm)^{1/3}$, and even worse when $m \approx (nm)^{1/2}$, suggesting that, as in the case of the estimation of $\alpha$, a smaller value of m produces better point estimators for $\sigma_T^2$ given a fixed number of replications $k = nm$. The only exception occurs in Figure 12 ($\sigma_S^2 > \sigma_P^2$) for the average halfwidths with $nm = 100$, where the average halfwidth seems to decrease with the value of m.

5. Conclusions

In this article, we discussed methods for the estimation of both the point forecast and the variance components of Bayesian forecast models from the output of stochastic simulations by using a two-level nested algorithm in which the simulated observations at both levels of the algorithm are independent. Our main contribution is the development of valid asymptotic confidence intervals for assessing the accuracy of the simulation-based point estimators. These methods are particularly useful when there is uncertainty in the parameter values of the random input components of a forecast model, and we wish to incorporate this uncertainty into the simulation-based forecasting of the model’s performance measures.
The proposed point estimators and their corresponding halfwidths are asymptotically valid, as shown by the theoretical and experimental results, which show that the point estimators converge to the corresponding parameter values and that the empirical coverages of the proposed ACIs approach the nominal coverage as the number of replications n in the outer level increases. In addition, the halfwidths corresponding to the proposed ACIs tend to zero (as $n \to \infty$), as normally occurs with appropriate simulation-based estimators of performance measures computed from the outputs of simulation experiments.
We also investigated the best option for the number of observations m in the inner loop of the algorithm given a fixed number of observations $k = nm$, and we found that the choice of only one observation ($m = 1$) is the best option for obtaining the smallest variance of the point estimator for the expectation of a performance variable. However, for the estimation of the stochastic variance $\sigma_S^2$ in a two-level nested algorithm, $m \geq 2$ is required. Moreover, for the estimation of the stochastic variance $\sigma_S^2$, our experimental results show that larger values of m are better, whereas for the estimation of the total variance ($\sigma_T^2$) the best choice of m depends on the ratio $\sigma_S^2/\sigma_P^2$.
We remark that we did not consider the case of correlated observations in the outer loop of the two-level algorithm or the case of steady-state simulations, which may have important applications for simulation-based estimation by using Markov chain Monte Carlo (see [38]). In addition, experimentation with other computational procedures, such as quasi-Monte Carlo (see [6]) or Simpson integration, may be other directions for future research.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/appliedmath3030031/s1.

Funding

This research was supported by the Asociación Mexicana de Cultura A.C. and the National Council of Science and Technology of Mexico under Award Number 1200/158/2022.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data corresponding to the results of our experiments are available in the repository of AppliedMath (Supplementary Materials).

Acknowledgments

The author is very grateful for the valuable suggestions and comments from the Academic Editor and the three anonymous referees.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

For completeness, we first cite three well-known theorems. Proofs of Theorems A1 and A2 can be found, e.g., in [39], and a proof of Theorem A3 can be found, e.g., in [36]. In what follows, we write $\Rightarrow$ for weak convergence (as $n \to \infty$, without explicit mention). In addition, $\mathbb{R}$ will denote the space of real numbers and, for an integer k, $\mathbb{R}^k$ will denote the k-dimensional space of real numbers.
Theorem A1.
(Slutsky). Let $X, Y, X_1, X_2, \ldots, Y_1, Y_2, \ldots$ be random variables and let c be a real constant. If $X_n \Rightarrow X$ and $Y_n \Rightarrow c$, then:
(i) $X_n + Y_n \Rightarrow X + c$;
(ii) $X_n Y_n \Rightarrow cX$;
(iii) $X_n / Y_n \Rightarrow X/c$, if $c \neq 0$.
Theorem A2.
(Continuous mapping). Let $X, X_1, X_2, \ldots$ be $\mathbb{R}^k$-valued random vectors, and let $g: \mathbb{R}^k \to \mathbb{R}$ be a function such that $P[X \in D(g)] = 0$, where $D(g) = \{x: g \text{ is not continuous at } x\}$; then, if $X_n \Rightarrow X$, we have $g(X_n) \Rightarrow g(X)$.
Theorem A3.
(Delta method). Let $Y_1, Y_2, \ldots$ be $\mathbb{R}^k$-valued random vectors, and let $g: \mathbb{R}^k \to \mathbb{R}$ be a function that is differentiable in a neighborhood of $\mu \in \mathbb{R}^k$. If there exists a $k \times k$ matrix G such that the CLT
$$\sqrt{n}\,[\bar{Y}(n) - \mu] \Rightarrow G\,N_k(0, I)$$
is satisfied, where $\bar{Y}(n) = n^{-1}\sum_{i=1}^{n}Y_i$, and $N_k(0, I)$ denotes a (k-variate) normal distribution with a mean of 0 and covariance matrix I (the identity), then
$$\frac{\sqrt{n}\,[g(\bar{Y}(n)) - g(\mu)]}{\sigma} \Rightarrow N(0,1),$$
where $\sigma = \sqrt{\nabla g(\mu)^T G G^T \nabla g(\mu)}$.
Proof of Corollary 1.
Since $\hat{\alpha}_1, \hat{\alpha}_2, \ldots$ are i.i.d. with $E[\hat{\alpha}_1^2] < \infty$, it follows from the Law of Large Numbers that $n^{-1}\big(\sum_{i=1}^{n}\hat{\alpha}_i^2, \sum_{i=1}^{n}\hat{\alpha}_i\big) \Rightarrow \big(E[\hat{\alpha}_1^2], E[\hat{\alpha}_1]\big)$. Therefore, by taking $g(x_1, x_2) = x_1 - x_2^2$ (for $x_1 - x_2^2 \geq 0$) in Theorem A2, we have $(n-1)\hat{\sigma}_T^2(n)/n \Rightarrow \sigma_T^2$, so $Y_n = \sqrt{\hat{\sigma}_T^2(n)}/\sigma_T \Rightarrow 1$. Finally, by taking $X_n = \sqrt{n}\,(\hat{\alpha}(n) - \alpha)/\sigma_T$ in Theorem A1, it follows from Proposition 2 that
$$\frac{\sqrt{n}\,(\hat{\alpha}(n) - \alpha)}{\sqrt{\hat{\sigma}_T^2(n)}} \Rightarrow N(0,1).$$
Similarly, since $S_1^2, S_2^2, \ldots$ are i.i.d. with $E[S_1^4] < \infty$, it follows from Theorem A1 and Proposition 2 that
$$\frac{\sqrt{n}\,(\hat{\sigma}_S^2(n) - \sigma_S^2)}{\sqrt{\hat{V}_S(n)}} \Rightarrow N(0,1). \qquad \square$$
Proof of Lemma 1.
Let $k = 2$, $Y_i = (X_i, X_i^2)$, and $\mu = (\mu_1, \mu_2)$; then, the CLT of Theorem A3 is satisfied for
$$G G^T = \begin{pmatrix} \mu_2 - \mu_1^2 & \mu_3 - \mu_1\mu_2 \\ \mu_3 - \mu_1\mu_2 & \mu_4 - \mu_2^2 \end{pmatrix}.$$
Taking $g(\mu) = \mu_2 - \mu_1^2 = \sigma_1^2$, we have $g(\bar{Y}(n)) = (n-1)S^2(n)/n$, $\nabla g(\mu)^T = (-2\mu_1, 1)$, and
$$\nabla g(\mu)^T G G^T \nabla g(\mu) = 8\mu_1^2\mu_2 - 4\mu_1^4 - 4\mu_1\mu_3 + \mu_4 - \mu_2^2 = \sigma_2^2.$$
It follows from Theorem A3 that
$$\frac{\sqrt{n}\,\big((n-1)S^2(n)/n - \sigma_1^2\big)}{\sigma_2} \Rightarrow N(0,1),$$
and the final conclusion follows from Theorem A1. □
Proof of Proposition 3.
In this proof, we follow the notation of the Lindeberg–Feller Theorem as stated in Theorem 7.2.1 of [3].
For $n = 1, 2, \ldots$, let $m_n = \lfloor n^{-1+1/p} \rfloor$ and $\alpha_j(n) = \big(\sum_{i=1}^{m_n} W_{ij}\big)/m_n$, $j = 1, \ldots, n$. Then, $\alpha_1(n), \alpha_2(n), \ldots, \alpha_n(n)$ are independent, and for $X_{nj} = (\alpha_j(n) - \alpha)/\sqrt{n\,\sigma_T^2}$, we also have that $X_{n1}, X_{n2}, \ldots, X_{nn}$ are independent.
Then, if $Y_{nj} = (\alpha_j(n) - \alpha)/\sigma_T$, we have $E[Y_{nj}] = 0$ and $E[Y_{nj}^2] = 1$, so, given $\epsilon > 0$, there exists $\eta_0 > 0$ such that $\int_{|y| \geq \eta_0} y^2\, dF_{Y_{nj}}(y) < \epsilon$.
Therefore, given $\eta > 0$, for $n \geq \max\{1, (\eta_0/\eta)^2\}$, we have
$$\sum_{j=1}^{n}\int_{|x| \geq \eta} x^2\, dF_{X_{nj}}(x) \leq \sum_{j=1}^{n}\frac{1}{n}\int_{|y| \geq \eta_0} y^2\, dF_{Y_{nj}}(y) < \epsilon,$$
so (1) of Theorem 7.2.1 of [3] is satisfied, and it follows from this Theorem that $S_n \Rightarrow N(0,1)$, where
$$S_n = \sum_{j=1}^{n}X_{nj} = \frac{\sqrt{n}\,(\hat{\alpha}(n) - \alpha)}{\sqrt{\sigma_T^2}}. \qquad \square$$

References

1. Muñoz, D.F. Simulation output analysis for risk assessment and mitigation. In Multi-Criteria Decision Analysis for Risk Assessment and Management; Ren, J., Ed.; Springer: Heidelberg, Germany, 2021; pp. 111–148.
2. Smith, J.S.; Sturrock, D.T. Simio and Simulation: Modeling, Analysis, Applications, 6th ed.; Simio LLC: Sewickley, PA, USA, 2022.
3. Chung, K.L. A Course in Probability Theory, 3rd ed.; Academic Press: Cambridge, MA, USA, 2001.
4. Asmussen, S.; Glynn, P.W. Stochastic Simulation: Algorithms and Analysis; Springer: Heidelberg, Germany, 2007.
5. Andradóttir, S.; Glynn, P.W. Computing Bayesian means using simulation. ACM TOMACS 2016, 26, 10.
6. L'Ecuyer, P. Quasi-Monte Carlo methods with applications in finance. Financ. Stoch. 2009, 13, 307–349.
7. Zouaoui, F.; Wilson, J.R. Accounting for parameter uncertainty in simulation input modeling. IIE Trans. 2003, 35, 781–792.
8. Muñoz, D.F.; Muñoz, D.F. Bayesian forecasting of spare parts using simulation. In Service Parts Management: Demand Forecasting and Inventory Control; Altay, N., Litteral, L.A., Eds.; Springer: Heidelberg, Germany, 2011; pp. 105–123.
9. Muñoz, D.F. Estimation of expectations in two-level nested simulation experiments. In Proceedings of the 29th European Modeling and Simulation Symposium, Barcelona, Spain, 18–20 September 2017; pp. 233–238.
10. Henderson, S.G. Input model uncertainty: Why do we care and what should we do about it? In Proceedings of the 2003 Winter Simulation Conference, New Orleans, LA, USA, 7–10 December 2003; pp. 90–100.
11. Song, E.; Nelson, B.L.; Pegden, C.D. Advanced tutorial: Input uncertainty quantification. In Proceedings of the 2014 Winter Simulation Conference, Savannah, GA, USA, 7–10 December 2014; pp. 162–176.
12. Barton, R.R.; Lam, H.; Song, E. Input uncertainty in stochastic simulation. In The Palgrave Handbook of Operations Research; Salhi, S., Boylan, J., Eds.; Springer: Heidelberg, Germany, 2022; pp. 573–620.
13. Barton, R.R.; Nelson, B.L.; Xie, W. Quantifying input uncertainty via simulation confidence intervals. INFORMS J. Comput. 2014, 26, 74–87.
14. Kleijnen, J.P.C. Sensitivity analysis versus uncertainty analysis: When to use what? In Predictability and Nonlinear Modelling in Natural Sciences and Economics; Grassman, J., van Straten, G., Eds.; Kluwer: Dordrecht, The Netherlands, 1994; pp. 322–333.
15. Kleijnen, J.P.C. Five-stage procedure for the evaluation of simulation models through statistical technique. In Proceedings of the 1996 Winter Simulation Conference, Coronado, CA, USA, 8–11 December 1996; pp. 248–254.
16. Kleijnen, J.P.C. Experimental design for sensitivity analysis, optimization, and validation of simulation models. In Handbook of Simulation: Principles, Methodology, Advances, Applications, and Practice; Banks, J., Ed.; Wiley: New York, NY, USA, 1998; pp. 133–140.
17. Montgomery, D.C. Design and Analysis of Experiments, 8th ed.; Wiley: New York, NY, USA, 2012.
18. Freimer, M.; Schruben, L. Collecting data and estimating parameters for input distributions. In Proceedings of the 2002 Winter Simulation Conference, San Diego, CA, USA, 8–11 December 2002; pp. 392–399.
19. Barton, R.R.; Cheng, R.C.H.; Chick, S.E.; Henderson, S.G.; Law, A.M.; Leemis, L.M.; Schmeiser, B.W.; Schruben, L.W.; Wilson, J.R. Panel on current issues in simulation input modeling. In Proceedings of the 2002 Winter Simulation Conference, San Diego, CA, USA, 8–11 December 2002; pp. 353–369.
20. Cheng, R.C.H. Selecting input models. In Proceedings of the 1994 Winter Simulation Conference, Orlando, FL, USA, 11–14 December 1994; pp. 184–191.
21. Cheng, R.C.H.; Holland, W. Sensitivity of computer simulation experiments to errors in input data. J. Stat. Comput. Simul. 1997, 57, 219–241.
22. Cheng, R.C.H.; Holland, W. Two-point methods for assessing variability in simulation output. J. Stat. Comput. Simul. 1998, 60, 183–205.
23. Cheng, R.C.H.; Holland, W. Calculation of confidence intervals for simulation output. ACM TOMACS 2004, 14, 344–362.
24. Barton, R.R.; Schruben, L.W. Uniform and bootstrap resampling of input distributions. In Proceedings of the 1993 Winter Simulation Conference, Los Angeles, CA, USA, 12–15 December 1993; pp. 503–508.
25. Barton, R.R.; Schruben, L.W. Resampling methods for input modeling. In Proceedings of the 2001 Winter Simulation Conference, Arlington, VA, USA, 9–12 December 2001; pp. 372–378.
26. Bernardo, J.M.; Smith, A.F.M. Bayesian Theory, 8th ed.; Wiley: New York, NY, USA, 2009.
27. Draper, D. Assessment and propagation of model uncertainty. J. R. Stat. Soc. Ser. B Stat. Methodol. 1995, 57, 45–70.
28. Chick, S.E. Bayesian analysis for simulation input and output. In Proceedings of the 1997 Winter Simulation Conference, Atlanta, GA, USA, 7–10 December 1997; pp. 253–260.
29. Chick, S.E. Input distribution selection for simulation experiments: Accounting for input uncertainty. Oper. Res. 2001, 49, 744–758.
30. Zouaoui, F.; Wilson, J.R. Accounting for input-model and input-parameter uncertainties in simulation. IIE Trans. 2004, 36, 1135–1151.
31. Biller, B.; Corlu, C.G. Accounting for parameter uncertainty in large-scale stochastic simulations with correlated inputs. Oper. Res. 2011, 49, 661–673.
32. Berger, J.O.; Bernardo, J.M. On the development of the reference prior method. Bayesian Stat. 1992, 4, 35–60.
33. Chick, S.E. Subjective probability and Bayesian methodology. In Handbooks in Operations Research and Management Science; Henderson, S.G., Nelson, B.L., Eds.; Elsevier: Amsterdam, The Netherlands, 2006; Volume 13, pp. 225–257.
34. Muñoz, D.F.; Muñoz, D.F.; Ramírez-López, A. On the incorporation of parameter uncertainty for inventory management using simulation. Int. Trans. Oper. Res. 2013, 20, 493–513.
35. Xie, W.; Nelson, B.L.; Barton, R.R. A Bayesian framework for quantifying uncertainty in stochastic simulation. Oper. Res. 2014, 62, 1439–1452.
36. Muñoz, D.F.; Glynn, P.W. A batch means methodology for estimation of a nonlinear function of a steady-state mean. Manag. Sci. 1997, 43, 1121–1135.
37. Russo, D.; Van Roy, B. Learning to optimize via posterior sampling. Math. Oper. Res. 2014, 39, 1221–1243.
38. Brooks, S.; Gelman, A.; Jones, G.; Meng, X.L. (Eds.) Handbook of Markov Chain Monte Carlo; CRC Press: Boca Raton, FL, USA, 2011.
39. Serfling, R.J. Approximation Theorems of Mathematical Statistics; Wiley: New York, NY, USA, 2009.
Figure 1. Algorithm for the method of independent replications with a parameter fixed at the value $\theta$.
Figure 2. Algorithm for a two-level nested simulation experiment to calculate a point estimator for the expectation of a performance variable under parameter uncertainty.
Figure 3. Performance of the estimation of the point forecast $\alpha$ for $T = 15$ and fixed $nm$ with different values of m.
Figure 4. Performance of the estimation of the point forecast $\alpha$ for $T = 5$ and fixed $nm$ with different values of m.
Figure 5. Performance of the estimation of the stochastic variance $\sigma_S^2$ for $T = 15$ and fixed $nm$ with different values of m.
Figure 6. Performance of the estimation of the stochastic variance $\sigma_S^2$ for $T = 5$ and fixed $nm$ with different values of m.
Figure 7. Performance of the estimation of the total variance $\sigma_T^2$ for $T = 15$ and fixed $nm$ with different values of m.
Figure 8. Performance of the estimation of the total variance $\sigma_T^2$ for $T = 5$ and fixed $nm$ with different values of m.
Figure 9. Performance of the estimation of the point forecast $\alpha$ for $T = 15$ and fixed $nm$ to compare $m = 1$, $m \approx (nm)^{1/3}$, and $m \approx (nm)^{1/2}$.
Figure 10. Performance of the estimation of the point forecast $\alpha$ for $T = 5$ and fixed $nm$ to compare $m = 1$, $m \approx (nm)^{1/3}$, and $m \approx (nm)^{1/2}$.
Figure 11. Performance of the estimation of the total variance $\sigma_T^2$ for $T = 15$ and fixed $nm$ to compare $m = 1$, $m \approx (nm)^{1/3}$, and $m \approx (nm)^{1/2}$.
Figure 12. Performance of the estimation of the total variance $\sigma_T^2$ for $T = 5$ and fixed $nm$ to compare $m = 1$, $m \approx (nm)^{1/3}$, and $m \approx (nm)^{1/2}$.