Abstract
This paper considers two main problems sequentially. First, we estimate both the mean and the variance of the normal distribution under a unified one decision framework using Hall's three-stage procedure. We consider a minimum risk point estimation problem for the variance under a squared-error loss function with linear sampling cost, and we construct a confidence interval for the mean with a preassigned width and coverage probability. Second, as an application, we develop Fortran codes that tackle both the point estimation and confidence interval problems for the inverse coefficient of variation using Monte Carlo simulation. The simulation results show negative regret in the estimation of the inverse coefficient of variation, which indicates that the three-stage procedure provides better estimation than the optimal.
Mathematics Subject Classification:
62L10; 62L12; 62L15
1. Introduction
Let $X_1, X_2, \ldots$ be a sequence of independent and identically distributed (IID) random variables from a normal distribution with mean $\mu$ and variance $\sigma^2$, where both $\mu$ and $\sigma^2$ are unknown. Assume further that a random sample $X_1, X_2, \ldots, X_n$ of size $n$ from the normal distribution becomes available; then we propose to estimate $\mu$ and $\sigma^2$ by the corresponding sample measures $\bar{X}_n = n^{-1}\sum_{i=1}^{n} X_i$ and $S_n^2 = (n-1)^{-1}\sum_{i=1}^{n}\left(X_i - \bar{X}_n\right)^2$, respectively. It has been common practice, over the last decades, to treat each problem separately, considering one decision framework for each inference problem of the mean or the variance.
The objective in this paper is to combine the inference for both problems under one decision framework in order to make maximal use of the available sample information and handle the two problems simultaneously. Given predefined $d > 0$ and $0 < \alpha < 1$, where $1 - \alpha$ is the confidence coefficient and $2d$ is the fixed width of the interval, we want to construct a fixed-width confidence interval for the mean whose confidence coefficient is at least the nominal value $100(1-\alpha)\%$, while at the same time using the same available data to estimate the population variance under a squared-error loss function with linear sampling cost. Hence, we combine both optimal sample sizes in one decision rule to propose the three-stage sampling decision framework.
Therefore, the optimal sample size required to construct a fixed-width confidence interval $I_n = \left[\bar{X}_n - d, \bar{X}_n + d\right]$ for $\mu$ whose coverage probability is at least the nominal value $100(1-\alpha)\%$ must satisfy the following:
$$ n \geq \frac{a^2}{d^2}\,\sigma^2 = n^*, \text{ say}, \quad (1) $$
where $a$ is the upper $\alpha/2\%$ critical point of the standard normal distribution, that is, $\Phi(a) = 1 - \alpha/2$. For more details about Equation (1), see Mukhopadhyay and de Silva ([1]; chapter 6, p. 97).
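As a quick numerical illustration of Equation (1), the sketch below computes the smallest integer sample size meeting the fixed-width requirement. The function name and the example values of the standard deviation, half-width, and significance level are our own illustrative choices, not from the paper.

```python
from math import ceil
from statistics import NormalDist

def optimal_ci_sample_size(sigma: float, d: float, alpha: float) -> int:
    """Smallest integer n with n >= (a/d)**2 * sigma**2, where a is the
    upper alpha/2 critical point of the standard normal distribution."""
    a = NormalDist().inv_cdf(1 - alpha / 2)  # upper alpha/2 critical point
    return ceil((a / d) ** 2 * sigma ** 2)

# Example: sigma = 1, half-width d = 0.4, alpha = 0.05
print(optimal_ci_sample_size(1.0, 0.4, 0.05))  # 25 observations
```

Note that the required size grows like $1/d^2$: halving the interval width quadruples the sampling effort.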
1.1. Minimum Risk Estimation
In the literature on sequential point estimation problems, one may consider several types of loss functions, such as the squared-error loss function, the absolute-error loss function, the linex loss function, and others. The most commonly used is the squared-error loss function, owing to its simplicity in mathematical computations; see, for example, Degroot [2]. Therefore, we write the loss incurred in estimating $\sigma^2$ by the corresponding sample measure $S_n^2$ as
$$ L_n = A\left(S_n^2 - \sigma^2\right)^2 + cn, \quad (2) $$
where $A$ is a known constant and $c$ is the known cost per unit sample observation. We will elaborate on the determination of $A$ in the following lines. Now, the risk corresponding to Equation (2) is
$$ R_n(c) = E(L_n) = A\,\mathrm{Var}\left(S_n^2\right) + cn = \frac{2A\sigma^4}{n-1} + cn \approx \frac{2A\sigma^4}{n} + cn. \quad (3) $$
Thus, the value of $n$ that minimizes the risk in Equation (3) is
$$ n^* = \sigma^2\sqrt{2A/c}; \quad (4) $$
moreover, the associated minimum risk is
$$ R_{n^*}(c) = 2cn^*. \quad (5) $$
The value of $n^*$ in Equation (4) is called the optimal sample size required to generate a point estimate for $\sigma^2$ under Equation (2), while Equation (5) is the minimum risk obtained if $\sigma^2$ were known.
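To make Equations (3)-(5) concrete, the sketch below evaluates the large-$n$ risk $2A\sigma^4/n + cn$ and checks numerically that $n^* = \sigma^2\sqrt{2A/c}$ minimizes it, with minimum risk $2cn^*$. The function names and the example constants $A$ and $c$ are illustrative assumptions, not values from the paper.

```python
from math import sqrt

def risk(n: float, A: float, c: float, sigma: float) -> float:
    """Large-n risk of estimating sigma**2 by S_n**2 under the loss
    A*(S_n**2 - sigma**2)**2 + c*n, using Var(S_n**2) ~ 2*sigma**4/n."""
    return 2 * A * sigma ** 4 / n + c * n

def optimal_n(A: float, c: float, sigma: float) -> float:
    """Risk-minimizing sample size n* = sigma**2 * sqrt(2A/c)."""
    return sigma ** 2 * sqrt(2 * A / c)

# Illustrative constants (not from the paper): A = 2, c = 0.01, sigma = 1
n_star = optimal_n(2.0, 0.01, 1.0)       # 20.0
min_risk = risk(n_star, 2.0, 0.01, 1.0)  # 0.4, which equals 2*c*n_star
```

The two terms of the risk are equal at the optimum, so the minimum risk is exactly twice the sampling cost $cn^*$.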
1.2. A Unified One Decision Framework
If we want to combine both the confidence interval estimation and the point estimation in one decision framework, we have to choose the constant $A$ so that both tasks can be performed under one decision rule. Careful investigation of the constant $A$ provides a statistical interpretation: $c$ is the cost of optimal sampling, while $\sqrt{2A/c}$ represents the optimal information; in other words, it is the amount of sampling required to explore a unit of variance in order to achieve minimum risk. Thus $A$ is the cost of perfect information, and this is contrary to what has been said in the literature—that it is the cost of estimation.
Therefore, we proceed to use the following optimal sample size to perform the required inference:
$$ n^* = \frac{a^2}{d^2}\,\sigma^2 = \xi\,\sigma^2, \qquad \xi = \frac{a^2}{d^2} = \sqrt{2A/c}, \quad (6) $$
where the constant $A$ is chosen so that the optimal sample sizes in Equations (1) and (4) coincide. Since $\sigma^2$ in Equation (6) is unknown, no fixed-sample-size procedure can estimate the mean $\mu$ by a fixed-width interval independent of $\sigma$; see Dantzig [3]. Therefore, we resort to a triple sampling sequential procedure to achieve the previously stated goals. Henceforth, we continue to use the asymptotic sample size defined in Equation (6) to propose the following triple sampling procedure to estimate the unknown population mean $\mu$ and the unknown population variance $\sigma^2$ via estimation of $n^*$.
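Under our reading of the unified rule, the weighting constant $A$ can be read off by equating the two optimal sample sizes: $\sigma^2\sqrt{2A/c} = (a/d)^2\sigma^2$ forces $A = c\,a^4/(2d^4)$. The short check below verifies this algebra numerically; it is a consistency sketch with illustrative values of $c$, $d$, and $\alpha$.

```python
from math import sqrt
from statistics import NormalDist

# Illustrative design constants (not from the paper)
alpha, d, c = 0.05, 0.4, 0.01
a = NormalDist().inv_cdf(1 - alpha / 2)  # upper alpha/2 critical point

# Choose A so that the point-estimation optimum sigma^2*sqrt(2A/c)
# coincides with the confidence-interval optimum (a/d)^2 * sigma^2
A = c * a ** 4 / (2 * d ** 4)
assert abs(sqrt(2 * A / c) - (a / d) ** 2) < 1e-9
```

With this choice a single stopping rule can serve both inference goals, since both targets are the same multiple of the unknown $\sigma^2$.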
2. Three-Stage Estimation of the Mean and Variance
In his seminal work, Hall [4] introduced the idea of sampling in three stages to tackle several problems in sequential estimation. He combined the asymptotic characteristics of the one-by-one purely sequential sampling procedures of Anscombe [5], Robbins [6], and Chow and Robbins [7] with the operational savings made possible by the group-sampling procedures of Stein [8] and Cox [9].
From 1965 until the early 1980s, the research in sequential estimation was mainly devoted to two types of sequential sampling procedures—the two-stage procedure, which satisfies the operational savings, and the one-by-one purely sequential procedure that satisfies the asymptotic efficiency. The objective was to use these methods under non-normal distributions. For brevity, see Mukhopadhyay [10], Mukhopadhyay and Hilton [11], Mukhopadhyay and Darmanto [12], Mukhopadhyay and Hamdy [13], Ghosh and Mukhopadhyay [14], Mukhopadhyay and Ekwo [15], Sinha and Mukhopadhyay [16], Zacks [17], and Khan [18]. For a complete list of references, see Ghosh, Mukhopadhyay, and Sen [19].
In the early 1980s, Hall [4,20] considered the normal distribution with an unknown finite mean and an unknown finite variance. His objective was to construct a confidence interval for the mean with a pre-assigned fixed-width and coverage probability. We will describe Hall’s three-stage procedure in Section 2.1.
Since the publication of Hall's paper, research in multistage sampling has extended Hall's results in several directions. Some authors have utilized the triple sampling technique to generate inference for other distributions; others have tried to improve the quality of inference, such as protecting the inference against type II error probability, studying the operating characteristic curve, and/or discussing the sensitivity of triple sampling when the underlying distribution departs from normality. For more details see Mukhopadhyay [21,22,23], Mukhopadhyay et al. [24], Mukhopadhyay and Mauromoustakos [25], Hamdy and Pallotta [26], Hamdy et al. [27], Hamdy [28], Hamdy et al. [29], Lohr [30], Mukhopadhyay and Padmanabhan [31], Takada [32], Hamdy et al. [33], Hamdy [34], Al-Mahmeed and Hamdy [35], AlMahmeed et al. [36], Costanza et al. [37], Yousef et al. [38], Yousef [39], Hamdy et al. [40], and Yousef [41]. Liu [42] used Hall's results to tackle hypothesis-testing problems for the mean of the normal distribution, while Son et al. [43] used the three-stage procedure to tackle the problem of testing hypotheses concerning shifts in the normal population mean with controlled type II error probability.
2.1. Three-Stage Sampling Procedure
As the name suggests, an inference in triple sampling is performed in three consecutive stages—the pilot phase, the main study phase, and the fine-tuning phase.
The Pilot Phase: In the pilot study phase, we take a random sample of size $m$ from the population, say $X_1, X_2, \ldots, X_m$, to initiate the sample measures $\bar{X}_m$ for the population mean $\mu$ and $S_m$ for the population standard deviation $\sigma$, where $\bar{X}_m = m^{-1}\sum_{i=1}^{m} X_i$ and $S_m^2 = (m-1)^{-1}\sum_{i=1}^{m}\left(X_i - \bar{X}_m\right)^2$.
The Main Study Phase: In the main study phase, we estimate only a portion $\gamma$ of $n^*$ to avoid possible oversampling, where $0 < \gamma < 1$ is known in the literature as the design factor. Let $\lfloor x \rfloor$ denote the largest integer less than or equal to $x$, and let $\xi = a^2/d^2$ be as defined before; then we have
$$ N_1 = \max\left\{ m, \left\lfloor \gamma\,\xi\,S_m^2 \right\rfloor + 1 \right\}. \quad (7) $$
If $N_1 = m$, then we stop sampling at this stage; otherwise, we continue to sample an extra sample of size $N_1 - m$, say $X_{m+1}, \ldots, X_{N_1}$, and then update the sample measures to $\bar{X}_{N_1}$ and $S_{N_1}^2$ for the population's unknown parameters $\mu$ and $\sigma^2$, respectively. Hence, we proceed to define the fine-tuning phase.
The Fine-Tuning Phase: In the fine-tuning phase, the decision to stop or continue sampling is taken according to the following stopping rule:
$$ N = \max\left\{ N_1, \left\lfloor \xi\,S_{N_1}^2 \right\rfloor + 1 \right\}. \quad (8) $$
If $N = N_1$, then sampling is terminated at this stage; otherwise, we continue to sample an additional sample of size $N - N_1$, say $X_{N_1+1}, \ldots, X_N$. Hence, we augment the previously collected samples with the new observations to update the sample estimates to $\bar{X}_N$ and $S_N^2$ for the unknown parameters $\mu$ and $\sigma^2$. Upon terminating the sampling process, we propose to estimate the unknown population mean $\mu$ by the corresponding triple sampling confidence interval $I_N = \left[\bar{X}_N - d, \bar{X}_N + d\right]$ and the unknown population variance $\sigma^2$ by the corresponding triple sampling point estimate $S_N^2$.
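The three sampling phases can be sketched programmatically. The following is our own minimal Python illustration of the stopping rules in Equations (7) and (8), not the authors' Fortran program; the function name, pilot size, and design-factor default are illustrative assumptions.

```python
import random
from math import floor
from statistics import NormalDist

def three_stage_n(mu, sigma, d, alpha, m=10, gamma=0.5, rng=None):
    """One run of the three-stage procedure; returns the final sample
    size N and the final sample mean."""
    rng = rng or random.Random()
    a = NormalDist().inv_cdf(1 - alpha / 2)
    xi = (a / d) ** 2                                 # n* = xi * sigma^2

    def svar(s):                                      # unbiased sample variance
        xb = sum(s) / len(s)
        return sum((v - xb) ** 2 for v in s) / (len(s) - 1)

    x = [rng.gauss(mu, sigma) for _ in range(m)]              # pilot phase
    n1 = max(m, floor(gamma * xi * svar(x)) + 1)              # Equation (7)
    x += [rng.gauss(mu, sigma) for _ in range(n1 - len(x))]   # main study
    n = max(n1, floor(xi * svar(x)) + 1)                      # Equation (8)
    x += [rng.gauss(mu, sigma) for _ in range(n - len(x))]    # fine-tuning
    return n, sum(x) / len(x)
```

Averaged over many runs, the final sample size concentrates near the optimal $n^* = (a/d)^2\sigma^2$ while only three sampling operations are ever performed.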
The asymptotic results in this paper are developed under Assumption (A), set forward by Hall [20] to develop a theory for the triple sampling procedure. That is,
Assumption (A) Let $m$ and $n^*$ grow together such that $m \to \infty$ as $n^* \to \infty$, $m/n^* \to 0$, and $\gamma n^* > m$ for all sufficiently large $n^*$.
Preliminaries: Recall the sample variance $S_n^2 = (n-1)^{-1}\sum_{i=1}^{n}\left(X_i - \bar{X}_n\right)^2$ for all $n \geq 2$, and consider the following Helmert transformation of the original normal random variables $X_1, X_2, \ldots, X_n$, which allows us to write $S_n^2$ as an average of IID random variables for all $n \geq 2$. Let $Y_k = \left\{\sum_{i=1}^{k} X_i - kX_{k+1}\right\}\big/\sqrt{k(k+1)}$ for $k = 1, 2, \ldots, n-1$; the $Y_k$ are IID $N(0, \sigma^2)$. Let $V_k = Y_k^2/\sigma^2$; then the random variables $V_1, V_2, \ldots, V_{n-1}$ are IID, each distributed as $\chi^2_{(1)}$, which means $S_n^2 = \sigma^2 (n-1)^{-1}\sum_{k=1}^{n-1} V_k = \sigma^2\bar{V}_{n-1}$, say. From Lemma 2 of Robbins [6], it follows that $S_n^2$ and $\sigma^2\bar{V}_{n-1}$ are identically distributed for all $n \geq 2$; that is, $S_n^2 \overset{d}{=} \sigma^2\bar{V}_{n-1}$ for all $n \geq 2$.
We continue to use the representation $\sigma^2\bar{V}_{n-1}$ instead of $S_n^2$ for all $n \geq 2$ to develop the asymptotic theory for both the main study phase and the fine-tuning phase.
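The Helmert representation can be checked numerically: the identity $\sum_k Y_k^2 = \sum_i (X_i - \bar{X}_n)^2$ holds exactly for every sample, so the sample variance is the average of the $n-1$ squared Helmert variables. The sketch below is our own illustration of this identity.

```python
import random
from math import sqrt

def helmert(x):
    """Helmert transformation: y_k = (x_1 + ... + x_k - k*x_{k+1}) / sqrt(k*(k+1))
    for k = 1, ..., n-1.  For IID N(mu, sigma^2) inputs the y_k are IID
    N(0, sigma^2), and sum(y_k**2) equals sum((x_i - xbar)**2) exactly."""
    return [(sum(x[:k]) - k * x[k]) / sqrt(k * (k + 1))
            for k in range(1, len(x))]

rng = random.Random(1)
x = [rng.gauss(5.0, 2.0) for _ in range(30)]
xbar = sum(x) / len(x)
ss_dev = sum((v - xbar) ** 2 for v in x)        # (n-1) * sample variance
ss_helmert = sum(y ** 2 for y in helmert(x))    # sum of squared Helmert terms
assert abs(ss_dev - ss_helmert) < 1e-8
```

Because the transformation is orthogonal, this holds for any input sample, Gaussian or not; normality is only needed for the $Y_k$ to be IID normal.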
2.2. The Asymptotic Characteristics of the Main Study Phase
Under Assumption (A), we have
As $n^* \to \infty$, $N_1/\gamma n^* \to 1$ a.s., and as $n^* \to \infty$, $\bar{X}_{N_1} \to \mu$ in probability; likewise, $S_{N_1}^2 \to \sigma^2$ in probability. Moreover, from the Anscombe [5] central limit theorem, we have as $n^* \to \infty$, $\sqrt{N_1}\left(\bar{X}_{N_1} - \mu\right)/\sigma \to N(0, 1)$ and $\sqrt{N_1}\left(S_{N_1}^2 - \sigma^2\right)/\left(\sigma^2\sqrt{2}\right) \to N(0, 1)$ in distribution.
From Theorem 1 of Yousef et al. [38], as $n^* \to \infty$, we have
Theorem 1.
Under Assumption (A) and using Equation (7), we can show that, for any real $k$, as $n^* \to \infty$,
Proof.
Since $S_m^2$ and $\sigma^2\bar{V}_{m-1}$ are identically distributed, we write
Conditioning on the $\sigma$-field generated by $V_1, V_2, \ldots, V_{m-1}$, we have
By using the binomial expansion, it follows that
where for and . Conditioning on the $\sigma$-field generated by , that is . It follows that , where , and hence .
After further simplifications similar to those given in Hamdy [28], we get
where are IID with and .
By applying the first two terms of the infinite binomial series and taking the expectation, we get
, where is a generic constant. Since we have
Consider and expand around , and then take the expectation.
, where is a random variable between and . It is not hard to show that ; we omit the proof for brevity.
Substituting for , we have
It follows that
Likewise, we recall the second term and expand around , we get
Substituting Equations (11) and (12) into Equation (10), we get the result. The proof is complete.
As a particular case of Theorem 1, for and we have as ,
while from Equation (13) and the results of parts (ii) and (iii), we obtain
The following Theorem 2 gives the second-order asymptotic expansion of the moments of a real-valued continuously differentiable function of $N_1$.
Theorem 2.
Under Assumption (A), let $g$ be a real-valued continuously differentiable function in a neighborhood around $\gamma n^*$ such that its second derivative is bounded; then
Proof.
A Taylor expansion of $g(N_1)$ around $\gamma n^*$ provides
where $\zeta$ is a random variable between $N_1$ and $\gamma n^*$. Now, taking the expectation throughout, we have
From Equation (13), parts (ii) and (14), we have
The remaining term vanishes by Equation (13), part (iv), and the assumption that the second derivative of $g$ is bounded. The proof is complete. □
Corollary 1.
Under Assumption (A), let $g$ be a real-valued continuously differentiable function in a neighborhood around $\gamma n^*$ such that its second derivative is bounded; then
Proof.
First, by using a Taylor series expansion of the function $g$ around $\gamma n^*$, we get
By taking the expectation all through we have
by using Equation (13), parts (i), (ii), and (iii), and the fact that is bounded. The proof is complete. □
As a special case of Corollary 1, take , and we obtain
This completes our first assertion regarding the asymptotic characteristics of the main-study phase. In the following section, we find the asymptotic characteristics of the final random sample size.
2.3. The Asymptotic Characteristics of the Fine-Tuning Phase
The asymptotic characteristics of the final sample size $N$ are given in the following theorem.
Theorem 3.
Under Assumption (A) and using Equation (8), let $g$ be a real-valued continuously differentiable function in a neighborhood around $n^*$ such that its second derivative is bounded. Then, as $n^* \to \infty$,
Proof.
We write $N = \left\lfloor \xi S_{N_1}^2 \right\rfloor + 1$, except possibly on a set of measure zero. Therefore, for real $r$, we have
provided that the moment exists, and , where is as defined before. From Hall [4], as $n^* \to \infty$, is asymptotically uniformly distributed on $(0, 1)$.
Now, for , we have,
Likewise, for , we have
For , we have
We turn to prove Theorem 3.
First, write $g(N)$ in a Taylor series expansion as
where is a random variable between $N$ and $n^*$. By using Equations (16)–(18), we have
The remaining term is negligible from Equation (18), since $g$ and its derivatives are bounded. The proof is complete. □
Let $N$ be defined as in Equation (8) and assume Assumption (A) holds. The asymptotic characteristics of the fine-tuning phase as $n^* \to \infty$ are as follows (see Yousef et al. [38]):
Theorem 4.
Under Assumption (A) and using Equation (8), we can show that, for any real $k$, as $n^* \to \infty$,
Proof.
Write ; conditioning on the $\sigma$-field generated by , we have
Thus, we write the binomial expression as an infinite series and we get
where for and .
Conditioning on the $\sigma$-field generated by , the random sum is distributed as a $\chi^2$-distribution with degrees of freedom. Therefore,
Consequently, this yields
Consider the first three terms in the expansion and the remainder term
where . Let us evaluate the second term . First, expand around using a Taylor series, , where is a random variable lying between and . Furthermore,
where we have used the fact that . Thus,
However, the first term in (21) , by Wald’s [44] first equation.
For the second term in Equation (21), , conditioning on the generated by , we have
Expanding the binomial term, taking the expectation conditional on , and then expanding in a Taylor series, we obtain
Now, recall the third term in Equation (21), expand it in a Taylor series, and apply Wald's [44] second equation to get
Finally, recall the remainder term in Equation (21), and consider the following two cases:
Case 1: if , then and
, as $n^* \to \infty$, since and Assumption (A) holds.
Case 2: if , then and ; it follows that
, as $n^* \to \infty$, by Assumption (A). Therefore,
The proof is complete. □
3. Three-Stage Coverage Probability of the Mean
Since , and since the events are independent ( is a function of the sample variances only, while the sample mean and the sample variance are independent for all $n$ for the normal distribution), it follows that
where $\Phi(\cdot)$ is the standard normal distribution function. Recalling Theorem 3, it follows that
as $n^* \to \infty$. The quantity is known as the cost of ignorance, that is, the cost of not knowing $\sigma$ (see Simons [45] for details).
4. The Asymptotic Regret Incurred in Estimating $\sigma^2$
Theorem 5.
The risk associated with Equation (2) as $n^* \to \infty$ is given by
Moreover, the asymptotic regret is
Proof.
Recall the squared-error loss function given in Equation (2) and take the expectation throughout:
By using Equation (16) and Theorem 4 with , we have
while the asymptotic regret of the triple sampling point estimation of $\sigma^2$ under Equation (2) is
The proof is complete. □
Clearly, for zero cost we obtain zero regret, while for a nonzero cost we obtain negative regret for all . This means that the triple sampling procedure provides better estimates than the optimal (see Martinsek [46]).
5. Simulation Results
Since the theoretical results are asymptotic, it is worth recording the performance of the estimates at moderate sample sizes. Microsoft Developer Studio software was used to run FORTRAN codes implementing Equations (7) and (8). A series of 50,000 replications was generated from a normal distribution with different values of and . The optimal sample sizes were chosen to represent small, medium, and large sample sizes: 24, 43, 61, 76, 96, 125, 171, 246, and 500, with as recommended by Hall [4,20]. For brevity, we report the case at .
5.1. The Mean and the Variance of the Normal Distribution
We estimate the optimal final sample size and its standard error, the mean and its standard error, the coverage probability of the mean, the variance and its standard error, and the asymptotic regret of using the sample variance instead of the population variance. To construct a fixed-width confidence interval for the mean, we take . In each table, we report as an estimate of , as the standard error of , and as an estimate of with standard error . The estimated coverage probability is , while the estimated asymptotic regret is .
The simulation process is performed as follows: Fix , and as in Equation (6).
First: For the -th sample generated from the normal distribution, take a pilot sample of size , that is .
Second: Compute the sample mean and the sample variance .
Third: Apply Equations (7) and (8) to determine the stopping sample size at this iteration, whether in the first stage or the second stage, say .
The inverse coefficient of variation is the ratio of the population mean to the population standard deviation, that is, (with no singularity point over the entire real line). Assuming further that a random sample of size from the normal distribution becomes available, we propose to estimate by . It is a dimensionless quantity, which makes comparisons across several populations with different units of measurement meaningful. In practical life, the inverse coefficient of variation equals the signal-to-noise ratio, which measures how much a signal has been corrupted by noise (see McGibney and Smith [47]).
Fourth: Record the resultant sample size, the sample mean, the sample standard deviation, and the estimated inverse coefficient of variation for where
Hence, for each experimental combination, we have four vectors of size as follows:
Let , , and , where, , , and are respectively the estimated mean sample size, the estimated mean of the population mean, the estimated mean of the sample variance and the estimated mean of the inverse coefficient of variation across replicates. The standard errors are, respectively,
Fifth: Compute the simulated regret with .
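The five steps above can be wired together as follows. This is our own compact re-implementation of the Monte Carlo loop (the paper's codes are in Fortran): it reuses the stopping rules in Equations (7) and (8), sets $A = c\,a^4/(2d^4)$ so that the two optimal sample sizes coincide, and returns the simulated regret. All names, defaults, and the small replicate count are illustrative assumptions.

```python
import random
from math import floor
from statistics import NormalDist

def simulate_regret(mu, sigma, d, alpha, c, m=10, gamma=0.5, reps=2000, seed=0):
    """Monte Carlo estimate of the regret of the three-stage point
    estimate of sigma**2 under squared-error loss with linear cost."""
    rng = random.Random(seed)
    a = NormalDist().inv_cdf(1 - alpha / 2)
    xi = (a / d) ** 2
    A = c * a ** 4 / (2 * d ** 4)   # makes both optimal sample sizes coincide
    n_star = xi * sigma ** 2
    opt_risk = 2 * c * n_star       # minimum risk, Equation (5)

    def svar(s):                    # unbiased sample variance
        xb = sum(s) / len(s)
        return sum((v - xb) ** 2 for v in s) / (len(s) - 1)

    total_loss = 0.0
    for _ in range(reps):
        x = [rng.gauss(mu, sigma) for _ in range(m)]              # pilot
        n1 = max(m, floor(gamma * xi * svar(x)) + 1)              # Eq. (7)
        x += [rng.gauss(mu, sigma) for _ in range(n1 - len(x))]
        n = max(n1, floor(xi * svar(x)) + 1)                      # Eq. (8)
        x += [rng.gauss(mu, sigma) for _ in range(n - len(x))]
        total_loss += A * (svar(x) - sigma ** 2) ** 2 + c * n     # loss (2)
    return total_loss / reps - opt_risk
```

The sign and magnitude of the returned value for moderate $n^*$ can then be compared against the asymptotic regret of Theorem 5.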
Table 1.
Three-stage estimation of the mean and variance of the normal distribution under a unified stopping rule with .
Table 2.
Three-stage estimation of the mean and variance of the normal distribution under a unified stopping rule with .
Regarding the final random sample size , we noticed that as increases, is always less than (early stopping), while its standard error increases. Meanwhile, as increases, with standard error decreasing. Regarding the coverage probability, the three-stage procedure under the rules in Equations (7) and (8) provides coverage probabilities that are always below the desired nominal value, attaining it only asymptotically. Regarding the estimated asymptotic regret, we obtain negative regret, which agrees with the result of Theorem 5.
5.2. The Inverse Coefficient of Variation
As an application, we leverage the three-stage estimation of both the mean and the variance to estimate the inverse coefficient of variation , its standard error , the coverage probability of , and the asymptotic regret. To estimate , we perform the previous steps; in addition, the simulated regret under a squared-error loss function with linear sampling cost is
Table 3 below shows the performance of the procedure for estimating . As increases, the estimates improve while their standard errors decrease. Regarding the coverage probability of , we noticed that for all . This means that the procedure attains exact consistency. Regarding the asymptotic regret, we noticed that as increases, the regret decreases and takes negative values. This means that the three-stage procedure does better than the optimal.
Table 3.
Three-stage estimation of the inverse coefficient of variation under a unified stopping rule.
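For reference, the plug-in estimator of the inverse coefficient of variation used throughout is simply the sample mean divided by the sample standard deviation. A minimal sketch, with our own illustrative function name:

```python
from math import sqrt

def inverse_cv(sample):
    """Plug-in estimator of the inverse coefficient of variation
    theta = mu / sigma: the sample mean over the (unbiased) sample
    standard deviation."""
    n = len(sample)
    xbar = sum(sample) / n
    s2 = sum((v - xbar) ** 2 for v in sample) / (n - 1)
    return xbar / sqrt(s2)

print(inverse_cv([1.0, 2.0, 3.0]))  # xbar = 2, s = 1 -> 2.0
```

Note that the estimator is invariant under rescaling of the data, since both the mean and the standard deviation scale by the same factor.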
6. Conclusions
We used a three-stage procedure to tackle the point estimation problem for the variance while estimating the mean by a confidence interval with preassigned width and coverage probability, using one unified stopping rule for both tasks. We then used these results to develop both point and interval estimation for the inverse coefficient of variation. Monte Carlo simulations were performed to investigate the performance of all estimators. We conclude that estimating the inverse coefficient of variation through the mean and variance yields good results with negative regret. For an application in engineering reliability, see Ghosh, Mukhopadhyay, and Sen ([19]; chapter 1, p. 11); for applications to real-world problems, see Mukhopadhyay, Datta, and Chattopadhyay [48].
Author Contributions
Conceptualization, A.Y.; methodology, A.Y. and H.H.; software, A.Y.; validation, A.Y. and H.H.; formal analysis, A.Y.; investigation, A.Y. and H.H.; resources, A.Y.; data curation, A.Y. and H.H.; writing—original draft preparation, A.Y.; writing—review and editing, A.Y.; visualization, A.Y.
Funding
This research received no external funding.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Mukhopadhyay, N.; de Silva, B. Sequential Methods and Their Applications; CRC: New York, NY, USA, 2009.
- Degroot, M.H. Optimal Statistical Decisions; McGraw-Hill: New York, NY, USA, 1970.
- Dantzig, G.B. On the non-existence of tests of 'Student's' hypothesis having power functions independent of σ. Ann. Math. Stat. 1940, 11, 186–192.
- Hall, P. Asymptotic theory of triple sampling for sequential estimation of a mean. Ann. Stat. 1981, 9, 1229–1238.
- Anscombe, F.J. Sequential estimation. J. Roy. Stat. Soc. Ser. B 1953, 15, 1–21.
- Robbins, H. Sequential Estimation of the Mean of a Normal Population. In Probability and Statistics (Harald Cramér Volume); Almquist and Wiksell: Uppsala, Sweden, 1959; pp. 235–245.
- Chow, Y.S.; Robbins, H. On the asymptotic theory of fixed width sequential confidence intervals for the mean. Ann. Math. Stat. 1965, 36, 1203–1212.
- Stein, C. A two-sample test for a linear hypothesis whose power is independent of the variance. Ann. Math. Stat. 1945, 16, 243–258.
- Cox, D.R. Estimation by double sampling. Biometrika 1952, 39, 217–227.
- Mukhopadhyay, N. Sequential estimation of a location parameter in exponential distributions. Calcutta Stat. Assoc. Bull. 1974, 23, 85–95.
- Mukhopadhyay, N.; Hilton, G.F. Two-stage and sequential procedures for estimating the location parameter of a negative exponential distribution. S. Afr. Stat. J. 1986, 20, 117–136.
- Mukhopadhyay, N.; Darmanto, S. Sequential estimation of the difference of means of two negative exponential populations. Seq. Anal. 1988, 7, 165–190.
- Mukhopadhyay, N.; Hamdy, H.I. On estimating the difference of location parameters of two negative exponential distributions. Can. J. Stat. 1984, 12, 67–76.
- Ghosh, M.; Mukhopadhyay, N. Sequential point estimation of the parameter of a rectangular distribution. Calcutta Stat. Assoc. Bull. 1975, 24, 117–122.
- Mukhopadhyay, N.; Ekwo, M.E. Sequential estimation problems for the scale parameter of a Pareto distribution. Scand. Actuar. J. 1987, 83–103.
- Sinha, B.K.; Mukhopadhyay, N. Sequential estimation of a bivariate normal mean vector. Sankhya Ser. B 1976, 38, 219–230.
- Zacks, S. Sequential estimation of the mean of a lognormal distribution having a prescribed proportional closeness. Ann. Math. Stat. 1966, 37, 1688–1696.
- Khan, R.A. Sequential estimation of the mean vector of a multivariate normal distribution. Indian Stat. Inst. 1968, 30, 331–334.
- Ghosh, M.; Mukhopadhyay, N.; Sen, P.K. Sequential Estimation; Wiley: New York, NY, USA, 1997.
- Hall, P. Sequential estimation saving sampling operations. J. Roy. Stat. Soc. B 1983, 45, 219–223.
- Mukhopadhyay, N. A note on three-stage and sequential point estimation procedures for a normal mean. Seq. Anal. 1985, 4, 311–319.
- Mukhopadhyay, N. Sequential estimation problems for negative exponential populations. Commun. Stat. Theory Methods A 1988, 17, 2471–2506.
- Mukhopadhyay, N. Some properties of a three-stage procedure with applications in sequential analysis. Indian J. Stat. Ser. A 1990, 52, 218–231.
- Mukhopadhyay, N.; Hamdy, H.I.; Al Mahmeed, M.; Costanza, M.C. Three-stage point estimation procedures for a normal mean. Seq. Anal. 1987, 6, 21–36.
- Mukhopadhyay, N.; Mauromoustakos, A. Three-stage estimation procedures of the negative exponential distribution. Metrika 1987, 34, 83–93.
- Hamdy, H.I.; Pallotta, W.J. Triple sampling procedure for estimating the scale parameter of Pareto distribution. Commun. Stat. Theory Methods 1987, 16, 2155–2164.
- Hamdy, H.I.; Mukhopadhyay, N.; Costanza, M.C.; Son, M.S. Triple stage point estimation for the exponential location parameter. Ann. Inst. Stat. Math. 1988, 40, 785–797.
- Hamdy, H.I. Remarks on the asymptotic theory of triple stage estimation of the normal mean. Scand. J. Stat. 1988, 15, 303–310.
- Hamdy, H.I.; AlMahmeed, M.; Nigm, A.; Son, M.S. Three-stage estimation for the exponential location parameters. Metron 1989, 47, 279–294.
- Lohr, S.L. Accurate multivariate estimation using triple sampling. Ann. Stat. 1990, 18, 1615–1633.
- Mukhopadhyay, N.; Padmanabhan, A.R. A note on three-stage confidence intervals for the difference of locations: The exponential case. Metrika 1993, 40, 121–128.
- Takada, Y. Three-stage estimation procedure of the multivariate normal mean. Indian J. Stat. Ser. B 1993, 55, 124–129.
- Hamdy, H.I.; Costanza, M.C.; Ashikaga, T. On the Behrens–Fisher problem: An integrated triple sampling approach. 1995; in press.
- Hamdy, H.I. Performance of fixed width confidence intervals under Type II errors: The exponential case. S. Afr. Stat. J. 1997, 31, 259–269.
- AlMahmeed, M.; Hamdy, H.I. Sequential estimation of linear models in three stages. Metrika 1990, 37, 19–36.
- AlMahmeed, M.; AlHessainan, A.; Son, M.S.; Hamdy, H.I. Three-stage estimation for the mean of a one-parameter exponential family. Korean Commun. Stat. 1998, 5, 539–557.
- Costanza, M.C.; Hamdy, H.I.; Haugh, L.D.; Son, M.S. Type II error performance of triple sampling fixed precision confidence intervals for the normal mean. Metron 1995, 53, 69–82.
- Yousef, A.; Kimber, A.; Hamdy, H.I. Sensitivity of normal-based triple sampling sequential point estimation to the normality assumption. J. Stat. Plan. Inference 2013, 143, 1606–1618.
- Yousef, A. Construction of a three-stage asymptotic coverage probability for the mean using Edgeworth second-order approximation. In Selected Papers on the International Conference on Mathematical Sciences and Statistics 2013; Springer: Singapore, 2014; pp. 53–67.
- Hamdy, H.I.; Son, M.S.; Yousef, A. Sensitivity analysis of multi-stage sampling to departure of an underlying distribution from normality with computer simulations. Seq. Anal. 2015, 34, 532–558.
- Yousef, A. A note on a three-stage sequential confidence interval for the mean when the underlying distribution departs away from normality. Int. J. Appl. Math. Stat. 2018, 57, 57–69.
- Liu, W. A k-stage sequential sampling procedure for estimation of a normal mean. J. Stat. Plan. Inference 1995, 65, 109–127.
- Son, M.S.; Haugh, L.D.; Hamdy, H.I.; Costanza, M.C. Controlling type II error while constructing triple sampling fixed precision confidence intervals for the normal mean. Ann. Inst. Stat. Math. 1997, 49, 681–692.
- Wald, A. Sequential Analysis; Wiley: New York, NY, USA, 1947.
- Simons, G. On the cost of not knowing the variance when making a fixed-width confidence interval for the mean. Ann. Math. Stat. 1968, 39, 1946–1952.
- Martinsek, A.T. Negative regret, optimal stopping, and the elimination of outliers. J. Am. Stat. Assoc. 1988, 83, 160–163.
- McGibney, G.; Smith, M.R. An unbiased signal-to-noise ratio measure for magnetic resonance images. Med. Phys. 1993, 20, 1077–1079.
- Mukhopadhyay, N.; Datta, S.; Chattopadhyay, S. Applied Sequential Methodologies: Real-World Examples with Data Analysis; CRC Press: Boca Raton, FL, USA, 2004.
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).