Three-Stage Sequential Estimation of the Inverse Coefficient of Variation of the Normal Distribution

1 Department of Mathematics, Kuwait College of Science and Technology, Doha District B. O. Box 27235, Kuwait
2 Faculty of Management Sciences, October University for Modern Sciences and Arts, 6th October City 12566, Cairo, Egypt
* Author to whom correspondence should be addressed.
Computation 2019, 7(4), 69; https://doi.org/10.3390/computation7040069
Submission received: 8 November 2019 / Revised: 29 November 2019 / Accepted: 3 December 2019 / Published: 4 December 2019

Abstract:
This paper sequentially estimates the inverse coefficient of variation of the normal distribution using Hall’s three-stage procedure. We derive theorems that facilitate constructing a confidence interval for the inverse coefficient of variation with pre-determined width and coverage probability. We also discuss the sensitivity of the constructed confidence interval in detecting a possible shift in the inverse coefficient of variation. Finally, we find the asymptotic regret encountered in point estimation of the inverse coefficient of variation under a squared-error loss function with linear sampling cost. The asymptotic regret takes negative values, which indicates that three-stage sampling does better than the optimal fixed sample size would have done had the population inverse coefficient of variation been known.

1. Introduction

Let $X_1, X_2, \ldots$ be a sequence of independent and identically distributed (IID) random variables from a normal distribution $N(\mu, \sigma^2)$ with mean $\mu$ and variance $\sigma^2 \in \mathbb{R}^+$, where both parameters are finite but unknown. The population coefficient of variation is the population standard deviation divided by the population mean, that is, $\sigma/\mu$, $\mu \neq 0$, mostly presented as a percentage. It is useful when we seek relative rather than absolute variability. It is a dimensionless quantity, which enables researchers to compare different distributions regardless of their units of measurement. Such practicality makes it widely used in different areas of science, such as engineering, finance, economics, and medicine. Nairy and Rao [1] conducted a survey of several applications in engineering, business, climatology, and other fields. Ahn [2] used the coefficient of variation in fault-tree analysis, while Gong and Li [3] used it to estimate the strength of ceramics. Faber and Korn [4] applied the measure to the mean synaptic response of the central nervous system. Hammer et al. [5] used the measure to test the homogeneity of bone samples in order to determine the effect of external treatments on the properties of bones. Billings et al. [6] used it to study the impact of socioeconomic status on hospital use in New York City. In finance, Brief and Owen [7] used the coefficient of variation to evaluate project risk, considering the rate of return as a random variable. Pyne et al. [8] used the measure to study the variability of the competitive performance of Olympic swimmers. In the health sciences, see Kelley [9] and Gulhar et al. [10].
The disadvantage of the measure lies in the singularity point $\mu = 0$. Therefore, it is preferable to work with its reciprocal, the inverse coefficient of variation, $\theta = \mu/\sigma$, defined over $\mathbb{R}$. The inverse coefficient of variation is equal to the signal-to-noise ratio, which measures the signal strength relative to the background noise. In quality control, it represents the magnitude of the process mean compared to its variation; in other words, it quantifies how much the signal has been corrupted by noise, see McGibney and Smith [11]. In finance, it is called Sharpe’s index, which measures portfolio performance; for example, see Knight and Satchell [12].
Having observed a random sample $X_1, X_2, \ldots, X_n$ of size $n (\ge 2)$ from the normal population, we continue to use $\bar X_n$ and $S_n$ as the sample mean and the sample standard deviation, the point estimates of the normal distribution mean $\mu$ and standard deviation $\sigma$, respectively. Consequently, we define the sample inverse coefficient of variation $\hat\theta_n = \bar X_n / S_n$.
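As a concrete illustration, the point estimate $\hat\theta_n = \bar X_n / S_n$ can be computed directly from a sample. The following is a minimal Python sketch (the helper name `inverse_cv` is ours, not from the paper):

```python
import math

def inverse_cv(sample):
    """Sample inverse coefficient of variation: theta_hat = x_bar / s_n,
    with s_n the sample standard deviation (n - 1 divisor)."""
    n = len(sample)
    if n < 2:
        raise ValueError("need at least two observations")
    x_bar = sum(sample) / n
    s2 = sum((x - x_bar) ** 2 for x in sample) / (n - 1)
    return x_bar / math.sqrt(s2)

# Example: for [1, 2, 3, 4, 5], x_bar = 3 and s_n = sqrt(2.5)
print(inverse_cv([1, 2, 3, 4, 5]))  # about 1.897
```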
Hendricks and Robey [13] studied the sampling distribution of the coefficient of variation. Koopmans et al. [14] showed that, without any prior restriction on the range of the population mean, it is impossible to obtain confidence intervals for the population coefficient of variation that have finite length with probability one, uniformly for all values of $\mu$ and $\sigma$, except by using a purely sequential procedure. Rao and Bhatta [15] approximated the distribution function of the sample coefficient of variation using the Edgeworth series to obtain a more accurate large-sample test for the population coefficient of variation under normality. McKay [16] derived a confidence interval for the population coefficient of variation based on the chi-squared distribution; the constructed confidence interval works well when the coefficient of variation is less than 0.33, see Umphrey [17]. Later, Vangel [18] modified McKay’s confidence interval; the modification is closely related to McKay’s interval but is more accurate and nearly exact under normality. Miller [19] discussed the approximate distribution function of the sample coefficient of variation and proposed an approximate confidence interval for the coefficient of variation under normality. Lehmann [20] found an exact form of the distribution function of the sample coefficient of variation, which depends mainly on the non-central $t$-distribution (defined over $\mathbb{R}$), so it is computationally cumbersome. Curto and Pino [21] studied the distribution of the sample coefficient of variation in the case of non-IID random variables.
Sharma and Krishna [22] mathematically derived an asymptotic confidence interval for the population inverse coefficient of variation without any prior assumption regarding the underlying distribution. Albatineh, Kibria, and Zogheib [23] studied the performance of the constructed confidence interval using Monte Carlo simulation. They used randomly generated data from several distributions—normal, log-normal, $\chi^2$ (chi-squared), gamma, and Weibull.
Regarding sequential estimation, Chaturvedi and Rani [24] proposed a sequential procedure to construct a fixed-width confidence interval for the population inverse coefficient of variation of a normal distribution with a preassigned coverage probability. They mathematically showed that the proposed procedure attains asymptotic efficiency and consistency in the sense of Chow and Robbins [25]. Chattopadhyay and Kelley [26] used the purely sequential procedure [25] to estimate the population coefficient of variation of the normal distribution under a squared-error loss function using a Nagar-type expansion.
Yousef and Hamdy [27] utilized Hall’s three-stage sampling procedure to estimate the population inverse coefficient of variation of the normal distribution using Monte Carlo simulation. They found a unified stopping rule, which is a function of the unknown population variance, that tackles both a fixed-width confidence interval for the unknown population mean with a pre-assigned coverage probability and a point estimation problem for the unknown population variance under a squared-error loss function with linear sampling cost. In other words, they found the asymptotic coverage probability for the population mean and the asymptotic regret incurred by estimating the population variance by the sample variance. As an application, they wrote FORTRAN code and used Microsoft Developer Studio software to find the simulated coverage probability for the inverse coefficient of variation and the simulated regret. The simulation results showed that the three-stage procedure attains asymptotic efficiency and consistency in the sense of Chow and Robbins [25].
To the best of our knowledge, none of the existing papers in the literature has discussed the three-stage estimation of the population inverse coefficient of variation theoretically. Here, the procedure differs from that in Yousef and Hamdy [27]: the stopping rule depends directly on the sample inverse coefficient of variation. We mathematically derive an asymptotic confidence interval for the population inverse coefficient of variation that has fixed width $2d (> 0)$ and coverage probability at least $100(1-\alpha)\%$.
Moreover, we tackle a point estimation problem for the population inverse coefficient of variation using a squared-error loss function with linear sampling cost. We then examine the capability of the constructed confidence interval to detect any potential shift that occurs in the population inverse coefficient of variation. Here, the stopping rule depends on the asymptotic distribution of the sample inverse coefficient of variation.

The Layout of the Paper

In Section 2, we present preliminary asymptotic results that facilitate finding the asymptotic distribution of the sample inverse coefficient of variation. In Section 3, we present Hall’s three-stage procedure and find the asymptotic characteristics of both the main-study phase and the fine-tuning phase, together with the asymptotic coverage probability for the population inverse coefficient of variation. In Section 4, we discuss the capability of the constructed interval to detect any shift in the inverse coefficient of variation. In Section 5, we find the asymptotic regret.

2. Preliminary Results

The following corollaries are necessary for finding the asymptotic distribution of the sample inverse coefficient of variation $\hat\theta_n = \bar X_n / S_n$.
Corollary 1.
Let $X_1, X_2, \ldots, X_n$ be a random sample from $N(\mu, \sigma^2)$. Let $S_n^2 = \sum_{i=1}^{n}(X_i - \bar X_n)^2/(n-1)$, $n \ge 2$, and $\bar X_n = \sum_{i=1}^{n} X_i/n$ for $n \ge 1$. Then for all $n \ge 6$, we have
(i) 
$E(S_n) = \sigma - \dfrac{\sigma}{4n} + O(n^{-2})$
(ii) 
$E(S_n^4) = \sigma^4 + \dfrac{2\sigma^4}{n} + O(n^{-2})$
(iii) 
$E(S_n^{-1}) = \sigma^{-1} + \dfrac{3\sigma^{-1}}{4n} + O(n^{-2})$
(iv) 
$E(S_n^{-2}) = \sigma^{-2} + \dfrac{2\sigma^{-2}}{n} + O(n^{-2})$
(v) 
$E(S_n^{-4}) = \sigma^{-4} + \dfrac{6\sigma^{-4}}{n} + O(n^{-2})$
Proof. 
By using the fact that $(n-1)\sigma^{-2}S_n^2 \sim \chi^2(n-1)$ we get $E(S_n) = \sqrt{\frac{2}{n-1}}\,\frac{\Gamma(n/2)}{\Gamma((n-1)/2)}\,\sigma$, $E(S_n^4) = \frac{n+1}{n-1}\,\sigma^4$, $E(S_n^{-1}) = \sqrt{\frac{n-1}{2}}\,\frac{\Gamma((n-2)/2)}{\Gamma((n-1)/2)}\,\sigma^{-1}$, $E(S_n^{-2}) = \frac{n-1}{n-3}\,\sigma^{-2}$, and $E(S_n^{-4}) = \frac{(n-1)^2}{(n-3)(n-5)}\,\sigma^{-4}$, where $\Gamma(u) = \int_0^{\infty} t^{u-1}e^{-t}\,dt$. The asymptotic expansions are $\sqrt{\frac{2}{n-1}}\,\frac{\Gamma(n/2)}{\Gamma((n-1)/2)} = 1 - \frac{1}{4n} + O(n^{-2})$, $\sqrt{\frac{n-1}{2}}\,\frac{\Gamma((n-2)/2)}{\Gamma((n-1)/2)} = 1 + \frac{3}{4n} + O(n^{-2})$, $\frac{n+1}{n-1} = 1 + \frac{2}{n} + O(n^{-2})$, and $\frac{(n-1)^2}{(n-3)(n-5)} = 1 + \frac{6}{n} + O(n^{-2})$. By direct substitution, we get the results. The proof is complete. □
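The gamma-function ratios in the proof can be checked numerically against the stated expansions. The sketch below is our own check (not part of the paper); it compares the exact moment ratios for parts (i) and (v) with their second-order approximations:

```python
import math

def exact_E_Sn(n):
    # Exact E(S_n)/sigma = sqrt(2/(n-1)) * Gamma(n/2) / Gamma((n-1)/2)
    return math.sqrt(2.0 / (n - 1)) * math.exp(math.lgamma(n / 2.0) - math.lgamma((n - 1) / 2.0))

def exact_E_Sn_inv4(n):
    # Exact E(S_n^{-4}) * sigma^4 = (n-1)^2 / ((n-3)(n-5))
    return (n - 1) ** 2 / ((n - 3) * (n - 5))

# The gap between exact ratio and 1 - 1/(4n) (resp. 1 + 6/n) shrinks like n^{-2}
for n in (20, 100):
    print(n, exact_E_Sn(n), 1 - 1 / (4 * n))
```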
The next corollary provides the asymptotic characteristics of θ ^ n = X ¯ n / S n in the case of fixed sample size n , as shown in Chaturvedi and Rani [24].
Corollary 2.
For all $n \ge 8$, as $n \to \infty$, we have
(i) 
$E(\hat\theta_n) = \theta\left(1 + \dfrac{3}{4n}\right) + O(n^{-2})$
(ii) 
$E(\hat\theta_n^2) = \theta^2 + \dfrac{1}{n}(1 + 2\theta^2) + O(n^{-2})$
(iii) 
$\mathrm{Var}(\hat\theta_n) = \dfrac{1}{n}\left(1 + \dfrac{\theta^2}{2}\right) + O(n^{-2})$
(iv) 
$E(\hat\theta_n^4) = \theta^4 + \dfrac{6\theta^2}{n}(1 + \theta^2) + O(n^{-2})$
(v) 
$\mathrm{Var}(\hat\theta_n^2) = \dfrac{2\theta^2}{n}(2 + \theta^2) + O(n^{-2}).$
Proof. 
The proof follows from Lemma 1 and Lemma 2 in Chaturvedi and Rani [24]. □
For simplicity, let us consider $\mathrm{Var}(\hat\theta_n) \approx \theta^2/(2n)$; then, from the central limit theorem, as $n \to \infty$, $Q = \sqrt{2n}(\hat\theta_n - \theta)/\theta \to N(0, 1)$ in distribution. To satisfy the requirement of having a confidence interval for $\theta$ that has fixed width $2d$ and coverage probability at least $100(1-\alpha)\%$, we need
$P\left(\left|\dfrac{\sqrt{2n}(\hat\theta_n - \theta)}{\theta}\right| \le \dfrac{d\sqrt{2n}}{\theta}\right) \ge 1 - \alpha,$
from which we get
$n \ge n^* = \xi\theta^2, \quad \xi = \dfrac{a^2}{2d^2}, \qquad (1)$
where $a = Z_{\alpha/2}$ is the upper $\alpha/2$ cut-off point of $N(0, 1)$.
Since $\theta$ is unknown, no fixed-sample-size procedure can achieve the above confidence interval uniformly for all $\mu$ and $\sigma$; see Dantzig [28]. Therefore, we resort to the three-stage procedure to estimate the unknown population inverse coefficient of variation $\theta$ via estimation of $n^*$.
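For a sense of scale, the following sketch evaluates the optimal fixed sample size $n^* = \xi\theta^2$ with $\xi = a^2/(2d^2)$; the values of $\alpha$, $d$, and $\theta$ are assumptions for the example only (in practice $\theta$ is unknown, which is exactly why the three-stage procedure is needed):

```python
import math

a = 1.96          # approximate upper 2.5% point of N(0, 1), i.e. alpha = 0.05
d = 0.1           # prescribed half-width of the confidence interval
theta = 2.0       # population inverse coefficient of variation (unknown in practice)

xi = a ** 2 / (2 * d ** 2)
n_star = xi * theta ** 2
print(round(xi, 2), round(n_star, 2), math.ceil(n_star))  # 192.08 768.32 769
```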

3. Three-Stage Sequential Estimation

Hall [29,30] introduced the idea of sampling in three stages for constructing a confidence interval for the mean of the normal distribution with prescribed width and coverage probability. His findings motivated many researchers to utilize the procedure to generate inference for other distributions; for a complete list of research, see Ghosh, Mukhopadhyay, and Sen [31]. Others have introduced point estimation under various error loss functions or tried to improve the quality of inference, such as protecting the inference against type II error probability, studying the operating characteristic curve, and/or discussing the sensitivity of three-stage sampling when the underlying distribution departs from normality. For details, see Costanza et al. [32], Hamdy et al. [33], Son et al. [34], Yousef et al. [35], Hamdy et al. [36], and Yousef [37,38].
In the following lines, we present Hall’s three-stage procedure, as described by Hall [29,30]. The procedure is based on three phases: the pilot phase, the main-study phase, and the fine-tuning phase.
The Pilot Phase: In the pilot phase, a random sample of size $m (\ge 3)$, say $(X_1, X_2, X_3, \ldots, X_m)$, is taken from the normal distribution to initiate the sample measures $\bar X_m$ for the population mean $\mu$ and $S_m$ for the population standard deviation $\sigma$. Hence, we propose to estimate the inverse coefficient of variation $\theta$ by the corresponding sample measure $\hat\theta_m$.
The Main-Study Phase: We estimate only a portion $\gamma$ $(0 < \gamma < 1)$ of $n^*$ to avoid possible oversampling. In the literature, $\gamma$ is known as the design factor.
$N_1 = \max\left\{m, \left[\gamma\xi\hat\theta_m^2\right] + 1\right\}, \qquad (2)$
where $[\cdot]$ denotes the largest-integer function.
If $N_1 = m$, we stop sampling at this stage; otherwise, we continue to sample an extra $N_1 - m$ observations, say $(X_{m+1}, X_{m+2}, X_{m+3}, \ldots, X_{N_1})$, and then we update the sample measures $\bar X_{N_1}$, $S_{N_1}$, and $\hat\theta_{N_1}$ for the unknown population parameters $\mu$, $\sigma$, and $\theta$, respectively.
The Fine-Tuning Phase: In the fine-tuning phase, the decision to stop sampling or to continue is based on the following stopping rule:
$N = \max\left\{N_1, \left[\xi\hat\theta_{N_1}^2\right] + 1\right\}. \qquad (3)$
If $N = N_1$, sampling is terminated; otherwise, we continue to sample an additional $N - N_1$ observations, say $(X_{N_1+1}, X_{N_1+2}, X_{N_1+3}, \ldots, X_N)$. Hence, we augment the previously collected $N_1$ observations with the new $N - N_1$ observations and update the sample estimates $\bar X_N$, $S_N$, and $\hat\theta_N$ for the unknown parameters $\mu$, $\sigma$, and $\theta$. Upon terminating the sampling process, we propose to estimate the unknown inverse coefficient of variation $\theta$ with the fixed-width $2d$ confidence interval $I_N = (\hat\theta_N - d, \hat\theta_N + d)$.
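The three phases above can be sketched in a few lines of Python. The function below is our own illustration of the stopping rules, with `draw(k)` an assumed helper returning $k$ fresh observations; the population parameters and design constants are chosen only for the example:

```python
import math
import random

def three_stage(draw, m, a, d, gamma):
    """Sketch of Hall's three-stage rule (notation as in the text).
    `draw(k)` is an assumed helper returning k fresh observations."""
    def theta_hat(xs):
        n = len(xs)
        mean = sum(xs) / n
        s2 = sum((x - mean) ** 2 for x in xs) / (n - 1)
        return mean / math.sqrt(s2)

    xi = a ** 2 / (2 * d ** 2)
    xs = list(draw(m))                                      # pilot phase
    n1 = max(m, int(gamma * xi * theta_hat(xs) ** 2) + 1)   # main-study phase, Equation (2)
    xs += draw(n1 - len(xs))
    n = max(n1, int(xi * theta_hat(xs) ** 2) + 1)           # fine-tuning phase, Equation (3)
    xs += draw(n - len(xs))
    est = theta_hat(xs)
    return n1, n, est, (est - d, est + d)

random.seed(1)
draw = lambda k: [random.gauss(10.0, 2.0) for _ in range(k)]  # true theta = 5
n1, n, est, interval = three_stage(draw, m=10, a=1.96, d=0.5, gamma=0.5)
print(n1, n, round(est, 3))
```

Note that `int(x) + 1` implements $[x] + 1$ for the positive quantities involved.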
The following asymptotic results are developed under the general assumptions set forward by Hall [29] to develop a theory for the three-stage procedure, condition (A): $\xi > 0$, $n^* \to \infty$, $\limsup (m/n^*) < \gamma$, and $\xi(m) = O(m^k)$, $k > 1$.
The following Helmert’s transformation is necessary to obtain asymptotic results regarding $S_{N_1}^{2k}$ and $S_N^{2k}$ for any real number $k$; we need to express the sample variance $S_n^2$ as an average of IID random variables. To do so, let
$W_i = \left(\sum_{j=1}^{i} Z_j - iZ_{i+1}\right)\Big/\sqrt{i(i+1)}, \quad i = 1, 2, \ldots, n-1; \qquad W_n = n^{-1/2}\sum_{j=1}^{n} Z_j,$
where $Z_i = (X_i - \mu)/\sigma$, $i = 1, \ldots, n$. It follows that the $W_i$ are IID $N(0, 1)$, $i = 1, 2, \ldots, n$. If we set $V_i = \sigma^2 W_i^2$, then $V_i \sim \sigma^2\chi^2(1)$ for $i = 2, \ldots, n$. From Lemma 2 of Robbins [39], it follows that $S_n^2$ and $\bar V_n = (n-1)^{-1}\sum_{i=2}^{n} V_i$ are identically distributed. So, in all the proofs we use $\bar V_n$ instead of $S_n^2$, $n \ge 2$, to develop asymptotic results regarding $E(S_{N_1}^{2k})$ and $E(S_N^{2k})$.
Under condition (A),
$P\left(N = \left[\xi\hat\theta_{N_1}^2\right] + 1\right) \to 1, \quad \text{and} \quad N_1/(\gamma n^*) \to 1 \text{ in probability as } m \to \infty.$
From Anscombe’s [40] central limit theorem, we have, as $\xi \to \infty$,
(i)
$\sqrt{2N_1}\,(\hat\theta_{N_1} - \theta) \to N(0, \theta^2)$ in distribution,
(ii)
$\sqrt{N_1}\,(\hat\theta_{N_1}^2 - \theta^2) \to N(0, 2\theta^4)$ in distribution.
Now, $N = \left[\xi\hat\theta_{N_1}^2\right] + 1$, except possibly on the set $\eta = \{N_1 < m\} \cup \{\xi\hat\theta_{N_1}^2 < \gamma\xi\hat\theta_m^2 + 1\}$, whose probability is asymptotically negligible. Therefore, for real $r$, we have
$E(N^r) = E\left(\xi\hat\theta_{N_1}^2 + \beta_{N_1}\right)^r + \int_{\eta} N^r\,dP = E\left(\xi\hat\theta_{N_1}^2 + \beta_{N_1}\right)^r + o(\xi^{r-1}),$
provided that the $r$th moment exists, where, as $\xi \to \infty$, $\beta_{N_1} = \xi\hat\theta_{N_1}^2 - \left[\xi\hat\theta_{N_1}^2\right] \to U(0, 1)$.

3.1. The Asymptotic Characteristics of the Main-Study Phase

The following theorem gives a second-order approximation to the $k$th moment of the sample mean of the main-study phase.
Theorem 1.
For the three-stage sampling rule in Equation (2), if condition (A) holds, then, as $\xi \to \infty$,
$E(\bar X_{N_1}^k) = \theta^k\sigma^k + \dfrac{k(k-5)\theta^{k-2}\sigma^k}{2\gamma n^*} + o(\xi^{-1}).$
Proof. 
We write
$E(\bar X_{N_1}^k) = \theta^k\sigma^k E\left(1 + \dfrac{\sum_{i=1}^{N_1} Z_i}{\theta N_1}\right)^k.$
Then, we expand the above expression in infinite series while conditioning on the σ –field generated by Z 1 ,   Z 2 ,   ,   Z m , where Z i = ( X i μ ) σ are standard normal variates. Notice also that,
$E\left(\dfrac{N_1 - m}{N_1}\right)^k = o(\xi^{-1}) \quad \text{as } \xi \to \infty.$
Thus
$E(\bar X_{N_1}^k) = \theta^k\sigma^k E\left(1 + \dfrac{\sum_{i=1}^{m} Z_i}{\theta N_1}\right)^k.$
Considering the first three terms of the infinite binomial series, expanding $N_1^{-1}$ and $N_1^{-2}$ in Taylor series around $\gamma n^*$, and taking the expectation throughout, the statement of Theorem 1 is immediate. □
Special cases of Theorem 1, when k = 1 and k = 2 , provide
$E(\bar X_{N_1}) = \mu - \dfrac{2\sigma}{\theta\gamma n^*} + o(\xi^{-1}), \qquad (4)$
$E(\bar X_{N_1}^2) = \mu^2 - \dfrac{3\sigma^2}{\gamma n^*} + o(\xi^{-1}). \qquad (5)$
It follows from Equations (4) and (5),
$\mathrm{Var}(\bar X_{N_1}) = \dfrac{\sigma^2}{\gamma n^*} + o(\xi^{-1}). \qquad (6)$
Theorem 2 below gives the $k$th moment of the three-stage sample variance of the main-study phase.
Theorem 2.
For the three-stage sampling rule in Equation (2), if condition (A) holds, then, for real $k$, as $\xi \to \infty$,
$E(S_{N_1}^{2k}) = \sigma^{2k} + \dfrac{k(k-1)\sigma^{2k}}{\gamma n^*} + o(\xi^{-1}).$
Proof. 
First, write $E(S_{N_1}^{2k}) = E(\bar V_{N_1}^{k})$. Hence, conditioning on the $\sigma$-field generated by $V_1, V_2, V_3, \ldots, V_{m-1}$, write
$E(S_{N_1}^{2k}) = E\left\{(N_1 - 1)^{-k}\, E\left[\left(\sum_{i=1}^{m-1} V_i + \sum_{i=m}^{N_1-1} V_i\right)^{k} \,\middle|\, V_1, V_2, V_3, \ldots, V_{m-1}\right]\right\}.$
Then we expand the binomial term as an infinite series:
$E(S_{N_1}^{2k}) = E\left\{(N_1 - 1)^{-k} \sum_{j=0}^{\infty} \zeta(k, j)\left(\sum_{i=1}^{m-1} V_i\right)^{k-j} E\left[\left(\sum_{i=m}^{N_1-1} V_i\right)^{j} \,\middle|\, V_1, V_2, V_3, \ldots, V_{m-1}\right]\right\},$
where $\zeta(k, j) = 1$ when $j = 0$, and $\zeta(k, j) = \prod_{r=1}^{j}(k - r + 1)/j!$ for $j = 1, 2, 3, \ldots$
Conditionally on $V_1, V_2, V_3, \ldots, V_{m-1}$, the random variable $\sum_{i=m}^{N_1-1} V_i$ is distributed as $\sigma^2\chi^2_{N_1-m}$, and therefore $E\left[\left(\sum_{i=m}^{N_1-1} V_i\right)^{j}\right] = (N_1 - m)^{j}\sigma^{2j}\left(1 + O(N_1^{-1})\right)$. Thus,
$E(S_{N_1}^{2k}) = \sigma^{2k} E\left(1 + \dfrac{\sum_{i=1}^{m-1} w_i}{N_1 - 1}\right)^{k} + o(\xi^{-1}),$
where $w_i = (V_i - \sigma^2)/\sigma^2$, with $E(w_i) = 0$ and $\mathrm{Var}(w_i) = 2$.
Considering the first three terms in the infinite expansion of the above expression, in addition to a remainder term, we have
$E(S_{N_1}^{2k}) = \sigma^{2k} + \sigma^{2k} k\, E\left(\dfrac{\sum_{i=1}^{m-1} w_i}{N_1 - 1}\right) + \dfrac{1}{2}\sigma^{2k} k(k-1)\, E\left(\dfrac{\sum_{i=1}^{m-1} w_i}{N_1 - 1}\right)^{2} + E(R(w)).$
Recall that $|E(R(w))| \le M\, E\left|\dfrac{\sum_{i=1}^{m-1} w_i}{N_1 - 1}\right|^{3}$, where $M$ is a generic constant. Since $m - 1 \le N_1 - 1$, we have
$|E(R(w))| \le M\, E\left|\dfrac{\sum_{i=1}^{m-1} w_i}{m - 1}\right|^{3} = M\, E\left|\bar V_m/\sigma^2 - 1\right|^{3} = o(\xi^{-1}).$
Consider the second term $\sigma^{2k} k\, E\left(\frac{\sum_{i=1}^{m-1} w_i}{N_1 - 1}\right)$ and expand $(N_1 - 1)^{-1}$ in a Taylor series:
$(N_1 - 1)^{-1} = (\gamma n^*)^{-1} - (N_1 - \gamma n^*)(\gamma n^*)^{-2} + (N_1 - \gamma n^*)^{2}\rho^{-3},$
where $\rho$ is a random variable lying between $N_1$ and $\gamma n^*$. It is not hard to show that
$E\left\{\left(\sum_{i=1}^{m-1} w_i\right)(N_1 - \gamma n^*)^{2}\rho^{-3}\right\} = o(\xi^{-1}).$
We omit details for brevity.
$\sigma^{2k} k\, E\left(\dfrac{\sum_{i=1}^{m-1} w_i}{N_1 - 1}\right) = \sigma^{2k-2} k m\, E\left\{(\bar V_m - \sigma^2)\left[(\gamma n^*)^{-1} - \left(\bar X_m^2 \bar V_m^{-1} - \theta^2\right)\big/(\theta^2\gamma n^*) + o(\xi^{-1})\right]\right\}$
$= -\dfrac{\sigma^{2k-2} k m}{\theta^2\gamma n^*}\left\{E\left(\bar X_m^2 \bar V_m^{-1}\bar V_m\right) - \sigma^2 E\left(\bar X_m^2 \bar V_m^{-1}\right)\right\} + o(\xi^{-1}) = o(\xi^{-1}).$
Likewise, recalling the third term and expanding $(N_1 - 1)^{-2}$ in a Taylor series, we get
$\dfrac{1}{2}\sigma^{2k} k(k-1)\, E\left(\dfrac{\sum_{i=1}^{m-1} w_i}{N_1 - 1}\right)^{2} = \dfrac{\sigma^{2k} k(k-1)}{\gamma n^*} + o(\xi^{-1}).$
Finally, collect terms, and the statement of Theorem 2 is complete. □
Particular cases of Theorem 2 at $k = -1/2$ and $k = -1$ are as follows:
$E(S_{N_1}^{-1}) = \sigma^{-1} + \dfrac{3\sigma^{-1}}{4\gamma n^*} + o(\xi^{-1}) \qquad (7)$
and
$E(S_{N_1}^{-2}) = \sigma^{-2} + \dfrac{2\sigma^{-2}}{\gamma n^*} + o(\xi^{-1}). \qquad (8)$
Theorems 1 and 2 above provide the following approximate upper-bound estimates for the sample inverse coefficient of variation of the main-study phase.
Corollary 3.
For the three-stage sampling rule in Equation (2), if condition (A) holds, then as ξ we have
(i) 
$E(\hat\theta_{N_1}) = \theta + \dfrac{3\theta}{4\gamma n^*} + o(\xi^{-1})$
(ii) 
$E(\hat\theta_{N_1}^2) = \theta^2 + \dfrac{2\theta^2}{\gamma n^*} + o(\xi^{-1})$
(iii) 
$\mathrm{Var}(\hat\theta_{N_1}) = \dfrac{\theta^2}{2\gamma n^*} + o(\xi^{-1})$
Proof. 
The proofs of (i) and (ii) follow immediately from Equations (4), (5), (7), and (8). Part (iii) follows from (i) and (ii). The proof is complete. □

3.2. The Asymptotic Characteristics of the Fine-Tuning Phase

Recall the representation of N and write
$E(N) = \xi E(\hat\theta_{N_1}^2) + E(\beta_{N_1}) + o(1) = n^* + \dfrac{4+\gamma}{2\gamma} + o(1), \qquad E(N^2) = n^{*2} + \dfrac{6}{\gamma}\, n^* + o(\xi), \qquad (9)$
and
$\mathrm{Var}(N) = \dfrac{2-\gamma}{\gamma}\, n^* + o(\xi). \qquad (10)$
Theorem 3 gives a second-order approximation of a continuously differentiable and bounded real-valued function of N .
Theorem 3.
Suppose condition (A) holds and let $h(\cdot) > 0$ be a real-valued, continuously differentiable, and bounded function such that $\sup_{n > m}|h''(n)| = O(|h''(n^*)|)$; then
$E\, h(N) = h(n^*) + \left(\dfrac{4+\gamma}{2\gamma}\right) h'(n^*) + \left(\dfrac{2-\gamma}{2\gamma}\right) n^*\, h''(n^*) + o\!\left(\xi^{-2} h(n^*)\right).$
Proof. 
The proof follows by expanding h ( N ) around n * using the Taylor series. Then utilizing Equations (9) and (10) in the expansion, we get the result. □
Theorem 4.
For the three-stage sampling rule in Equation (3), if condition (A) holds, then, as $\xi \to \infty$,
$E(\bar X_N^k) = \theta^k\sigma^k + \dfrac{k\theta^{k-2}\sigma^k\left\{\gamma(k-1) - 4\right\}}{2n^*} + o(\xi^{-1}).$
Proof. 
First, write
$E(\bar X_N^k) = \theta^k\sigma^k E\left(1 + \dfrac{\sum_{i=1}^{N} Z_i}{N\theta}\right)^k,$
then write the binomial expression as an infinite series:
$E\left(1 + \dfrac{\sum_{i=1}^{N} Z_i}{N\theta}\right)^k = \sum_{j=0}^{\infty} \zeta(k, j)\, E\left\{\left(\sum_{i=1}^{N} Z_i\right)^{j} (N\theta)^{-j}\right\},$
where $\zeta(k, j) = 1$ when $j = 0$, and $\zeta(k, j) = \prod_{r=1}^{j}(k - r + 1)/j!$ for $j = 1, 2, 3, \ldots$
Now, conditioning on the $\sigma$-field generated by $Z_1, Z_2, Z_3, \ldots, Z_{N_1}$, we write the conditional sum $\left(\sum_{i=1}^{N} Z_i\right)^{j} = \left(\sum_{i=1}^{N_1} Z_i + \sum_{i=N_1+1}^{N} Z_i\right)^{j}$ as a binomial expansion and take the conditional expectation. The sum $\left(\sum_{i=N_1+1}^{N} Z_i\right) \mid X_1, X_2, X_3, \ldots, X_{N_1}$ is distributed as $N(0, N - N_1)$; therefore, $\left(\sum_{i=N_1+1}^{N} Z_i\right)^{2}\big/(N - N_1) \mid X_1, X_2, X_3, \ldots, X_{N_1}$ is distributed as $\chi_1^2$. Hence,
$E\left[\left(\dfrac{\sum_{i=N_1+1}^{N} Z_i}{N}\right)^{k} \,\middle|\, X_1, X_2, X_3, \ldots, X_{N_1}\right] \le E\left(\dfrac{\sqrt{N - N_1}}{N}\right)^{k} \to 0$
as $\xi \to \infty$. Finally, we have
$E(\bar X_N^k) = \theta^k\sigma^k E\left(1 + \dfrac{\sum_{i=1}^{N_1} Z_i}{N\theta}\right)^k,$
where the $Z_i$ are standard normal variates.
Considering the first three terms and the remainder in the infinite series, expanding $N^{-1}$ and $N^{-2}$, and taking the expectation throughout, the statement of Theorem 4 is proved. It is not hard to show that the remainder term is of order $o(\xi^{-1})$. We omit further details. □
Special cases of Theorem 4 for $k = 1, 2$, and $4$ are particularly important:
$E(\bar X_N) = \mu - \dfrac{2\sigma}{\theta n^*} + o(\xi^{-1}), \qquad (11)$
$E(\bar X_N^2) = \mu^2 + \dfrac{(\gamma - 4)\sigma^2}{n^*} + o(\xi^{-1}), \qquad (12)$
$E(\bar X_N^4) = \mu^4 + \dfrac{2(3\gamma - 4)\theta^2\sigma^4}{n^*} + o(\xi^{-1}). \qquad (13)$
Theorem 5 gives a second-order approximation for the $k$th moment of the fine-tuning sample variance.
Theorem 5.
For the three-stage sampling rule in Equation (3), if condition (A) holds, then, for real $k$, as $\xi \to \infty$,
$E(S_N^{2k}) = \sigma^{2k} + \dfrac{\gamma\sigma^{2k} k(k-1)}{n^*} + o(\xi^{-1}).$
Proof. 
The proof of Theorem 5 can be justified along the lines of the proof of Theorem 4. We condition on the $\sigma$-field generated by $V_1, V_2, V_3, \ldots, V_{N_1-1}$ and expand $\left(\sum_{i=1}^{N_1-1} V_i + \sum_{i=N_1}^{N-1} V_i\right)^{k}$ as an infinite series to get
$E(\bar V_N^k) = E\left\{(N-1)^{-k}\sum_{j=0}^{\infty} \zeta(k, j)\left(\sum_{i=1}^{N_1-1} V_i\right)^{k-j} E\left[\left(\sum_{i=N_1}^{N-1} V_i\right)^{j} \,\middle|\, V_1, V_2, V_3, \ldots, V_{N_1-1}\right]\right\},$
where $\zeta(k, j) = 1$ when $j = 0$, and $\zeta(k, j) = \prod_{r=1}^{j}(k - r + 1)/j!$ for $j = 1, 2, 3, \ldots$
The random sum $\left(\sum_{i=N_1}^{N-1} V_i\right) \mid V_1, V_2, V_3, \ldots, V_{N_1-1}$ is distributed as $\sigma^2\chi^2_{(N-N_1)}$, and
$E\left[\left(\sum_{i=N_1}^{N-1} V_i\right)^{j} \,\middle|\, V_1, V_2, V_3, \ldots, V_{N_1-1}\right] = (N - N_1)^{j}\sigma^{2j}\left(1 + O(N^{-1})\right).$
Thus, $E(\bar V_N^k) = \sigma^{2k} E\left(1 + \dfrac{\sum_{i=1}^{N_1-1} w_i}{N - 1}\right)^{k} + o(\xi^{-1})$, where the $w_i$ are as defined before.
Considering the first three terms in the infinite series and the remainder, expanding $(N-1)^{-1}$ and $(N-1)^{-2}$ in Taylor series, and then taking the expectation throughout while applying Wald’s first and second equations [41], the statement of Theorem 5 is justified. □
Special cases of Theorem 5 at $k = -1/2, -1$, and $-2$ are of particular interest for obtaining the moments of $\hat\theta_N$:
$E(S_N^{-1}) = \sigma^{-1} + \dfrac{3\gamma\sigma^{-1}}{4n^*} + o(\xi^{-1}), \qquad (14)$
$E(S_N^{-2}) = \sigma^{-2} + \dfrac{2\gamma\sigma^{-2}}{n^*} + o(\xi^{-1}), \qquad (15)$
$E(S_N^{-4}) = \sigma^{-4} + \dfrac{6\gamma\sigma^{-4}}{n^*} + o(\xi^{-1}). \qquad (16)$
Corollary 4.
For the three-stage sampling rule in Equation (3), if condition (A) holds, then, as $\xi \to \infty$, we have
(i) 
$E(\hat\theta_N) = \theta + \dfrac{3\gamma\theta^2 - 8}{4\theta n^*} + o(\xi^{-1})$
(ii) 
$E(\hat\theta_N^2) = \theta^2 + \dfrac{\gamma(2\theta^2 + 1) - 4}{n^*} + o(\xi^{-1})$
(iii) 
$\mathrm{Var}(\hat\theta_N) = \dfrac{\gamma(\theta^2 + 2)}{2n^*} + o(\xi^{-1})$
(iv) 
$E(\hat\theta_N^4) = \theta^4 + \dfrac{6\gamma\theta^2(\theta^2 + 1) - 8\theta^2}{n^*} + o(\xi^{-1}).$
Proof. 
Part (i) and Part (ii) follow from Equations (11), (12), (14) and (15). Part (iii) follows from (i) and (ii) while part (iv) follows from Equations (13) and (16). The proof is complete. □

3.3. The Asymptotic Coverage Probability of the Inverse Coefficient of Variation

Recall the three-stage confidence interval $I_N = (\hat\theta_N - d, \hat\theta_N + d)$ for the inverse coefficient of variation; the coverage probability is given by
$P(\theta \in I_N) = \sum_{n=m}^{\infty} P\left(|\hat\theta_N - \theta| \le d,\, N = n\right) = \sum_{n=m}^{\infty} P\left(|\hat\theta_n - \theta| \le d \mid N = n\right) P(N = n).$
From Anscombe [40], we have, as $\xi \to \infty$, $\sqrt{2N}(\hat\theta_N - \theta)/\theta \to N(0, 1)$, which is independent of the random variable $N = m, m+1, m+2, \ldots$; thus
$P(\theta \in I_N) = \sum_{n=m}^{\infty} P\left(\left|\dfrac{\sqrt{2n}(\hat\theta_n - \theta)}{\theta}\right| \le \dfrac{d\sqrt{2n}}{\theta}\right) P(N = n) = E\left\{2\Phi\left(\dfrac{d\sqrt{2N}}{\theta}\right) - 1\right\}.$
Utilizing Theorem 3, we get
$P(\theta \in I_N) = (1 - \alpha) + \dfrac{a\phi(a)}{4\gamma n^*}\left(a^2(\gamma - 2) + 3(\gamma + 2)\right) + o(d^2), \qquad (17)$
where, Φ ( . ) and ϕ ( . ) are the cumulative and the density functions of the standard normal distribution, respectively.
The asymptotic coverage probability in Equation (17) depends on the choice of $\gamma$ and $a$. If we choose the design factor $\gamma < 2(a^2 - 3)/(a^2 + 3)$, then $P(\theta \in I_N) < 1 - \alpha$; otherwise, it attains at least the desired $1 - \alpha$.
To study the effect of changing $\gamma$ on the performance of the asymptotic coverage probability in Equation (17) as the optimal sample size increases, we take $n^* = 24, 43, 61, 76, 96, 125, 171, 246$, and $500$, as preferred by Hall [29], and take $\gamma = 0.2, 0.5$, and $0.8$. Table 1 below shows the results for the $90\%$, $95\%$, and $99\%$ confidence coefficients. We notice that at $90\%$ the asymptotic coverage probability exceeds $0.90$ for all chosen $\gamma$, while at $95\%$ it exceeds $0.95$ only at $\gamma = 0.5$ and $\gamma = 0.8$. At $99\%$, it exceeds $0.99$ only at $\gamma = 0.8$.
This means that the three-stage procedure attains consistency or asymptotic consistency in the sense of Chow and Robbins [25], depending on the choice of the design factor and the confidence coefficient. It appears that the three-stage procedure loses consistency as $(1 - \alpha)$ increases. Figure 1, Figure 2, and Figure 3 display the tabulated results as graphs for clarification.
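The behavior described above can be reproduced directly from Equation (17). The sketch below is our own; the values of $a$ and $n^*$ are illustrative, and $a = 1.96$, $a = 2.576$ are the approximate upper $\alpha/2$ points for $95\%$ and $99\%$:

```python
import math

def coverage(alpha, a, gamma, n_star):
    """Second-order coverage approximation of Equation (17)."""
    phi_a = math.exp(-a * a / 2) / math.sqrt(2 * math.pi)   # standard normal density
    correction = a * phi_a * (a * a * (gamma - 2) + 3 * (gamma + 2)) / (4 * gamma * n_star)
    return (1 - alpha) + correction

def gamma_threshold(a):
    """Design-factor threshold 2(a^2 - 3)/(a^2 + 3) below which coverage falls short."""
    return 2 * (a * a - 3) / (a * a + 3)

print(gamma_threshold(1.96))           # about 0.246
print(coverage(0.05, 1.96, 0.2, 100))  # below 0.95
print(coverage(0.05, 1.96, 0.5, 100))  # above 0.95
```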
The quantity $\left\{\dfrac{a^2(\gamma - 2) + 3(\gamma + 2)}{4\gamma}\right\}$ is known as the cost of ignorance (the cost of not knowing the variance $\sigma^2$); see Simons [42] for details.

4. The Sensitivity of Three-Stage Sampling to Shift in the Population Inverse Coefficient of Variation

The word sensitivity for sequential procedures refers either to sensitivity to departures from the underlying distribution or to sensitivity to a shift in the true parameter value. Bhattacharjee [43], Blumenthal and Govindarajulu [44], Ramkaran [45], and Sook and DasGupta [46] were the first to examine the robustness of Stein’s two-stage sampling procedure [47] to departures from normality. Costanza et al. [32] and Son et al. [34] were the first to address the sensitivity of the three-stage confidence interval against the type II error probability while estimating the mean of the normal distribution. Hamdy [33] studied the same problem for the exponential distribution. However, Hamdy et al. [36] provided a more comprehensive analysis of both departure from the underlying distribution and shift in the true parameter.
Suppose we need to investigate the capability of the constructed fixed-width confidence interval $I_N$ to signal potential shifts of distance $l (\ge 0)$ in the true population inverse coefficient of variation $\theta_0$ occurring outside the interval when it is incorrectly thought that no such shift took place. In some applications, as in quality control, it is a matter of concern to monitor closely the sensitivity of the interval to detect any departure from the centerline, in order to ensure the credibility of the interval. In this regard, we set up the null and alternative hypotheses as follows:
$H_0: \theta = \theta_0 \quad \text{vs.} \quad H_a: \theta = \theta_1 = \theta_0 \pm d(l + 1) \notin I_N \quad \text{for all } l \ge 0, \qquad (18)$
where $H_0: \theta = \theta_0$ claims that no departure of the true parameter from $\theta_0$ has taken place, against the alternative hypothesis $H_a$, which alleges that the parameter value differs from $\theta_0$ by a distance $l + 1$ measured in units of the precision $d$.
The probability of not detecting a shift in the true parameter can be statistically measured by the corresponding type II error probability ($\beta$-risk), which is, in fact, the conditional probability of not detecting a departure from $\theta_0$ when the departure actually occurred. In quality assurance, the $\beta$-risk is known through the operating characteristic function
$\beta(l) = P(\theta_0 \in I_N \mid H_a) = P\left(\hat\theta_N - d \le \theta_0 \le \hat\theta_N + d \mid \theta_1 = \theta_0 \pm d(l + 1)\right). \qquad (19)$
Since the process has an equal probability of committing a type II error above or below the centerline, we therefore consider only a positive shift from the true parameter value $\theta_0$.
Let $\tau$ be the probability of committing a type II error, that is, the probability of concluding that no shift occurred given that a shift actually occurred. Our objective is to control this probability. We do so by finding the operating characteristic (OC) curve, which gives the probability of acceptance for various possible values of $\theta_1$. The minimum sample size required to control both $\alpha$ and $\tau$ is
$n_0 = \dfrac{(a + b)^2}{2d^2}\,\theta^2,$
where b = Z τ / 2 is the upper τ / 2 point of N ( 0 ,   1 ) . For more details, see Nelson [48,49], Hamdy [33], and Son et al. [34].
The second-order approximation of the operating characteristic function under Equations (18) and (19), as $\xi \to \infty$, is
$\beta(l) = P(\theta_0 \in I_N \mid H_a) = \sum_{n=m}^{\infty} P\left(|\hat\theta_N - \theta_0| \le d \mid N = n\right) P(N = n) = \sum_{n=m}^{\infty} P\left(-(2 + l)d \le \hat\theta_N - \theta_1 \le -ld\right) P(N = n)$
$= E_N\left(\Phi\left(-ld\sqrt{2N}/\theta\right)\right) - E_N\left(\Phi\left(-(2 + l)d\sqrt{2N}/\theta\right)\right).$
Utilizing Theorem 3, we obtain
$E_N\left(\Phi\left(-ld\sqrt{2N}/\theta\right)\right) = \Phi\left(-l(a + b)\right) - \dfrac{(a + b)}{8\gamma n_0}\,\phi\left(l(a + b)\right)\, l\left\{(a + b)^2(\gamma - 2)l^2 + 3(\gamma + 2)\right\}.$
Similarly for $E_N\left(\Phi\left(-(2 + l)d\sqrt{2N}/\theta\right)\right)$.
Hence,
$\beta(l) = \Phi\left(-l(a + b)\right) - \Phi\left(-(2 + l)(a + b)\right) - Q_1 + Q_2 + o(\xi^{-2}), \qquad (20)$
where
$Q_1 = \dfrac{(a + b)}{8\gamma n_0}\,\phi_1\, l\left\{(a + b)^2(\gamma - 2)l^2 + 3(\gamma + 2)\right\}$
and
$Q_2 = \dfrac{(a + b)}{8\gamma n_0}\,\phi_2\, (2 + l)\left\{(a + b)^2(\gamma - 2)(l + 2)^2 + 3(\gamma + 2)\right\},$
where φ₁ = φ(ℓ(a + b)) and φ₂ = φ((2 + ℓ)(a + b)).
Costanza et al. [32] and Son et al. [34] treated the case of the mean of the normal distribution.
Equation (20) depends on the shift ℓ, the design factor γ, and the optimal sample size n₀. Table 2 below shows the β-risk values as the shift and the optimal sample size increase, taking ℓ = 0, 0.1, …, 1. As the shift increases, the risk decreases. Figure 4 below demonstrates this idea.
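The second-order approximation in Equation (20) can be checked numerically. The sketch below is our own implementation (not from the paper) of β(ℓ) = Φ(−ℓ(a+b)) − Φ(−(2+ℓ)(a+b)) − Q₁ + Q₂, with a and b the upper α/2 and τ/2 normal points; it reproduces the Table 2 entries for γ = 0.5 and α = τ = 5%.

```python
from math import exp, pi, sqrt
from statistics import NormalDist

Phi = NormalDist().cdf                                 # standard normal CDF

def pdf(x):
    """Standard normal density."""
    return exp(-x * x / 2) / sqrt(2 * pi)

def beta_risk(shift, n0, gamma, alpha=0.05, tau=0.05):
    """Second-order approximation of the OC function beta(shift) in Equation (20)."""
    a = NormalDist().inv_cdf(1 - alpha / 2)            # upper alpha/2 point of N(0, 1)
    b = NormalDist().inv_cdf(1 - tau / 2)              # upper tau/2 point of N(0, 1)
    s = a + b

    def Q(u):                                          # correction term, u = shift or shift + 2
        return (s / (8 * gamma * n0)) * pdf(u * s) * u * (
            s ** 2 * (gamma - 2) * u ** 2 + 3 * (gamma + 2))

    return Phi(-shift * s) - Phi(-(2 + shift) * s) - Q(shift) + Q(shift + 2)

# Reproduces Table 2 (gamma = 0.5, alpha = tau = 5%):
print(round(beta_risk(0.1, 24, 0.5), 4))    # 0.3366
print(round(beta_risk(0.5, 500, 0.5), 4))   # 0.0249
```

At ℓ = 0 the correction Q₁ vanishes and β(0) ≈ Φ(0) = 0.5, matching the first row of Table 2.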

5. The Asymptotic Regret Encountered in Point Estimation of the Inverse Coefficient of Variation

In this section, we aim to find the asymptotic regret that occurs when we use the sample inverse coefficient of variation rather than the population inverse coefficient of variation. We use the squared-error loss function with linear sampling cost. A typical situation is the construction of a quality control chart for the inverse coefficient of variation, where estimation of both the control limits (upper and lower) and the centerline is required.
Suppose we want to utilize the available data to provide a point estimate of θ (the centerline) under the squared-error loss function with linear sampling cost. We therefore assume that the cost incurred in estimating θ is given by
L_n(A) = A(θ̂_n − θ)² + cn,
where c is the cost per unit sample. Regarding A, the literature in sequential point estimation customarily assumes that A is a known constant, which reflects the cost of estimation and can be permitted to approach ∞. Here, however, we try to give a better understanding of the nature of A in this context. First, the risk associated with the above loss function is given by
R_n(A) = E[L_n(A)] = Aθ²/(2n) + cn.
Minimizing the risk associated with the loss function provides the optimal sample size n₀ = √(A/(2c)) θ. If this optimal sample size is also to serve as the optimal fixed sample size for a fixed-width 2d confidence interval for θ, with coverage probability at least the nominal value, when proposing θ̂_n for θ under the squared-error loss function, then the constant A should be chosen such that
A = c a⁴θ²/(2d⁴) = (a²/d²)(cn*).
Clearly, as d → 0, A → ∞, where A = (the Fisher information factor a²/d²) × (the optimal cost of sampling cn*).
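The minimization step and the choice of A can be verified numerically: with A = c a⁴θ²/(2d⁴), the minimizer of R_n(A) coincides with the fixed-width optimal sample size n* = a²θ²/(2d²), and the minimal risk is 2cn*. The sketch below is ours; the values of θ, c, d, and α are hypothetical, and θ is treated as known only to check the algebra.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical illustration values (not from the paper):
theta, c, d, alpha = 2.0, 0.1, 0.25, 0.05
a = NormalDist().inv_cdf(1 - alpha / 2)          # upper alpha/2 point of N(0, 1)

A = c * a ** 4 * theta ** 2 / (2 * d ** 4)       # the choice of A above
n_star = a ** 2 * theta ** 2 / (2 * d ** 2)      # fixed-width optimal sample size n*
n_opt = theta * sqrt(A / (2 * c))                # minimizer of the risk below

def risk(n):
    """Risk R_n(A) = A * theta^2 / (2n) + c * n under squared-error loss."""
    return A * theta ** 2 / (2 * n) + c * n

# The minimizer coincides with n*, and the minimal risk equals 2 c n*:
print(abs(n_opt - n_star) < 1e-6)                # True
print(abs(risk(n_opt) - 2 * c * n_star) < 1e-6)  # True
```

At the minimum the two components of the risk, Aθ²/(2n) and cn, are equal, which is why the optimal risk is exactly twice the sampling cost cn*.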
In this case, the optimal risk is given by R_{n*}(d) = 2cn*. The asymptotic regret, defined as the risk of using the three-stage procedure minus the optimal risk (see Robbins [39]), would be
ω(d) = R_N(d) − R_{n*}(d),
where
R_N(d) = A E(θ̂_N − θ)² + c E(N).
The risk of the three-stage sampling can be approximated by the upper bound
R_N(d) ≤ cn*(γ + 1) + c(4 + γ)/(2γ).
Hence, the asymptotic regret is
ω(d) = cn*(γ − 1) + c(4 + γ)/(2γ)   as d → 0,
which provides negative regret. This means that the three-stage procedure does better than the optimal fixed sample size had θ been known. Martinsek [50] discussed the issue of negative regret in sequential point estimation.
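A minimal numerical sketch of the regret, reading the constant term of the upper bound as c(4 + γ)/(2γ): for any design factor 0 < γ < 1 the leading term cn*(γ − 1) is negative and grows in magnitude with n*, so the regret becomes negative as d → 0 (equivalently, as n* → ∞). The values of c, n*, and γ below are hypothetical.

```python
def regret(c, n_star, gamma):
    """Asymptotic regret omega(d) = c*n*(gamma - 1) + c*(4 + gamma)/(2*gamma)."""
    return c * n_star * (gamma - 1) + c * (4 + gamma) / (2 * gamma)

# The leading term dominates for moderately large n*, giving negative regret:
print(regret(c=1.0, n_star=100, gamma=0.5))   # -45.5
```

For very small n* the positive constant term can dominate, which is consistent with the regret being an asymptotic (d → 0) statement.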

6. Conclusions

This paper theoretically tackles three estimation problems for the population inverse coefficient of variation of the normal distribution under Hall's three-stage procedure. We obtain asymptotic mathematical forms for the population inverse coefficient of variation, the asymptotic coverage probability, the operating characteristic function, and the asymptotic regret. We find the range of the design factor that makes the three-stage procedure achieve consistency or asymptotic consistency as the width of the interval approaches zero. The asymptotic regret takes negative values for all possible values of the design factor.

Author Contributions

Conceptualization, A.Y.; methodology, A.Y. and H.H.; software, A.Y.; validation, A.Y. and H.H.; formal analysis, A.Y.; investigation, A.Y. and H.H.; resources, A.Y.; data curation, A.Y. and H.H.; writing, original draft preparation, A.Y.; writing, review and editing, A.Y.; visualization, A.Y. and H.H.; supervision, A.Y.; project administration, A.Y.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nairy, K.S.; Rao, K.A. Tests of Coefficients of Variation of Normal Population. Commun. Stat. Simul. Comput. 2003, 3, 641–661. [Google Scholar] [CrossRef]
  2. Ahn, K. On the use of coefficient of variation for uncertainty analysis in fault tree analysis. Reliab. Eng. Syst. Saf. 1995, 47, 229–230. [Google Scholar] [CrossRef]
  3. Gong, J.; Li, Y. Relationship between the Estimated Weibull Modulus and the Coefficient of Variation of the Measured Strength for Ceramics. J. Am. Ceram. Soc. 1999, 82, 449–452. [Google Scholar] [CrossRef]
  4. Faber, D.S.; Korn, H. Applicability of the coefficient of variation method for analyzing synaptic plasticity. Biophys. J. 1991, 60, 1288–1294. [Google Scholar] [CrossRef] [Green Version]
  5. Hammer, A.J.; Strachan, J.J.; Black, M.M.; Ibbotson, C.; Elson, R.A. A new method of comparative bone strength measurement. J. Med. Eng. Technol. 1995, 19, 1–5. [Google Scholar] [CrossRef] [PubMed]
  6. Billings, J.; Zeitel, L.; Lukomnik, J.; Carey, T.S.; Blank, A.E.; Newman, L. Impact of socioeconomic status on hospital use in New York City. Health Aff. 1993, 12, 162–173. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. Brief, R.P.; Owen, J. A Note on earnings risk and the coefficient of variation. J. Financ. 1969, 24, 901–904. [Google Scholar] [CrossRef]
  8. Pyne, D.B.; Trewin, C.B.; Hopkins, W.G. Progression and variability of competitive performance of Olympic swimmers. J. Sports Sci. 2004, 22, 613–620. [Google Scholar] [CrossRef]
  9. Kelly, K. Sample size planning for the coefficient of variation from the accuracy in parameter estimation approach. Behav. Res. Methods 2007, 39, 755–766. [Google Scholar] [CrossRef] [Green Version]
  10. Gulhar, M.; Kibria, B.M.; Albatineh, A.N.; Ahmed, N.U. A Comparison of some confidence intervals for estimating the population coefficient of variation: A simulation study. SORT Stat. Oper. Res. Trans. 2012, 36, 45–68. [Google Scholar]
  11. McGibney, G.; Smith, M.R. An unbiased signal to noise ratio measure for magnetic resonance images. Med. Phys. 1993, 20, 1077–1079. [Google Scholar] [CrossRef] [PubMed]
  12. Knight, J.L.; Satchell, S. A Re-Examination of Sharpe’s Ratio for Log-Normal Prices. Appl. Math. Financ. 2005, 12, 87–100. [Google Scholar] [CrossRef]
  13. Hendricks, W.A.; Robey, K.W. The Sampling Distribution of the Coefficient of Variation. Ann. Math. Stat. 1936, 7, 129–139. [Google Scholar] [CrossRef]
  14. Koopmans, L.H.; Owen, D.B.; Rosenblatt, J.I. Confidence intervals for the coefficient of variation for the normal and log-normal distribution. Biometrika 1964, 51, 25–32. [Google Scholar] [CrossRef]
  15. Rao, K.A.; Bhatta, A.R. A note on test for coefficient of variation. Calcutta Stat. Assoc. Bull. 1989, 38, 225–229. [Google Scholar] [CrossRef]
  16. McKay, A.T. Distribution of the coefficient of variation and the extended t distribution. J. R. Stat. Soc. 1932, 95, 695–698. [Google Scholar] [CrossRef]
  17. Umphrey, G.J. A comment on McKay’s approximation for the coefficient of variation. Commun. Stat. Simul. Comput. 1983, 12, 629–635. [Google Scholar] [CrossRef]
  18. Vangel, M.G. Confidence intervals for a normal coefficient of variation. Am. Stat. 1996, 50, 21–26. [Google Scholar]
  19. Miller, E.G. Asymptotic test statistics for coefficient of variation. Commun. Stat. Theory Methods 1991, 20, 3351–3363. [Google Scholar] [CrossRef]
  20. Lehmann, E.L. Theory of Point Estimation, 2nd ed.; Wiley: New York, NY, USA, 1983. [Google Scholar]
  21. Curto, J.D.; Pinto, J.C. The coefficient of variation asymptotic distribution in the case of non-iid random variables. J. Appl. Stat. 2009, 36, 21–32. [Google Scholar] [CrossRef]
  22. Sharma, K.K.; Krishna, H. Asymptotic sampling distribution of inverse coefficient-of variation and its applications. IEEE Trans. Reliab. 1994, 43, 630–633. [Google Scholar] [CrossRef]
  23. Albatineh, A.; Kibria, B.M.; Zogheib, B. Asymptotic sampling distribution of inverse coefficient of variation and its applications: Revisited. Int. J. Adv. Stat. Probab. 2014, 2, 15–20. [Google Scholar] [CrossRef]
  24. Chaturvedi, A.; Rani, U. Fixed-width confidence interval estimation of the inverse coefficient of variation in a normal population. Microelectron. Reliab. 1996, 36, 1305–1308. [Google Scholar] [CrossRef]
  25. Chow, Y.S.; Robbins, H. On the asymptotic theory of fixed width confidence intervals for the mean. Ann. Math. Stat. 1965, 36, 457–462. [Google Scholar] [CrossRef]
  26. Chattopadhyay, B.; Kelley, K. Estimation of the Coefficient of Variation with Minimum Risk: A Sequential Method for Minimizing Sampling Error and Study Cost. Multivar. Behav. Res. 2016, 51, 627–648. [Google Scholar] [CrossRef] [PubMed]
  27. Yousef, A.; Hamdy, H. Three-stage estimation for the mean and variance of the normal distribution with application to inverse coefficient of variation. Mathematics 2019, 7, 831. [Google Scholar] [CrossRef] [Green Version]
  28. Dantzig, G.B. On the Non-Existence of Tests of Student’s Hypothesis Having Power Function Independent of σ. Ann. Math. Stat. 1940, 11, 186–192. [Google Scholar] [CrossRef]
  29. Hall, P. Asymptotic Theory and Triple Sampling of Sequential Estimation of a Mean. Ann. Stat. 1981, 9, 1229–1238. [Google Scholar] [CrossRef]
  30. Hall, P. Sequential Estimation Saving Sampling Operations. J. R. Stat. Soc. 1983, 45, 1229–1238. [Google Scholar] [CrossRef]
  31. Ghosh, M.; Mukhopadhyay, N.; Sen, P. Sequential Estimation; Wiley: New York, NY, USA, 1997. [Google Scholar]
  32. Costanza, M.C.; Hamdy, H.I.; Haugh, L.D.; Son, M.S. Type II Error Performance of Triple Sampling Fixed Precision Confidence Intervals for the Normal Mean. Metron 1995, 53, 69–82. [Google Scholar]
  33. Hamdy, H.I. Performance of Fixed Width Confidence Intervals under Type II Errors: The Exponential Case. South African. J. Stat. 1997, 31, 259–269. [Google Scholar]
  34. Son, M.S.; Haugh, L.D.; Hamdy, H.I.; Costanza, M.C. Controlling Type II Error while Constructing Triple Sampling Fixed Precision Confidence Intervals for the Normal Mean. Ann. Inst. Stat. Math. 1997, 49, 681–692. [Google Scholar] [CrossRef]
  35. Yousef, A.; Kimber, A.; Hamdy, H.I. Sensitivity of Normal-Based Triple Sampling Sequential Point Estimation to the Normality Assumption. J. Stat. Plan. Inference 2013, 143, 1606–1618. [Google Scholar] [CrossRef] [Green Version]
  36. Hamdy, H.I.; Son, S.M.; Yousef, S.A. Sensitivity Analysis of Multi-Stage Sampling to Departure of an underlying Distribution from Normality with Computer Simulations. J. Seq. Anal. 2015, 34, 532–558. [Google Scholar] [CrossRef]
  37. Yousef, A. Construction a Three-Stage Asymptotic Coverage Probability for the Mean Using Edgeworth Second-Order Approximation. In International Conference on Mathematical Sciences and Statistics; Springer: Singapore, 2014; pp. 53–67. [Google Scholar] [CrossRef]
  38. Yousef, A. A Note on a Three-Stage Sequential Confidence Interval for the Mean When the Underlying Distribution Departs away from Normality. Int. J. Appl. Math. Stat. 2018, 57, 57–69. [Google Scholar]
  39. Robbins, H. Sequential Estimation of the Mean of a Normal Population. In Probability, and Statistics; Almquist and Wicksell: Uppsala, Sweden, 1959; pp. 235–245. [Google Scholar]
  40. Anscombe, F.J. Large-sample theory of sequential estimation. In Mathematical Proceedings of the Cambridge Philosophical Society; Cambridge University Press: Cambridge, UK, 1952; Volume 48, pp. 600–607. [Google Scholar]
  41. Wald, A. Sequential Analysis; Wiley: New York, NY, USA, 1947. [Google Scholar]
  42. Simons, G. On the cost of not knowing the variance when making a fixed-width interval estimation of the mean. Ann. Math. Stat. 1968, 39, 1946–1952. [Google Scholar] [CrossRef]
  43. Bhattacharjee, G.P. Effect of non-normality on Stein’s two-sample test. Ann. Math. Stat. 1965, 36, 651–663. [Google Scholar] [CrossRef]
  44. Blumenthal, S.; Govindarajulu, Z. Robustness of Stein’s two-stage procedure for mixtures of normal populations. J. Am. Stat. Assoc. 1977, 72, 192–196. [Google Scholar] [CrossRef]
  45. Ramkaran. The robustness of Stein’s two-stage procedure. J. Seq. Anal. 1983, 5, 139–168. [Google Scholar] [CrossRef]
  46. Oh, H.S.; Dasgupta, A. Robustness of Stein’s two-stage procedure. Seq. Anal. 1995, 14, 321–334. [Google Scholar]
  47. Stein, C.A. Two-sample test for a linear hypothesis whose power is independent of the variance. Ann. Math. Stat. 1945, 16, 243–258. [Google Scholar] [CrossRef]
  48. Nelson, L.S. Comments on significant tests and confidence intervals. J. Qual. Technol. 1990, 22, 328–330. [Google Scholar] [CrossRef]
  49. Nelson, L.S. Sample sizes for confidence intervals with specified length and tolerances. J. Qual. Technol. 1994, 26, 54–63. [Google Scholar] [CrossRef]
  50. Martinsek, A.T. Negative regret, optimal stopping, and the elimination of outliers. J. Am. Stat. Assoc. 1988, 10, 65–80. [Google Scholar]
Figure 1. Performance of the asymptotic coverage probability at 90%.
Figure 2. Performance of the asymptotic coverage probability at 95%.
Figure 3. Performance of the asymptotic coverage probability at 99%.
Figure 4. Performance of the operating characteristic curve as the shift increases.
Table 1. Asymptotic coverage probabilities for different values of γ as the optimal sample size increases.

            1 − α = 0.90                 1 − α = 0.95                 1 − α = 0.99
n*     γ = 0.2  γ = 0.5  γ = 0.8    γ = 0.2  γ = 0.5  γ = 0.8    γ = 0.2  γ = 0.5  γ = 0.8
24     0.91528  0.91216  0.91138    0.94812  0.95415  0.95565    0.97964  0.98810  0.99021
43     0.90853  0.90679  0.90635    0.94895  0.95231  0.95315    0.98422  0.98894  0.99012
61     0.90601  0.90478  0.90448    0.94926  0.95163  0.95222    0.98592  0.98925  0.99008
76     0.90482  0.90384  0.90359    0.94941  0.95131  0.95178    0.98673  0.98940  0.99007
96     0.90382  0.90304  0.90285    0.94953  0.95104  0.95141    0.98741  0.98952  0.99005
125    0.90293  0.90233  0.90218    0.94964  0.95080  0.95109    0.98801  0.98962  0.99004
171    0.90214  0.90171  0.90160    0.94974  0.95058  0.95079    0.98855  0.98973  0.99003
246    0.90149  0.90119  0.90111    0.94982  0.95040  0.95055    0.98899  0.98981  0.99002
500    0.90073  0.90058  0.90055    0.94991  0.95020  0.95027    0.98950  0.98991  0.99001
Table 2. The sensitivity of the three-stage procedure as the shift and the optimal sample size increase, with γ = 0.5 and α = τ = 5%.

Shift   n* = 24    43      61      76      96      125     171     246     500
0       0.5000   0.5000  0.5000  0.5000  0.5000  0.5000  0.5000  0.5000  0.5000
0.1     0.3366   0.3414  0.3432  0.3441  0.3448  0.3454  0.3460  0.3465  0.3470
0.2     0.2008   0.2077  0.2103  0.2115  0.2126  0.2135  0.2143  0.2150  0.2158
0.3     0.1065   0.1124  0.1146  0.1156  0.1165  0.1172  0.1179  0.1185  0.1192
0.4     0.0512   0.0544  0.0556  0.0561  0.0566  0.0570  0.0574  0.0577  0.0581
0.5     0.0229   0.0238  0.0242  0.0243  0.0245  0.0246  0.0247  0.0248  0.0249
0.6     0.0098   0.0096  0.0095  0.0095  0.0095  0.0094  0.0094  0.0094  0.0094
0.7     0.0040   0.0036  0.0034  0.0034  0.0033  0.0032  0.0032  0.0031  0.0031
0.8     0.0015   0.0012  0.0011  0.0011  0.0010  0.0010  0.0010  0.0009  0.0009
0.9     0.0005   0.0004  0.0003  0.0003  0.0003  0.0003  0.0003  0.0002  0.0002
1.0     0.0002   0.0001  0.0001  0.0001  0.0001  0.0001  0.0001  0.0001  0.0001

Yousef, A.; Hamdy, H. Three-Stage Sequential Estimation of the Inverse Coefficient of Variation of the Normal Distribution. Computation 2019, 7, 69. https://doi.org/10.3390/computation7040069
