Article

Tolerance Limits and Sample-Size Determination Using Weibull Trimmed Data

by
Arturo J. Fernández
Departamento de Matemáticas, Estadística e Investigación Operativa and Instituto de Matemáticas y Aplicaciones (IMAULL), Universidad de La Laguna (ULL), 38071 Santa Cruz de Tenerife, Spain
Axioms 2023, 12(4), 351; https://doi.org/10.3390/axioms12040351
Submission received: 10 February 2023 / Revised: 20 March 2023 / Accepted: 29 March 2023 / Published: 2 April 2023
(This article belongs to the Special Issue Mathematical Methods in the Applied Sciences)

Abstract
Guaranteed-coverage and expected-coverage tolerance limits for Weibull models are derived when, owing to restrictions on data collection, experimental difficulties, the presence of outliers, or some other extraordinary reasons, certain proportions of the extreme sample values have been censored or disregarded. Unconditional and conditional tolerance bounds are presented and compared when some of the smallest observations have been discarded. In addition, the related problem of determining minimum sample sizes for setting Weibull tolerance limits from trimmed data is discussed when the numbers or proportions of the left and right trimmed observations are fixed. Step-by-step procedures for determining optimal sampling plans are also presented. Several numerical examples are included for illustrative purposes.

1. Introduction

Tolerance limits are extensively employed in several statistical fields, including statistical quality control, economics, medical and pharmaceutical statistics, environmental monitoring, and reliability analysis. In essence, a tolerance interval describes the behavior of a fraction of the population. Roughly speaking, tolerance limits are bounds within which a stated proportion of the population is expected to lie. Two basic types of such limits have received considerable attention, β-content and β-expectation tolerance limits; see Wilks [1], Guttman [2], and Fernández [3] and references therein. Succinctly, a β-content tolerance interval contains at least 100β% of the population with a certain confidence, whereas a β-expectation tolerance interval covers, on average, a fraction β of the population.
In life-testing and reliability analysis, tolerance limits are frequently computed from a complete or right-censored sample. In this paper, the available empirical information is provided by a trimmed sample; i.e., it is assumed that given proportions q_1 and q_2 of the smallest and largest observations have been eliminated or censored. These kinds of data are frequently used in several areas of statistical practice for deriving robust inferential procedures and detecting influential observations; e.g., Prescott [4], Huber [5], Healy [6], Welsh [7], Wilcox [8], and Fernández [9,10]. In various situations, some extreme sample values may not be recorded due to restrictions on data collection (generally for reasons of economy of money, time, and effort), experimental difficulties or some other extraordinary reasons, or may be discarded (especially when some observations are poorly known or the presence of outliers is suspected). In particular, a known number of observations in an ordered sample might be missing at either end (single censoring) or at both ends (double censoring) in failure-censored experiments. Double censoring, specifically, has been treated by many authors in the statistical literature (among others, Healy [11], Prescott [12], Schneider [13], Bhattacharyya [14], LaRiccia [15], Schneider and Weissfeld [16], Fernández [17,18], Escobar and Meeker [19], Upadhyay and Shastri [20], and Ali Mousa [21]).
The Weibull distribution provides a versatile statistical model for analyzing time-to-event data, which is useful in many fields, including biometry, economics, engineering, management, and the environmental, actuarial, and social sciences. In survival and reliability analysis, this distribution plays a prominent role and has successfully been used to describe animal and human disease mortality, as well as the reliability of both components and equipment in industrial applications. This probability model has many practical applications; e.g., Chen et al. [22], Tsai et al. [23], Aslam et al. [24], Fernández [25], Roy [26], Almongy et al. [27], and Algarni [28]. If the Weibull shape parameter α = 1, the distribution is exponential, which plays a notable role in engineering; see Fernández et al. [29], Lee et al. [30], Chen and Yang [31], and Yousef et al. [32]. The random variable is Rayleigh distributed when α = 2. This case is also important in various areas; see Aminzadeh [33,34], Raqab and Madi [35], Fernández [36], and Lee et al. [37].
This paper is devoted to deriving tolerance limits from a trimmed sample drawn from a Weibull W(θ, α) population. It is assumed that α is appropriately chosen, while the Weibull scale parameter θ is unknown. The conditionality principle, proposed primarily by Fisher, is adopted when some of the smallest observations have been disregarded. The related problem of determining minimum sample sizes is also tackled. In the exponential case, Fernández [38,39] presented optimal two-sided tolerance intervals and tolerance limits for k-out-of-n systems, respectively. On the basis of a complete Rayleigh sample (i.e., α = 2), Aminzadeh [33] found β-expectation tolerance limits and discussed the determination of sample size to control stability of coverage, whereas Aminzadeh [34] derived approximate tolerance limits when θ depends on a set of explanatory variables. Weibull tolerance limits based on complete samples were obtained in Thoman et al. [40].
The structure of the remainder of this work is as follows. The sampling distribution of a Weibull trimmed sample is provided in the next section. Section 3 presents β-content tolerance limits based on W(θ, α) trimmed data; in addition, the problem of determining optimal sample sizes is discussed. Mean-coverage tolerance limits are derived in Section 4, where optimal sampling plans for setting β-expectation tolerance limits are also deduced. The corresponding unconditional and conditional bounds are compared in Section 3 and Section 4 when the lower trimming proportion q_1 is positive, whereas Section 5 includes several numerical examples, reported by Sarhan and Greenberg [41], Meeker and Escobar [42], and Lee and Wang [43], for illustrative purposes. Finally, Section 6 offers some concluding remarks.

2. Weibull Trimmed Samples

The probability density function (pdf) of a random variable X that has a Weibull distribution with positive parameters θ and α, i.e., X ∼ W(θ, α), is defined by
$$ f_X(x \mid \theta, \alpha) = \frac{\alpha x^{\alpha-1}}{\theta^{\alpha}} \exp\{-(x/\theta)^{\alpha}\}, \quad x > 0. $$
Its k-th moment is E[X^k | θ, α] = θ^k Γ(1 + k/α), k = 1, 2, …, where Γ(·) is the well-known gamma function. The parameter α controls the shape of the density, whereas θ determines its scale. Since the hazard rate is h(x | θ, α) = (α/θ^α) x^{α−1} for x > 0, the Weibull law may be used to model the survival distribution of a population with increasing (α > 1), decreasing (α < 1), or constant (α = 1) risk. Examples of increasing and decreasing hazard rates are, respectively, patients with lung cancer and patients who undergo successful major surgery. Davis [44] reports several cases in which a constant risk is reasonable, including payroll check errors and radar set component failures.
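For illustration, the following short Python sketch (not part of the paper; the parameter values are arbitrary) checks the moment formula E[X^k | θ, α] = θ^k Γ(1 + k/α) against a Monte Carlo average drawn from the pdf above.

```python
# Minimal sketch: compare the theoretical k-th Weibull moment with simulation.
import numpy as np
from math import gamma

theta, alpha, k = 2.0, 3.0, 2                 # illustrative values only
rng = np.random.default_rng(0)
x = theta * rng.weibull(alpha, size=200_000)  # X ~ W(theta, alpha)

exact = theta**k * gamma(1.0 + k / alpha)     # theoretical k-th moment
print(exact, (x**k).mean())                   # the two values should be close
```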
In many practical applications, Weibull distributions with α in the range 1 to 3 seem appropriate. If 3 ≤ α ≤ 4, the W(θ, α) pdf has a near normal shape; for large α (e.g., α ≥ 10), the shape of the density is close to that of the (smallest) extreme value density. The Weibull density becomes more symmetric as α grows. In Weibull data analysis it is quite customary to assume that the shape parameter α is a known constant. Among other authors, Soland [45], Tsokos and Rao [46], Lawless [47], and Nordman and Meeker [48] provide justifications. The value of α may come from previous or related data, may be a widely accepted value for the problem at hand, or may even be an expert guess. In reliability analysis, α is often tied to the failure mechanism of the product, so engineers might have some knowledge of it. Klinger et al. [49] provide tables of Weibull parameters for various devices. Abernethy [50] supplies useful information about past experiments with Weibull data. Several situations in which it is appropriate to consider α constant are described in Nordman and Meeker [48]. Among many others, Danziger [51], Tsokos and Canavos [52], Moore and Bilikam [53], Kwon [54], and Zhang and Meeker [55] also utilize a given Weibull shape parameter.
Consider a random sample of size n from a Weibull distribution (1) with unknown scale parameter θ ∈ Θ = (0, ∞), and let X_{r:n}, …, X_{s:n} be the ordered observations remaining when the r − 1 smallest observations and the n − s largest observations have been discarded or censored, where 1 ≤ r ≤ s ≤ n. The trimming proportions q_1 = (r − 1)/n and q_2 = 1 − s/n, as well as the shape parameter α, are assumed to be predetermined constants.
The pdf of the (q_1, q_2)-trimmed sample X = (X_{r:n}, …, X_{s:n}) at x = (x_{r:n}, …, x_{s:n}) is then defined by
$$ f_{\mathbf{X}}(\mathbf{x} \mid \theta, \alpha) = \frac{n!\, \alpha^{s-r+1} \{1 - \exp(-x_{r:n}^{\alpha}/\theta^{\alpha})\}^{r-1} \prod_{i=r}^{s} x_{i:n}^{\alpha-1}}{(r-1)!\,(n-s)!\, \theta^{(s-r+1)\alpha}} \exp(-T_{\mathbf{x}}/\theta^{\alpha}), $$
for 0 < x_{r:n} < ⋯ < x_{s:n}, where T_x is the observed value of
$$ T \equiv T(\mathbf{X}) = \sum_{i=r}^{s} X_{i:n}^{\alpha} + (n-s) X_{s:n}^{\alpha}. $$
Clearly, T is sufficient when r = 1, whereas the sample evidence is contained in the sufficient statistic (X_{r:n}, T) if r > 1.
The maximum likelihood estimator (MLE) of θ, denoted by θ̂ ≡ θ̂(X), can be derived from the equation ∂ ln f_X(X | θ, α)/∂θ = 0. It is well known that θ̂ is the unique solution to the equation
$$ \hat{\theta}^{\alpha} = \frac{(r-1)\,X_{r:n}^{\alpha}}{(s-r+1)\{1 - \exp(X_{r:n}^{\alpha}/\hat{\theta}^{\alpha})\}} + \frac{T}{s-r+1}; $$
see, e.g., Theorem 1 in Fernández et al. [29]. Therefore, the MLE of θ is given explicitly by θ̂ = (T/s)^{1/α} when r = 1. Otherwise, θ̂ must be found using an iterative procedure.
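As a concrete illustration of that iterative procedure, the following Python sketch (the function name and implementation are illustrative, not part of the paper) iterates the fixed-point equation above. With the strontium-90 data of Example 1 (Section 5.1) it should reproduce the MLE 10.1049 reported there.

```python
# Sketch of the fixed-point iteration for the Weibull scale MLE when r > 1.
import math

def theta_mle(x_trimmed, r, s, n, alpha, tol=1e-10):
    """MLE of theta from the trimmed sample x_{r:n}, ..., x_{s:n} (alpha known)."""
    xr = x_trimmed[0]
    T = sum(xi**alpha for xi in x_trimmed) + (n - s) * x_trimmed[-1]**alpha
    th_a = T / (s - r + 1)                       # starting value (exact when r = 1)
    while True:
        w = xr**alpha / th_a
        new = (T - (r - 1) * xr**alpha / math.expm1(w)) / (s - r + 1)
        if abs(new - th_a) < tol * th_a:
            return new**(1.0 / alpha)
        th_a = new

# Strontium-90 data of Example 1 (r = 3, s = 7, n = 10, alpha = 3);
# the MLE reported there is 10.1049.
print(theta_mle([8.2, 8.4, 9.1, 9.8, 9.9], r=3, s=7, n=10, alpha=3.0))
```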

3. Guaranteed-Coverage Tolerance Limits

Given the Weibull (q_1, q_2)-trimmed sample X = (X_{r:n}, …, X_{s:n}) and β, γ ∈ (0, 1), a statistic L_{β,γ} ≡ L_{β,γ}(X) is called a lower β-content tolerance limit at confidence level γ (or simply a lower (β, γ)-TL for short) of the W(θ, α) distribution if
$$ \Pr_{\mathbf{X}\mid\theta,\alpha}\big\{ \Pr_{X\mid\theta,\alpha}(X > L_{\beta,\gamma}) \geq \beta \big\} = \gamma $$
for all θ > 0, where the inner probability refers to the population distribution of a single observation X defined in (1), and the outer one to the sampling distribution of the trimmed sample X defined in (2), both under the nominal values of θ and α.
According to (4), one may guarantee with confidence γ that at least 100β% of the population measurements will exceed L_{β,γ}. In other words, with confidence γ, the probability that a future observation of X ∼ W(θ, α) will surpass L_{β,γ} is at least β. Clearly, an upper (β, γ)-TL, U_{β,γ} ≡ U_{β,γ}(X), is provided by L_{1−β, 1−γ}. In this manner, one can be 100γ% confident that at least 100β% of Weibull W(θ, α) observations will be less than U_{β,γ}.
Assuming that 1 < r = s, as X_{r:n} is minimal sufficient for θ, it is logical to consider a lower (β, γ)-TL of the form L_{β,γ} = C_{β,γ} X_{r:n}, where, from (4), C_{β,γ} must satisfy
$$ \Pr\big\{ \exp\{-(C_{\beta,\gamma} X_{r:n}/\theta)^{\alpha}\} \geq \beta \big\} = \gamma. $$
Since exp(−X_{r:n}^α/θ^α) ∼ Beta(n − r + 1, r), it follows that β^{1/(C_{β,γ})^α} is merely the (1 − γ)-quantile of the Beta(n − r + 1, r) distribution. Thus, C_{β,γ} satisfies the equation
$$ \sum_{k=0}^{r-1} \binom{n}{k} \big\{1 - \beta^{1/(C_{\beta,\gamma})^{\alpha}}\big\}^{k} \, \beta^{(n-k)/(C_{\beta,\gamma})^{\alpha}} = 1 - \gamma. $$
Alternatively, C_{β,γ} may be expressed explicitly as
$$ C_{\beta,\gamma} = \left( \frac{-\ln\beta}{\ln\{1 + r F_{2r,\,2(n-r+1);\gamma}/(n-r+1)\}} \right)^{1/\alpha}, $$
where F_{2r, 2(n−r+1); γ} denotes the γ-quantile of the F-distribution with 2r and 2(n − r + 1) degrees of freedom (df). In particular, C_{β,γ} = {n ln β / ln(1 − γ)}^{1/α} when r = s = 1, whereas C_{β,γ} = {ln β / ln(1 − γ^{1/n})}^{1/α} if r = s = n.
If 1 = r ≤ s, it is clear that T is minimal sufficient for θ, which implies that it is sensible to assume that L_{β,γ} is proportional to T^{1/α}, i.e., L_{β,γ} = C_{β,γ} T^{1/α}. In this situation, it can be shown that 2T/θ^α ∼ χ²_{2s}, where χ²_{2s} represents the chi-square distribution with 2s df. Observe that, letting X_{0:n} ≡ 0, the pivotal quantity 2T/θ^α coincides with Σ_{i=1}^{s} Z_i, where Z_i = 2(n − i + 1)(X_{i:n}^α − X_{i−1:n}^α)/θ^α, i = 1, …, s, are mutually independent χ²_2 variables. Since, in view of (4),
$$ \Pr\big\{ \exp\{-(C_{\beta,\gamma})^{\alpha} T/\theta^{\alpha}\} \geq \beta \big\} = \Pr\big\{ 2T/\theta^{\alpha} \leq -2\ln\beta/(C_{\beta,\gamma})^{\alpha} \big\} = \gamma, $$
it turns out that C_{β,γ} = (−2 ln β / χ²_{2s;γ})^{1/α}. Consequently, L_{β,γ} = (−2T ln β / χ²_{2s;γ})^{1/α}.
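For illustration, a brief Python sketch of the two tolerance factors just derived is given below (the helper names and the SciPy-based implementation are illustrative, not the paper's); the r = s = 1 closed form serves as a built-in consistency check.

```python
# Sketch: guaranteed-coverage tolerance factors via F and chi-square quantiles.
from scipy.stats import f, chi2
import math

def C_single_order(beta, gamma, r, n, alpha):
    """Lower (beta,gamma) tolerance factor when only X_{r:n} is used (r = s)."""
    q = f.ppf(gamma, 2 * r, 2 * (n - r + 1))
    return (-math.log(beta) / math.log1p(r * q / (n - r + 1)))**(1.0 / alpha)

def C_right_censored(beta, gamma, s, alpha):
    """Factor multiplying T**(1/alpha) when r = 1."""
    return (-2.0 * math.log(beta) / chi2.ppf(gamma, 2 * s))**(1.0 / alpha)

# Consistency check with the r = s = 1 closed form {n ln(beta)/ln(1-gamma)}^(1/alpha):
beta, gamma, n, alpha = 0.90, 0.95, 10, 2.0
print(C_single_order(beta, gamma, 1, n, alpha),
      (n * math.log(beta) / math.log(1.0 - gamma))**(1.0 / alpha))
```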

3.1. Unconditional and Conditional Tolerance Limits

When focusing on the more general case in which 1 < r < s, obviously (X_{r:n}, T) is a sufficient statistic for θ. Moreover, if R = T − (n − r + 1)X_{r:n}^α, then 2R/θ^α ∼ χ²_{2(s−r)}, since this pivotal quantity can be expressed as the sum of the s − r independent χ²_2 variables Z_{r+1}, Z_{r+2}, …, Z_s. Therefore, it can be shown that
$$ L_{\beta,\gamma} = C_{\beta,\gamma} R^{1/\alpha} = \big( -2R\ln\beta / \chi^{2}_{2(s-r);\gamma} \big)^{1/\alpha} $$
constitutes an (unconditional) lower (β, γ)-TL. Notice, however, that this limit is based on an insufficient statistic, R.
An alternative and more appropriate TL can be constructed by noting that A = X_{r:n}^α/R is an ancillary statistic. By itself, A does not contain any information about θ, and the statistic (R, A) is minimal sufficient for θ. Therefore, given A, the statistic R is conditionally sufficient. In accordance with the conditionality principle suggested by Fisher, a tolerance limit should be based on the distribution of R given the observed value of the ancillary statistic A. Then, adopting the above principle and assuming that A = a, it is sensible to look for a conditional lower (β, γ)-TL of the form L_{β,γ}(a) = C_{β,γ}(a) R^{1/α}, where
$$ \Pr_{\mathbf{X}\mid\theta,\alpha}\big\{ \Pr_{X\mid\theta,\alpha}(X > L_{\beta,\gamma}(a)) \geq \beta \mid A = a \big\} = \gamma. $$
Thus, as Pr{R/θ^α ≤ −ln β/(C_{β,γ}(a))^α | A = a} = γ, it follows that −ln β/(C_{β,γ}(a))^α is precisely the γ-quantile of the distribution of R/θ^α conditional on A = a.
The pdf of Y = R/θ^α given A = a is derived to be
$$ f_{Y}(y \mid a) = \frac{y^{s-r} \exp\{-(1 + (n+1-r)a)\,y\}}{(s-r)!\; G(a)} \,\{1 - \exp(-ay)\}^{r-1}, \quad y > 0, $$
where
$$ G(a) = \sum_{k=0}^{r-1} \frac{(-1)^{k} \binom{r-1}{k}}{\{(k + n + 1 - r)a + 1\}^{s-r+1}}, $$
whereas the cumulative distribution function of Y conditional on A = a is defined by
$$ \Pr(Y \leq y \mid a) = 1 - G^{*}(y; a)/G(a), \quad y > 0, $$
where
$$ G^{*}(y; a) = \sum_{i=0}^{r-1} \sum_{j=0}^{s-r} \frac{(-1)^{i} \binom{r-1}{i}\, y^{j} \{1 + (n-r+1+i)a\}^{\,j-s+r-1}}{j!} \exp\{-(1 + (n-r+1+i)a)\,y\}. $$
Consequently, if y_γ(a) denotes the γ-quantile of the distribution of Y given A = a, i.e., y_γ(a) satisfies the equation G*[y_γ(a); a] = (1 − γ)G(a), it is obvious that C_{β,γ}(a) = (−ln β / y_γ(a))^{1/α}. In this way, it follows that L_{β,γ}(a) = (−R ln β / y_γ(a))^{1/α}.
Of course, L_{β,γ}(A) ≡ L_{β,γ}(A(X)) is also a lower (β, γ)-TL in the ordinary unconditional sense because
$$ \Pr_{\mathbf{X}\mid\theta,\alpha}\big\{ \Pr_{X\mid\theta,\alpha}(X > L_{\beta,\gamma}(A)) \geq \beta \big\} $$
coincides with
$$ E\Big[ \Pr_{\mathbf{X}\mid\theta,\alpha}\big\{ \Pr_{X\mid\theta,\alpha}(X > L_{\beta,\gamma}(A)) \geq \beta \mid A \big\} \Big] = E[\gamma] = \gamma. $$
Table 1 compares, for selected values of r, s, and n, the unconditional and conditional lower (β, γ) tolerance factors, C_{β,γ} and C_{β,γ}(a_ε), corresponding to the W(θ, α) distribution when α = 1, (β, γ) = (0.90, 0.95) and A = a_ε, ε = 0.01, 0.25, 0.75, 0.99, where a_ε denotes the ε-quantile of the distribution of A. It can be proven that a_ε is the unique positive solution in a to the equation
$$ \sum_{i=0}^{r-1} \frac{(-1)^{i}\, r \binom{r-1}{i} \binom{n}{r}}{(n-r+1+i)\,\{1 + (n-r+1+i)a\}^{\,s-r}} = 1 - \varepsilon. $$
It is worthwhile to mention that the difference between C_{β,γ} and C_{β,γ}(a_ε) might be large when A takes extreme percentiles (i.e., when ε is near 0 or 1). For instance, if (r, s, n) = (4, 8, 30), the unconditional factor is C_{β,γ} = 0.01359, whereas the respective conditional factors C_{β,γ}(a_{0.01}) and C_{β,γ}(a_{0.99}) are given by 0.009343 and 0.05627. The difference between C_{β,γ} and C_{β,γ}(a_ε) becomes smaller as n grows to infinity with the trimming proportions q_1 and q_2 fixed. Indeed, provided that s − r is large, C_{β,γ} and C_{β,γ}(a) are quite similar. In addition, it turns out that
$$ C_{\beta,\gamma} \approx C_{\beta,\gamma}^{*} = \left( \frac{-\ln\beta}{\,s - r + z_{\gamma}\sqrt{s-r}\,} \right)^{1/\alpha} $$
from the Wilson–Hilferty transformation (see, e.g., Lawless [47], p. 158), where z_γ is the γ-quantile of the standard normal distribution. For instance, C_{β,γ} = 0.001862 and C*_{β,γ} = 0.001880 when (r, s, n) = (5, 50, 55), α = 1 and (β, γ) = (0.90, 0.95). In this case, (C_{β,γ}(a_{0.01}), C_{β,γ}(a_{0.99})) is (0.001741, 0.002170). If one assumes now that (r, s, n) = (5, 90, 95), then C_{β,γ} = 0.001046 and C*_{β,γ} = 0.001051, whereas (C_{β,γ}(a_{0.01}), C_{β,γ}(a_{0.99})) = (0.001007, 0.001134).
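A numerical sketch of the conditional factor, based on the expressions for G, G*, and the quantile equation of A given above, is shown below (Python; the function names, root-finding brackets, and overall implementation are illustrative assumptions, not the paper's code).

```python
# Sketch: conditional factor C_{beta,gamma}(a) by solving G*(y; a) = (1 - gamma) G(a).
import math
from math import comb, factorial
from scipy.optimize import brentq

def G(a, r, s, n):
    return sum((-1)**k * comb(r - 1, k) / ((k + n + 1 - r) * a + 1)**(s - r + 1)
               for k in range(r))

def G_star(y, a, r, s, n):
    total = 0.0
    for i in range(r):
        c = 1.0 + (n - r + 1 + i) * a
        inner = sum(y**j * c**(j - s + r - 1) / factorial(j) for j in range(s - r + 1))
        total += (-1)**i * comb(r - 1, i) * inner * math.exp(-c * y)
    return total

def a_quantile(eps, r, s, n):
    # eps-quantile of the ancillary A, from the equation stated above
    lhs = lambda a: sum((-1)**i * r * comb(r - 1, i) * comb(n, r)
                        / ((n - r + 1 + i) * (1 + (n - r + 1 + i) * a)**(s - r))
                        for i in range(r)) - (1.0 - eps)
    return brentq(lhs, 1e-12, 1e6)

def C_conditional(beta, gamma, a, r, s, n, alpha):
    # brackets below are ad hoc but safe for moderate s - r
    y = brentq(lambda t: G_star(t, a, r, s, n) - (1.0 - gamma) * G(a, r, s, n),
               1e-10, 1e3)
    return (-math.log(beta) / y)**(1.0 / alpha)

# (r, s, n) = (4, 8, 30), alpha = 1, (beta, gamma) = (0.90, 0.95), a = a_0.01:
# the result should be close to the conditional factor 0.009343 in Table 1.
a = a_quantile(0.01, 4, 8, 30)
print(C_conditional(0.90, 0.95, a, 4, 8, 30, 1.0))
```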

3.2. Sample-Size Determination

The choice of sample size plays a primordial role in the design of most statistical studies. A traditional approach is to find the smallest value of n (and the corresponding values of r and s) such that the lower (β, γ)-TL based on a (q_1, q_2)-trimmed sample X = (X_{r:n}, …, X_{s:n}) drawn from X ∼ W(α, θ), L_{β,γ} ≡ L_{β,γ}(X), satisfies
$$ \Pr_{\mathbf{X}\mid\theta,\alpha}\big\{ \Pr_{X\mid\theta,\alpha}(X > L_{\beta,\gamma}) \geq \beta' \big\} \leq \gamma' $$
for all θ > 0, and certain β′ > β and γ′. In this way, one could affirm that at least 100β% of the population measurements will exceed L_{β,γ} with confidence γ, and that at least 100β′% of the population measurements will surpass L_{β,γ} with confidence at most γ′. That is to say, the random coverage of (L_{β,γ}(X), +∞) is at least β with probability γ, and it is at least β′ > β with a probability not exceeding γ′.
In this subsection, a sampling plan (r, s, n) satisfying condition (7) will be named feasible. Our target is to obtain the optimal (minimum sample size) feasible plan (r, s, n) for setting the lower (β, γ)-TL. For later use, ⌈x⌉ and ⌊x⌋ will represent the values of x rounded up and down to integers.
Supposing that 1 < r = s ≤ n, it is clear that L_{β,γ} = C_{β,γ} X_{r:n} and L_{β′,γ′} = C_{β′,γ′} X_{r:n}, where C_{β,γ} and C_{β′,γ′} are defined in accordance with (5). Thus, condition (7) will hold if and only if C_{β,γ} ≥ C_{β′,γ′}. Therefore, (r, r, n) is a feasible sampling plan if and only if g_1(r, n) ≥ 0, where
$$ g_{1}(r,n) = \frac{\ln\{1 + r F_{2r,\,2(n-r+1);\gamma'}/(n-r+1)\}}{\ln\{1 + r F_{2r,\,2(n-r+1);\gamma}/(n-r+1)\}} - \frac{\ln\beta'}{\ln\beta}. $$
Since F_{2r,2(n−r+1);γ} → χ²_{2r;γ}/(2r) as n → ∞ and ln(1 + t)/t → 1 as t → 0, there exists a value of n such that (r, r, n) is feasible if h_1(r) > 0, where
$$ h_{1}(r) = \frac{\chi^{2}_{2r;\gamma'}}{\chi^{2}_{2r;\gamma}} - \frac{\ln\beta'}{\ln\beta}. $$
Otherwise, the inequality g_1(r, n) ≥ 0 has no solution in n. On the other hand, provided that 1 = r ≤ s ≤ n, as L_{β,γ} = (−2T ln β/χ²_{2s;γ})^{1/α} is the lower (β, γ)-TL, the sampling plan (1, s, n) will be feasible if and only if h_1(s) ≥ 0. Similarly, if 1 < r < s ≤ n, the plan (r, s, n) is feasible if and only if h_1(s − r) ≥ 0, because L_{β,γ} = (−2R ln β/χ²_{2(s−r);γ})^{1/α}.
The determination of the optimal feasible sampling plan for setting the lower (β, γ)-TL assuming fixed numbers of trimmed observations (Case I) or fixed trimming proportions (Case II) will be discussed in the remainder of this subsection.
  • Case I: Fixed numbers of left and right trimmed observations
Suppose that the researcher wishes to find the optimal feasible plan (r, s, n) such that r − 1 = δ_1 and n − s = δ_2, where δ_1 and δ_2 are prespecified non-negative integers. Then, if g_1(r, n) ≥ 0 with r = δ_1 + 1 and n = δ_1 + δ_2 + 1, it follows that (δ_1 + 1, δ_1 + 1, δ_1 + δ_2 + 1) would be the optimal plan. Otherwise, if m denotes the smallest integer value such that h_1(m) ≥ 0, it turns out that
$$ (r, s, n) = \big(\delta_1 + 1,\; \delta_1 + m + I(\delta_1 > 0),\; \delta_1 + \delta_2 + m + I(\delta_1 > 0)\big) $$
would be the optimal plan, where I(·) is the indicator function. Observe that m will always exist because χ²_{2k;γ′}/χ²_{2k;γ} → 1 as k → ∞ and ln β′/ln β < 1. It is worthwhile to point out that m = 1 if and only if γ′ ≥ 1 − (1 − γ)^{ln β′/ln β}, since χ²_{2;ε} = −2 ln(1 − ε) for ε ∈ (0, 1). In particular, m is always 1 when γ′ ≥ γ. Note also that (δ_1 + 1, δ_1 + 1, δ_1 + δ_2 + 1) is not feasible when m > δ_1 + 1.
Due to the fact that χ²_{2k;ε} ≈ 2k{1 − 1/(9k) + z_ε/(9k)^{1/2}}³ when k ≥ 5 and ε ∈ (0, 1), by the Wilson–Hilferty transformation, it can be proven that m is approximately equal to the smallest integer greater than or equal to ρ, i.e., m ≈ ⌈ρ⌉, where
$$ \rho = \big\{\omega + (\omega^{2} + 1/9)^{1/2}\big\}^{2} \quad \text{and} \quad \omega = \frac{z_{\gamma} - z_{\gamma'}\,(\ln\beta/\ln\beta')^{1/3}}{6\{(\ln\beta/\ln\beta')^{1/3} - 1\}}. $$
It can be proven that the approximation m = ⌈ρ⌉ is exact in practically all cases. Nonetheless, a method for determining the proper value of m is immediate: using m_0 = ⌈ρ⌉ as the initial guess of m, calculate h_1(m_0) and h_1(m_0 − 1). If h_1(m_0) ≥ 0 and h_1(m_0 − 1) < 0, then m = m_0; otherwise, set m_0 = m_0 + 1 if h_1(m_0) < 0, or set m_0 = m_0 − 1 if h_1(m_0) ≥ 0, and repeat this process.
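The following Python sketch (illustrative names; not the paper's code) evaluates ρ and then refines m with the exact chi-square quantiles in h_1, as just described.

```python
# Sketch: Wilson-Hilferty starting value followed by exact refinement of m.
import math
from scipy.stats import norm, chi2

def h1(k, beta, gamma, beta_p, gamma_p):
    return chi2.ppf(gamma_p, 2 * k) / chi2.ppf(gamma, 2 * k) - math.log(beta_p) / math.log(beta)

def smallest_m(beta, gamma, beta_p, gamma_p):
    c = (math.log(beta) / math.log(beta_p))**(1.0 / 3.0)
    omega = (norm.ppf(gamma) - norm.ppf(gamma_p) * c) / (6.0 * (c - 1.0))
    m = math.ceil((omega + math.sqrt(omega**2 + 1.0 / 9.0))**2)   # initial guess ceil(rho)
    while h1(m, beta, gamma, beta_p, gamma_p) < 0:
        m += 1
    while m > 1 and h1(m - 1, beta, gamma, beta_p, gamma_p) >= 0:
        m -= 1
    return m

# (beta, gamma) = (0.8, 0.9) and (beta', gamma') = (0.85, 0.25) should give
# m = 38, matching the worked example later in this subsection.
print(smallest_m(0.8, 0.9, 0.85, 0.25))
```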
  • Case II: Fixed left and right trimming proportions
Assuming that π_i ∈ (0, 1), i = 1, 2, and π_1 + π_2 < 1, consider now that the researcher desires to obtain the minimum sample size feasible plan (r_n, s_n, n) with r_n = ⌊nπ_1⌋ + 1 and s_n = ⌈n(1 − π_2)⌉. In such a case, the left and right trimming proportions, q_1 and q_2, are approximately π_1 and π_2, respectively. Furthermore, r_n ≤ s_n and the number of available observations, s_n − r_n + 1, would be at least n(1 − π_1 − π_2).
Our aim is to determine the smallest integer n such that g_1(r_n, n) ≥ 0 if 1 < r_n = s_n, or such that h_1(s_n − r_n + I(r_n = 1)) ≥ 0 otherwise. As before, m will represent the smallest integer satisfying h_1(m) ≥ 0. It is important to take into account that, if (r_n, s_n, n) is a feasible plan, then r_n must be greater than or equal to m when r_n = s_n. Otherwise, as
$$ s_n - r_n + I(r_n = 1) \geq m, \qquad s_n \geq m + 1 - I(r_n = 1), \qquad n \geq s_n, $$
it follows that n ≥ n_0, where
$$ n_{0} = \max\left\{ \left\lceil \frac{m - 1 - I(r_n = 1)}{1 - \pi_1 - \pi_2} \right\rceil,\; \left\lceil \frac{m - I(r_n = 1)}{1 - \pi_2} \right\rceil,\; m + 1 - I(r_n = 1) \right\}. $$
In addition, since ⌈n(1 − π_2)⌉ − ⌊nπ_1⌋ − 1 ≥ n(1 − π_1 − π_2) − 1, the requirement s_n − r_n ≥ m − I(r_n = 1) is guaranteed once n(1 − π_1 − π_2) ≥ m + 1 − I(r_n = 1), so the optimal n satisfies
$$ n \leq \left\lceil \frac{m + 1 - I(r_n = 1)}{1 - \pi_1 - \pi_2} \right\rceil. $$
On the other hand, if r_n = s_n = k > 1, it is clear that nπ_1 ≥ k − 1 and n(1 − π_2) ≤ k. As a consequence,
$$ (k-1)/\pi_1 \;\leq\; n \;\leq\; k/(1-\pi_2). $$
The above results may be helpful for finding the optimal sampling plan. Once the researcher chooses the desired values of β, γ, β′, γ′, π_1, π_2 ∈ (0, 1), with β < β′ and π_1 + π_2 < 1, a step-by-step procedure for determining the smallest sample size plan (r_n, s_n, n) satisfying (7), where r_n = ⌊nπ_1⌋ + 1 and s_n = ⌈n(1 − π_2)⌉, may be described as follows:
  • Step 1: If γ′ ≥ 1 − (1 − γ)^{ln β′/ln β}, then set (r_n, s_n, n) = (1, 1, 1) and go to step 10. Otherwise, find the smallest integer m such that h_1(m) ≥ 0, using m_0 = ⌈ρ⌉ as initial guess (see Case I), where ρ is given in (8).
  • Step 2: Define n = n_0 assuming that r_n = 1, where n_0 is provided in (9), and compute r_n and s_n. If r_n > 1, redefine n = n_0 (now with I(r_n = 1) = 0) and recalculate r_n and s_n.
  • Step 3: While s_n − r_n + I(r_n = 1) < m, set n = n + 1 and recompute r_n and s_n.
  • Step 4: If r_n < m, then go to step 10. Otherwise, take k = m.
  • Step 5: If k > 1/{1 − π_1/(1 − π_2)}, go to step 10.
  • Step 6: Take n_1 = ⌈(k − 1)/π_1⌉.
  • Step 7: If n_1 > n, go to step 10.
  • Step 8: If n_1 > ⌊k/(1 − π_2)⌋, set k = k + 1 and go to step 7.
  • Step 9: If g_1(r_{n_1}, n_1) ≥ 0, then set (r_n, s_n, n) = (k, k, n_1) and k = k + 1, and go to step 6. Otherwise, let n_1 = n_1 + 1 and go to step 7.
  • Step 10: The optimal sampling plan is given by (r_n, s_n, n).
Table 2 reports the optimal sampling plans (r, s, n) for setting lower (β, γ)-TLs based on the Weibull W(θ, α) trimmed sample X = (X_{r:n}, …, X_{s:n}) when (i) q_1 ≈ 0.2 and q_2 ≈ 0.3 and (ii) r − 1 = 2 and n − s = 3.
For instance, consider that (β, γ) = (0.8, 0.9) and (β′, γ′) = (0.85, 0.25). Assuming that the researcher desires around 20% and 30% of the smallest and the largest observations to be trimmed, respectively (i.e., q_1 ≈ π_1 = 0.2 and q_2 ≈ π_2 = 0.3), as m = ⌈ρ⌉ = 38, it follows from (9) and (10) that 72 ≤ n ≤ 78. The optimal sampling plan would be precisely (16, 54, 76); i.e., one needs a sample of size n = 76, with the 15 smallest and the 22 largest observations disregarded or censored. The left and right trimming proportions are exactly q_1 = 0.1974 and q_2 = 0.2895. If it were required that the first two and last three observations be discarded or censored (i.e., r − 1 = 2 and n − s = 3), the optimal sampling plan would be (3, 41, 44). On the other hand, suppose that (β, γ) = (0.9, 0.9) and (β′, γ′) = (0.95, 0.50). In that case, m also coincides with ⌈ρ⌉ = 3. If the researcher assumes that π_1 = 0.2 and π_2 = 0.3, then n_0 = 3 from (9), and (r_n, s_n, n) = (1, 3, 3) is the optimal plan since s_n − r_n + I(r_n = 1) ≥ m. The minimum sample size plan would be (3, 3, 6) provided that r − 1 = 2 and n − s = 3.
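As a cross-check of the step-by-step procedure and of the worked example above, the following brute-force Python sketch (not the paper's algorithm; names and implementation are illustrative) simply scans n upward and returns the first feasible plan.

```python
# Brute-force search for the minimum sample size plan under fixed trimming proportions.
import math
from scipy.stats import chi2, f

def h1(k, beta, gamma, beta_p, gamma_p):
    return chi2.ppf(gamma_p, 2 * k) / chi2.ppf(gamma, 2 * k) - math.log(beta_p) / math.log(beta)

def g1(r, n, beta, gamma, beta_p, gamma_p):
    num = math.log1p(r * f.ppf(gamma_p, 2 * r, 2 * (n - r + 1)) / (n - r + 1))
    den = math.log1p(r * f.ppf(gamma, 2 * r, 2 * (n - r + 1)) / (n - r + 1))
    return num / den - math.log(beta_p) / math.log(beta)

def minimum_n_plan(beta, gamma, beta_p, gamma_p, pi1, pi2, n_max=2000):
    for n in range(1, n_max + 1):
        r = math.floor(n * pi1) + 1
        s = math.ceil(n * (1.0 - pi2))
        if r == s:
            if r == 1:
                feasible = gamma_p >= 1.0 - (1.0 - gamma)**(math.log(beta_p) / math.log(beta))
            else:
                feasible = g1(r, n, beta, gamma, beta_p, gamma_p) >= 0
        else:
            feasible = h1(s - r + (1 if r == 1 else 0), beta, gamma, beta_p, gamma_p) >= 0
        if feasible:
            return r, s, n
    return None

# Should return (16, 54, 76) for the worked example above.
print(minimum_n_plan(0.8, 0.9, 0.85, 0.25, 0.2, 0.3))
```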

4. Expected-Coverage Tolerance Limits

Given the Weibull (q_1, q_2)-trimmed sample X = (X_{r:n}, …, X_{s:n}) and β ∈ (0, 1), a statistic L_β ≡ L_β(X) is called a lower β-expectation tolerance limit (or lower β-ETL for simplicity) of the W(θ, α) distribution if it satisfies
$$ E_{\mathbf{X}\mid\theta,\alpha}\big[ \Pr_{X\mid\theta,\alpha}(X > L_{\beta}) \big] = \beta $$
for all θ > 0. In this way, the probability that a future observation of X will surpass L_β is expected to be β. Obviously, an upper β-ETL, U_β ≡ U_β(X), is given by L_{1−β}.
Provided that β ∈ (0, 1), our purpose is to determine β-ETLs based on the trimmed sample X = (X_{r:n}, …, X_{s:n}) drawn from X ∼ W(θ, α).
Suppose that 1 < r = s, in which case X_{r:n} is minimal sufficient for θ. It is therefore rational to consider a lower β-ETL of the form L_β = D_β X_{r:n}, where, from (11), the constant D_β must satisfy
$$ E_{\mathbf{X}\mid\theta,\alpha}\big[ \exp\{-(D_{\beta} X_{r:n}/\theta)^{\alpha}\} \big] = \beta. $$
Since exp(−X_{r:n}^α/θ^α) ∼ Beta(n − r + 1, r), D_β is the unique positive solution in D to the equation
$$ \zeta(D) \equiv B\big(n - r + 1 + D^{\alpha},\, r\big) \big/ B\big(n - r + 1,\, r\big) = \beta, $$
where B(·, ·) is the beta function, i.e., D_β = ζ^{−1}(β). Note that ζ(·) is continuous and decreasing with ζ(0) = 1 and ζ(+∞) = 0. Therefore, D_β is the positive constant that satisfies the equation
$$ \sum_{i=0}^{r-1} \frac{(-1)^{i} \binom{r-1}{i}\, r \binom{n}{r}}{\,n - r + 1 + i + (D_{\beta})^{\alpha}\,} = \beta. $$
Observe that D_β = {n(1 − β)/β}^{1/α} when r = s = 1. In general, as
$$ \prod_{i=0}^{r-1} \big\{1 + (D_{\beta})^{\alpha}/(n - i)\big\} = 1/\beta, $$
it is clear that
$$ (n - r + 1)^{1/\alpha}\,(\beta^{-1/r} - 1)^{1/\alpha} \;\leq\; D_{\beta} \;\leq\; n^{1/\alpha}\,(\beta^{-1/r} - 1)^{1/\alpha}. $$
The above lower and upper bounds on D_β might serve as starting points for iterative interpolation methods, such as regula falsi. In addition,
$$ D_{\beta} \approx (n - r/2 + 1/2)^{1/\alpha}\,(\beta^{-1/r} - 1)^{1/\alpha} $$
when r/n is small.
If 1 = r ≤ s, as T is minimal sufficient for θ, it is evident that L_β = D_β T^{1/α} is an appropriate lower β-ETL. Since 2T/θ^α ∼ χ²_{2s}, the tolerance factor is given by D_β = (β^{−1/s} − 1)^{1/α}.

4.1. Unconditional and Conditional Tolerance Limits

In the more general case in which 1 < r < s, the statistic (X_{r:n}, T) is minimal sufficient. Since 2R/θ^α ∼ χ²_{2(s−r)}, where R = T − (n − r + 1)X_{r:n}^α, it is obvious that L_β = D_β R^{1/α} is an (unconditional) lower β-ETL when D_β = {β^{−1/(s−r)} − 1}^{1/α}. The lower and upper β-ETLs are then given by
$$ L_{\beta} = \big\{\beta^{-1/(s-r)} - 1\big\}^{1/\alpha} R^{1/\alpha} \quad \text{and} \quad U_{\beta} = \big\{(1-\beta)^{-1/(s-r)} - 1\big\}^{1/\alpha} R^{1/\alpha}. $$
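A short Python sketch of these unconditional β-expectation factors follows (illustrative names, not the paper's code); the closed forms cover r = 1 and 1 < r < s, while the r = s case uses a root search between the bounds stated above.

```python
# Sketch: unconditional beta-expectation tolerance factors.
import math
from scipy.optimize import brentq

def D_right_censored(beta, s, alpha):        # r = 1
    return (beta**(-1.0 / s) - 1.0)**(1.0 / alpha)

def D_doubly_trimmed(beta, r, s, alpha):     # 1 < r < s, based on R
    return (beta**(-1.0 / (s - r)) - 1.0)**(1.0 / alpha)

def D_single_order(beta, r, n, alpha):       # r = s > 1: solve the product equation
    zeta = lambda d: math.prod(1.0 + d / (n - i) for i in range(r)) - 1.0 / beta
    lo = (n - r + 1) * (beta**(-1.0 / r) - 1.0)   # bounds from the text
    hi = n * (beta**(-1.0 / r) - 1.0)
    return brentq(zeta, lo, hi)**(1.0 / alpha)

# For (r, s, n) = (2, 6, 10), alpha = 1 and beta = 0.90, the doubly trimmed factor
# is 0.9**(-1/4) - 1 = 0.02669, matching the first entry of Table 3.
print(D_doubly_trimmed(0.90, 2, 6, 1.0))
```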
Nonetheless, as in Section 3, these limits are based on an insufficient statistic, R. As mentioned previously, (R, A) is minimal sufficient for θ and A = X_{r:n}^α/R is ancillary. Thus, if one adopts the conditionality principle and assumes that A = a, it is natural to seek a conditional lower β-ETL of the form L_β(a) = D_β(a) R^{1/α}, where
$$ E_{\mathbf{X}\mid\theta,\alpha}\big[ \Pr_{X\mid\theta,\alpha}(X > L_{\beta}(a)) \mid A = a \big] = \beta. $$
After some calculations, it follows from (12) that
$$ E\big[ \exp\{-(D_{\beta}(a))^{\alpha} R/\theta^{\alpha}\} \mid A = a \big] = \frac{G\big[a/\{1 + (D_{\beta}(a))^{\alpha}\}\big]}{G(a)\,\{1 + (D_{\beta}(a))^{\alpha}\}^{\,s-r+1}} = \beta. $$
Manifestly, the conditional lower β-ETL given A, L_β(A), is also an unconditional lower β-ETL because
$$ E_{\mathbf{X}\mid\theta,\alpha}\big[ \Pr_{X\mid\theta,\alpha}(X > L_{\beta}(A)) \big] = E\Big\{ E\big[ \Pr_{X\mid\theta,\alpha}(X > L_{\beta}(A)) \mid A \big] \Big\} = E[\beta] = \beta. $$
Table 3 compares unconditional and conditional lower β-expectation W(α, θ) tolerance factors, D_β and D_β(a_ε), for selected values of r, s, and n when α = 1, β = 0.90 and A = a_ε, ε = 0.01, 0.25, 0.75, 0.99.
As before, the difference between D_β and D_β(a_ε) might be considerable when ε is near 0 or 1. Nevertheless, D_β and D_β(a_ε) are quite similar when s − r is large. For instance, D_β = 0.001240, D_β(a_{0.01}) = 0.001189, and D_β(a_{0.99}) = 0.001339 when (r, s, n) = (5, 90, 95), α = 1, and β = 0.90.
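A numerical sketch of the conditional factor D_β(a), obtained by solving the conditional-expectation equation above with a root finder, is given below (Python; names, brackets, and the approximate value of a used in the check are illustrative assumptions).

```python
# Sketch: conditional beta-expectation factor D_beta(a).
import math
from math import comb
from scipy.optimize import brentq

def G(a, r, s, n):
    return sum((-1)**k * comb(r - 1, k) / ((k + n + 1 - r) * a + 1)**(s - r + 1)
               for k in range(r))

def D_conditional(beta, a, r, s, n, alpha):
    Ga = G(a, r, s, n)
    eq = lambda c: G(a / (1.0 + c), r, s, n) / (Ga * (1.0 + c)**(s - r + 1)) - beta
    return brentq(eq, 1e-12, 1e6)**(1.0 / alpha)

# For (r, s, n) = (2, 6, 10), alpha = 1, beta = 0.90 and a near the lower quartile
# of A (roughly 0.025), this should be close to the value 0.02198 reported in Table 3.
print(D_conditional(0.90, 0.025, 2, 6, 10, 1.0))
```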

4.2. Sample-Size Determination

Frequently, the researcher wishes to choose the minimum sample size to achieve a specified criterion for β-ETLs. In our circumstances, a classical criterion is to require the maximum variation of the content of the β-expectation tolerance interval (L_β, +∞) around its mean value, β, to be sufficiently small (say, less than ε ∈ (0, min{β, 1 − β})) with a determined minimum stability (say, λ ∈ (0, 1)). In other words, the coverage of the random interval (L_β(X), +∞) must be contained in (β − ε, β + ε) with a probability of at least λ, i.e.,
$$ \Pr_{\mathbf{X}\mid\theta,\alpha}\big\{ \big| \Pr_{X\mid\theta,\alpha}(X > L_{\beta}) - \beta \big| < \varepsilon \big\} \geq \lambda, $$
or, equivalently, Pr_{X|θ,α}{−ln(β + ε) < (L_β/θ)^α < −ln(β − ε)} ≥ λ for all θ > 0. In this subsection, if condition (13) is satisfied, the corresponding sampling plan (r, s, n) will be called feasible.
Assuming that 1 < r = s ≤ n, as exp(−X_{r:n}^α/θ^α) ∼ Beta(n − r + 1, r) and L_β = D_β X_{r:n}, where D_β = ζ^{−1}(β), it is deduced that the sampling plan (r, r, n) is feasible if and only if g_2(r, n) ≥ 0, in which
$$ g_{2}(r,n) = \sum_{k=0}^{r-1} \binom{n}{k} \Big[ \big\{1 - (\beta+\varepsilon)^{1/(D_{\beta})^{\alpha}}\big\}^{k} (\beta+\varepsilon)^{(n-k)/(D_{\beta})^{\alpha}} - \big\{1 - (\beta-\varepsilon)^{1/(D_{\beta})^{\alpha}}\big\}^{k} (\beta-\varepsilon)^{(n-k)/(D_{\beta})^{\alpha}} \Big] - \lambda. $$
Provided that 1 = r ≤ s ≤ n, the sampling plan (1, s, n) will be feasible if and only if h_2(s) ≥ 0, where
$$ h_{2}(s) = \sum_{i=0}^{s-1} \frac{\beta^{i/s}}{i!\,(1-\beta^{1/s})^{i}} \Big[ \{-\ln(\beta+\varepsilon)\}^{i} (\beta+\varepsilon)^{\beta^{1/s}/(1-\beta^{1/s})} - \{-\ln(\beta-\varepsilon)\}^{i} (\beta-\varepsilon)^{\beta^{1/s}/(1-\beta^{1/s})} \Big] - \lambda, $$
in view of the fact that L_β = D_β T^{1/α} with D_β = (β^{−1/s} − 1)^{1/α} and 2T/θ^α ∼ χ²_{2s}. In particular, the plan (1, 1, n) would be feasible if and only if (β + ε)^{β/(1−β)} − (β − ε)^{β/(1−β)} ≥ λ. Finally, if 1 < r < s ≤ n, it can be shown that (r, s, n) is feasible if and only if h_2(s − r) ≥ 0.
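A small Python sketch of h_2(s), as written above, and of the smallest integer m with h_2(m) ≥ 0 (which drives the sample sizes in Cases I and II below) follows; the function names are illustrative and the table-based check is an assumption on my part.

```python
# Sketch: stability criterion h_2(s) and the smallest m with h_2(m) >= 0.
import math

def h2(s, beta, eps, lam):
    b1s = beta**(1.0 / s)
    expo = b1s / (1.0 - b1s)
    def tail(p):
        # sum_{i<s} u^i/i! * p^expo with u = -ln(p) * b1s/(1 - b1s)
        u = -math.log(p) * b1s / (1.0 - b1s)
        return sum(u**i / math.factorial(i) for i in range(s)) * p**expo
    return tail(beta + eps) - tail(beta - eps) - lam

def smallest_m_etl(beta, eps, lam):
    m = 1
    while h2(m, beta, eps, lam) < 0:
        m += 1
    return m

# With beta = 0.9, eps = 0.03 and stability 0.9, the value m = 27 combined with
# r - 1 = 2 and n - s = 3 should reproduce the plan (3, 30, 33) of Table 4.
print(smallest_m_etl(0.9, 0.03, 0.9))
```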
The determination of the optimal feasible sampling plan for setting the lower β-ETL assuming fixed numbers of trimmed observations (Case I) or fixed trimming proportions (Case II) will be tackled in the remainder of this subsection.
  • Case I: Fixed numbers of left and right trimmed observations
In this situation, the researcher desires to find the optimal feasible plan (r, s, n) such that r − 1 = δ_1 and n − s = δ_2, where δ_1 and δ_2 are prespecified non-negative integers. Then, (δ_1 + 1, δ_1 + 1, δ_1 + δ_2 + 1) would be the optimal plan if g_2(r, n) ≥ 0 with r = δ_1 + 1 and n = δ_1 + δ_2 + 1. Otherwise, if m is now the smallest integer value such that h_2(m) ≥ 0, it turns out that the optimal plan would be
$$ (r, s, n) = \big(\delta_1 + 1,\; \delta_1 + m + I(\delta_1 > 0),\; \delta_1 + \delta_2 + m + I(\delta_1 > 0)\big). $$
Note that m will always exist because h_2(k) > 0 when k is sufficiently large. It is important to mention that m = 1 if and only if
$$ h_{2}(1) = (\beta+\varepsilon)^{\beta/(1-\beta)} - (\beta-\varepsilon)^{\beta/(1-\beta)} - \lambda \geq 0. $$
Moreover, if r = s and n → ∞, as 2nX_{r:n}^α/θ^α converges in law to a χ²_{2r} distribution and D_β/n^{1/α} → (β^{−1/r} − 1)^{1/α}, it follows that g_2(r, n) → h_2(r). Consequently, if (r, r, n) is a feasible plan, it is indispensable that r ≥ m. Hence, (δ_1 + 1, δ_1 + 1, δ_1 + δ_2 + 1) is not feasible when m > δ_1 + 1.
  • Case II: Fixed left and right trimming proportions
Consider that it is now needed to obtain the minimum sample size feasible plan (r_n, s_n, n) with r_n = ⌊nπ_1⌋ + 1 and s_n = ⌈n(1 − π_2)⌉, where π_i ∈ (0, 1), i = 1, 2, and π_1 + π_2 < 1. Our goal in this case is to determine the smallest integer n such that g_2(r_n, n) ≥ 0 if 1 < r_n = s_n, or such that h_2(s_n − r_n + I(r_n = 1)) ≥ 0 otherwise.
Assume that (r_n, s_n, n) is a feasible plan and also that m is the smallest integer satisfying h_2(m) ≥ 0. Then, it turns out that r_n ≥ m when r_n = s_n, whereas n ≥ n_0 if r_n < s_n, where n_0 is given in (9).
Given β, ε, λ, π_1, π_2 ∈ (0, 1), with ε < min{β, 1 − β} and π_1 + π_2 < 1, a step-by-step procedure for determining the minimum sample size plan (r_n, s_n, n) satisfying (13), where r_n = ⌊nπ_1⌋ + 1 and s_n = ⌈n(1 − π_2)⌉, would be similar to that presented in Section 3.2, except that g_1(r_{n_1}, n_1) is replaced by g_2(r_{n_1}, n_1) in step 9, and step 1 now reads as follows:
  • Step 1: If (β + ε)^{β/(1−β)} − (β − ε)^{β/(1−β)} ≥ λ, then set (r_n, s_n, n) = (1, 1, 1) and go to step 10. Otherwise, find the smallest integer m such that h_2(m) ≥ 0.
Table 4 shows the optimal sampling plans (r, s, n) for setting lower β-ETLs based on the Weibull W(θ, α) trimmed sample X = (X_{r:n}, …, X_{s:n}) for selected values of β, ε and λ when (i) q_1 ≈ 0.2 and q_2 ≈ 0.3 and (ii) r − 1 = 2 and n − s = 3.
In particular, if β = 0.9, ε = 0.03, λ = 0.9, and the experimenter desires that at least 20% of the smallest and 30% of the largest observations be trimmed, the minimum sample size for setting the lower β-ETL would be n = 53, and the smallest 10 and the largest 15 observations would be disregarded or censored (i.e., r = 11 and s = 38). On the other hand, if the experimenter wishes to discard the first two and last three observations, the required sample size is n = 33; obviously, the optimal sampling plan would be (3, 30, 33).

5. Illustrative Examples

Three numerical examples are considered in this section to illustrate the results developed above.

5.1. Example 1

An experiment in which students were learning to measure strontium-90 concentrations in samples of milk was considered by Sarhan and Greenberg [41]. The test substance was supposed to contain 9.22 microcuries per liter. Ten measurements, each involving readings and calculations, were made. However, since the measurement error was known to be relatively larger at the extremes, especially the upper one, a decision was made to censor the two smallest and the three largest observations, leaving the following trimmed sample: x = 8.2 , 8.4 , 9.1 , 9.8 , 9.9 . Thus, n = 10 , r = 3 , and s = 7 , which imply that q 1 = 0.2 and q 2 = 0.3 .
Fernández [10] assumed that the above data arose from the Weibull model (1) with α = 3, which has a near normal shape. In such a case, T = 6720.03, R = 2309.09 and A = a = 0.238782. Furthermore, in view of (3), the MLEs of θ and the mean concentration, μ = E[X | θ, α], are given by θ̂ = 10.1049 and μ̂ = 9.02343.
Table 5 contains the unconditional and conditional lower and upper (β, γ)-TLs and β-ETLs for selected values of β and γ when α = 3. For instance, if β = γ = 0.9, L_{β,γ} = 3.315 and L_{β,γ}(a) = 4.162, whereas L_β = 3.950 and L_β(a) = 4.783. In particular, the experimenter might assert with confidence γ = 0.9 that at least 90% of strontium-90 concentrations will exceed 3.315 (4.162) if the unconditional (conditional) approach is adopted. Moreover, under the unconditional (conditional) perspective, one can be 90% confident that a future strontium-90 concentration will surpass 3.950 (4.783). The corresponding unconditional and conditional upper (β, γ)-TLs and β-ETLs are derived to be U_{β,γ} = 14.50, U_{β,γ}(a) = 16.23, U_β = 12.16 and U_β(a) = 14.12.
Suppose the experimenter wishes to determine the optimal sampling plan (r, s, n) for setting the lower (0.9, 0.9)-TL based on a Weibull trimmed sample under the premise that the left and right trimming proportions are nearly 0.2 and 0.3, respectively. According to Table 2, if (β′, γ′) is (0.95, 0.25), the needed sample size would be n = 16, with r = 4 and s = 12. On the other hand, if the experimenter wants to ignore the smallest two and the largest three observations, the optimal sampling plan would be (3, 11, 14).
In addition, consider now that the experimenter desires to find the optimal sampling plan (r, s, n) for setting a lower 0.9-ETL, L_{0.9}(X), such that the coverage of (L_{0.9}(X), +∞) is contained in (0.87, 0.93) with a probability of at least 0.7. If it is also required that about 20% and 30% of the smallest and largest observations be trimmed, respectively, the optimal sampling plan is (5, 16, 22), based on Table 4 with β = 0.9, ε = 0.03 and λ = 0.7. Likewise, if it were demanded that r = 3 and n − s = 3, the smallest sample size would be n = 17.
In order to explore the effect of α on the tolerance limits, the unconditional and conditional lower and upper ( β , γ ) -TLs and β -ETLs for selected values of α around 3 are displayed in Table 6. In general, as expected, the influence of α is quite appreciable, especially in the unconditional case.

5.2. Example 2

Meeker and Escobar [42] (pp. 151, 198) present the results of a failure-censored fatigue crack-initiation experiment in which 100 specimens of a type of titanium alloy were put on test. Only the nine smallest times to crack initiation were recorded. In this way, n = 100, r = 1 and s = 9. The observed times, in units of 1000 cycles, were 18, 32, 39, 53, 59, 68, 77, 78, and 93. Based on experience with fatigue tests on similar alloys, the Weibull model (1) with α = 2 was assumed to be adequate for describing the above data. Hence, T = 821504 and R = 789104, whereas θ̂ = 302.123 and μ̂ = 267.749 are the MLEs of θ and the expected failure time μ.
Table 7 shows the unconditional and conditional lower and upper (β, γ)-TLs and β-ETLs when r = 1, 2, …, 9 for selected values of β and γ. For example, if the engineer wants to discard the smallest two data points (i.e., r = 3) and (β, γ) = (0.8, 0.9), it turns out that T = 820156, R = 671098, θ̂ = 302.154 and μ̂ = 267.777, whereas A = X_{3:100}²/R takes the value a = 0.00226644. In such a case, it follows that L_{β,γ} = 127.1, L_{β,γ}(a) = 118.8, L_β = 159.5, and L_β(a) = 143.6. Hence, the reliability engineer could affirm with confidence γ = 0.9 that at least 80% of the times to crack initiation (in units of 1000 cycles) of specimens of that type of titanium alloy will be greater than 127.1 (118.8) when the unconditional (conditional) viewpoint is considered. Likewise, adopting the unconditional (conditional) perspective, the engineer may be 80% sure that a future time to crack initiation will surpass 159,500 (143,600) cycles.
It is interesting to point out that the variability of L_{β,γ}(a) for r = 2, …, 8 is very small. In addition, observe that L_{β,γ}(a), r = 2, …, 8, is quite similar to the unconditional lower (β, γ)-TL, L_{β,γ}, for r = 1. Analogous results are obtained for the β-ETLs when r = 2, …, 5.

5.3. Example 3

Lee and Wang [43] (p. 205) consider that 21 patients with acute leukemia have the following remission times in months: 1, 1, 2, 2, 3, 4, 4, 5, 5, 6, 8, 8, 9, 10, 10, 12, 14, 16, 20, 24, and 34. The available sample is now complete, since r = 1 and s = n = 21 .
In accordance with previous tests, the researcher assumes that the remission time follows an exponential distribution. A probability plot also indicates that the Weibull model (1) with α = 1 fits the above data very well. In this situation, T = Σ_{i=1}^{n} x_i = 198 and θ̂ = T/n = 9.42857. Supposing that (β, γ) = (0.8, 0.9), it follows that L_{β,γ} = 1.634 and L_β = 2.115.
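As a quick numerical cross-check (a sketch using SciPy, not part of the original text), the two limits just quoted follow directly from the r = 1 formulas of Sections 3 and 4 with α = 1:

```python
# Sketch: complete-sample exponential check of L_{beta,gamma} and L_beta.
from scipy.stats import chi2
import math

T, n, beta, gamma = 198.0, 21, 0.8, 0.9
L_bg = -2.0 * T * math.log(beta) / chi2.ppf(gamma, 2 * n)   # (-2 T ln(beta)) / chi2_{2n;gamma}
L_b = (beta**(-1.0 / n) - 1.0) * T                          # (beta^{-1/n} - 1) T
print(round(L_bg, 3), round(L_b, 3))   # should print values close to 1.634 and 2.115
```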
Table 8 provides the unconditional and conditional lower and upper (β, γ)-TLs and β-ETLs when r = 1, 3, 5, 7, 9, 11 and n − s = r − 1 (i.e., q_1 = q_2). For instance, if the two smallest and the two largest observations are missing and the unconditional (conditional) perspective is adopted, the researcher might state with confidence γ = 0.9 that at least 80% of patients with acute leukemia will have remission times greater than L_{β,γ} = 1.467 (L_{β,γ}(a) = 1.622) months. In the same manner, one might be 80% confident that a future remission time will exceed L_β = 1.966 (L_β(a) = 2.126) months.

6. Concluding Remarks

Weibull tolerance limits with certain guaranteed or expected coverages have been obtained in this paper when the available empirical information is provided by a trimmed sample. These bounds are valid even when some of the smallest and largest observations have been disregarded or censored. Single (right or left) and double failure-censoring are allowed. Unconditional and conditional tolerance bounds have been obtained and compared when s > r > 1. The difference between these limits might be large when the ancillary statistic A takes extreme percentiles; in such cases, it is preferable to use the suggested conditional tolerance limits. Optimal sampling plans for setting β-content and β-expectation tolerance limits have also been determined. Efficient step-by-step procedures for computing the corresponding test plans with the smallest sample sizes have been proposed. These methods can be easily applied and require little computational effort. Several numerical examples have been studied for illustrative and comparative purposes. An extension of the frequency-based perspective presented in this work to the Bayesian approach is currently under investigation.

Funding

This research was partially supported by MCIN/AEI/10.13039/501100011033 under grant PID2019-110442GB-I00.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The author thanks the Editor and Reviewers for providing valuable comments and suggestions.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Wilks, S.S. Determination of sample sizes for setting tolerance limits. Ann. Math. Stat. 1941, 12, 91–96. [Google Scholar] [CrossRef]
  2. Guttman, I. Statistical Tolerance Regions: Classical and Bayesian; Charles W. Griffin and Co.: London, UK, 1970. [Google Scholar]
  3. Fernández, A.J. Computing tolerance limits for the lifetime of a k-out-of-n:F system based on prior information and censored data. Appl. Math. Model. 2014, 38, 548–561. [Google Scholar] [CrossRef]
  4. Prescott, P. Selection of trimming proportions for robust adaptive trimmed means. J. Amer. Statist. Assoc. 1978, 73, 133–140, Erratum in J. Amer. Statist. Assoc. 1978, 73, 691. [Google Scholar] [CrossRef]
  5. Huber, P.J. Robust Statistics; Wiley: New York, NY, USA, 1981. [Google Scholar]
  6. Healy, M.J.R. Algorithm AS 180: A linear estimator of standard deviation in symmetrically trimmed normal samples. Appl. Stat. 1982, 31, 174–175. [Google Scholar] [CrossRef]
  7. Welsh, A.H. The trimmed mean in the linear model (with discussion). Ann. Stat. 1987, 15, 20–45. [Google Scholar]
  8. Wilcox, R.R. Simulation results on solutions to the multivariate Behrens-Fisher problem via trimmed means. Statistician 1995, 44, 213–225. [Google Scholar] [CrossRef]
  9. Fernández, A.J. Bayesian estimation based on trimmed samples from Pareto populations. Comput. Stat. Data Anal. 2006, 51, 1119–1130. [Google Scholar] [CrossRef]
  10. Fernández, A.J. Weibull inference using trimmed samples and prior information. Stat. Pap. 2009, 50, 119–136. [Google Scholar] [CrossRef]
  11. Healy, M.J.R. A mean difference estimator of standard deviation in symmetrically censored samples. Biometrika 1978, 65, 643–646. [Google Scholar] [CrossRef]
  12. Prescott, P. A mean difference estimator of standard deviation in asymmetrically censored normal samples. Biometrika 1979, 66, 684–686. [Google Scholar] [CrossRef]
  13. Schneider, H. Simple and highly efficient estimators for censored normal samples. Biometrika 1984, 71, 412–414. [Google Scholar] [CrossRef]
  14. Bhattacharyya, G.K. On asymptotics of maximum likelihood and related estimators based on Type II censored data. J. Am. Stat. Assoc. 1985, 80, 398–404. [Google Scholar] [CrossRef]
  15. LaRiccia, V.N. Asymptotically chi-squared distributed tests of normality for Type II censored samples. J. Am. Stat. Assoc. 1986, 81, 1026–1031. [Google Scholar] [CrossRef]
  16. Schneider, H.; Weissfeld, L. Inference based on Type II censored samples. Biometrics 1986, 42, 531–536. [Google Scholar] [CrossRef] [PubMed]
  17. Fernández, A.J. Highest posterior density estimation from multiply censored Pareto data. Stat. Pap. 2008, 49, 333–341. [Google Scholar] [CrossRef]
  18. Fernández, A.J. Smallest Pareto confidence regions and applications. Comput. Stat. Data Anal. 2013, 62, 11–25. [Google Scholar] [CrossRef]
  19. Escobar, L.A.; Meeker, W.Q. Algorithm AS 292: Fisher information matrix for the extreme value, normal and logistic distributions and censored data. Appl. Stat. 1994, 43, 533–554. [Google Scholar] [CrossRef]
  20. Upadhyay, S.K.; Shastri, V. Bayesian results for classical Pareto distribution via Gibbs sampler, with doubly-censored observations. IEEE Trans. Reliab. 1997, 46, 56–59. [Google Scholar] [CrossRef]
  21. Ali Mousa, M.A.M. Bayesian prediction based on Pareto doubly censored data. Statistics 2003, 37, 65–72. [Google Scholar] [CrossRef]
  22. Chen, J.W.; Li, K.H.; Lam, Y. Bayesian single and double variable sampling plans for the Weibull distribution with censoring. Eur. J. Oper. Res. 2007, 177, 1062–1073. [Google Scholar] [CrossRef]
  23. Tsai, T.-R.; Lu, Y.-T.; Wu, S.-J. Reliability sampling plans for Weibull distribution with limited capacity of test facility. Comput. Ind. Eng. 2008, 55, 721–728. [Google Scholar] [CrossRef]
  24. Aslam, M.; Jun, C.-H.; Fernández, A.J.; Ahmad, M.; Rasool, M. Repetitive group sampling plan based on truncated tests for Weibull models. Res. J. Appl. Sci. Eng. Technol. 2014, 7, 1917–1924. [Google Scholar] [CrossRef]
  25. Fernández, A.J. Optimum attributes component test plans for k-out-of-n:F Weibull systems using prior information. Eur. J. Oper. Res. 2015, 240, 688–696. [Google Scholar] [CrossRef]
  26. Roy, S. Bayesian accelerated life test plans for series systems with Weibull component lifetimes. Appl. Math. Model. 2018, 62, 383–403. [Google Scholar] [CrossRef]
  27. Almongy, H.M.; Alshenawy, F.Y.; Almetwally, E.M.; Abdo, D.A. Applying Transformer Insulation Using Weibull Extended Distribution Based on Progressive Censoring Scheme. Axioms 2021, 10, 100. [Google Scholar] [CrossRef]
  28. Algarni, A. Group Acceptance Sampling Plan Based on New Compounded Three-Parameter Weibull Model. Axioms 2022, 11, 438. [Google Scholar] [CrossRef]
  29. Fernández, A.J.; Bravo, J.I.; De Fuentes, Í. Computing maximum likelihood estimates from Type II doubly censored exponential data. Stat. Methods Appl. 2002, 11, 187–200. [Google Scholar] [CrossRef]
  30. Lee, W.-C.; Wu, J.-W.; Hong, C.-W. Assessing the lifetime performance index of products with the exponential distribution under progressively type II right censored samples. J. Comput. Appl. Math. 2009, 231, 648–656. [Google Scholar] [CrossRef] [Green Version]
  31. Chen, K.-S.; Yang, C.-M. Developing a performance index with a Poisson process and an exponential distribution for operations management and continuous improvement. J. Comput. Appl. Math. 2018, 343, 737–747. [Google Scholar] [CrossRef]
  32. Yousef, M.M.; Hassan, A.S.; Alshanbari, H.M.; El-Bagoury, A.-A.H.; Almetwally, E.M. Bayesian and Non-Bayesian Analysis of Exponentiated Exponential Stress-–Strength Model Based on Generalized Progressive Hybrid Censoring Process. Axioms 2022, 11, 455. [Google Scholar] [CrossRef]
  33. Aminzadeh, M.S. β-expectation tolerance intervals and sample-size determination for the Rayleigh distribution. IEEE Trans. Reliab. 1991, 40, 287–289. [Google Scholar] [CrossRef]
  34. Aminzadeh, M.S. Approximate 1-sided tolerance limits for future observations for the Rayleigh distribution, using regression. IEEE Trans. Reliab. 1993, 42, 625–630. [Google Scholar] [CrossRef]
  35. Raqab, M.Z.; Madi, M.T. Bayesian prediction of the total time on test using doubly censored Rayleigh data. J. Stat. Comput. Simul. 2002, 72, 781–789. [Google Scholar] [CrossRef]
  36. Fernández, A.J. Bayesian estimation and prediction based on Rayleigh sample quantiles. Qual. Quant. 2010, 44, 1239–1248. [Google Scholar] [CrossRef]
  37. Lee, W.-C.; Wu, J.-W.; Hong, M.-L.; Lin, L.-S.; Chan, R.-L. Assessing the lifetime performance index of Rayleigh products based on the Bayesian estimation under progressive type II right censored samples. J. Comput. Appl. Math. 2011, 235, 1676–1688. [Google Scholar] [CrossRef]
  38. Fernández, A.J. Two-sided tolerance intervals in the exponential case: Corrigenda and generalizations. Comput. Stat. Data Anal. 2010, 54, 151–162. [Google Scholar] [CrossRef]
  39. Fernández, A.J. Tolerance Limits for k-out-of-n Systems With Exponentially Distributed Component Lifetimes. IEEE Trans. Reliab. 2010, 59, 331–337. [Google Scholar] [CrossRef]
  40. Thoman, D.R.; Bain, L.J.; Antle, C.E. Maximum likelihood estimation, exact confidence intervals for reliability and tolerance limits in the Weibull distribution. Technometrics 1970, 12, 363–373. [Google Scholar] [CrossRef]
  41. Sarhan, A.E.; Greenberg, B.G. Contributions to Order Statistics; Wiley: New York, NY, USA, 1962. [Google Scholar]
  42. Meeker, W.Q.; Escobar, L.A. Statistical Methods for Reliability Data; Wiley: New York, NY, USA, 1998. [Google Scholar]
  43. Lee, E.T.; Wang, J.W. Statistical Methods for Survival Data Analysis, 3rd ed.; Wiley: Hoboken, NJ, USA, 2003. [Google Scholar]
  44. Davis, D.J. An analysis of some failure data. J. Am. Stat. Assoc. 1952, 47, 113–150. [Google Scholar] [CrossRef]
  45. Soland, R.M. Bayesian analysis of the Weibull process with unknown scale parameter and its application to acceptance sampling. IEEE Trans. Reliab. 1968, 17, 84–90. [Google Scholar] [CrossRef]
  46. Tsokos, C.P.; Rao, A.N.V. Bayesian analysis of the Weibull failure model under stochastic variation of the shape and scale parameters. Metron 1976, 34, 201–217. [Google Scholar]
  47. Lawless, J.F. Statistical Models and Methods for Lifetime Data; Wiley: New York, NY, USA, 1982. [Google Scholar]
  48. Nordman, D.J.; Meeker, W.Q. Weibull prediction intervals for a future number of failures. Technometrics 2002, 44, 15–23. [Google Scholar] [CrossRef] [Green Version]
  49. Klinger, D.J.; Nakada, Y.; Menéndez, M.A. AT&T Reliability Manual; Van Nostrand Reinhold: New York, NY, USA, 1990. [Google Scholar]
  50. Abernethy, R.B. The New Weibull Handbook; Robert B. Abernethy: North Palm Beach, FL, USA, 1998. [Google Scholar]
  51. Danziger, L. Planning censored life tests for estimation of the hazard rate of a Weibull distribution with prescribed precision. Technometrics 1970, 12, 408–412. [Google Scholar] [CrossRef]
  52. Tsokos, C.P.; Canavos, G.C. Bayesian concepts for the estimation of reliability in the Weibull life testing model. Int. Stat. Rev. 1972, 40, 153–160. [Google Scholar] [CrossRef]
  53. Moore, A.H.; Bilikam, J.E. Bayesian estimation of parameters of life distributions and reliability from type II censored samples. IEEE Trans. Reliab. 1978, 27, 64–67. [Google Scholar] [CrossRef]
  54. Kwon, Y.I. A Bayesian life test sampling plan for products with Weibull lifetime distribution sold under warranty. Reliab. Eng. Syst. Saf. 1996, 53, 61–66. [Google Scholar] [CrossRef]
  55. Zhang, Y.; Meeker, W.Q. Bayesian life test planning for the Weibull distribution with given shape parameter. Metrika 2005, 61, 237–249. [Google Scholar] [CrossRef] [Green Version]
Table 1. Unconditional and conditional lower (β, γ) tolerance factors, C_{β,γ} and C_{β,γ}(a_ε), for the W(α, θ) model based on X = (X_{r:n}, …, X_{s:n}) when α = 1, β = 0.90, and γ = 0.95.

r   s    n    C_{β,γ}       C_{β,γ}(a_0.01)   C_{β,γ}(a_0.25)   C_{β,γ}(a_0.75)   C_{β,γ}(a_0.99)
2   6    10   0.0135885     0.0103609         0.0124313         0.0183526         0.0450331
2   10   20   0.00801336    0.00682716        0.00751394        0.00921814        0.0147077
4   8    30   0.0135885     0.00934313        0.0129005         0.0211442         0.0562717
4   20   40   0.00456163    0.00396171        0.00437034        0.00506643        0.00673139
6   10   50   0.0135885     0.00894574        0.0133632         0.0230475         0.0636906
6   30   60   0.00323337    0.00285084        0.00312617        0.00353006        0.00438183
Table 2. Optimal sampling plans (r, s, n) for setting lower (β, γ)-TLs for the W(α, θ) model based on X = (X_{r:n}, …, X_{s:n}) when (i) q_1 ≈ 0.2 and q_2 ≈ 0.3 and (ii) r − 1 = 2 and n − s = 3.

                                  q_1 ≈ 0.2, q_2 ≈ 0.3     r = 3, n − s = 3
β      β′     γ      γ′           r     s     n            r     s     n
0.80   0.85   0.90   0.25         16    54    76           3     41    44
                     0.50         6     21    29           3     18    21
              0.95   0.25         21    73    103          3     55    58
                     0.50         10    35    49           3     28    31
0.90   0.95   0.90   0.25         4     12    16           3     11    14
                     0.50         1     3     3            3     3     6
              0.95   0.25         4     14    19           3     13    16
                     0.50         2     7     9            3     8     11
Table 3. Unconditional and conditional lower β-expectation tolerance factors, D_β and D_β(a_ε), for the W(α, θ) model based on X = (X_{r:n}, …, X_{s:n}) when α = 1 and β = 0.90.

r   s    n    D_β           D_β(a_0.01)   D_β(a_0.25)   D_β(a_0.75)   D_β(a_0.99)
2   6    10   0.0266901     0.0183145     0.0219756     0.0324522     0.0796828
2   10   20   0.0132572     0.0107789     0.0118633     0.0145544     0.0232244
4   8    30   0.0266901     0.0154573     0.0213445     0.0349895     0.0931401
4   20   40   0.00660676    0.00553705    0.00610823    0.00708131    0.00940913
6   10   50   0.0266901     0.0141242     0.0211005     0.0363961     0.100592
6   30   60   0.00439967    0.00376406    0.00412769    0.00466106    0.00578604
Table 4. Optimal sampling plans (r, s, n) for setting lower β-ETLs for the W(α, θ) model based on X = (X_{r:n}, …, X_{s:n}) when (i) q_1 ≈ 0.2 and q_2 ≈ 0.3 and (ii) r − 1 = 2 and n − s = 3.

                            q_1 ≈ 0.2, q_2 ≈ 0.3     r = 3, n − s = 3
β      ε      λ             r     s      n           r     s     n
0.80   0.03   0.70          16    54     76          3     41    44
              0.90          39    135    192         3     99    102
       0.06   0.70          4     14     19          3     13    16
              0.90          10    34     48          3     27    30
0.90   0.03   0.70          5     16     22          3     14    17
              0.90          11    38     53          3     30    33
       0.06   0.70          1     3      3           3     3     6
              0.90          3     10     13          3     10    13
Table 5. Unconditional and conditional lower and upper (β, γ)-TLs and β-ETLs in Example 1.

β      γ      L_{β,γ}   L_{β,γ}(a)   L_β     L_β(a)   U_{β,γ}   U_{β,γ}(a)   U_β     U_β(a)
0.80   0.90   4.257     5.345        5.098   6.160    12.87     14.40        10.46   12.31
       0.95   4.050     5.139        —       —        13.96     15.24        —       —
0.90   0.90   3.315     4.162        3.950   4.783    14.50     16.23        12.16   14.12
       0.95   3.154     4.002        —       —        15.73     17.18        —       —
Table 6. Unconditional and conditional lower and upper (β, γ)-TLs and β-ETLs for selected values of α around 3 in Example 1.

               β = 0.80, γ = 0.90                        β = 0.90, γ = 0.95
α      L_{β,γ}   L_{β,γ}(a)   L_β     L_β(a)     L_{β,γ}   L_{β,γ}(a)   L_β     L_β(a)
2.8    3.936     5.133        4.775   5.976      2.855     3.764        3.633   4.557
2.9    4.100     5.241        4.940   6.071      3.006     3.885        3.794   4.673
3.0    4.257     5.345        5.098   6.160      3.154     4.002        3.950   4.783
3.1    4.408     5.444        5.248   6.245      3.298     4.114        4.100   4.889
3.2    4.552     5.538        5.391   6.326      3.437     4.222        4.244   4.990
Table 7. Unconditional and conditional lower and upper (β, γ)-TLs and β-ETLs when r = 1, …, 9 in Example 2.

               β = 0.80, γ = 0.90                        β = 0.90, γ = 0.95
r      L_{β,γ}   L_{β,γ}(a)   L_β     L_β(a)     L_{β,γ}   L_{β,γ}(a)   L_β     L_β(a)
1      118.8     —            143.6   —          77.44     —            98.35   —
2      123.5     118.8        152.7   143.6      80.03     77.44        104.5   98.37
3      127.1     118.8        159.5   143.6      82.01     77.44        109.0   98.36
4      123.5     118.9        157.9   143.7      79.29     77.50        107.8   98.43
5      126.8     118.9        166.2   143.7      80.90     77.49        113.4   98.43
6      125.1     118.9        169.7   145.3      79.01     77.54        115.5   108.5
7      119.9     119.0        171.9   152.0      74.57     77.61        116.4   105.8
8      151.2     118.9        242.9   150.9      91.10     77.50        161.9   105.2
9      119.4     —            144.3   —          77.81     —            98.84   —
Table 8. Unconditional and conditional lower and upper (β, γ)-TLs and β-ETLs for r = 1, 3, 5, 7, 9, 11 and s = n + 1 − r in Example 3.

                    β = 0.80, γ = 0.90                       β = 0.90, γ = 0.95
r     s     L_{β,γ}   L_{β,γ}(a)   L_β     L_β(a)     L_{β,γ}   L_{β,γ}(a)   L_β      L_β(a)
1     21    1.634     —            2.115   —          0.7178    —            0.9959   —
3     19    1.467     1.622        1.966   2.126      0.6386    0.7102       0.9249   1.001
5     17    1.385     1.586        1.933   2.110      0.5960    0.6920       0.9083   0.9926
7     15    1.232     1.507        1.839   2.039      0.5209    0.6542       0.8617   0.9591
9     13    1.436     1.580        2.467   2.184      0.5843    0.6817       1.148    1.027
11    11    1.768     —            2.518   —          0.7563    —            1.182    —
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
