Abstract
Guaranteed-coverage and expected-coverage tolerance limits for Weibull models are derived when, owing to restrictions on data collection, experimental difficulties, the presence of outliers, or some other extraordinary reasons, certain proportions of the extreme sample values have been censored or disregarded. Unconditional and conditional tolerance bounds are presented and compared when some of the smallest observations have been discarded. In addition, the related problem of determining minimum sample sizes for setting Weibull tolerance limits from trimmed data is discussed when the numbers or proportions of the left and right trimmed observations are fixed. Step-by-step procedures for determining optimal sampling plans are also presented. Several numerical examples are included for illustrative purposes.
Keywords:
missing or discarded data; guaranteed-coverage and expected-coverage tolerance limits; optimal sampling plans; unconditional and conditional tolerance limits
MSC:
62F25; 62N05; 62N01; 62G30
1. Introduction
Tolerance limits are extensively employed in several statistical fields, including statistical quality control, economics, medical and pharmaceutical statistics, environmental monitoring, and reliability analysis. In essence, a tolerance interval describes the behavior of a fraction of individuals. Roughly speaking, tolerance limits are bounds within which one expects a stated proportion of the population to lie. Two basic types of such limits have received considerable attention, β-content and β-expectation tolerance limits; see Wilks [1], Guttman [2] and Fernández [3] and references therein. Succinctly, a β-content tolerance interval contains at least a proportion β of the population with a stated confidence, whereas a β-expectation tolerance interval covers, on average, a fraction β of the population.
In life-testing and reliability analysis, the tolerance limits are frequently computed from a complete or right-censored sample. In this paper, the available empirical information is provided by a trimmed sample, i.e., it is assumed that determined proportions and of the smallest and largest observations have been eliminated or censored. These kinds of data are frequently used in several areas of statistical practice for deriving robust inferential procedures and detecting influential observations, e.g., Prescott [4], Huber [5], Healy [6], Welsh [7], Wilcox [8], and Fernández [9,10]. In various situations, some extreme sample values may not be recorded due to restrictions on data collection (generally for reasons of economy of money, time, and effort), experimental difficulties or some other extraordinary reasons, or be discarded (especially when some observations are poorly known or the presence of outliers is suspected). In particular, a known number of observations in an ordered sample might be missing at either end (single censoring) or at both ends (double censoring) in failure censored experiments. Specifically, double censoring has been treated by many authors in the statistical literature (among others, Healy [11], Prescott [12], Schneider [13], Bhattacharyya [14], LaRiccia [15], Schneider and Weissfeld [16], Fernández [17,18], Escobar and Meeker [19], Upadhyay and Shastri [20], and Ali Mousa [21]).
The Weibull distribution provides a versatile statistical model for analyzing time-to-event data, which is useful in many fields, including biometry, economics, engineering, management, and the environmental, actuarial, and social sciences. In survival and reliability analysis, this distribution plays a prominent role and has successfully been used to describe animal and human disease mortality, as well as the reliability of both components and equipment in industrial applications. This probability model has many practical applications; e.g., Chen et al. [22], Tsai et al. [23], Aslam et al. [24], Fernández [25], Roy [26], Almongy et al. [27], and Algarni [28]. If the Weibull shape parameter equals one, the distribution is exponential, which plays a notable role in engineering; see Fernández et al. [29], Lee et al. [30], Chen and Yang [31], and Yousef et al. [32]. The random variable is Rayleigh distributed when the shape parameter equals two. This case is also important in various areas; see Aminzadeh [33,34], Raqab and Madi [35], Fernández [36], and Lee et al. [37].
This paper is devoted to deriving tolerance limits using a trimmed sample drawn from a Weibull population. It is assumed that the shape parameter is appropriately chosen, while the Weibull scale parameter is unknown. The conditionality principle, proposed primarily by Fisher, is adopted when some of the smallest observations have been disregarded. The related problem of determining minimum sample sizes is also tackled. In the exponential case, Fernández [38,39] presented optimal two-sided tolerance intervals and tolerance limits for k-out-of-n systems, respectively. On the basis of a complete Rayleigh sample, Aminzadeh [33] found β-expectation tolerance limits and discussed the determination of sample size to control stability of coverage, whereas Aminzadeh [34] derived approximate tolerance limits when the scale parameter depends on a set of explanatory variables. Weibull tolerance limits based on complete samples were obtained in Thoman et al. [40].
The structure of the remainder of this work is as follows. The sampling distribution of a Weibull trimmed sample is provided in the next section. Section 3 presents β-content tolerance limits based on trimmed data. In addition, the problem of determining optimal sample sizes is discussed. Mean-coverage tolerance limits are derived in Section 4. Optimal sampling plans for setting β-expectation tolerance limits are also deduced. The corresponding unconditional and conditional bounds are compared in Section 3 and Section 4 when the lower trimming proportion is positive, whereas Section 5 includes several numerical examples, reported by Sarhan and Greenberg [41], Meeker and Escobar [42], and Lee and Wang [43], for illustrative purposes. Finally, Section 6 offers some concluding remarks.
2. Weibull Trimmed Samples
The probability density function (pdf) of a random variable X which has a Weibull distribution with positive shape parameter α and scale parameter σ, i.e., X ~ W(α, σ), is defined by

f(x | α, σ) = (α/σ)(x/σ)^(α−1) exp{−(x/σ)^α}, x > 0. (1)
Its k-th moment is E[X^k] = σ^k Γ(1 + k/α), where Γ(·) is the well-known gamma function. The parameter α controls the shape of the density, whereas σ determines its scaling. Since the hazard rate is h(x) = (α/σ)(x/σ)^(α−1) for x > 0, the Weibull law may be used to model the survival distribution of a population with increasing (α > 1), decreasing (α < 1), or constant (α = 1) risk. Examples of increasing and decreasing hazard rates are, respectively, patients with lung cancer and patients who undergo successful major surgery. Davis [44] reports several cases in which a constant risk is reasonable, including payroll check errors and radar set component failures.
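The shape-driven behavior of the density and hazard rate described above is easy to check numerically. The following minimal Python sketch (the function names are ours, not the paper's) evaluates the Weibull pdf and hazard rate for given shape and scale values:

```python
import math

def weibull_pdf(x, shape, scale):
    """Weibull density f(x) = (shape/scale) * (x/scale)**(shape-1) * exp(-(x/scale)**shape)."""
    z = x / scale
    return (shape / scale) * z ** (shape - 1) * math.exp(-z ** shape)

def weibull_hazard(x, shape, scale):
    """Hazard rate h(x) = (shape/scale) * (x/scale)**(shape-1): increasing for
    shape > 1, constant for shape = 1 (exponential), decreasing for shape < 1."""
    return (shape / scale) * (x / scale) ** (shape - 1)
```

For a unit shape, the hazard reduces to the constant 1/scale, recovering the exponential model mentioned in the Introduction.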
In many practical applications, Weibull distributions with α in the range 1 to 3 seem appropriate. For moderate α, the pdf has a near normal shape; for large α, the shape of the density is close to that of the (smallest) extreme value density. The Weibull density becomes more symmetric as α grows. In Weibull data analysis it is quite common to assume that the shape parameter α is a known constant. Among other authors, Soland [45], Tsokos and Rao [46], Lawless [47], and Nordman and Meeker [48] provide justifications. The value of α may come from previous or related data or may be a widely accepted value for the problem, or even an expert guess. In reliability analysis, α is often tied to the failure mechanism of the product, and so engineers might have some knowledge of it. Klinger et al. [49] provide tables of Weibull parameters for various devices. Abernethy [50] supplies useful information about past experiments with Weibull data. Several situations in which it is appropriate to consider α constant are described in Nordman and Meeker [48]. Among many others, Danziger [51], Tsokos and Canavos [52], Moore and Bilikam [53], Kwon [54], and Zhang and Meeker [55] also utilize a given Weibull shape parameter.
Consider a random sample of size n from a Weibull distribution (1) with unknown scale parameter σ, and let X_(s+1) ≤ ⋯ ≤ X_(n−r) be the ordered observations remaining when the s smallest observations and the r largest observations have been discarded or censored, where 0 ≤ s < n − r ≤ n. The trimming proportions s/n and r/n, as well as the shape parameter α, are assumed to be predetermined constants.
The pdf of the (s, r)-trimmed sample X = (X_(s+1), …, X_(n−r)) at x = (x_(s+1), …, x_(n−r)) is then defined by

f(x | σ) = (n!/(s! r!)) {F(x_(s+1) | α, σ)}^s {1 − F(x_(n−r) | α, σ)}^r ∏_{i=s+1}^{n−r} f(x_(i) | α, σ), (2)

for 0 < x_(s+1) ≤ ⋯ ≤ x_(n−r), where F(· | α, σ) is the cumulative distribution function associated with (1).
Clearly, T is sufficient when s = 0, whereas the sample evidence is contained in the sufficient statistic (X_(s+1), T) if s > 0.
The maximum likelihood estimator (MLE) of the scale parameter can be derived from the likelihood equation; it is well known that the MLE is its unique solution (see, e.g., Theorem 1 in Fernández et al. [29]). Therefore, the MLE of the scale parameter is given explicitly in closed form when s = 0. Otherwise, it must be found using an iterative procedure.
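When no left trimming is present, the closed-form MLE can be coded directly. The sketch below assumes the standard Type II right-censoring statistic T = Σ x_i^shape + r·x_(n−r)^shape, which is the usual known-shape Weibull result; the function name is ours:

```python
def weibull_scale_mle(observed, n, shape):
    """MLE of the Weibull scale for a Type II right-censored sample with known
    shape: the r = n - len(observed) largest observations are censored at the
    largest recorded value.  With T = sum(x**shape) + r * x_max**shape and
    m = len(observed), the MLE is (T / m)**(1/shape)."""
    m = len(observed)
    r = n - m
    t = sum(x ** shape for x in observed) + r * max(observed) ** shape
    return (t / m) ** (1.0 / shape)
```

In the exponential case (unit shape) with a complete sample, this reduces to the sample mean.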
3. Guaranteed-Coverage Tolerance Limits
Given the Weibull (s, r)-trimmed sample, a statistic is called a lower β-content tolerance limit at confidence level γ (or simply a lower (β, γ)-TL for short) of the distribution if
for all where and refer to the respective sampling distribution of and X under the nominal values of and which are defined in (1) and (2), respectively.
According to (4), one may guarantee with confidence γ that at least a proportion β of population measurements will exceed the lower limit. In other words, with confidence γ, the probability that a future observation of X will surpass the limit is at least β. Clearly, an upper (β, γ)-TL can be constructed analogously. In this manner, one can be γ-confident that at least a proportion β of Weibull observations will be less than the upper limit.
Assuming that as is minimal sufficient for it is logical to consider a lower -TL for the form where, from (4), must satisfy
Since it follows that is merely the -quantile of the distribution. Thus, satisfies the equation
Alternatively, may be expressed explicitly as
where denotes the -quantile of the F-distribution with and degrees of freedom (df). In particular, when whereas if
If it is clear that T is minimal sufficient for which implies that it is sensible to assume that is proportional to i.e., In this situation, it can be shown that where represents the chi-square distribution with df. Observe that, letting the pivotal coincides with where are mutually independent variables. Since, in view of (4),
it turns out that Consequently,
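For the case with no left trimming, the chi-square representation above yields an explicit guaranteed-coverage limit. The sketch below assumes that twice the statistic T, divided by the scale raised to the shape, follows a chi-square law with 2m degrees of freedom (m being the number of recorded observations), and it uses the Wilson–Hilferty approximation for the chi-square quantile; all names are ours:

```python
import math
from statistics import NormalDist

def chi2_quantile_wh(p, df):
    """Wilson-Hilferty approximation to the chi-square p-quantile."""
    z = NormalDist().inv_cdf(p)
    c = 2.0 / (9.0 * df)
    return df * (1.0 - c + z * math.sqrt(c)) ** 3

def lower_content_tl(observed, n, shape, content, conf):
    """Approximate lower tolerance limit exceeded by at least a proportion
    `content` of the Weibull population with confidence `conf`, from a Type II
    right-censored sample (no left trimming) with known shape."""
    m = len(observed)
    t = sum(x ** shape for x in observed) + (n - m) * max(observed) ** shape
    q = chi2_quantile_wh(conf, 2 * m)
    return (-2.0 * t * math.log(content) / q) ** (1.0 / shape)
```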
3.1. Unconditional and Conditional Tolerance Limits
When focusing on the more general case, in which , obviously is a sufficient statistic for . Moreover, if then since this pivotal quantity can be expressed as the sum of the independent variables Therefore, it can be shown that
constitutes an (unconditional) lower (β, γ)-TL. Notice, however, that this limit is based on an insufficient statistic.
An alternative and more appropriate TL can be constructed assuming that A is an ancillary statistic. Note that, by itself, A does not contain any information about the scale parameter, and that the statistic is minimal sufficient. Therefore, given A, the statistic R is conditionally sufficient. In accordance with the conditionality principle suggested by Fisher, a tolerance limit should be based on the distribution of R given the observed value of the ancillary statistic A. Then, adopting the above principle, it is sensible to look for a conditional lower (β, γ)-TL of the form where
Thus, as it follows that is precisely the -quantile of the distribution of conditional to
The pdf of given is derived to be
where
whereas the cumulative distribution function of Y conditional to is defined by
where
Consequently, if denotes the -quantile of the distribution of Y given i.e., satisfies the equation it is obvious that In this way, it follows that
Of course, is also a lower -TL in the ordinary unconditional sense because
coincides with
Table 1 compares, for selected values of s, and the unconditional and conditional lower tolerance factors, and corresponding to the distribution when and where denotes the -quantile of the distribution of It can be proven that is the unique positive solution in a to the following equation
Table 1.
Unconditional and conditional lower tolerance factors, and for the model based on when , and .
It is worthwhile to mention that the difference between and might be large when A takes extreme percentiles (i.e., when is near to 0 or 1). For instance, if the unconditional factor is whereas the respective conditional factors and are given by and The difference between and becomes smaller when n grows to infinity and the trimming proportions, and are fixed. Indeed, provided that is large, and are quite similar. In addition, it turns out that
from the Wilson–Hilferty transformation (see, e.g., Lawless [47], p. 158), where is the -quantile of the standard normal distribution. For instance, and when and In this case, is If one assumes now that then and whereas
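The accuracy of the Wilson–Hilferty transformation can be checked against chi-square quantiles computed from first principles. The following self-contained sketch (names ours) evaluates the chi-square distribution function through the power series of the regularized lower incomplete gamma function and inverts it by bisection:

```python
import math

def chi2_cdf(x, df):
    """Chi-square CDF via the power series of the regularized lower
    incomplete gamma function P(df/2, x/2)."""
    if x <= 0.0:
        return 0.0
    a, s = df / 2.0, x / 2.0
    term = math.exp(-s + a * math.log(s) - math.lgamma(a + 1.0))
    total, k = term, 1
    while term > 1e-16 * total:
        term *= s / (a + k)
        total += term
        k += 1
    return min(total, 1.0)

def chi2_quantile(p, df):
    """p-quantile of the chi-square distribution by bisection on the CDF."""
    lo, hi = 0.0, df + 200.0 * math.sqrt(df)  # generous upper bracket
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if chi2_cdf(mid, df) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For ten degrees of freedom, the exact 0.95-quantile is 18.307, while the Wilson–Hilferty value is about 18.29, confirming the quality of the approximation for moderate degrees of freedom.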
3.2. Sample-Size Determination
The choice of sample size plays a primordial role in the design of most statistical studies. A traditional approach is to find the smallest value of n (and the corresponding values of r and s), such that the lower (β, γ)-TL based on a trimmed sample drawn from (1) satisfies
for all and certain and In this way, one could affirm that at least of population measurements will exceed with confidence , and that at least of population measurements will surpass with confidence at most That is to say, the random coverage of is at least with probability and it is at least with a probability not exceeding
In this subsection, a sampling plan satisfying condition (7) will be named feasible. Our target is obtaining the optimal (minimum sample size) feasible plan for setting the lower (β, γ)-TL. For later use, ⌈x⌉ and ⌊x⌋ will represent the values of x rounded up and down to the nearest integers.
Supposing that it is clear that and where and are defined in accordance with (5). Thus, condition (7) will hold if and only if Therefore, is a feasible sampling plan if and only if where
Since when and as there exists a value of n, such that is feasible if where
Otherwise, the inequality has no solution. On the other hand, provided that as is the lower (β, γ)-TL, the sampling plan will be feasible if and only if Similarly, if the plan would be feasible if and only if because
The determination of the optimal feasible sampling plan for setting the lower (β, γ)-TL assuming fixed numbers of trimmed observations (Case I) or fixed trimming proportions (Case II) will be discussed in the remainder of this subsection.
- Case I: Fixed numbers of left and right trimmed observations
Suppose that the researcher wishes to find the optimal feasible plan , such that and where and are prespecified non-negative integers. Then, if with and it follows that would be the optimal plan. Otherwise, if m denotes the smallest integer value, such that it turns out that
would be the optimal plan, where I is the indicator function. Observe that m will always exist because as and It is worthwhile to point out that if and only if since for In particular, m is always 1 when Note also that is not feasible when
Due to the fact that when and from the Wilson–Hilferty transformation, it can be proven that m is approximately equal to the smallest integer greater than or equal to i.e., where
It can be proven that the approximation is exact in practically all cases. Nonetheless, a method for determining the proper value of m is immediate: using the above approximation as the initial guess, calculate the required quantities and check the defining condition; if it holds at the current value but not at the previous one, stop; otherwise, decrease or increase the value by one, as appropriate, and repeat this process.
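The step-up/step-down refinement just described is easy to automate for any monotone feasibility condition. A generic sketch (the predicate used in the test is a placeholder, not the paper's actual condition):

```python
def smallest_feasible(feasible, guess):
    """Smallest integer k >= 1 with feasible(k) True, assuming feasibility is
    monotone in k: start from an approximate initial guess, then walk down
    while the previous value is still feasible, or up until feasibility holds."""
    k = max(1, guess)
    if feasible(k):
        while k > 1 and feasible(k - 1):
            k -= 1
    else:
        while not feasible(k):
            k += 1
    return k
```

With a good initial guess, such as the Wilson–Hilferty-based approximation, only a handful of evaluations of the feasibility condition are needed.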
- Case II: Fixed left and right trimming proportions
Assuming that and consider now that the researcher desires to obtain the minimum sample size feasible plan with and In such a case, the left and right trimming proportions, and are approximately and respectively. Furthermore, and the available observations would be at least
Our aim is to determine the smallest integer n, such that if or such that otherwise. As before, m will represent the smallest integer satisfying It is important to take into account that if is a feasible plan, then must be greater than or equal to m when Otherwise, as
it follows that where
In addition, since I it turns out that
On the other side, if it is clear that and As a consequence,
The above results may be helpful for finding the optimal sampling plan. Once the researcher chooses the desired values of with and a step-by-step procedure for determining the smallest sample size plan satisfying (7), where and may be described as follows:
- Step 1: If then set and go to step 10. Otherwise, find the smallest integer m, such that using as initial guess (see Case I), where is given in (8).
- Step 2: Define assuming that where is provided in (9), and compute and If redefine and recalculate and
- Step 3: While set and recompute and
- Step 4: If then go to step 10. Otherwise, take
- Step 5: If go to step 10.
- Step 6: Take
- Step 7: If go to step 10.
- Step 8: If set and go to step 7.
- Step 9: If then set and and go to step 6. Otherwise, let and go to step 7.
- Step 10: The optimal sampling plan is given by
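The overall structure of the search in steps 1–10 can be summarized as follows. This is only a skeleton, in which `feasible` is a placeholder for condition (7) (or (13) in Section 4) rather than the paper's explicit inequalities:

```python
import math

def optimal_plan(feasible, p_left, p_right, n_max=100_000):
    """Case II skeleton: for each candidate sample size n, trim
    s = floor(n * p_left) smallest and r = floor(n * p_right) largest
    observations, and return the first plan (n, s, r) accepted by the
    feasibility criterion."""
    for n in range(1, n_max + 1):
        s, r = math.floor(n * p_left), math.floor(n * p_right)
        if n - s - r >= 1 and feasible(n, s, r):
            return n, s, r
    return None
```

As a toy illustration, a criterion requiring at least ten available observations, with trimming proportions 0.2 and 0.3, returns the plan (18, 3, 5).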
Table 2 reports the optimal sampling plans for setting lower (β, γ)-TLs based on the Weibull trimmed sample when (i) and and (ii) and
Table 2.
Optimal sampling plans for setting lower (β, γ)-TLs for the model based on when (i) and and (ii) and .
For instance, consider that and Assuming that the researcher desires that around 20% and 30% of the smallest and the largest observations be trimmed, respectively (i.e., and as it follows from (9) and (10) that The optimal sampling plan would be precisely i.e., one needs a sample of size but the smallest 16 and the largest 24 observations are disregarded or censored. The left and right trimming proportions are exactly and If it were required that the first two and last three data be discarded or censored (i.e., and the optimal sampling plan would be On the other hand, suppose that and In that case, m also coincides with If the researcher assumes that and then from (9), whereas is the optimal plan since I The minimum sample size plan would be provided that and
4. Expected-Coverage Tolerance Limits
Given the Weibull (s, r)-trimmed sample, a statistic is called a lower β-expectation tolerance limit (or lower β-ETL for simplicity) of the distribution if it satisfies
for all σ > 0. In this way, the probability that a future observation of X will surpass the limit is expected to be β. Obviously, an upper β-ETL can be defined analogously.
Provided that our purpose is determining β-ETLs based on the trimmed sample drawn from
Suppose that in which case is minimal sufficient for It is therefore rational to consider a lower β-ETL of the form where, from (11), the constant must satisfy
Since is the unique positive solution to the following equation in D
where B(·, ·) is the beta function. Note that the left-hand side is continuous and decreasing in D; therefore, the tolerance factor is the positive constant which satisfies the equation
Observe that when In general, as
it is clear that
The above lower and upper bounds on might serve as a starting point for iterative interpolation methods, such as regula falsi. In addition,
when is small.
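Since the defining function is continuous and decreasing, the bracketing bounds above can feed a standard regula falsi iteration directly. A generic sketch (names ours):

```python
def regula_falsi(f, lo, hi, tol=1e-12, max_iter=500):
    """False-position search for a root of f in [lo, hi]; f(lo) and f(hi)
    must have opposite signs (guaranteed when f is continuous and strictly
    decreasing across the bracket)."""
    flo, fhi = f(lo), f(hi)
    x = lo
    for _ in range(max_iter):
        x = hi - fhi * (hi - lo) / (fhi - flo)
        fx = f(x)
        if abs(fx) < tol:
            return x
        if flo * fx < 0.0:
            hi, fhi = x, fx   # root lies in [lo, x]
        else:
            lo, flo = x, fx   # root lies in [x, hi]
    return x
```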
If as T is minimal sufficient for it is evident that is an appropriate lower β-ETL. Since the tolerance factor is given by .
4.1. Unconditional and Conditional Tolerance Limits
In the more general case in which the statistic is minimal sufficient. Since where it is obvious that is an (unconditional) lower β-ETL when The lower and upper β-ETLs are then given by
Nonetheless, as in Section 3, these limits are based on an insufficient statistic. As mentioned previously, is minimal sufficient for and is pivotal for Thus, if one adopts the conditionality principle and assumes that it is then natural to seek a conditional lower β-ETL of the form where
After some calculations, it follows from (12) that
Manifestly, the conditional lower β-ETL given is also an unconditional lower β-ETL because
Table 3 compares unconditional and conditional lower β-expectation tolerance factors, and for selected values of s and n when and
Table 3.
Unconditional and conditional lower β-expectation tolerance factors, and for the model based on when and .
As before, the difference between and might be considerable when is near to 0 or 1. Nevertheless, and are quite similar when is large. For instance, , and when , and
4.2. Sample-Size Determination
Frequently, the researcher wishes to choose the minimum sample size to achieve a specified criterion for β-ETLs. In our circumstances, a classical criterion is to require the maximum variation of the content of the β-expectation tolerance interval around its mean value to be sufficiently small (say, less than with a determined minimum stability (say In other words, the coverage of the random interval must be contained in with a probability of at least , i.e.,
or, equivalently, for all In this subsection, if condition (13) is satisfied, the corresponding sampling plan will be called feasible.
Assuming that as and , where it is deduced that the sampling plan is feasible if and only if in which
Provided that the sampling plan will be feasible if and only if where
in view of that with and In particular, the plan would be feasible if and only if Finally, if it can be shown that is feasible if and only if
The determination of the optimal feasible sampling plan for setting the lower β-ETL assuming fixed numbers of trimmed observations (Case I) or fixed trimming proportions (Case II) will be tackled in the remainder of this subsection.
- Case I: Fixed numbers of left and right trimmed observations
In this situation, the researcher desires to find the optimal feasible plan , such that and where and are prespecified non-negative integers. Then, would be the optimal plan if with and Otherwise, if m is now the smallest integer value, such that it turns out that the optimal plan would be
Note that m will always exist because when k is sufficiently large. It is important to mention that if and only if
Moreover, if and as converges in law to a distribution and it follows that Consequently, if is a feasible plan, it is indispensable that Hence, is not feasible when
- Case II: Fixed left and right trimming proportions
Consider that it is now needed to obtain the minimum sample size feasible plan with and where and . Our goal in this case is to determine the smallest integer n such that if or such that otherwise.
Assume that is a feasible plan and also that m is the smallest integer satisfying Then, it turns out that when whereas if where is given in (9).
Given with , and a step-by-step procedure for determining the minimum sample size plan satisfying (13), where and would be similar to that presented in Section 3.2, except that is replaced by in step 9, and step 1 is now as follows:
- Step 1: If then set and go to step 10. Otherwise, find the smallest integer m, such that
Table 4 shows the optimal sampling plans for setting lower β-ETLs based on the Weibull trimmed sample for selected values of and when (i) and and (ii) and
Table 4.
Optimal sampling plans for setting lower β-ETLs for the model based on when (i) and and (ii) and .
In particular, if and the experimenter desires that at least 20% of the smallest and 30% of the largest observations be trimmed, the minimum sample size for setting the lower β-ETL would be whereas the smallest 10 and the largest 15 data would be disregarded or censored (i.e., and On the other hand, if the experimenter wishes to discard the first two and last three data, the required sample size is Obviously, the optimal sampling plan would be
5. Illustrative Examples
Three numerical examples are considered in this section to illustrate the results developed above.
5.1. Example 1
An experiment in which students were learning to measure strontium-90 concentrations in samples of milk was considered by Sarhan and Greenberg [41]. The test substance was supposed to contain 9.22 microcuries per liter. Ten measurements, each involving readings and calculations, were made. However, since the measurement error was known to be relatively larger at the extremes, especially the upper one, a decision was made to censor the two smallest and the three largest observations, leaving the following trimmed sample: Thus, , and which imply that and
Fernández [10] assumed that the above data arose from a Weibull model (1) with a shape value which yields a near normal density. In such a case, and Furthermore, in view of (3), the MLEs of the scale parameter and the mean concentration are given by and
Table 5 contains the unconditional and conditional lower and upper (β, γ)-TLs and β-ETLs for selected values of and when For instance, if and whereas and In particular, the experimenter might assert with confidence that at least of strontium-90 concentrations will exceed if the unconditional (conditional) approach is adopted. Moreover, under the unconditional (conditional) perspective, one can be confident that a future strontium-90 concentration will surpass . The corresponding unconditional and conditional upper (β, γ)-TLs and β-ETLs are derived to be and
Table 5.
Unconditional and conditional lower and upper (β, γ)-TLs and β-ETLs in Example 1.
Suppose the experimenter wishes to determine the optimal sampling plan for setting the lower (β, γ)-TL based on a Weibull trimmed sample under the premise that the left and right trimming proportions are nearly 0.2 and 0.3, respectively. According to Table 2, if is the needed sample size would be whereas and On the other hand, if the experimenter wants to ignore the smallest two and the largest three observations, the optimal sampling plan would be
In addition, consider now that the experimenter desires to find the optimal sampling plan for setting a lower β-ETL, such that the coverage of is contained in with a probability of at least 0.7. If it is also required that about 20% and 30% of the smallest and largest observations be trimmed, respectively, the optimal sampling plan is given by based on Table 4 with and Likewise, in the case it was demanded that and the smallest sample size would be
In order to explore the effect of the shape parameter on the tolerance limits, the unconditional and conditional lower and upper (β, γ)-TLs and β-ETLs for selected shape values around 3 are displayed in Table 6. In general, as expected, the influence of the shape parameter is quite appreciable, especially in the unconditional case.
Table 6.
Unconditional and conditional lower and upper (β, γ)-TLs and β-ETLs for selected shape values around 3 in Example 1.
5.2. Example 2
Meeker and Escobar [42] (pp. 151, 198) present the results of a failure-censored fatigue crack-initiation experiment in which 100 specimens of a type of titanium alloy were put on test. Only the nine smallest times to crack-initiation were recorded. In this way, and The observed times in units of 1000 cycles were 18, 32, 39, 53, 59, 68, 77, 78, and 93. Based on experience with fatigue tests on similar alloys, the Weibull model (1), with a given shape value, was assumed to adequately describe the above data. Hence, and whereas and are the MLEs of the scale parameter and the expected failure-time.
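Because the shape value adopted in the original analysis is not reproduced here, the following sketch simply illustrates the closed-form known-shape MLE on these crack-initiation data under a hypothetical shape of 2 (our assumption, not the paper's):

```python
# Recorded times to crack-initiation (1000s of cycles); the 91 unfailed
# specimens are Type II censored at the largest recorded time.
times = [18, 32, 39, 53, 59, 68, 77, 78, 93]
n, m = 100, len(times)
shape = 2.0  # hypothetical shape value, for illustration only

t_stat = sum(x ** shape for x in times) + (n - m) * max(times) ** shape
scale_mle = (t_stat / m) ** (1.0 / shape)  # about 302 under this assumed shape
```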
Table 7 shows the unconditional and conditional lower and upper (β, γ)-TLs and β-ETLs when for selected values of and For example, if the engineer wants to discard the smallest two data (i.e., and it turns out that and whereas takes the value In such a case, it follows that , and Hence, the reliability engineer could affirm with confidence that at least of the times to crack-initiation (in units of 1000 cycles) of specimens of that type of titanium alloy will be greater than when the unconditional (conditional) viewpoint is considered. Likewise, adopting the unconditional (conditional) perspective, the engineer may be sure that a future time to crack-initiation will surpass cycles.
Table 7.
Unconditional and conditional lower and upper (β, γ)-TLs and β-ETLs when in Example 2.
It is interesting to point out that the variability of when is very small. In addition, observe that is quite similar to the unconditional lower (β, γ)-TL for Analogous results are obtained for the β-ETLs when
5.3. Example 3
Lee and Wang [43] (p. 205) report that 21 patients with acute leukemia have the following remission times in months: 1, 1, 2, 2, 3, 4, 4, 5, 5, 6, 8, 8, 9, 10, 10, 12, 14, 16, 20, 24, and 34. The available sample is now complete, since s = r = 0.
In accordance with previous tests, the researcher assumes that the remission time follows the exponential distribution. A probability plot also indicates that the Weibull model (1) with a unit shape parameter fits the above data very well. In this situation, and Supposing that it follows that and
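Since this sample is complete and exponential (unit shape), the classical chi-square results give both the MLE and a guaranteed-coverage limit in closed form. The sketch below uses illustrative content and confidence levels of 0.90 and 0.95 (our choices; the levels used in Table 8 are not reproduced here) together with the Wilson–Hilferty quantile approximation:

```python
import math
from statistics import NormalDist

times = [1, 1, 2, 2, 3, 4, 4, 5, 5, 6, 8, 8, 9, 10, 10, 12, 14, 16, 20, 24, 34]
n = len(times)              # complete sample: no trimming on either side
t_stat = float(sum(times))  # for a unit shape, T is the total remission time
mean_mle = t_stat / n       # MLE of the mean remission time (about 9.43 months)

def chi2_quantile_wh(p, df):
    """Wilson-Hilferty approximation to the chi-square p-quantile."""
    z = NormalDist().inv_cdf(p)
    c = 2.0 / (9.0 * df)
    return df * (1.0 - c + z * math.sqrt(c)) ** 3

content, conf = 0.90, 0.95  # illustrative levels, not those of Table 8
lower_tl = -2.0 * t_stat * math.log(content) / chi2_quantile_wh(conf, 2 * n)
```

Under these illustrative levels, one would be 95% confident that at least 90% of remission times exceed roughly 0.72 months.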
Table 8 provides the unconditional and conditional lower and upper (β, γ)-TLs and β-ETLs when , and (i.e., For instance, if the two smallest and largest observations are missing, and the unconditional (conditional) perspective is adopted, the researcher might state with confidence that at least of patients with acute leukemia will have remission times greater than months. In the same manner, one might be confident that a future remission time will exceed months.
Table 8.
Unconditional and conditional lower and upper (β, γ)-TLs and β-ETLs for and in Example 3.
6. Concluding Remarks
Weibull tolerance limits with certain guaranteed or expected coverages are obtained in this paper when the available empirical information is provided by a trimmed sample. These bounds are valid even when some of the smallest and largest observations have been disregarded or censored. Single (right or left) and double failure-censoring are allowed. Unconditional and conditional tolerance bounds have been obtained and compared when s > 0. The difference between these limits might be large when the ancillary statistic A takes extreme percentiles. In such cases, it is preferable to use the suggested conditional tolerance limits. Optimal sampling plans for setting β-content and β-expectation tolerance limits have also been determined. Efficient step-by-step procedures for computing the corresponding test plans with smallest sample sizes have been proposed. These methods can be easily applied and require little computational effort. Several numerical examples have been studied for illustrative and comparative purposes. An extension of the frequency-based perspective presented in this work to the Bayesian approach is currently under investigation.
Funding
This research was partially supported by MCIN/AEI/10.13039/501100011033 under grant PID2019-110442GB-I00.
Data Availability Statement
Data are contained within the article.
Acknowledgments
The author thanks the Editor and Reviewers for providing valuable comments and suggestions.
Conflicts of Interest
The author declares no conflict of interest.
References
- Wilks, S.S. Determination of sample sizes for setting tolerance limits. Ann. Math. Stat. 1941, 12, 91–96.
- Guttman, I. Statistical Tolerance Regions: Classical and Bayesian; Charles W. Griffin and Co.: London, UK, 1970.
- Fernández, A.J. Computing tolerance limits for the lifetime of a k-out-of-n:F system based on prior information and censored data. Appl. Math. Model. 2014, 38, 548–561.
- Prescott, P. Selection of trimming proportions for robust adaptive trimmed means. J. Am. Stat. Assoc. 1978, 73, 133–140, Erratum in J. Am. Stat. Assoc. 1978, 73, 691.
- Huber, P.J. Robust Statistics; Wiley: New York, NY, USA, 1981.
- Healy, M.J.R. Algorithm AS 180: A linear estimator of standard deviation in symmetrically trimmed normal samples. Appl. Stat. 1982, 31, 174–175.
- Welsh, A.H. The trimmed mean in the linear model (with discussion). Ann. Stat. 1987, 15, 20–45.
- Wilcox, R.R. Simulation results on solutions to the multivariate Behrens-Fisher problem via trimmed means. Statistician 1995, 44, 213–225.
- Fernández, A.J. Bayesian estimation based on trimmed samples from Pareto populations. Comput. Stat. Data Anal. 2006, 51, 1119–1130.
- Fernández, A.J. Weibull inference using trimmed samples and prior information. Stat. Pap. 2009, 50, 119–136.
- Healy, M.J.R. A mean difference estimator of standard deviation in symmetrically censored samples. Biometrika 1978, 65, 643–646.
- Prescott, P. A mean difference estimator of standard deviation in asymmetrically censored normal samples. Biometrika 1979, 66, 684–686.
- Schneider, H. Simple and highly efficient estimators for censored normal samples. Biometrika 1984, 71, 412–414.
- Bhattacharyya, G.K. On asymptotics of maximum likelihood and related estimators based on Type II censored data. J. Am. Stat. Assoc. 1985, 80, 398–404.
- LaRiccia, V.N. Asymptotically chi-squared distributed tests of normality for Type II censored samples. J. Am. Stat. Assoc. 1986, 81, 1026–1031.
- Schneider, H.; Weissfeld, L. Inference based on Type II censored samples. Biometrics 1986, 42, 531–536.
- Fernández, A.J. Highest posterior density estimation from multiply censored Pareto data. Stat. Pap. 2008, 49, 333–341.
- Fernández, A.J. Smallest Pareto confidence regions and applications. Comput. Stat. Data Anal. 2013, 62, 11–25.
- Escobar, L.A.; Meeker, W.Q. Algorithm AS 292: Fisher information matrix for the extreme value, normal and logistic distributions and censored data. Appl. Stat. 1994, 43, 533–554.
- Upadhyay, S.K.; Shastri, V. Bayesian results for classical Pareto distribution via Gibbs sampler, with doubly-censored observations. IEEE Trans. Reliab. 1997, 46, 56–59.
- Ali Mousa, M.A.M. Bayesian prediction based on Pareto doubly censored data. Statistics 2003, 37, 65–72.
- Chen, J.W.; Li, K.H.; Lam, Y. Bayesian single and double variable sampling plans for the Weibull distribution with censoring. Eur. J. Oper. Res. 2007, 177, 1062–1073.
- Tsai, T.-R.; Lu, Y.-T.; Wu, S.-J. Reliability sampling plans for Weibull distribution with limited capacity of test facility. Comput. Ind. Eng. 2008, 55, 721–728.
- Aslam, M.; Jun, C.-H.; Fernández, A.J.; Ahmad, M.; Rasool, M. Repetitive group sampling plan based on truncated tests for Weibull models. Res. J. Appl. Sci. Eng. Technol. 2014, 7, 1917–1924.
- Fernández, A.J. Optimum attributes component test plans for k-out-of-n:F Weibull systems using prior information. Eur. J. Oper. Res. 2015, 240, 688–696.
- Roy, S. Bayesian accelerated life test plans for series systems with Weibull component lifetimes. Appl. Math. Model. 2018, 62, 383–403.
- Almongy, H.M.; Alshenawy, F.Y.; Almetwally, E.M.; Abdo, D.A. Applying transformer insulation using Weibull extended distribution based on progressive censoring scheme. Axioms 2021, 10, 100.
- Algarni, A. Group acceptance sampling plan based on new compounded three-parameter Weibull model. Axioms 2022, 11, 438.
- Fernández, A.J.; Bravo, J.I.; De Fuentes, Í. Computing maximum likelihood estimates from Type II doubly censored exponential data. Stat. Methods Appl. 2002, 11, 187–200.
- Lee, W.-C.; Wu, J.-W.; Hong, C.-W. Assessing the lifetime performance index of products with the exponential distribution under progressively type II right censored samples. J. Comput. Appl. Math. 2009, 231, 648–656.
- Chen, K.-S.; Yang, C.-M. Developing a performance index with a Poisson process and an exponential distribution for operations management and continuous improvement. J. Comput. Appl. Math. 2018, 343, 737–747.
- Yousef, M.M.; Hassan, A.S.; Alshanbari, H.M.; El-Bagoury, A.-A.H.; Almetwally, E.M. Bayesian and non-Bayesian analysis of exponentiated exponential stress–strength model based on generalized progressive hybrid censoring process. Axioms 2022, 11, 455.
- Aminzadeh, M.S. β-expectation tolerance intervals and sample-size determination for the Rayleigh distribution. IEEE Trans. Reliab. 1991, 40, 287–289.
- Aminzadeh, M.S. Approximate 1-sided tolerance limits for future observations for the Rayleigh distribution, using regression. IEEE Trans. Reliab. 1993, 42, 625–630.
- Raqab, M.Z.; Madi, M.T. Bayesian prediction of the total time on test using doubly censored Rayleigh data. J. Stat. Comput. Simul. 2002, 72, 781–789.
- Fernández, A.J. Bayesian estimation and prediction based on Rayleigh sample quantiles. Qual. Quant. 2010, 44, 1239–1248.
- Lee, W.-C.; Wu, J.-W.; Hong, M.-L.; Lin, L.-S.; Chan, R.-L. Assessing the lifetime performance index of Rayleigh products based on the Bayesian estimation under progressive type II right censored samples. J. Comput. Appl. Math. 2011, 235, 1676–1688.
- Fernández, A.J. Two-sided tolerance intervals in the exponential case: Corrigenda and generalizations. Comput. Stat. Data Anal. 2010, 54, 151–162.
- Fernández, A.J. Tolerance limits for k-out-of-n systems with exponentially distributed component lifetimes. IEEE Trans. Reliab. 2010, 59, 331–337.
- Thoman, D.R.; Bain, L.J.; Antle, C.E. Maximum likelihood estimation, exact confidence intervals for reliability and tolerance limits in the Weibull distribution. Technometrics 1970, 12, 363–373.
- Sarhan, A.E.; Greenberg, B.G. Contributions to Order Statistics; Wiley: New York, NY, USA, 1962.
- Meeker, W.Q.; Escobar, L.A. Statistical Methods for Reliability Data; Wiley: New York, NY, USA, 1998.
- Lee, E.T.; Wang, J.W. Statistical Methods for Survival Data Analysis, 3rd ed.; Wiley: Hoboken, NJ, USA, 2003.
- Davis, D.J. An analysis of some failure data. J. Am. Stat. Assoc. 1952, 47, 113–150.
- Soland, R.M. Bayesian analysis of the Weibull process with unknown scale parameter and its application to acceptance sampling. IEEE Trans. Reliab. 1968, 17, 84–90.
- Tsokos, C.P.; Rao, A.N.V. Bayesian analysis of the Weibull failure model under stochastic variation of the shape and scale parameters. Metron 1976, 34, 201–217.
- Lawless, J.F. Statistical Models and Methods for Lifetime Data; Wiley: New York, NY, USA, 1982.
- Nordman, D.J.; Meeker, W.Q. Weibull prediction intervals for a future number of failures. Technometrics 2002, 44, 15–23.
- Klinger, D.J.; Nakada, Y.; Menéndez, M.A. AT&T Reliability Manual; Van Nostrand Reinhold: New York, NY, USA, 1990.
- Abernethy, R.B. The New Weibull Handbook; Robert B. Abernethy: North Palm Beach, FL, USA, 1998.
- Danziger, L. Planning censored life tests for estimation of the hazard rate of a Weibull distribution with prescribed precision. Technometrics 1970, 12, 408–412.
- Tsokos, C.P.; Canavos, G.C. Bayesian concepts for the estimation of reliability in the Weibull life testing model. Int. Stat. Rev. 1972, 40, 153–160.
- Moore, A.H.; Bilikam, J.E. Bayesian estimation of parameters of life distributions and reliability from type II censored samples. IEEE Trans. Reliab. 1978, 27, 64–67.
- Kwon, Y.I. A Bayesian life test sampling plan for products with Weibull lifetime distribution sold under warranty. Reliab. Eng. Syst. Saf. 1996, 53, 61–66.
- Zhang, Y.; Meeker, W.Q. Bayesian life test planning for the Weibull distribution with given shape parameter. Metrika 2005, 61, 237–249.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).