1. Introduction
The vast majority of contemporary products are built to function well over a long period of time. Consequently, reliability tests conducted under typical operating conditions require considerable time and money. Under normal conditions, lifetime experiments with a fixed testing time yield exceptionally few failures. Because of the expense and time required for life testing experiments, the experimenter may be compelled to end the experiment before observing enough failures. As a result, censoring methods are often utilised to reduce the time and money spent on testing. The classic Type-I and Type-II censoring schemes (CSs) are the ones most commonly used in laboratory examinations. However, the progressive Type-II censoring scheme is the most frequently used CS.
The primary distinction between Type-I and Type-II CSs is that a Type-I CS is determined by the time at which the experiment is terminated, while a Type-II CS is determined by the number of observed failures. In neither scheme can test units be removed before the conclusion of the test. The Type-II censoring procedure can be extended to the progressive Type-II (PT-II) censoring procedure, which allows the experimenter to remove test units at various points during the test. For additional reading about the PT-II CS, we refer to [
1,
2].
Several researchers have developed a kind of life test in which the experimenter divides the test units into several groups, each of which functions as a set of test units. All groups are then tested simultaneously until the first failure is observed in each group. This kind of censoring strategy is known as “first-failure censoring”. It was first developed by [
3]. Different studies have been conducted under the concept of first-failure censoring, such as [
4,
5].
Wu and Kus [
6] developed the PFFCS as a hybrid of the first-failure censoring scheme and the progressive Type-II censoring scheme. Recently, most articles have focused on the study and analysis of the PFFCS and system reliability; see [
7,
8,
9,
10,
11,
12,
13,
14].
Almongy et al. [
15] proposed the extended Rayleigh distribution (ERD) with three parameters to model and analyze the mortality rate of COVID-19. It combines the extended odd Weibull family with the Rayleigh distribution. The proposed distribution has merits that make it well suited to modelling skewed data. For a lifetime random variable
t following the ERD, the probability density function (PDF), survival function (SF), and hazard rate function (HRF) are given as follows:
and
where
and
are positive shape parameters, while
is a positive scale parameter.
Figure 1 shows the behaviour of the PDF, SF, and HRF of the ERD for different values of the parameters.
The motivation for this article is to make statistical inference on the extended odd Weibull Rayleigh distribution, or extended Rayleigh distribution (ERD), as a suitable model for lifetime data with increasing, decreasing, and bathtub-shaped failure rates. The main contribution of this article is to develop parameter estimation techniques based on progressive first-failure censored data in order to achieve the following:
Obtaining the maximum likelihood estimates of the parameters and the reliability functions.
Obtaining the Bayesian estimates of the parameters and the reliability functions under four different loss functions, using both informative gamma priors and non-informative priors.
The Markov chain Monte Carlo (MCMC) procedure is used to obtain close approximations of the Bayesian estimates.
Based on the MLEs, the approximate confidence intervals for both the parameters and the reliability functions are constructed.
Based on the Bayesian estimates, the credible confidence intervals for both the parameters and the reliability functions are also constructed.
Comprehensive simulation research is carried out to determine whether the proposed estimation methods are comparable to one another.
Lastly, in terms of real-world applications, the ERD is shown to be effective for fitting real-world lifetime data.
The primary goal of this paper is to concentrate on the difficulty of estimating the parameters of the PFFC life testing model under the ERD failure time distribution. The remainder of this article is structured as follows.
Section 2 describes the paradigm and formulation of the PFFC system. The ML estimates and asymptotic confidence intervals for the parameters, as well as for the reliability and hazard functions of the ERD under the PFFCS, are established in
Section 3.
Section 4 discusses the BEs and their MCMC approximations with the corresponding credible intervals based on different loss functions: the squared error (SEL), linear–exponential (LINEX), generalized entropy (GEL), and Al-Bayyatti (ALB) loss functions.
Section 5 provides an actual numerical example using a real data set. In
Section 6, we present a simulation study to evaluate the performance and the efficiency of the estimation techniques. Finally,
Section 7 brings the paper to a conclusion.
2. Model Formulation
In life testing experiments and reliability studies, experiments may be run on groups of items, with only the first failure in each group being observed. This concept of the PFFCS was first considered by Wu and Kus [
6]. It can be described as follows: assume
n independent groups, each one with
k items, subjected to a lifetime test. At the occurrence of the first failure
,
of the surviving groups are removed randomly from the test together with the remaining
items of the group that includes the first-failure item. At the occurrence of the second failure
,
of the surviving groups are removed randomly from the test together with the remaining
items of the group that contains the second failure item, and so on until the mth failure occurs
, with
. At this time,
of the surviving groups are removed randomly from the test together with the remaining
items of the group that includes the mth failure item. The resulting progressive first-failure censored sample is
with censoring scheme
; it can be denoted by PFFCS. Suppose that
and
are the PDF and CDF, respectively, of the underlying lifetime distribution; then, the joint PDF of
can be formed as follows
where
.
It is important to note that the PFF model is a generalized model that includes several sub-models as special cases; see
Table 1.
3. Maximum Likelihood Estimation
In this section, the MLEs for the parameters
,
, and
are carried out. Using the invariant property, the MLEs for the reliability functions
and
are also obtained. ML is a straightforward method for estimating unknown parameters that has been used by several authors in the literature; see, for example, [
16,
17,
18].
Using (
5), the log-likelihood function is obtained as follows
Applying the partial derivative to (
6) with respect to
,
, and
and equating it to zero, we get:
and
Now to get the estimates
,
, and
, (
7), (
8) and (
9) are solved numerically using the Newton–Raphson method.
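As a numerical illustration of this step (a sketch, not the paper's implementation), recall that the PFFC log-likelihood has the general form log L = const + sum_i [log f(x_i) + (k(R_i + 1) - 1) log S(x_i)]. Since the ERD density is not reproduced above, the sketch below uses a one-parameter Rayleigh model as a stand-in, with hypothetical data and censoring scheme, and a derivative-free optimizer in place of Newton–Raphson:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical PFFC sample: m observed first-failure times x, group size k,
# and progressive censoring scheme R (one removal count per failure).
x = np.array([0.35, 0.61, 0.84, 1.02, 1.30, 1.55, 1.91])
R = np.array([2, 0, 1, 0, 2, 0, 1])
k = 2

# Stand-in lifetime model: Rayleigh(sigma) log-density and log-survival.
def log_pdf(t, sigma):
    return np.log(t) - 2.0 * np.log(sigma) - t**2 / (2.0 * sigma**2)

def log_sf(t, sigma):
    return -t**2 / (2.0 * sigma**2)

# Negative PFFC log-likelihood (constant term omitted):
# -sum_i [ log f(x_i) + (k(R_i + 1) - 1) log S(x_i) ]
def neg_loglik(theta):
    sigma = theta[0]
    if sigma <= 0:
        return np.inf
    return -np.sum(log_pdf(x, sigma) + (k * (R + 1) - 1) * log_sf(x, sigma))

fit = minimize(neg_loglik, x0=[1.0], method="Nelder-Mead")
sigma_hat = fit.x[0]
print(sigma_hat)
```

For this stand-in model the MLE has the closed form sigma^2 = sum_i k(R_i + 1) x_i^2 / (2m), which the numerical optimizer reproduces; for the three-parameter ERD the same structure applies with the ERD's log-density and log-survival functions.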
3.1. Asymptotic Confidence Intervals for the Parameters
The asymptotic CIs are built in this subsection based on the asymptotic Fisher information matrix for the parameters
,
, and
; it is given as follows:
Now, the
two-sided asymptotic CIs for the parameters
,
, and
are obtained as follows:
and
where
,
, and
are the estimated values for the variances of
,
, and
, which are the diagonal elements of the inverse matrix of (
10), and
is the upper
percentile of a standard normal distribution.
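The construction above can be sketched numerically: approximate the observed Fisher information by a finite-difference Hessian of the negative log-likelihood at the MLE, invert it to estimate the variances, and form the two-sided Wald intervals. A simple normal model with hypothetical data stands in for the ERD likelihood here:

```python
import numpy as np

# Stand-in setting: i.i.d. Normal(mu, sigma) data, so the mechanics of the
# observed-information Wald interval can be shown end to end (hypothetical).
rng = np.random.default_rng(7)
data = rng.normal(loc=2.0, scale=0.5, size=200)

def neg_loglik(theta):
    mu, sigma = theta
    if sigma <= 0:
        return np.inf
    return 0.5 * np.sum((data - mu) ** 2) / sigma**2 + len(data) * np.log(sigma)

# MLEs (closed form for the stand-in model).
theta_hat = np.array([data.mean(), data.std()])

# Observed information = Hessian of the negative log-likelihood at the MLE,
# approximated by central finite differences.
def hessian(f, t, h=1e-5):
    p = len(t)
    H = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            e_i = np.eye(p)[i] * h
            e_j = np.eye(p)[j] * h
            H[i, j] = (f(t + e_i + e_j) - f(t + e_i - e_j)
                       - f(t - e_i + e_j) + f(t - e_i - e_j)) / (4 * h * h)
    return H

cov = np.linalg.inv(hessian(neg_loglik, theta_hat))   # inverse information
se = np.sqrt(np.diag(cov))                            # standard errors
z = 1.959963984540054                                 # 97.5% normal quantile
ci = np.column_stack([theta_hat - z * se, theta_hat + z * se])
print(ci)
```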
3.2. Approximate Confidence Intervals for and
In this subsection, the delta method suggested by [
19] is used to construct the approximate confidence intervals for both the survival and hazard functions. The delta method is a general technique for establishing confidence intervals for functions of the MLEs whose variances are too complicated to calculate analytically. It first approximates the function linearly and then computes the variance of the simpler linear function, which is suitable for large-sample inference; see [
19,
20].
where
Now the approximated estimates for
and
are derived, respectively, as
where
is the transpose of
,
These findings provide approximate confidence intervals for
and
as follows:
and
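A minimal sketch of the delta method for the survival function, assuming a stand-in Rayleigh(sigma) model and hypothetical values for the MLE and its variance: the variance of S(t) is approximated by the squared derivative of S with respect to the parameter times the parameter's estimated variance.

```python
import numpy as np

# Hypothetical MLE and estimated variance, assumed to come from the ML step.
sigma_hat, var_sigma = 1.60, 0.04
t0 = 1.0                       # time point at which S is estimated

def S(t, sigma):               # stand-in Rayleigh survival function
    return np.exp(-t**2 / (2.0 * sigma**2))

# Numerical derivative dS/dsigma at the MLE (central difference).
h = 1e-6
dS = (S(t0, sigma_hat + h) - S(t0, sigma_hat - h)) / (2 * h)

S_hat = S(t0, sigma_hat)
var_S = dS**2 * var_sigma      # delta-method variance: (grad)' V (grad)
z = 1.96
lo, hi = S_hat - z * np.sqrt(var_S), S_hat + z * np.sqrt(var_S)
print(S_hat, (lo, hi))
```

For the three-parameter ERD, dS becomes a gradient vector and var_S the quadratic form with the inverse information matrix; the same construction applies to the hazard function.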
4. Bayesian Estimation
In this section, the Bayesian estimators of the parameters
and
using the SEL, LINEX, GEL, and ALB loss functions are obtained. Readers can find more about these loss functions in [
21]. To derive the Bayesian estimates, we suppose that the parameters
,
, and
are distributed independently with gamma priors having scale parameters
and shape parameters
,
, respectively. The jointly prior density of
,
, and
can be written as follows:
The posterior density function of
, and
is given from (
5) and (
20) as follows.
where
I is the constant of normalization, which is given as follows:
Now, the BE for any function
under the SEL function is the posterior mean of
g and is given by:
The BE based on the LINEX loss function is given as follows:
while the BE under GEL is given by:
Finally, the BE under the Al-Bayyatti loss function (ALB), which was introduced by Al-Bayyati [
22], is given by:
Unfortunately, based on (
23), (
24), (
25), and (
26), it is not easy to obtain the BEs in an explicit form, so approximation methods such as MCMC are recommended. In the following subsection, MCMC is described in order to approximate the BEs.
4.1. Markov Chain Monte Carlo Technique
Now, using the posterior distribution in (
21), the conditional posterior distributions
,
, and
of the parameters can be derived and written, respectively, as follows:
and
Because the conditional distributions in (
27), (
28), and (
29) do not correspond to well-known distributions, the Metropolis–Hastings sampler developed by Metropolis et al. [
23] can be used to generate samples of
within the MCMC Algorithm 1.
Algorithm 1 MCMC Technique.
1. Start with , which are the start values for the vector .
2. Set .
3. Generate a proposal , from N, where , , and .
4. Determine the acceptance ratio , .
5. Generate a uniform value, say u; if , set ; else, set .
6. Put .
7. Repeat Steps 3–6 N times to get .
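A minimal sketch of the algorithm above for a single parameter, assuming a normal random-walk proposal; a Gamma(3, 2) kernel stands in for the ERD conditional posteriors, which are what would be targeted in practice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in log-posterior: Gamma(3, 2) up to an additive constant.
def log_post(theta):
    return 2.0 * np.log(theta) - 2.0 * theta if theta > 0 else -np.inf

N, B, step = 12000, 2000, 0.8          # chain length, burn-in, proposal scale
chain = np.empty(N)
chain[0] = 1.0                         # Step 1: starting value
for j in range(1, N):
    prop = rng.normal(chain[j - 1], step)            # Step 3: proposal
    log_r = log_post(prop) - log_post(chain[j - 1])  # Step 4: acceptance ratio
    if np.log(rng.uniform()) < log_r:                # Step 5: accept/reject
        chain[j] = prop
    else:
        chain[j] = chain[j - 1]

draws = chain[B:]                      # drop the burn-in period
print(draws.mean())
```

Acceptance is checked on the log scale for numerical stability; the burn-in draws are discarded before computing posterior summaries.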
Assume
is a function of parameters; then, the MCMC approximation for the BEs of
g depend on the MCMC generated values
, where
(
B is the burn-in period), are carried out under SEL, LINEX, GEL, and ALB loss functions, respectively, as follows:
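The four MCMC approximations can be computed directly from the post-burn-in draws: the SEL estimate is the posterior mean; the LINEX estimate is -(1/h) times the log of the mean of exp(-h g); the GEL estimate is the -1/q power of the mean of g^(-q); and the ALB estimate is the ratio of the (c+1)-th to the c-th posterior moments. The sketch below assumes hypothetical Gamma-distributed draws g and illustrative values for the hyperparameters h, q, and c:

```python
import numpy as np

# Hypothetical post-burn-in MCMC draws of a positive quantity g.
rng = np.random.default_rng(1)
g = rng.gamma(shape=4.0, scale=0.5, size=10000)

h, q, c = 0.5, 1.0, 1.0        # loss-function hyperparameters (assumed values)

est_sel = g.mean()                                   # squared-error loss
est_linex = -np.log(np.mean(np.exp(-h * g))) / h     # LINEX loss
est_gel = np.mean(g ** (-q)) ** (-1.0 / q)           # general entropy loss
est_alb = np.mean(g ** (c + 1)) / np.mean(g ** c)    # Al-Bayyatti loss

print(est_sel, est_linex, est_gel, est_alb)
```

Note how the asymmetric losses shift the point estimate relative to the posterior mean: with h > 0 and q > 0 the LINEX and GEL estimates fall below the SEL estimate, while the ALB estimate with c > 0 lies above it.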
4.2. Bayesian Credible Intervals
The
Bayesian credible CI for a parameter
, where
is
or
, is extracted from (
34),
where
L and
U are the lower and upper bounds, respectively, of the credible CI.
Since the integral in (
34) is analytically intractable, the MCMC approximation for the credible intervals is obtained using the
generated values of the parameters. By sorting the generated values in ascending order as follows,
,
and
, then the
are given as follows:
and where and .
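The percentile construction can be sketched with hypothetical draws: sort the post-burn-in values and take the entries at positions floor(M alpha/2) and floor(M(1 - alpha/2)) of the sorted sample.

```python
import numpy as np

# 95% credible interval from sorted post-burn-in MCMC draws (hypothetical
# Gamma draws stand in for the generated parameter values).
rng = np.random.default_rng(2)
draws = np.sort(rng.gamma(shape=4.0, scale=0.5, size=8000))

alpha = 0.05
M = len(draws)
lower = draws[int(np.floor(M * alpha / 2))]
upper = draws[int(np.floor(M * (1 - alpha / 2))) - 1]
print(lower, upper)
```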
5. Real Data Application
For the purpose of illustrating the proposed methods, a real data set is analyzed under progressive first-failure censoring. The data come from a clinical trial and represent the relief times (in hours) of 50 arthritic patients. They were introduced by Wingo [
24]. The data are randomly decomposed into 25 groups, each with
items, and the first failure is observed; then, the resulting complete failures of the 25 groups are given in
Table 2.
To verify that the ERD is a good fit for the data, the Kolmogorov–Smirnov goodness-of-fit test is performed on the complete data; the resulting
p-value is 0.887 with MLEs
, and
.
Figure 2a shows the difference between the empirical distribution of data and the ERD distribution function, while
Figure 2b shows the quantile plot for the data (Q-Q plot).
Now, for the purpose of illustrating the proposed methods of estimation, three random PFFC samples are generated from the complete data using
observations with their corresponding censoring schemes, as shown in
Table 3. The MLEs and BEs for informative and non-informative priors of the parameters, survival function
, and the hazard function
for the real data are presented in
Table 4.
Table 5 presents also the Bayesian estimates for the parameters based on other values for the hyperparameters
related to the loss functions. The bounds of the asymptotic and credible intervals of the parameters and reliability functions
S and
H are presented in
Table 6.
Figure 3 shows the MCMC iteration and the marginal posterior distributions of
, and
for sample
. It is clear that the marginal posterior distributions are approximately normal.
Figure 4,
Figure 5 and
Figure 6 show the differences between the estimates of MLEs and BEs for different loss functions with different hyperparameters for
,
, and
, respectively.
6. Simulation Study
In this section, the efficiency and performance of the proposed estimation techniques are compared via a Monte Carlo simulation study. Comparisons are carried out in terms of the average estimates (AEs) and the mean square errors (MSEs). Furthermore, the average widths (AWs) and coverage probabilities (CPs) of the proposed CIs are compared. The simulation study starts by choosing the true values of the parameters, which are
,
, and
. We utilize the algorithm proposed by Balakrishnan and Aggarwala [
1] to produce PFFC samples from the ERD based on the fact that the PFFC samples with distribution function
can be considered as a progressive Type-II censored sample obtained from a distribution function
. Since there are
n groups and
k items in each group, we assume that the total number of items used in a life test is equal to
. Only
m observations are obtained from the test using a PFFCS strategy. A total of 1000 progressive Type-II censored samples are produced, with
using three different schemes of censoring:
Scheme 1: and if .
Scheme 2: if ; otherwise, .
Scheme 3: if ; .
We assume that informative gamma priors are used with the following values of hyperparameters: , , and . The hyperparameters for the LINEX, GEL, and ALB loss functions are chosen to be , respectively. The MCMC algorithm is repeated 10,000 times with a burn-in period of 2000.
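The generation step described above can be sketched as follows, assuming a Rayleigh quantile as a stand-in for the ERD quantile and a hypothetical censoring scheme. A progressive Type-II uniform sample is produced first (Balakrishnan–Aggarwala algorithm) and then transformed through F^{-1}(1 - (1 - u)^{1/k}), using the fact that a PFFC sample from F is a progressive Type-II sample from 1 - (1 - F)^k:

```python
import numpy as np

rng = np.random.default_rng(3)

def progressive_uniform(R):
    """Progressive Type-II censored U(0,1) sample (Balakrishnan-Aggarwala)."""
    m = len(R)
    W = rng.uniform(size=m)
    # V_i = W_i^{1 / (i + R_m + ... + R_{m-i+1})}
    exponents = np.arange(1, m + 1) + np.cumsum(R[::-1])
    V = W ** (1.0 / exponents)
    # U_i = 1 - V_m V_{m-1} ... V_{m-i+1}, giving U_1 <= ... <= U_m
    return 1.0 - np.cumprod(V[::-1])

def rayleigh_quantile(p, sigma=1.0):
    """Stand-in quantile; in the paper this would be the ERD quantile."""
    return sigma * np.sqrt(-2.0 * np.log1p(-p))

def pffc_sample(R, k, sigma=1.0):
    U = progressive_uniform(np.asarray(R))
    # PFFC sample from F == progressive Type-II sample from 1 - (1 - F)^k
    return rayleigh_quantile(1.0 - (1.0 - U) ** (1.0 / k), sigma)

x = pffc_sample(R=[2, 0, 1, 0, 2], k=2)   # hypothetical scheme, m = 5
print(x)
```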
Table 7,
Table 8 and
Table 9 show the AEs and MSEs for the parameter estimates
and
, respectively, with various values of
n and different CSs.
Table 10 and
Table 11 show the AEs and MSEs for the different estimates of the reliability functions
and
, respectively. The average widths (AWs) and coverage probabilities (CPs) for the asymptotic CIs, informative credible CIs, and non-informative credible CIs of the parameters are presented in
Table 12. The results related to the AWs and CPs for
and
are obtained in
Table 13.
7. Conclusions
Censoring is a common occurrence in life testing and reliability research. Balasooriya [
3] stated that when a product’s lifetime is fairly long and test facilities are scarce but test material is relatively inexpensive,
units can be tested by testing
n sets, each of which contains
k units. A censoring technique such as this is known as a first-failure censoring scheme. Wu and Kus [
6] created a novel life-test strategy termed a PFFCS plan by combining first-failure censoring with progressive censoring. In this article, we looked at the maximum likelihood and Bayes estimates for some survival time parameters, the reliability function, and the hazard function for the ERD using a progressive first-failure censored scheme. In addition, we established asymptotic and credible confidence intervals for the ERD parameters as well as the reliability function and the hazard function. A simulation study and a real-world application were carried out to analyse and compare the performance of the proposed approaches for various sample sizes and censoring schemes. Based on the results obtained from the simulation, we notice the following:
With increasing sample sizes, the accuracy of the various point and interval estimations improves.
In most cases, the MSEs for all estimates based on PFFCS with k = 3 are similar to those for PFFCS with k = 2.
According to the MSE, BEs with informative priors are more accurate than MLEs.
BEs with non-informative priors perform worse than those with informative priors.
MLEs outperform BEs with non-informative priors in certain instances, underscoring the importance of selecting the prior distribution of target parameters.
In most cases, as h tends to zero, the MSEs for the BEs obtained via the MCMC method under the LINEX loss function are the lowest among all estimates.
The Censoring Schemes 1 and 3, in which the censored units are permitted at the beginning and the end of the experiment, respectively, may raise the precision of the estimates.
As the sample size increases, the AWs of the different CIs fall.
Credible CIs based on informative priors have the smallest average widths and the highest coverage probabilities compared with the asymptotic CIs and the non-informative credible CIs.
The PFFCS with produces the best results for the AW and CP of the various CIs.
Finally, we may conclude that statistical inference for reliability studies is feasible and effective when using the progressive first-failure censored method.