Article

Comparison of Estimation Methods for Reliability Function for Family of Inverse Exponentiated Distributions under New Loss Function

Rani Kumari, Yogesh Mani Tripathi, Rajesh Kumar Sinha and Liang Wang
1 Department of Mathematics, National Institute of Technology Patna, Patna 800005, India
2 Department of Mathematics, Indian Institute of Technology Patna, Bihta 801106, India
3 School of Mathematics, Yunnan Normal University, Kunming 650500, China
* Author to whom correspondence should be addressed.
Axioms 2023, 12(12), 1096; https://doi.org/10.3390/axioms12121096
Submission received: 23 October 2023 / Revised: 22 November 2023 / Accepted: 25 November 2023 / Published: 29 November 2023

Abstract

In this paper, different estimation methods are discussed for a general family of inverse exponentiated distributions. Under the classical perspective, maximum likelihood and uniformly minimum variance unbiased estimators are proposed for the model parameters. Based on informative and non-informative priors, various Bayes estimators of the shape parameter and the reliability function are derived under different losses, including the general entropy, squared-log error, and weighted squared-error loss functions, as well as another new loss function. The behavior of the proposed estimators is evaluated through extensive simulation studies. Finally, two real-life datasets are analyzed from an illustration perspective.

1. Introduction

In practical reliability and engineering fields, the life characteristics of units are usually described by lifetime distributions, and numerous models have been proposed in practice to fit different data. The most commonly used models include the exponential, Weibull, gamma, and log-normal, among others. Recently, a general family of inverted exponentiated exponential distributions (IEED) has been proposed for data analysis and has attracted wide attention. Let X be a random variable from the IEED; then the cumulative distribution function (CDF) and the probability density function (PDF) of X are, respectively, given by
$$F(x;\alpha,\lambda)=1-\left[1-e^{-\lambda Q(1/x)}\right]^{\alpha};\qquad x>0,\ \alpha>0,\ \lambda>0,$$
and
$$f(x;\alpha,\lambda)=\frac{\alpha\lambda\,Q'(1/x)}{x^{2}}\,e^{-\lambda Q(1/x)}\left[1-e^{-\lambda Q(1/x)}\right]^{\alpha-1};\qquad x>0,\ \alpha>0,\ \lambda>0.$$
The reliability function (RF) and hazard rate function (HRF) of X at time t can be written in consequence as
$$R(t;\lambda,\alpha)=\left[1-e^{-\lambda Q(1/t)}\right]^{\alpha}\quad\text{and}\quad h(t;\lambda,\alpha)=\frac{\alpha\lambda\,Q'(1/t)}{t^{2}}\,e^{-\lambda Q(1/t)}\left[1-e^{-\lambda Q(1/t)}\right]^{-1}.$$
In Figure 1, Figure 2 and Figure 3, we display the different shapes of the hazard rate functions of the inverted exponentiated exponential (IEE), inverted exponentiated Rayleigh (IER), and inverted exponentiated Pareto (IEP) distributions, respectively, corresponding to Q(1/x) = 1/x, 1/x², and log(1 + 1/x). From the figures, we observe that the hazard rate functions of the IEE and IER distributions exhibit non-monotonic behavior for each set of parameter values. Further, the hazard function of the IEP distribution decreases rapidly with time when both shape parameters take values less than one, and its behavior can also be non-monotone. This implies that the family of IE distributions may offer a suitable fit for datasets displaying decreasing or non-monotonic hazard behavior. However, it is important to note that this distribution family is unsuitable for modeling datasets characterized by increasing, constant, or bathtub-shaped hazard rate behavior.
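To make the shapes discussed above easy to reproduce, the following R sketch (our illustration, not part of the original analysis) evaluates the hazard rate function of the three special members for one arbitrary choice of parameters; the function and object names are placeholders.

```r
# Illustrative sketch (not from the paper): hazard rate
# h(t) = alpha*lambda*Q'(1/t)/t^2 * exp(-lambda*Q(1/t)) / (1 - exp(-lambda*Q(1/t)))
hazard_ie <- function(t, alpha, lambda, Q, Qp) {
  e <- exp(-lambda * Q(1 / t))
  alpha * lambda * Qp(1 / t) / t^2 * e / (1 - e)
}
t <- seq(0.05, 5, by = 0.01)
# IEE: Q(u) = u;  IER: Q(u) = u^2;  IEP: Q(u) = log(1 + u)
h_iee <- hazard_ie(t, 2, 1.5, function(u) u,          function(u) rep(1, length(u)))
h_ier <- hazard_ie(t, 2, 1.5, function(u) u^2,        function(u) 2 * u)
h_iep <- hazard_ie(t, 2, 1.5, function(u) log(1 + u), function(u) 1 / (1 + u))
matplot(t, cbind(h_iee, h_ier, h_iep), type = "l", lty = 1, xlab = "t", ylab = "h(t)",
        main = "Hazard rates of IEE, IER and IEP (alpha = 2, lambda = 1.5)")
legend("topright", c("IEE", "IER", "IEP"), col = 1:3, lty = 1)
```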
Recently, Ghitany et al. [1] proposed the family of inverted exponentiated distributions, reported several characteristics of this family, and observed that it is capable of modeling many real-life phenomena. Different inferences have since been obtained in the literature for this family; interested readers may refer to the works of Kizilaslan [2] and Fisher [3]. In a related study, Kumari et al. [4] examined a single-component stress–strength model for this family of distributions. For similar work on stress and strength variables, one may also refer to the contributions of Temraz [5], de la Cruz et al. [6], Alsadat et al. [7], and EL-Sagheer et al. [8]. Jamal et al. [9] studied the type II general inverse exponential family of distributions. Dutta et al. [10] obtained useful estimation results for a general family of inverted exponentiated distributions under unified hybrid censoring with partially observed competing risk data. Hashem et al. [11] estimated the reliability of the IER distribution by using the Bayesian method under generalized progressive hybrid censored data, among others. It is noted that some conventional lifetime models belong to this general family. For example, by considering Q(1/x) = 1/x, 1/x², and log(1 + 1/x), the inverted exponentiated exponential, the inverted exponentiated Rayleigh, and the inverted exponentiated Pareto distributions are obtained as special cases, as mentioned above in the distributional plots. Interested readers may further refer to Maurya et al. [12], Gao et al. [13], and Wang et al. [14] for further discussion on the applicability of this family of densities in real-life situations.
In statistical inference, as an alternative to classical likelihood-based estimation, Bayesian inference is gaining popularity among researchers owing to its capacity to incorporate prior information into analyses, which makes it very valuable in reliability, lifetime studies, and other associated fields, where one of the major challenges is the limited availability of data. When proper prior information is adopted in the inferential approach, the Bayesian procedure may yield improved estimates of the parametric functions of interest. For instance, Sinha [15,16,17] presented detailed studies on this method by deriving inferences for the RF under normal, inverse Gaussian, and Weibull distributions, respectively. Lye et al. [18] and Lin et al. [19] studied Bayesian estimation under complete and masked data by considering different approximation procedures. Pensky and Singh [20] studied the empirical Bayes estimation of reliability for an exponential family. In the Bayesian framework, among others, Dey [21,22] obtained useful parametric estimates for the Rayleigh family of distributions. Amirzadi et al. [23] discussed Bayes estimation based on informative and non-informative priors by considering various loss functions, such as the general entropy loss function (GELF), squared-log error loss function (SLELF), and weighted squared-error loss function (WSELF). In addition, a new loss function (NLF) was introduced in Amirzadi et al. [23] to evaluate reliability estimates. They studied several structural properties of the inverse generalized Weibull (IGW) distribution and showed that the corresponding hazard rate is non-monotonic. However, it is noteworthy that the family of IE distributions, notably the IEP distribution, is capable of fitting data indicating decreasing hazard rate behavior as well. Due to the potential applications of the IEED, this paper considers the problem of estimating the parameters as well as the reliability function of this family of distributions under classical and Bayesian perspectives, and the behavior of the different estimators is compared through numerical studies and real-life examples. For concision, the notations used in the paper are listed in Table 1.
This article is organized as follows. Section 2 deals with the maximum likelihood and uniformly minimum variance unbiased (UMVU) estimators. In Section 3, the Bayes estimators are derived under different loss functions, including a relatively new loss function. Section 4 presents Monte Carlo simulation experiments to determine the efficiency of the proposed estimation procedures. In Section 5, lifetime datasets are analyzed to support the proposed methods, and conclusions are presented in Section 6.

2. Classical Likelihood Estimation

Suppose that $\underline{X}=(X_1,X_2,\ldots,X_n)$ is a random sample from the considered IEED. Then, the log-likelihood function of α and λ is given by
$$L(\underline{x};\alpha,\lambda)\propto n\log(\alpha)+n\log(\lambda)-\lambda\sum_{i=1}^{n}Q\!\left(\frac{1}{x_i}\right)+(\alpha-1)\sum_{i=1}^{n}\log\!\left[1-e^{-\lambda Q(1/x_i)}\right].$$
Then, the maximum likelihood estimator (MLE) and UMVU estimator are discussed in this section.

2.1. Likelihood Estimator

By maximizing the log-likelihood function (3) with respect to α for a given λ, the MLE of the parameter α is given by
$$\hat{\alpha}_{ML}=-\frac{n}{\sum_{i=1}^{n}\log\!\left[1-e^{-\lambda Q(1/x_i)}\right]}.$$
In the sequel, the MLE of the (RF) R ( t ) is obtained by using the invariance principle as
$$\hat{R}_{ML}(t)=\left[1-e^{-\lambda Q(1/t)}\right]^{\hat{\alpha}_{ML}}.$$
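For illustration, a minimal R sketch of the ML computations above is given below, assuming the IEP member (Q(1/x) = log(1 + 1/x)), a known λ, and an arbitrary placeholder sample; the object names are ours, not the paper's.

```r
# Minimal sketch: ML estimation of alpha and R(t) with lambda treated as known.
Q <- function(u) log(1 + u)                       # IEP member: Q(1/x) = log(1 + 1/x)
x <- c(0.8, 1.3, 2.1, 0.6, 1.7, 2.4)              # placeholder data
lambda <- 1.5                                     # assumed known, as in the paper
Tstat <- -sum(log(1 - exp(-lambda * Q(1 / x))))   # T = -sum log(1 - exp(-lambda Q(1/x_i)))
alpha_ml <- length(x) / Tstat                     # MLE of alpha
R_ml <- function(t) (1 - exp(-lambda * Q(1 / t)))^alpha_ml   # invariance principle
R_ml(2)
```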

2.2. Uniformly Minimum Variance Unbiased Estimator

Note that the statistic $T=-\sum_{i=1}^{n}\log\!\left[1-e^{-\lambda Q(1/x_i)}\right]$ is complete and sufficient for the parameter α, where T has a gamma distribution with parameters (n, α). One may refer to Kizilaslan [2] for more details about this statistic. As a result, the UMVU estimator of α is computed as $\tilde{\alpha}_{UMVU}=(n-1)/T$. Subsequently, the approximate UMVU estimator of the reliability function is given by
$$\tilde{R}_{UMVU}(t)\approx\left[1-e^{-\lambda Q(1/t)}\right]^{\tilde{\alpha}_{UMVU}}.$$
Next, the UMVU estimator of the density function of the IEED is obtained as follows.
Theorem 1.
The UMVU estimator of f ( x ) at a given point x based on a random sample X 1 , X 2 , , X n is obtained as
$$\tilde{f}(x)=\begin{cases}\dfrac{(n-1)\,\lambda\,Q'(1/x)\,e^{-\lambda Q(1/x)}\left[1+T^{-1}\log\!\left(1-e^{-\lambda Q(1/x)}\right)\right]^{n-2}}{x^{2}\,T\left(1-e^{-\lambda Q(1/x)}\right)}, & T>-\log\!\left(1-e^{-\lambda Q(1/x)}\right)\\[2mm] 0, & \text{otherwise}.\end{cases}$$
Proof. 
The required estimator of f ( x ) is calculated based on the complete and sufficient statistic T. We call it ϕ ( T ) , and so we have
$$E(\phi(T))=\frac{\alpha\lambda\,Q'(1/x)}{x^{2}}\,e^{-\lambda Q(1/x)}\left[1-e^{-\lambda Q(1/x)}\right]^{\alpha-1},$$
where
$$\phi(T)=\begin{cases}\dfrac{(n-1)\,\lambda\,Q'(1/x)\,e^{-\lambda Q(1/x)}\left[1+T^{-1}\log\!\left(1-e^{-\lambda Q(1/x)}\right)\right]^{n-2}}{x^{2}\left(1-e^{-\lambda Q(1/x)}\right)T}, & T>-\log\!\left(1-e^{-\lambda Q(1/x)}\right)\\[2mm] 0, & \text{otherwise}.\end{cases}$$
Then, the assertion is completed. □
The UMVU estimator of the RF is obtained in consequence.
Corollary 1.
The UMVU estimator of the RF R(t) is given by
$$\tilde{R}(t)=\int_{t}^{\infty}\tilde{f}(x)\,dx=\begin{cases}\left[1+T^{-1}\log\!\left(1-e^{-\lambda Q(1/t)}\right)\right]^{n-1}, & T>-\log\!\left(1-e^{-\lambda Q(1/t)}\right)\\[1mm] 0, & \text{otherwise}.\end{cases}$$
Proof. 
Note that
$$E(\tilde{R}(t))=\int_{0}^{\infty}\tilde{R}(t)\,g(T)\,dT=\int_{0}^{\infty}\left[\int_{t}^{\infty}\tilde{f}(x)\,dx\right]g(T)\,dT=\int_{t}^{\infty}E(\tilde{f}(x))\,dx=\int_{t}^{\infty}f(x)\,dx=R(t),$$
where g(T) denotes the density of T. Since T is a complete and sufficient statistic, it is observed using the Lehmann–Scheffé theorem that
$$\tilde{R}(t)=\int_{t}^{\infty}\tilde{f}(x)\,dx=\int_{t}^{\infty}\frac{(n-1)\,\lambda\,Q'(1/x)\,e^{-\lambda Q(1/x)}\left[1+T^{-1}\log\!\left(1-e^{-\lambda Q(1/x)}\right)\right]^{n-2}}{x^{2}\left(1-e^{-\lambda Q(1/x)}\right)T}\,dx=\begin{cases}\left[1+T^{-1}\log\!\left(1-e^{-\lambda Q(1/t)}\right)\right]^{n-1}, & T>-\log\!\left(1-e^{-\lambda Q(1/t)}\right)\\[1mm] 0, & \text{otherwise}.\end{cases}$$
Then, the assertion is completed. □
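A short R sketch of the exact UMVU estimator of R(t) in Corollary 1 follows; it reuses the statistic Tstat, the placeholder sample x, and λ from the previous sketch, and the function name R_umvu is our own.

```r
# Sketch of the exact UMVU estimator of R(t) from Corollary 1 (reuses Tstat, x, lambda above).
R_umvu <- function(t, Tstat, n, lambda, Q = function(u) log(1 + u)) {
  w <- -log(1 - exp(-lambda * Q(1 / t)))          # w = -log(1 - exp(-lambda Q(1/t)))
  if (Tstat > w) (1 - w / Tstat)^(n - 1) else 0   # zero outside the support condition
}
R_umvu(t = 2, Tstat = Tstat, n = length(x), lambda = lambda)
```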
Lemma 1.
The minimum mean squared-error (MMSE) estimator of $\alpha^{r}$ in the class of estimators $cT^{-r}$, where $c>0$, is given by
$$\hat{\alpha}_{MMSE}=\frac{\Gamma(n-r)}{\Gamma(n-2r)\,T^{r}},\qquad r\in\mathbb{R},\ r<n/2.$$
Proof. 
The statistic T has a gamma distribution with parameters (n, α); therefore,
$$E\!\left(\frac{1}{T^{r}}\right)=\frac{\alpha^{r}\,\Gamma(n-r)}{\Gamma(n)},\qquad E\!\left(\frac{1}{T^{2r}}\right)=\frac{\alpha^{2r}\,\Gamma(n-2r)}{\Gamma(n)}.$$
It is also seen that the UMVU estimator of $\alpha^{r}$ is $\frac{\Gamma(n)}{\Gamma(n-r)}\,T^{-r}$, which belongs to this class. The mean squared error of $cT^{-r}$ is obtained as
$$\mathrm{MSE}\!\left(cT^{-r}\right)=E\!\left(cT^{-r}-\alpha^{r}\right)^{2}=c^{2}\,E\!\left(\frac{1}{T^{2r}}\right)-2c\,\alpha^{r}\,E\!\left(\frac{1}{T^{r}}\right)+\alpha^{2r}.$$
The minimum of the MSE occurs when $c=\Gamma(n-r)/\Gamma(n-2r)$. Therefore, the assertion is completed. □
Remark 1.
Some further results could be deduced from Lemma 1 as
(1) 
The MMSE estimator of α in the class $cT^{-1}$ is given by $(n-2)/T$.
(2) 
The MSE relations for the different estimators of α are
$$\mathrm{MSE}(\hat{\alpha}_{MMSE})\leq\mathrm{MSE}(\tilde{\alpha}_{UMVU})\leq\mathrm{MSE}(\hat{\alpha}_{ML}).$$
Therefore, the UMVU estimator of α is better than the MLE in terms of the mean squared error.
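The MSE ordering stated in Remark 1 can be verified numerically. The following R sketch (our own check, not reported in the paper) simulates T directly from its gamma distribution and compares the three estimators of α; the values of n, α, and the number of replications are arbitrary.

```r
# Monte Carlo check of the MSE ordering in Remark 1: T ~ Gamma(n, rate = alpha),
# so the ML, UMVU and MMSE estimators of alpha can be simulated directly.
set.seed(1)
n <- 20; alpha <- 0.8; reps <- 1e5
Tsim <- rgamma(reps, shape = n, rate = alpha)
mse <- function(est) mean((est - alpha)^2)
c(MMSE = mse((n - 2) / Tsim), UMVU = mse((n - 1) / Tsim), ML = mse(n / Tsim))
# Expected ordering: MSE(MMSE) <= MSE(UMVU) <= MSE(ML).
```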

3. Bayes Estimation

In this section, the Bayes estimators of α and the RF R ( t ) are obtained by using the informative and non-informative prior distributions under the common GELF, SLELF, and WSELF loss functions, which are defined, respectively, as
GELF:
$$L(\hat{\theta},\theta)=\left(\frac{\hat{\theta}}{\theta}\right)^{c_{1}}-c_{1}\log\!\left(\frac{\hat{\theta}}{\theta}\right)-1,$$
SLELF:
$$L(\hat{\theta},\theta)=\left(\log\hat{\theta}-\log\theta\right)^{2},$$
WSELF:
$$L(\hat{\theta},\theta)=\frac{(\hat{\theta}-\theta)^{2}}{\theta^{k}}.$$
Here, θ is the true parameter, and θ ^ is the estimator of θ .

3.1. Bayes Estimation with Informative Prior

Consider the following gamma prior for the parameter α, whose density with hyper-parameters (ζ, β) is given by
$$\pi_{1}(\alpha)\propto\alpha^{\zeta-1}e^{-\beta\alpha};\qquad \zeta,\beta>0.$$
Then, the posterior distribution of α is obtained as
$$\Pi(\alpha\mid\underline{X})\propto\alpha^{\,n+\zeta-1}\,e^{-\alpha(T+\beta)}.$$
It is noted that the posterior distribution is again a gamma distribution with parameters (n + ζ, T + β).
In consequence, the Bayes estimators of α under the GELF, SLELF and WSELF are obtained, respectively, as follows:
$$\hat{\alpha}_{GELF}=\left\{E\!\left(\alpha^{-c_{1}}\mid T\right)\right\}^{-\frac{1}{c_{1}}}=\frac{1}{T+\beta}\left[\frac{\Gamma(n+\zeta-c_{1})}{\Gamma(n+\zeta)}\right]^{-\frac{1}{c_{1}}},\qquad \hat{\alpha}_{SLELF}=\exp\!\left(E[\log\alpha\mid T]\right)=\exp\!\left(\Psi(n+\zeta)-\log(T+\beta)\right),$$
and
$$\hat{\alpha}_{WSELF}=\frac{E\!\left(\alpha^{-(k-1)}\mid T\right)}{E\!\left(\alpha^{-k}\mid T\right)}=\frac{n+\zeta-k}{T+\beta}.$$
Further, the estimators of RF are derived in consequence. Under GELF, this estimator is of the following form
$$\hat{R}(t)_{GELF}=\left\{E\!\left(R(t)^{-c_{1}}\mid T\right)\right\}^{-\frac{1}{c_{1}}}=\left[\frac{T+\beta}{T+\beta+c_{1}\log\!\left(1-e^{-\lambda Q(1/t)}\right)}\right]^{-\frac{n+\zeta}{c_{1}}}.$$
The Bayes estimator of R ( t ) under SLELF is given by
$$\hat{R}(t)_{SLELF}=\exp\!\left(E[\log R(t)\mid T]\right)=\exp\!\left[\frac{n+\zeta}{T+\beta}\,\log\!\left(1-e^{-\lambda Q(1/t)}\right)\right].$$
Noticing that
$$E\!\left(\frac{1}{R(t)^{k}}\,\Big|\,T\right)=\left[1+\frac{k\log\!\left(1-e^{-\lambda Q(1/t)}\right)}{T+\beta}\right]^{-(n+\zeta)},$$
the estimator of R(t) under the WSELF is obtained as
$$\hat{R}(t)_{WSELF}=\frac{E\!\left(R(t)^{-(k-1)}\mid T\right)}{E\!\left(R(t)^{-k}\mid T\right)}=\left[\frac{T+\beta+k\log\!\left(1-e^{-\lambda Q(1/t)}\right)}{T+\beta+(k-1)\log\!\left(1-e^{-\lambda Q(1/t)}\right)}\right]^{n+\zeta}.$$
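The gamma-prior Bayes estimators of R(t) above are available in closed form and are simple to evaluate; a hedged R sketch follows, where Tstat is the sufficient statistic from the earlier sketches and beta0 stands for the hyper-parameter β (all names are placeholders).

```r
# Sketch: closed-form Bayes estimators of R(t) under the gamma prior (beta0 denotes beta).
bayes_R_gamma <- function(t, Tstat, n, lambda, zeta, beta0, c1 = 1, k = 1,
                          Q = function(u) log(1 + u)) {
  m <- log(1 - exp(-lambda * Q(1 / t)))       # note: m < 0
  gelf  <- ((Tstat + beta0) / (Tstat + beta0 + c1 * m))^(-(n + zeta) / c1)
  slelf <- exp((n + zeta) / (Tstat + beta0) * m)
  wself <- ((Tstat + beta0 + k * m) / (Tstat + beta0 + (k - 1) * m))^(n + zeta)
  c(GELF = gelf, SLELF = slelf, WSELF = wself)
}
bayes_R_gamma(t = 2, Tstat = Tstat, n = length(x), lambda = 1.5, zeta = 1.6, beta0 = 2)
```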

3.2. Bayes Estimation with Non-Informative Prior

We take the extended Jeffreys prior for α as
$$\pi_{2}(\alpha)\propto\frac{1}{\alpha^{2c}},$$
and then the posterior distribution of α given x ̲ is obtained as
$$\Pi(\alpha\mid\underline{X})\propto\alpha^{\,n-2c}\,e^{-\alpha T},$$
which is also a gamma distribution with parameters (n - 2c + 1, T).
Following a similar procedure as previous, the Bayes estimators of α under GELF, SLELF, and WSELF are computed, respectively, as
$$\hat{\alpha}_{GELF}=\left\{E\!\left(\alpha^{-c_{1}}\mid T\right)\right\}^{-\frac{1}{c_{1}}}=\frac{1}{T}\left[\frac{\Gamma(n-2c-c_{1}+1)}{\Gamma(n-2c+1)}\right]^{-\frac{1}{c_{1}}},\qquad \hat{\alpha}_{SLELF}=\exp\!\left(E[\log\alpha\mid T]\right)=\exp\!\left(\Psi(n-2c+1)-\log T\right),\qquad \hat{\alpha}_{WSELF}=\frac{E\!\left(\alpha^{-(k-1)}\mid T\right)}{E\!\left(\alpha^{-k}\mid T\right)}=\frac{n-2c-k+1}{T}.$$
Subsequently, the Bayes estimators of RF R ( t ) are represented under different loss functions as
$$\hat{R}(t)_{GELF}=\left\{E\!\left(R(t)^{-c_{1}}\mid T\right)\right\}^{-\frac{1}{c_{1}}}=\left[1+\frac{c_{1}\log\!\left(1-e^{-\lambda Q(1/t)}\right)}{T}\right]^{\frac{n-2c+1}{c_{1}}},\qquad \hat{R}(t)_{SLELF}=\exp\!\left[\frac{n-2c+1}{T}\,\log\!\left(1-e^{-\lambda Q(1/t)}\right)\right],\qquad \hat{R}(t)_{WSELF}=\left[\frac{T+k\log\!\left(1-e^{-\lambda Q(1/t)}\right)}{T+(k-1)\log\!\left(1-e^{-\lambda Q(1/t)}\right)}\right]^{n-2c+1}.$$

3.3. Bayes Estimation under the New Loss Function

Recently, Amirzadi et al. [23] established a new loss function to develop Bayesian estimates of the parameter and reliability function against the commonly used gamma and Jeffreys priors. The loss function is given as
$$L(\theta,\delta)=\begin{cases}\dfrac{\left(e^{k(\delta-\theta)}-1\right)^{2}}{2\,e^{k(\delta-\theta)}}=\dfrac{e^{k(\delta-\theta)}+e^{k(\theta-\delta)}}{2}-1, & \delta\neq\theta,\\[2mm] 0, & \delta=\theta,\end{cases}$$
where the constant k is the loss parameter. It is noted that the NLF is a symmetric and strictly convex loss function and that the risk-unbiased condition is $E\!\left(e^{k\delta}\right)/E\!\left(e^{-k\delta}\right)=e^{2k\theta}$. Further, the Bayes estimator of an arbitrary function ζ(θ) of the parameter θ under this loss is obtained as
$$\hat{\delta}(\underline{x})=\frac{1}{2k}\log\!\left[\frac{E\!\left(e^{k\zeta(\theta)}\mid\underline{X}=\underline{x}\right)}{E\!\left(e^{-k\zeta(\theta)}\mid\underline{X}=\underline{x}\right)}\right].$$
Subsequently, the Bayes estimator of θ is given by
$$\hat{\delta}(\underline{x})=\frac{1}{2k}\log\!\left[\frac{M_{\theta\mid\underline{X}}(k)}{M_{\theta\mid\underline{X}}(-k)}\right],$$
where $M_{\theta\mid\underline{X}}(\cdot)$ is the moment-generating function of θ with respect to the posterior distribution.
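Because the posterior of α is a gamma distribution under both priors considered in this paper, the estimator above reduces to a ratio of gamma moment-generating functions. The R sketch below (our illustration) implements this generic form for a gamma posterior with shape a and rate b; the function name nlf_estimate is an assumption.

```r
# Sketch: generic NLF Bayes estimator (1/(2k)) * log(M(k)/M(-k)) for a gamma posterior
# with shape a and rate b (a = n + zeta, b = Tstat + beta under the gamma prior).
nlf_estimate <- function(a, b, k) {
  stopifnot(b > abs(k))                  # the MGF exists only for |k| < b
  mgf <- function(s) (b / (b - s))^a     # gamma moment-generating function at s
  (1 / (2 * k)) * log(mgf(k) / mgf(-k))
}
nlf_estimate(a = length(x) + 1.6, b = Tstat + 2, k = 1)   # Bayes estimate of alpha
```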

3.3.1. Bayes Estimator of Parameter α

The Bayes estimator of the parameter α of the IEED is examined under the NLF with the previous gamma and extended Jeffreys priors, respectively.
Theorem 2.
The Bayes estimator of parameter α of the IEED against gamma prior under the NLF is given by
$$\hat{\alpha}_{NLF}=\frac{1}{2k}\log\!\left[\left(\frac{T+\beta+k}{T+\beta-k}\right)^{n+\zeta}\right],\qquad T>|k|-\beta.$$
Proof. 
Note that the posterior of α has a gamma distribution with the parameters $(n+\zeta,\,\tfrac{1}{T+\beta})$. Since
$$E\!\left(e^{k\alpha}\mid T\right)=\int_{0}^{\infty}\frac{(T+\beta)^{\zeta+n}\,\alpha^{\zeta+n-1}\,e^{-\alpha(T+\beta)}}{\Gamma(\zeta+n)}\,e^{k\alpha}\,d\alpha=\left(\frac{T+\beta}{T+\beta-k}\right)^{\zeta+n},\qquad T+\beta-|k|>0,$$
and hence, the proof is completed. □
Theorem 3.
The Bayes estimator of the parameter α of the IEED against the non-informative prior under the NLF is obtained as
$$\hat{\alpha}_{NLF}=\frac{1}{2k}\log\!\left[\left(\frac{T+k}{T-k}\right)^{n-2c+1}\right],\qquad T>|k|.$$
Proof. 
As earlier, it is seen that the posterior of α has a gamma distribution with the parameters $(n-2c+1,\,\tfrac{1}{T})$. Then, one has
$$E\!\left(e^{k\alpha}\mid T\right)=\int_{0}^{\infty}\frac{T^{\,n-2c+1}\,\alpha^{\,n-2c}\,e^{-\alpha T}}{\Gamma(n-2c+1)}\,e^{k\alpha}\,d\alpha=\left(\frac{T}{T-k}\right)^{n-2c+1},\qquad T-|k|>0.$$
Then, the result is obtained. □

3.3.2. The Bayes Estimator of the RF

The Bayesian estimator of the RF R(t) is discussed under the NLF based on the previous informative gamma and non-informative Jeffreys priors.
Theorem 4.
The Bayes estimator of R ( t ) , with respect to gamma prior and NLF, is given by
$$\hat{R}(t)_{NLF}=\frac{1}{2k}\log\!\left[\frac{\displaystyle\sum_{i=0}^{\infty}\frac{k^{i}}{i!}\left[1-\frac{i\log\!\left(1-e^{-\lambda Q(1/t)}\right)}{T+\beta}\right]^{-(n+\zeta)}}{\displaystyle\sum_{i=0}^{\infty}\frac{(-k)^{i}}{i!}\left[1-\frac{i\log\!\left(1-e^{-\lambda Q(1/t)}\right)}{T+\beta}\right]^{-(n+\zeta)}}\right].$$
Proof. 
Note that under the gamma prior with the parameters $(\zeta,\tfrac{1}{\beta})$ for α, we obtain the required posterior as gamma with parameters $(n+\zeta,\,\tfrac{1}{T+\beta})$. Consequently, we have
$$E\!\left(e^{kR(t)}\mid T\right)=\int_{0}^{\infty}\frac{(T+\beta)^{\zeta+n}\,\alpha^{\zeta+n-1}\,e^{-\alpha(T+\beta)}}{\Gamma(\zeta+n)}\,e^{k\left(1-e^{-\lambda Q(1/t)}\right)^{\alpha}}d\alpha=\int_{0}^{\infty}\frac{(T+\beta)^{\zeta+n}\,\alpha^{\zeta+n-1}\,e^{-\alpha(T+\beta)}}{\Gamma(\zeta+n)}\sum_{i=0}^{\infty}\frac{k^{i}}{i!}\left(1-e^{-\lambda Q(1/t)}\right)^{i\alpha}d\alpha=\sum_{i=0}^{\infty}\frac{k^{i}}{i!}\left[1-\frac{i\log\!\left(1-e^{-\lambda Q(1/t)}\right)}{T+\beta}\right]^{-(n+\zeta)}.$$
Then, the required estimator of R ( t ) is given by
$$\hat{R}(t)_{NLF}=\frac{1}{2k}\log\!\left[\frac{E\!\left(e^{kR(t)}\mid T\right)}{E\!\left(e^{-kR(t)}\mid T\right)}\right]=\frac{1}{2k}\log\!\left[\frac{\displaystyle\sum_{i=0}^{\infty}\frac{k^{i}}{i!}\left[1-\frac{i\log\!\left(1-e^{-\lambda Q(1/t)}\right)}{T+\beta}\right]^{-(n+\zeta)}}{\displaystyle\sum_{i=0}^{\infty}\frac{(-k)^{i}}{i!}\left[1-\frac{i\log\!\left(1-e^{-\lambda Q(1/t)}\right)}{T+\beta}\right]^{-(n+\zeta)}}\right].$$
Therefore, the result is established. □
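In practice, the infinite series in Theorem 4 has to be truncated. The R sketch below (our illustration, with an arbitrary truncation point imax) evaluates the estimator under the gamma prior; the Jeffreys-prior version of the next theorem follows by setting β = 0 and replacing n + ζ with n - 2c + 1.

```r
# Sketch: truncated-series evaluation of the NLF Bayes estimator of R(t) in Theorem 4.
R_nlf_gamma <- function(t, Tstat, n, lambda, zeta, beta0, k = 1, imax = 60,
                        Q = function(u) log(1 + u)) {
  m <- log(1 - exp(-lambda * Q(1 / t)))            # m < 0
  i <- 0:imax
  term <- (1 - i * m / (Tstat + beta0))^(-(n + zeta))
  num <- sum(k^i / factorial(i) * term)            # E(exp( k R(t)) | T)
  den <- sum((-k)^i / factorial(i) * term)         # E(exp(-k R(t)) | T)
  (1 / (2 * k)) * log(num / den)
}
R_nlf_gamma(t = 2, Tstat = Tstat, n = length(x), lambda = 1.5, zeta = 1.6, beta0 = 2)
```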
Next, the Bayes estimator of RF is obtained under the non-informative prior.
Theorem 5.
The Bayes estimator of R ( t ) under Jeffreys prior is represented as
$$\hat{R}(t)_{NLF}=\frac{1}{2k}\log\!\left[\frac{\displaystyle\sum_{i=0}^{\infty}\frac{k^{i}}{i!}\left[1-\frac{i\log\!\left(1-e^{-\lambda Q(1/t)}\right)}{T}\right]^{-(n-2c+1)}}{\displaystyle\sum_{i=0}^{\infty}\frac{(-k)^{i}}{i!}\left[1-\frac{i\log\!\left(1-e^{-\lambda Q(1/t)}\right)}{T}\right]^{-(n-2c+1)}}\right].$$
Proof. 
As mentioned earlier, the corresponding posterior of α is gamma with the parameters $(n-2c+1,\,\tfrac{1}{T})$. We now have
$$E\!\left(e^{kR(t)}\mid T\right)=\int_{0}^{\infty}\frac{T^{\,n-2c+1}\,\alpha^{\,n-2c}\,e^{-\alpha T}}{\Gamma(n-2c+1)}\,e^{k\left(1-e^{-\lambda Q(1/t)}\right)^{\alpha}}d\alpha=\int_{0}^{\infty}\frac{T^{\,n-2c+1}\,\alpha^{\,n-2c}\,e^{-\alpha T}}{\Gamma(n-2c+1)}\sum_{i=0}^{\infty}\frac{k^{i}}{i!}\left(1-e^{-\lambda Q(1/t)}\right)^{i\alpha}d\alpha=\sum_{i=0}^{\infty}\frac{k^{i}}{i!}\left[1-\frac{i\log\!\left(1-e^{-\lambda Q(1/t)}\right)}{T}\right]^{-(n-2c+1)}.$$
Thus, the desired estimator of R ( t ) turns out as
$$\hat{R}(t)_{NLF}=\frac{1}{2k}\log\!\left[\frac{E\!\left(e^{kR(t)}\mid T\right)}{E\!\left(e^{-kR(t)}\mid T\right)}\right]=\frac{1}{2k}\log\!\left[\frac{\displaystyle\sum_{i=0}^{\infty}\frac{k^{i}}{i!}\left[1-\frac{i\log\!\left(1-e^{-\lambda Q(1/t)}\right)}{T}\right]^{-(n-2c+1)}}{\displaystyle\sum_{i=0}^{\infty}\frac{(-k)^{i}}{i!}\left[1-\frac{i\log\!\left(1-e^{-\lambda Q(1/t)}\right)}{T}\right]^{-(n-2c+1)}}\right].$$
Therefore, the assertion is completed. □

4. Simulation Study

In this section, extensive simulation experiments are carried out to assess the performance of the different estimators of the parameter α and the RF R(t). All estimates are evaluated based on their computed values and MSEs. The true parameter values are taken as α = 0.8 and 2, and λ is fixed at 1.5 in all cases. The sample sizes are chosen as n = 20, 40, and 60, reflecting small, medium, and large samples, respectively. For Bayesian inference, the values of the constant c in the extended Jeffreys prior are 0.4 and 0.8, whereas the hyper-parameters (ζ, β) of the informative gamma prior are set to (1.6, 2) and (4, 2), respectively. The values of the weight constant k in the loss functions are considered as 0 and 1. The simulation is conducted in the R software, and the results are obtained from 1000 replications. In addition, for generating random samples from the considered family of distributions, the special IEP member is used, and the data generation procedure is as follows.
  • Generate a random number u from uniform U ( 0 , 1 ) .
  • Apply the following inversion sampling method (a code sketch is given after this list):
    $$X_{i}=F^{-1}(u_{i})=\left\{\left[1-(1-u_{i})^{1/\alpha}\right]^{-1/\lambda}-1\right\}^{-1},\qquad i=1,\ldots,n.$$
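A compact R implementation of this inversion step is sketched below; the function name r_iep is ours.

```r
# Sketch of the inversion sampler for the IEP member used in the simulations.
r_iep <- function(n, alpha, lambda) {
  u <- runif(n)
  1 / ((1 - (1 - u)^(1 / alpha))^(-1 / lambda) - 1)
}
x_sim <- r_iep(20, alpha = 0.8, lambda = 1.5)   # one simulated sample of size n = 20
```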
In the sequel, we compute the estimators $\hat{R}(t)_{ML}$, $\tilde{R}(t)_{UMVU}$, $\hat{R}(t)_{GELF}$, $\hat{R}(t)_{SLELF}$, $\hat{R}(t)_{WSELF}$, and $\hat{R}(t)_{NLF}$ for different sample sizes. Subsequently, all these estimates are evaluated in terms of their estimated values and errors. Note that the MSE is evaluated as
$$\mathrm{MSE}\!\left(\hat{R}(t)\right)=\frac{1}{r}\sum_{j=1}^{r}\left(\hat{R}_{j}(t)-R(t)\right)^{2},$$
where r is the number of replications. The criterion examines the estimator patterns for different parameter assignments and sample sizes. The simulation results for the different estimates are summarized in Table 2, Table 3, Table 4, Table 5 and Table 6.
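A skeleton of such a Monte Carlo evaluation is sketched below in R for the ML and UMVU estimates under the IEP member with known λ; it reuses the sampler r_iep given above, and the Bayes estimators can be plugged in analogously. This is our illustration of the procedure, not the authors' code.

```r
# Skeleton of the Monte Carlo comparison (IEP member, lambda known); reuses r_iep above.
Q <- function(u) log(1 + u)
R_true <- function(t, alpha, lambda) (1 - exp(-lambda * Q(1 / t)))^alpha
sim_mse <- function(n, alpha, lambda, t, reps = 1000) {
  est <- replicate(reps, {
    x <- r_iep(n, alpha, lambda)
    Tstat <- -sum(log(1 - exp(-lambda * Q(1 / x))))
    w <- -log(1 - exp(-lambda * Q(1 / t)))
    c(ML   = (1 - exp(-lambda * Q(1 / t)))^(length(x) / Tstat),       # invariance
      UMVU = if (Tstat > w) (1 - w / Tstat)^(length(x) - 1) else 0)   # Corollary 1
  })
  rowMeans((est - R_true(t, alpha, lambda))^2)   # estimated MSEs
}
sim_mse(n = 20, alpha = 0.8, lambda = 1.5, t = 2)
```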
It is seen that the estimated errors of both the ML and Bayes estimators of the reliability function tend to decrease as the sample size increases. From the results reported in Table 2, Table 3, Table 4 and Table 5, it is also found that the Bayes estimators are marginally better than the respective MLEs in the sense that they attain lower estimated MSEs. Further, the errors of the UMVU estimator are also lower than those of the ML estimator, as can be seen in Table 2. In fact, the Bayes estimators perform better than the UMVU estimates as well. One can observe from Table 3, Table 4 and Table 5 that the Bayes estimators of the RF under the gamma prior show better MSE performance than those under the non-informative prior. In addition, the estimates obtained under the NLF provide better MSE behavior than the other loss functions, particularly for the small sample size n = 20. For the larger sample sizes, the estimates show similar behavior under the given criterion. Further, we also visually compare the patterns of the ML and Bayes estimators in Figure 4 and Figure 5 when α = 0.8, λ = 1.5, and the sample size is n = 10. We find that the proposed estimates of reliability converge to zero for large values of t. In these figures, we have also plotted the respective true reliability functions. It is observed that the UMVU estimates are closer to the true reliability function than the ML estimates. Similarly, Figure 5 suggests that the estimates obtained under the new loss function show the best performance, as their values are closest to the actual reliability. We have not presented graphs for other values of n, as no noticeable difference in the behavior of the proposed estimates was observed with respect to the various loss functions.

5. Real Data Analysis

Here, we present the analysis of real-life examples in support of the considered estimation problem. We consider the tensile strength data presented in Bader and Priest [24]. The data describe the strength in GPa of single carbon fibers and impregnated 1000-carbon fiber tows. Single fibers were tested under tension at gauge lengths of 20 mm (Dataset 1) and 10 mm (Dataset 2), with sample sizes n = 69 and 63, respectively. These data are also discussed by Kundu and Raqab [25] and Surles and Padgett [26] for applications under different frameworks.
The details of the tensile strength data, namely Dataset 1, are presented as:
1.312, 1.314, 1.479, 1.552, 1.700, 1.803, 1.861, 1.865, 1.944, 1.958, 1.966, 1.997, 2.006, 2.021, 2.027, 2.055, 2.063, 2.098, 2.14, 2.179, 2.224, 2.240, 2.253, 2.270, 2.272, 2.274, 2.301, 2.301, 2.359, 2.382, 2.382, 2.426, 2.434, 2.435, 2.478, 2.490, 2.511, 2.514, 2.535, 2.554, 2.566, 2.57, 2.586, 2.629, 2.633, 2.642, 2.648, 2.684, 2.697, 2.726, 2.770, 2.773, 2.800, 2.809, 2.818, 2.821, 2.848, 2.88, 2.954, 3.012, 3.067, 3.084, 3.090, 3.096, 3.128, 3.233, 3.433, 3.585, 3.585.
Similarly, Dataset 2 is given as follows:
1.901, 2.132, 2.203, 2.228, 2.257, 2.350, 2.361, 2.396, 2.397, 2.445, 2.454, 2.474, 2.518, 2.522, 2.525, 2.532, 2.575, 2.614, 2.616, 2.618, 2.624, 2.659, 2.675, 2.638, 2.74, 2.856, 2.917, 2.928, 2.937, 2.937, 2.977, 2.996, 3.03, 3.125, 3.139, 3.145, 3.22, 3.223, 3.235, 3.243, 3.264, 3.272, 3.294, 3.332, 3.346, 3.377, 3.408, 3.435, 3.493, 3.501, 3.537, 3.554, 3.562, 3.628, 3.852, 3.871, 3.886, 3.971, 4.024, 4.027, 4.225, 4.395, 5.020.
We compared the fit of the proposed special inverted distributions with other competing models, namely the gamma, inverse gamma (IG), and inverse Weibull (IW) distributions, which are commonly used for fitting various lifetime data. Table 7 and Table 8 contain the maximum likelihood estimates of all the model parameters and the goodness-of-fit statistics (K-S statistic, p-value) for the first and second datasets, respectively. Based on these two tables, we conclude that the IEP member of the considered family provides a satisfactory fit, as it yields small K-S statistics with large p-values. Thus, the considered family can be used to obtain inferences from these datasets.
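As an indication of how such a comparison can be carried out, the following R sketch fits the IEP model to Dataset 1 by maximum likelihood and computes the K-S statistic via ks.test; here data1 is a placeholder for the 69 observations listed above, the starting values and iteration limit are arbitrary, and the resulting p-value is only approximate because the parameters are estimated from the data.

```r
# Sketch: ML fit of the IEP model (Q(1/x) = log(1 + 1/x)) and K-S goodness-of-fit check.
p_iep <- function(q, alpha, lambda) 1 - (1 - (1 + 1 / q)^(-lambda))^alpha   # IEP CDF
nll_iep <- function(par, x) {                                               # negative log-likelihood
  a <- par[1]; l <- par[2]
  if (a <= 0 || l <= 0) return(Inf)
  -sum(log(a) + log(l) - log(x) - log(x + 1) - l * log(1 + 1 / x) +
         (a - 1) * log(1 - (1 + 1 / x)^(-l)))
}
# 'data1' is a placeholder for the 69 observations of Dataset 1 listed above.
fit <- optim(c(10, 10), nll_iep, x = data1, control = list(maxit = 5000))
ks.test(data1, p_iep, alpha = fit$par[1], lambda = fit$par[2])
```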
Further, the ML and Bayes estimates of reliability for both datasets are presented in Table 9 and Table 10 by considering the proposed loss functions and prior distributions. To derive the Bayes estimates for the real data analysis, we specified a non-informative prior distribution; accordingly, ζ and β were set to values approaching zero. Among others, we found that the Bayes estimates derived under the NLF are marginally larger than the other studied estimates at t = 2, 3, 4.

6. Conclusions

The family of inverted exponentiated densities is widely regarded as a useful lifetime model for fitting various data. In this paper, we obtained useful classical and Bayes estimates for the shape parameter and the reliability function of the IE distribution. Different estimates of these parametric functions were derived for varying sample sizes. We computed several Bayes estimates by incorporating distinct loss functions; in particular, we evaluated these estimates under a new symmetric loss function introduced by Amirzadi et al. [23]. Estimates of reliability were derived over time by assuming both informative and non-informative prior distributions. The proposed estimation results were examined via extensive simulations. The new loss function works well in these situations, particularly when the sample size is relatively small. For medium to large sample sizes, we observed that the performance of the Bayes estimates becomes similar for all loss functions. For all sample sizes and different values of t, the proper Bayes estimators of reliability yield better estimates than those obtained from the Jeffreys prior; this holds for all the considered loss functions. Finally, we presented the analyses of two datasets for application purposes. In lifetime studies, modeling and reliability analyses are significant topics for practitioners. We concentrated on strength data in this study, which stem from an industrial experiment. The inverted family of distributions is flexible in nature and can be used to model data arising from various physical investigations.

Author Contributions

Conceptualization, R.K. and R.K.S.; methodology, R.K. and Y.M.T.; software, R.K.; formal analysis, R.K. and L.W.; writing—original draft preparation, R.K.; writing—review and editing, L.W.; funding acquisition, L.W. and Y.M.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work of Liang Wang was supported by the Yunnan Key Laboratory of Modern Analytical Mathematics and Applications (No. 202302AN360007). The research work of Yogesh Mani Tripathi is partially financially supported under a grant MTR/2022/000183 by the Science and Engineering Research Board, India.

Data Availability Statement

The numerical data used to support the findings of this study are included within the article.

Acknowledgments

The authors would like to thank the editor and reviewers for their valuable comments and suggestions, which improved this paper significantly.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ghitany, M.E.; Tuan, V.K.; Balakrishnan, N. Likelihood estimation for a general class of inverse exponentiated distributions based on complete and progressively censored data. J. Stat. Comput. Simul. 2014, 84, 96–106. [Google Scholar] [CrossRef]
  2. Kizilaslan, F. Classical and Bayesian estimation of reliability in a multicomponent stress–strength model based on a general class of inverse exponentiated distributions. Stat. Pap. 2018, 59, 1161–1192. [Google Scholar] [CrossRef]
  3. Fisher, A.J. Statistical Inferences of Rs,k = Pr(Xks+1:k > Y) for General Class of Exponentiated Inverted Exponential Distribution with Progressively Type-II Censored Samples with Uniformly Distributed Random Removal. Masters Thesis and Doctoral Dissertations. Available online: https://scholar.utc.edu/theses/493.2016 (accessed on 12 December 2016).
  4. Kumari, R.; Tripathi, Y.M.; Sinha, R.K.; Wang, L. Reliability estimation for the inverted exponentiated Pareto distribution. Qual. Technol. Quant. Manag. 2023, 20, 485–510. [Google Scholar] [CrossRef]
  5. Temraz, N.S.Y. Inference on the stress strength reliability with exponentiated generalized Marshall Olkin-G distribution. PLoS ONE 2023, 18, e0280183. [Google Scholar]
  6. De la Cruz, R.; Salinas, H.S.; Meza, C. Reliability Estimation for Stress-Strength Model Based on Unit-Half-Normal Distribution. Symmetry 2022, 14, 837. [Google Scholar] [CrossRef]
  7. Alsadat, N.; Hassan, A.S.; Elgarhy, M.; Chesneau, C.; Mohamed, R.E. An efficient stress-strength reliability estimate of the unit gompertz distribution using ranked set sampling. Symmetry 2023, 15, 1121. [Google Scholar] [CrossRef]
  8. EL-Sagheer, R.M.; Eliwa, M.S.; El-Morshedy, M.; Al-Essa, L.A.; Al-Bossly, A.; Abd-El-Monem, A. Analysis of the stress-strength model using uniform truncated negative binomial distribution under progressive Type-II censoring. Axioms 2023, 12, 949. [Google Scholar] [CrossRef]
  9. Jamal, F.; Chesneau, C.; Elgarhy, M. Type II general inverse exponential family of distributions. J. Stat. Manag. Syst. 2020, 23, 617–641. [Google Scholar] [CrossRef]
  10. Dutta, S.; Ng, H.K.T.; Kayal, S. Inference for a general family of inverted exponentiated distributions under unified hybrid censoring with partially observed competing risks data. J. Comput. Appl. Math. 2023, 422, 114934. [Google Scholar] [CrossRef]
  11. Hashem, A.F.; Alyami, S.A.; Yousef, M.M. Utilizing empirical Bayes estimation to assess reliability in inverted exponentiated Rayleigh distribution with progressive hybrid censored medical data. Axioms 2023, 12, 872. [Google Scholar] [CrossRef]
  12. Maurya, R.K.; Tripathi, Y.M.; Sen, T.; Rastogi, M.K. On progressively censored inverted exponentiated Rayleigh distribution. J. Stat. Comput. Simul. 2019, 89, 492–518. [Google Scholar] [CrossRef]
  13. Gao, S.; Yu, J.; Gui, W. Pivotal inference for the inverted exponentiated Rayleigh distribution based on progressive Type-II censored data. Am. J. Math. Manag. Sci. 2020, 39, 315–328. [Google Scholar] [CrossRef]
  14. Wang, L.; Tripathi, Y.M.; Wu, S.J.; Zhang, M. Inference for confidence sets of the generalized inverted exponential distribution under k-record values. J. Comput. Appl. Math. 2020, 380, 112969. [Google Scholar] [CrossRef]
  15. Sinha, S.K. Bayes estimation of the reliability function of normal distribution. IEEE Trans. Reliab. 1985, 34, 360–362. [Google Scholar] [CrossRef]
  16. Sinha, S.K. Bayesian estimation of the reliability function of the inverse Gaussian distribution. Stat. Prob. Lett. 1986, 4, 319–323. [Google Scholar] [CrossRef]
  17. Sinha, S.K. Bayes estimation of the reliability function and hazard rate of a weibull failure time distribution. Trab. Estad. 1986, 1, 47–56. [Google Scholar] [CrossRef]
  18. Lye, L.M.; Hapuarachchi, K.P.; Ryan, S. Bayes estimation of the extreme-value reliability function. IEEE Trans. Reliab. 1993, 42, 641–644. [Google Scholar] [CrossRef]
  19. Lin, D.K.J.; Usher, J.S.; Guess, F.M. Bayes estimation of component-reliability from masked system-life data. IEEE Trans. Reliab. 1996, 45, 233–237. [Google Scholar] [CrossRef]
  20. Pensky, M.; Singh, R.S. Empirical Bayes estimation of reliability characteristics for an exponential family. Can. J. Stat. 1999, 27, 127–136. [Google Scholar] [CrossRef]
  21. Dey, S. Comparison of Bayes estimators of the parameter and reliability function for Rayleigh distribution under different loss functions. Malay J. Math. Sci. 2009, 3, 249–266. [Google Scholar]
  22. Dey, S. Bayesian estimation of the parameter and reliability function of an inverse Rayleigh distribution. Malay J. Math. Sci. 2012, 6, 113–124. [Google Scholar]
  23. Amirzadi, A.; Jamkhaneh, E.B.; Deiri, E. A comparison of estimation methods for reliability function of inverse generalized Weibull distribution under new loss function. J. Stat. Comput. Simul. 2021, 91, 2595–2622. [Google Scholar] [CrossRef]
  24. Bader, M.G.; Priest, A.M. Statistical aspects of fibre and bundle strength in hybrid composites. In Progress in Science and Engineering Composites; ICCM-IV: Tokyo, Japan, 1982; pp. 1129–1136. [Google Scholar]
  25. Kundu, D.; Raqab, M.Z. Estimation of R = P(Y < X) for three-parameter Weibull distribution. Stat. Prob. Lett. 2009, 79, 1839–1846. [Google Scholar]
  26. Surles, J.G.; Padgett, W.J. Inference for reliability and stress-strength for a scaled Burr Type X distribution. Life Data Anal. 2001, 7, 187–200. [Google Scholar] [CrossRef]
Figure 1. Hazard rate function of the inverted exponentiated exponential (IEE) model for different values of shape and scale parameters.
Figure 2. Hazard rate function of the inverted exponentiated Rayleigh (IER) model for different values of shape and scale parameters.
Figure 3. Hazard rate function of the inverted exponentiated Pareto (IEP) model for different values of shape and scale parameters.
Figure 4. Plot of the classical (ML and UMVU) estimators of the reliability function.
Figure 5. Plot of the Bayes estimators of the reliability function under different loss functions (GELF, SLELF, WSELF, and NLF).
Table 1. Notations used in the paper.
      Full Name         Abbreviation
      Inverse Exponentiated         IE
      Inverted Exponentiated Pareto         IEP
      Inverted Exponentiated Exponential         IEE
      Inverted Exponentiated Rayleigh         IER
      Inverse Weibull         IW
      Inverse Gamma         IG
      Maximum Likelihood         ML
      Uniformly Minimum Variance Unbiased         UMVU
      Mean Squared Error         MSE
      Minimum Mean Squared Error         MMSE
      General Entropy Loss Function         GELF
      Squared-Log Error Loss Function         SLELF
      Weighted Squared-Error Loss Function         WSELF
      New Loss Function         NLF
      Integrated Mean Squared Error         IMSE
Table 2. Simulation results for the ML and UMVU estimates of R ( t ) with their MSEs.
nt α = 0.8 α = 2
R ( t ) R ˜ ( t ) UMVU R ^ ( t ) ML R ( t ) R ˜ ( t ) UMVU R ^ ( t ) ML
2020.533230.531820.520460.207630.208820.20567
0.5565  × 10 2 0.5654  × 10 2 0.4525  × 10 2 0.4873  × 10 2
50.318500.318250.310920.057250.057140.06046
0.6035  × 10 2 0.6258  × 10 2 0.1302  × 10 2 0.1322  × 10 2
80.233080.230540.226470.026010.026010.02965
0.5133  × 10 2 0.5206  × 10 2 0.0445  × 10 2 0.0493  × 10 2
4020.533230.531980.526270.207630.204600.20302
0.3000  × 10 2 0.3033  × 10 2 0.2332  × 10 2 0.2528  × 10 2
50.318500.317510.313750.057250.056260.05798
0.3171  × 10 2 0.3283  × 10 2 0.0502  × 10 2 0.0681  × 10 2
80.233070.233600.231400.026430.026430.02833
0.2492  × 10 2 0.2677  × 10 2 0.0212  × 10 2 0.0229  × 10 2
6020.533230.533160.529340.207630.208690.20756
0.1772  × 10 2 0.1882  × 10 2 0.1532  × 10 2 0.1776  × 10 2
50.318500.320590.318020.057250.056980.05814
0.2135  × 10 2 0.2086  × 10 2 0.0413  × 10 2 0.0425  × 10 2
80.233070.232170.230690.026190.026190.02746
0.1906  × 10 2 0.1954  × 10 2 0.0158  × 10 2 0.0161  × 10 2
Table 3. Simulation results for the Bayes estimates of R ( t ) under GELF with their MSEs.
Jeffrey (c = 0.4)Jeffrey (c = 0.8)Gamma Prior
n t R ( t ) c 1 = 1 c 1 = 2 c 1 = 1 c 1 = 2 c 1 = 1 c 1 = 2
R ^ ( t ) GELF R ^ ( t ) GELF R ^ ( t ) GELF R ^ ( t ) GELF R ^ ( t ) GELF R ^ ( t ) GELF
α = 0.8
2020.533230.511370.505420.524910.519030.516910.51148
0.6253  × 10 2 0.6819  × 10 2 0.5707  × 10 2 0.6110  × 10 2 0.5105  × 10 2 0.5521  × 10 2
50.318500.296730.285580.310900.299620.302010.29169
0.6921  × 10 2 0.78219  × 10 2 0.6634  × 10 2 0.7254  × 10 2 0.5827  × 10 2 0.6516  × 10 2
80.233070.210900.198060.223780.210600.215650.20368
0.5759  × 10 2 0.6648  × 10 2 0.5595  × 10 2 0.6205  × 10 2 0.4899  × 10 2 0.5598  × 10 2
4020.533230.521830.519020.528570.525780.524120.52143
0.3191  × 10 2 0.3332  × 10 2 0.3041  × 10 2 0.3144  × 10 2 0.2891  × 10 2 0.3008  × 10 2
50.318500.306670.301250.313850.308420.308970.30377
0.3472  × 10 2 0.3709  × 10 2 0.3379  × 10 2 0.3542  × 10 2 0.3175  × 10 2 0.3374  × 10 2
80.233070.223560.217260.230210.223830.225580.21950
0.2800  × 10 2 0.3008  × 10 2 0.2773  × 10 2 0.2902  × 10 2 0.2577  × 10 2 0.2749  × 10 2
6020.533230.526410.524590.530900.529080.527780.52600
0.1945  × 10 2 0.2001  × 10 2 0.1887  × 10 2 0.1926  × 10 2 0.1823  × 10 2 0.1872  × 10 2
50.318500.313330.309780.318150.314600.314690.31124
0.2144  × 10 2 0.2228  × 10 2 0.2126  × 10 2 0.2177  × 10 2 0.2026  × 10 2 0.2097  × 10 2
80.233070.225460.221280.229910.225700.226770.22269
0.2023  × 10 2 0.2127  × 10 2 0.2001  × 10 2 0.2070  × 10 2 0.1911  × 10 2 0.2001  × 10 2
α = 2
2020.207630.189940.176840.202310.188800.195680.18464
0.5244  × 10 2 0.5989  × 10 2 0.5221  × 10 2 0.5693  × 10 2 0.3681  × 10 2 0.4148  × 10 2
50.057250.048350.038280.054040.043060.050280.04138
0.1195  × 10 2 0.0986  × 10 2 0.1304  × 10 2 0.1312  × 10 2 0.0886  × 10 2 0.0986  × 10 2
80.026220.021100.014470.024250.016790.021960.01597
0.0377  × 10 2 0.0385  × 10 2 0.0433  × 10 2 0.0397  × 10 2 0.0282  × 10 2 0.0303  × 10 2
4020.207630.195040.188510.201360.194740.197700.19175
0.2685  × 10 2 0.29261  × 10 2 0.2630  × 10 2 0.2795  × 10 2 0.2209  × 10 2 0.2391  × 10 2
50.057250.051670.046230.054690.049020.052660.04762
0.0659  × 10 2 0.0706  × 10 2 0.0683  × 10 2 0.0699  × 10 2 0.0556  × 10 2 0.0595  × 10 2
80.026220.023750.019880.025500.021410.0242080.020589
0.0196  × 10 2 0.0201  × 10 2 0.0211  × 10 2 0.0203  × 10 2 0.0167  × 10 2 0.0172  × 10 2
6020.207630.202230.197920.206510.202160.203620.19957
0.1817  × 10 2 0.1900  × 10 2 0.1818  × 10 2 0.1866  × 10 2 0.1597  × 10 2 0.1662  × 10 2
50.057250.053830.050090.055910.052070.054450.050905
0.0413  × 10 2 0.0434  × 10 2 0.0424  × 10 2 0.0430  × 10 2 0.0368  × 10 2 0.0384  × 10 2
80.026220.024370.021700.025570.022800.024680.02214
0.1462  × 10 2 0.0148  × 10 2 0.0153  × 10 2 0.0149  × 10 2 0.0130  × 10 2 0.0132  × 10 2
Table 4. Simulation results for the Bayes estimates of R ( t ) under SLELF with their MSEs.
ntJeffrey ( c = 0.4 )Jeffrey ( c = 0.8 )Gamma Prior
R ( t ) R ^ ( t ) SLELF R ^ ( t ) SLELF R ^ ( t ) SLELF
α = 0.8
2020.533230.517120.530590.52216
0.5783  × 10 2 0.5393  × 10 2 0.4765  × 10 2
50.318500.307410.321690.31191
0.6296  × 10 2 0.6288  × 10 2 0.5371  × 10 2
80.233070.223240.2364270.22717
0.5206  × 10 2 0.5329  × 10 2 0.4491  × 10 2
4020.53323610.52460150.53132260.5267664
0.3069  × 10 2 0.2957  × 10 2 0.2791  × 10 2
50.318500.311960.319170.31407
0.3297  × 10 2 0.3278  × 10 2 0.3035  × 10 2
80.233070.229740.236440.23153
0.2673  × 10 2 0.2724  × 10 2 0.2480  × 10 2
6020.533230.528220.532700.52953
0.1896  × 10 2 0.1854  × 10 2 0.1782  × 10 2
50.318500.316820.321650.31810
0.2087  × 10 2 0.2102  × 10 2 0.1981  × 10 2
80.233070.229580.234060.23079
0.954  × 10 2 0.19689  × 10 2 0.1855  × 10 2
α = 2
2020.207630.202560.215290.20635
0.4837  × 10 2 0.5101  × 10 2 0.3455  × 10 2
50.057250.058890.065430.05946
0.1271  × 10 2 0.1519  × 10 2 0.0935  × 10 2
80.026220.028700.032700.02863
0.0466  × 10 2 0.0595  × 10 2 0.0339  × 10 2
4020.207630.201440.207840.20355
0.2530  × 10 2 0.2552  × 10 2 0.2099  × 10 2
50.057250.057190.060430.05777
0.0668  × 10 2 0.0727  × 10 2 0.0566  × 10 2
80.026220.027840.029820.02800
0.0222  × 10 2 0.0255  × 10 2 0.0188  × 10 2
6020.207630.206480.210800.20762
0.1771  × 10 2 0.1807  × 10 2 0.1566  × 10 2
50.057250.057600.059780.05801
0.0419  × 10 2 0.0446  × 10 2 0.0374  × 10 2
80.026220.027140.028430.02731
0.0158  × 10 2 0.0172  × 10 2 0.0141  × 10 2
Table 5. Simulation results for the Bayes estimates of R(t) under WSELF with their MSEs.
Jeffrey (c = 0.4)Jeffrey (c = 0.8)Gamma Prior
n t R ( t ) k = 0 k = 1 k = 0 k = 1 k = 0 k = 1
R ^ ( t ) WSELF R ^ ( t ) WSELF R ^ ( t ) WSELF R ^ ( t ) WSELF R ^ ( t ) WSELF R ^ ( t ) WSELF
α = 0.8
2020.533230.522680.511370.536080.524940.527250.51691
0.5401  × 10 2 0.6253  × 10 2 0.5161  × 10 2 0.5707  × 10 2 0.4495  × 10 2 0.5105  × 10 2
50.318500.317660.296730.332030.310900.321440.30201
0.5916  × 10 2 0.6921  × 10 2 0.6181  × 10 2 0.6634  × 10 2 0.5122  × 10 2 0.5827  × 10 2
80.233070.235110.210900.248540.223780.238260.21565
0.4955  × 10 2 0.5759  × 10 2 0.5371  × 10 2 0.5595  × 10 2 0.4347  × 10 2 0.4899  × 10 2
4020.533230.527310.521830.534020.528570.529360.52412
0.2966  × 10 2 0.3191  × 10 2 0.2890  × 10 2 0.3041  × 10 2 0.2708  × 10 2 0.2891  × 10 2
50.318500.317150.306670.324380.313850.319060.30897
0.3182  × 10 2 0.3472  × 10 2 0.3235  × 10 2 0.3379  × 10 2 0.2948  × 10 2 0.3175  × 10 2
80.233070.235780.223560.242540.230210.237360.22558
0.2622  × 10 2 0.2800  × 10 2 0.2752  × 10 2 0.2773  × 10 2 0.2453  × 10 2 0.2577  × 10 2
6020.533230.530010.526410.534480.530900.531270.52778
0.1855  × 10 2 0.1945  × 10 2 0.1828  × 10 2 0.1886  × 10 2 0.1747  × 10 2 0.1823  × 10 2
50.318500.320260.313330.325100.318160.321460.31469
0.2052  × 10 2 0.2144  × 10 2 0.2102  × 10 2 0.2126  × 10 2 0.1959  × 10 2 0.2026  × 10 2
80.233070.233650.225460.238150.229910.234760.22677
0.1919  × 10 2 0.2023  × 10 2 0.1968  × 10 2 0.2001  × 10 2 0.1831  × 10 2 0.1911  × 10 2
α = 2
2020.207630.214720.189940.227760.202310.216670.19568
0.4737  × 10 2 0.5244  × 10 2 0.5294  × 10 2 0.5221  × 10 2 0.3450  × 10 2 0.3681  × 10 2
50.057250.069700.048350.077030.054040.068800.05028
0.1556  × 10 2 0.1195  × 10 2 0.1978  × 10 2 0.1304  × 10 2 0.1144  × 10 2 0.0886  × 10 2
80.026220.037040.021100.041880.024250.035820.02196
0.0686  × 10 2 0.0377  × 10 2 0.0918  × 10 2 0.0433  × 10 2 0.0493  × 10 2 0.0282  × 10 2
4020.207630.207710.195040.214180.20130.20920.19770
0.2456  × 10 2 0.2685  × 10 2 0.2555  × 10 2 0.2630  × 10 2 0.2057  × 10 2 0.2209  × 10 2
50.057250.062780.051670.066220.054690.062920.05266
0.0737  × 10 2 0.0659  × 10 2 0.0836  × 10 2 0.0683  × 10 2 0.0627  × 10 2 0.0556  × 10 2
80.026220.032130.023750.034320.025500.031950.02420
0.0283  × 10 2 0.0196  × 10 2 0.0338  × 10 2 0.0211  × 10 2 0.0240  × 10 2 0.0167  × 10 2
6020.207630.210680.202230.215030.206510.211570.20362
0.1761  × 10 2 0.1817  × 10 2 0.1832  × 10 2 0.1818  × 10 2 0.1566  × 10 2 0.1597  × 10 2
50.057250.061390.053830.063670.055910.061600.05445
0.0452  × 10 2 0.0413  × 10 2 0.0497  × 10 2 0.0424  × 10 2 0.0406  × 10 2 0.0367  × 10 2
80.026220.029990.024370.03130.025570.030010.02468
0.0185  × 10 2 0.0146  × 10 2 0.0209  × 10 2 0.0153  × 10 2 0.0166  × 10 2 0.0130  × 10 2
Table 6. Simulation results for the Bayes estimates of R ( t ) under NLF with their MSEs.
Jeffrey (c = 0.4)Jeffrey (c = 0.8)Gamma Prior
n t R ( t ) k = 1 k = 2 k = 1 k = 2 k = 1 k = 2
R ^ ( t ) NLF R ^ ( t ) NLF R ^ ( t ) NLF R ^ ( t ) NLF R ^ ( t ) NLF R ^ ( t ) NLF
α = 0.8
2020.533230.522680.522680.536080.536080.527250.52725
0.5400  × 10 2 0.5398  × 10 2 0.5160  × 10 2 0.5158  × 10 2 0.4495  × 10 2 0.4493  × 10 2
50.318500.317680.317750.332050.332120.321460.32152
0.5915  × 10 2 0.5911  × 10 2 0.6181  × 10 2 0.6178  × 10 2 0.5121  × 10 2 0.5118  × 10 2
80.233070.235140.235230.248570.248670.238290.23837
0.4955  × 10 2 0.4954  × 10 2 0.5372  × 10 2 0.5373  × 10 2 0.1831  × 10 2 0.1831  × 10 2
4020.533230.527310.527310.534020.534020.529360.52936
0.2965  × 10 2 0.2965  × 10 2 0.2890  × 10 2 0.2890  × 10 2 0.2708  × 10 2 0.2707  × 10 2
50.318500.317150.317170.324380.324400.319060.31908
0.3182  × 10 2 0.3181  × 10 2 0.3235  × 10 2 0.3234  × 10 2 0.2948  × 10 2 0.2948  × 10 2
80.233070.235790.235810.242550.242580.2373740.23739
0.2622  × 10 2 0.2622  × 10 2 0.2752  × 10 2 0.2752  × 10 2 0.2453  × 10 2 0.2453  × 10 2
6020.533230.530010.530010.534480.534480.531270.53127
0.1855  × 10 2 0.1855  × 10 2 0.1828  × 10 2 0.1828  × 10 2 0.1747  × 10 2 0.1747  × 10 2
50.318500.320270.320280.325110.325120.321450.32146
0.2054  × 10 2 0.2054  × 10 2 0.2102  × 10 2 0.2102  × 10 2 0.1959  × 10 2 0.1959  × 10 2
80.233070.233650.233660.238150.238170.234760.23477
0.1919  × 10 2 0.1919  × 10 2 0.1968  × 10 2 0.1968  × 10 2 0.2382  × 10 2 0.2383773
α = 2
2020.207630.214750.214850.227790.227890.216690.21676
0.4738  × 10 2 0.4739  × 10 2 0.5295  × 10 2 0.5298  × 10 2 0.3450  × 10 2 0.3451  × 10 2
50.057250.069710.069760.077050.077100.069710.06976
0.1557  × 10 2 0.1561  × 10 2 0.1980  × 10 2 0.1984  × 10 2 0.1144  × 10 2 0.1146  × 10 2
80.026220.037040.037060.041890.041920.035820.03584
0.0686  × 10 2 0.0688  × 10 2 0.0919  × 10 2 0.0921  × 10 2 0.0493  × 10 2 0.0493  × 10 2
4020.207630.207720.207740.214190.214220.209290.20931
0.2456  × 10 2 0.2456  × 10 2 0.2556  × 10 2 0.2556  × 10 2 0.2057  × 10 2 0.2057  × 10 2
50.057250.062780.062790.066220.066230.062780.06279
0.0737  × 10 2 0.0737  × 10 2 0.0836  × 10 2 0.0836  × 10 2 0.0627  × 10 2 0.0627  × 10 2
80.026220.032130.032130.034330.034330.031950.03195
0.0283  × 10 2 0.0283  × 10 2 0.0338  × 10 2 0.03387  × 10 2 0.0240  × 10 2 0.0240  × 10 2
6020.207630.210680.210690.215030.215040.211570.21158
0.1761  × 10 2 0.1761  × 10 2 0.1832  × 10 2 0.1833  × 10 2 0.1566  × 10 2 0.1566  × 10 2
50.057250.061400.061400.063670.063670.061400.06140
0.0452  × 10 2 0.0453  × 10 2 0.0497  × 10 2 0.0497  × 10 2 0.0407  × 10 2 0.0407  × 10 2
80.026220.029990.029990.031380.031380.030010.03001
0.0185  × 10 2 0.0185  × 10 2 0.0209  × 10 2 0.0209  × 10 2 0.0167  × 10 2 0.0166  × 10 2
Table 7. ML estimators of the fitted models and goodness-of-fit measures for the first dataset (20 mm).
Model (parameter MLEs; K-S statistic; p-value):
IEP: $\hat{\alpha}$ = 567.3762, $\hat{\lambda}$ = 19.6035; K-S = 0.038504; p-value = 1.0000
IEE: $\hat{\alpha}$ = 205.8833, $\hat{\lambda}$ = 13.8826; K-S = 0.041412; p-value = 0.9998
IER: $\hat{\alpha}$ = 10.2986, $\hat{\lambda}$ = 15.6410; K-S = 0.077741; p-value = 0.7985
Gamma: $\hat{\alpha}$ = 23.38457, $\hat{\beta}$ = 9.5394; K-S = 0.058257; p-value = 0.9626
IG: $\hat{\alpha}$ = 21.29093, $\hat{\beta}$ = 49.8848; K-S = 0.08666; p-value = 0.6460
IW: $\hat{\beta}$ = 4.1266, $\hat{\lambda}$ = 2.1437; K-S = 0.13362; p-value = 0.1553
Table 8. ML estimators of the fitted models and goodness-of-fit measures for the second dataset (10 mm).
Model (parameter MLEs; K-S statistic; p-value):
IEP: $\hat{\alpha}$ = 357.3735, $\hat{\lambda}$ = 22.00747; K-S = 0.10029; p-value = 0.5506
IEE: $\hat{\alpha}$ = 173.4152, $\hat{\lambda}$ = 16.7685; K-S = 0.10066; p-value = 0.5459
IER: $\hat{\alpha}$ = 11.2845, $\hat{\lambda}$ = 25.3058; K-S = 0.097417; p-value = 0.5883
Gamma: $\hat{\alpha}$ = 25.4997, $\hat{\beta}$ = 8.3394; K-S = 0.10462; p-value = 0.4645
IG: $\hat{\alpha}$ = 25.9952, $\hat{\beta}$ = 76.4390; K-S = 0.091546; p-value = 0.6331
IW: $\hat{\beta}$ = 7.3235, $\hat{\lambda}$ = 2.6758; K-S = 0.20162; p-value = 0.0101
Table 9. The estimates of the reliability function for different choices of t for the first dataset (20 mm).
Estimator: t = 2 / t = 3 / t = 4
$\hat{R}(t)_{ML}$: 0.81849 / 0.13262 / 0.00075
$\tilde{R}(t)_{UMVU}$: 0.82053 / 0.13257 / 0.00056
$\hat{R}(t)_{GELF}$: 0.82029 / 0.12852 / 0.00035
$\hat{R}(t)_{SLELF}$: 0.82077 / 0.13657 / 0.00083
$\hat{R}(t)_{WSELF}$: 0.82100 / 0.14052 / 0.00118
$\hat{R}(t)_{NLF}$: 0.82100 / 0.14053 / 0.00118
Table 10. The estimates of the reliability function for different choices of t for the second dataset (10 mm).
Estimator: t = 2 / t = 3 / t = 4
$\hat{R}(t)_{ML}$: 0.95349 / 0.52904 / 0.07120
$\tilde{R}(t)_{UMVU}$: 0.95419 / 0.53272 / 0.07019
$\hat{R}(t)_{GELF}$: 0.95417 / 0.53099 / 0.06614
$\hat{R}(t)_{SLELF}$: 0.95421 / 0.53441 / 0.07424
$\hat{R}(t)_{WSELF}$: 0.95422 / 0.53610 / 0.07829
$\hat{R}(t)_{NLF}$: 0.95422 / 0.53700 / 0.07831