Article

Parameter Estimation of Exponentiated Half-Logistic Distribution for Left-Truncated and Right-Censored Data

School of Mathematics and Statistics, Beijing Jiaotong University, Beijing 100044, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(20), 3838; https://doi.org/10.3390/math10203838
Submission received: 27 August 2022 / Revised: 26 September 2022 / Accepted: 13 October 2022 / Published: 17 October 2022
(This article belongs to the Section Probability and Statistics)

Abstract:
Left-truncated and right-censored data are widely used in lifetime experiments, biomedicine, labor economics, and actuarial science. This article discusses how to resolve the problems of statistical inferences on the unknown parameters of the exponentiated half-logistic distribution based on left-truncated and right-censored data. In the beginning, maximum likelihood estimations are calculated. Then, asymptotic confidence intervals are constructed by using the observed Fisher information matrix. To cope with the small sample size scenario, we employ the percentile bootstrap method and the bootstrap-t method for the establishment of confidence intervals. In addition, Bayesian estimations under both symmetric and asymmetric loss functions are addressed. Point estimates are computed by Tierney–Kadane’s approximation and importance sampling procedure, which is also applied to establishing corresponding highest posterior density credible intervals. Lastly, simulated and real data sets are presented and analyzed to show the effectiveness of the proposed methods.

1. Introduction

1.1. Exponentiated Half-Logistic Distribution

The exponentiated half-logistic distribution is widely applied in life testing and reliability analysis, and it has attracted great interest from scholars over the past decades. Kang et al. [1] derived the maximum likelihood estimators and approximate maximum likelihood estimators of the scale parameter based on progressively Type II censored data. Rastogi et al. [2] studied Bayes estimates of the exponentiated half-logistic distribution. Gadde et al. [3] applied the exponentiated half-logistic distribution to the truncated life test and proposed acceptance sampling for the percentiles of the distribution. Seo et al. [4] showed that the exponentiated half-logistic distribution can substitute for two-parameter families of distributions, such as the gamma distribution and the exponentiated exponential distribution. Gui [5] obtained a suitable condition for the existence of maximum likelihood estimates of the parameters of the exponentiated half-logistic distribution.
When the shape parameter equals one, the exponentiated half-logistic distribution reduces to the half-logistic distribution, which is highly suitable for the survival analysis of censored data. Based on progressively Type II censored samples, Balakrishnan et al. [6] derived the maximum likelihood estimator of the scale parameter of the half-logistic distribution. Building on previous studies, Gile [7] conducted an in-depth study of the bias of the maximum likelihood estimator. Wang [8] derived an exact confidence interval for the scale parameter of the scaled half-logistic distribution based on a progressively Type II censored sample. Rosaiah et al. [9] developed a reliability test plan for the half-logistic distribution.
In this article, it is assumed that the lifetime of units follows the exponentiated half-logistic distribution. The probability density function (PDF) has the form of:
$$f(x;\sigma,\lambda)=\frac{2\lambda e^{x/\sigma}}{\sigma\left(1+e^{x/\sigma}\right)^{2}}\left(1-\frac{2}{1+e^{x/\sigma}}\right)^{\lambda-1},\quad x>0,\ \sigma,\lambda>0,$$
where σ is the scale parameter and λ is the shape parameter.
The cumulative distribution function (CDF) is written as:
$$F(x;\sigma,\lambda)=\left(1-\frac{2}{1+e^{x/\sigma}}\right)^{\lambda},\quad x>0,\ \sigma,\lambda>0.$$
Based on the PDF and CDF, the reliability function and failure rate function are derived:
$$R(t)=1-\left(1-\frac{2}{1+e^{t/\sigma}}\right)^{\lambda},\quad t>0,$$
$$h(t)=\frac{2\lambda e^{t/\sigma}}{\sigma\left(1+e^{t/\sigma}\right)^{2}}\left(1-\frac{2}{1+e^{t/\sigma}}\right)^{\lambda-1}\left[1-\left(1-\frac{2}{1+e^{t/\sigma}}\right)^{\lambda}\right]^{-1},\quad t>0.$$
Figure 1 shows the PDF and CDF of the exponentiated half-logistic distribution. The PDF is right-skewed and unimodal when $\lambda\sigma>1$, while it decreases monotonically when $\lambda\sigma\le 1$. The PDF becomes more fat-tailed as $\sigma$ grows. As for the CDF, the smaller $\sigma$ and $\lambda$ are, the faster it rises.
Based on L’Hospital’s rule, the failure rate function approaches $1/\sigma$ as $x$ approaches infinity, as Figure 2 shows. According to [10], the gamma distribution and the exponentiated half-logistic distribution have similarly shaped PDFs and similar failure rate behaviors when their shape parameters are larger than one. However, the CDF, reliability, and failure rate functions of the gamma distribution have no closed form when the shape parameter is not an integer. Thus, the exponentiated half-logistic distribution, whose distribution functions all have closed forms, is more tractable than the gamma distribution.
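Because every piece of the distribution has a closed form, the functions above are cheap to evaluate. The following Python sketch (function names are our own, not from the paper, which uses R) evaluates the PDF, CDF, and failure rate, and illustrates numerically that $h(t)$ levels off at $1/\sigma$:

```python
import math

# Closed-form evaluation of the EHL distribution functions; names are illustrative.

def ehl_cdf(x, sigma, lam):
    """CDF: [1 - 2/(1 + e^(x/sigma))]^lambda."""
    return (1.0 - 2.0 / (1.0 + math.exp(x / sigma))) ** lam

def ehl_pdf(x, sigma, lam):
    """PDF: derivative of the CDF with respect to x."""
    u = math.exp(x / sigma)
    return (2.0 * lam * u / (sigma * (1.0 + u) ** 2)
            * (1.0 - 2.0 / (1.0 + u)) ** (lam - 1.0))

def ehl_hazard(t, sigma, lam):
    """Failure rate h(t) = f(t) / (1 - F(t)); tends to 1/sigma as t grows."""
    return ehl_pdf(t, sigma, lam) / (1.0 - ehl_cdf(t, sigma, lam))

sigma, lam = 2.0, 3.0
rate_tail = ehl_hazard(40.0, sigma, lam)   # numerically close to 1/sigma = 0.5
```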

1.2. Left-Truncated and Right-Censored Data

As for data collection, it is impractical to record the entire lifetimes of all units with limited time and cost. For example, waiting until all the bulbs have failed in a lifetime test of light bulbs is infeasible. Researchers may only collect partial observations and stop the experiment when a predetermined time has elapsed or when the number of observed failures has reached a pre-specified value, which gives rise to Type I censoring and Type II censoring, respectively. In this paper, Type I censoring is further discussed.
On the other hand, truncation arises when an observation is available only if its value falls within a specific range; data outside the predetermined range are not recorded. Truncation includes left and right truncation, and only left truncation is discussed in this paper: each unit is observable only if its lifetime exceeds the left-truncation time $\tau^L$.
Left truncation and right censoring naturally occur in engineering, asset management, maintenance of instruments, reliability, and survival analysis studies. Hong et al. [11] provided a well-known real example of left-truncated and right-censored data. The data set consists of the lifetimes of approximately 150,000 high-voltage power transformers from an energy company, collected from 1980 to 2008. The transformers were installed at different times in the past, and failure times were random; the lifetime information is available only for transformers that failed after 1980. In [11], a parametric model was built to explain the lifetime distribution. Balakrishnan et al. [12] applied the lognormal, Weibull, gamma, and generalized gamma distributions to mimic the lifetime distribution and provided a more detailed analysis; they used the expectation maximization algorithm to calculate maximum likelihood estimates of the unknown parameters. Kundu et al. [13] solved the problem from a Bayesian perspective and incorporated competing risks into the parametric analysis. In [14], another illustrative example of left truncation and right censoring, the Channing House data, is analyzed. The data set consists of 97 men and 365 women covered by a health care program at a retirement center in California. Data were collected from 1964 to 1975 to study differences in survival between genders. There is a restriction that only people aged 60 or older are permitted to enter the center; thus, all the data are left-truncated. At the same time, the lifetimes of people who survived past 1975 or exited before the experiment stopped are right-censored.
Motivated by the previous work above, this paper discusses parametric estimations of left-truncated and right-censored data under an exponentiated half-logistic distribution. The rest of the paper is organized as follows: In Section 2, the maximum likelihood estimates of parameters are computed. Asymptotic confidence intervals are constructed by using the observed Fisher information matrix. In Section 3, the percentile bootstrap method and bootstrap-t method are proposed to build confidence intervals in case of a small sample size. In Section 4, Bayesian estimates are provided. Point estimates are carried out by Tierney–Kadane’s approximation and importance sampling procedure. The importance sampling procedure is also applied to build the highest posterior density credible intervals. In Section 5, simulations are carried out. Comparisons between estimators from different methods are also presented for illustrative purposes. An authentic data set, the Channing House data set, is analyzed to show the effectiveness of proposed methods in Section 6, and Section 7 concludes the paper.

2. Maximum Likelihood Estimation

In this section, point estimates and asymptotic confidence intervals are derived by maximum likelihood estimation. Assume there are $n$ observable units in total in the lifetime test, namely $\{X_1, X_2, \ldots, X_n\}$. The random variable $X_i$ denotes the lifetime of the $i$-th unit, with observed value $x_i$. The order of units is random, and units are put on test at different times. For the $i$-th unit, there is a predetermined left-truncation time point $\tau_i^L$ and a right-censoring time point $\tau_i^R$ ($\tau_i^R>\tau_i^L$), which may differ between units. It is important to note that a unit that fails before $\tau_i^L$ is not considered in this paper: all $n$ observable units are either put on test before $\tau_i^L$ and survive until $\tau_i^L$, or put on test after $\tau_i^L$. Units that fail after $\tau_i^L$ may still be censored at $\tau_i^R$. In this paper, Type I right censoring is discussed, so the right-censoring time of every unit is fixed at the same time point, $\tau_i^R\equiv\tau^R$ $(i=1,2,\ldots,n)$.
Moreover, we introduce a truncation indicator a i ( i   =   1 ,   2 ,     ,   n ) and a censoring indicator b i ( i   =   1 ,   2 ,     ,   n ) , which are defined as follows:
$$a_i=\begin{cases}0,&\text{if }x_i\text{ is left-truncated (put on test before }\tau_i^L\text{)},\\1,&\text{if }x_i\text{ is not truncated (put on test after }\tau_i^L\text{)},\end{cases}$$
$$b_i=\begin{cases}0,&\text{if }x_i\text{ is right-censored (fails after }\tau^R\text{)},\\1,&\text{if }x_i\text{ is not censored (fails before }\tau^R\text{)}.\end{cases}$$
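Under the setup above, a left-truncated and right-censored sample can be simulated with inverse-CDF draws plus the $(a_i, b_i)$ bookkeeping. The Python sketch below is illustrative only (the names, the truncation probability `p_trunc`, and the common $\tau^L$, $\tau^R$ are our own assumptions, not from the paper); censored units record $\tau^R$ as their observed value:

```python
import math
import random

def ehl_sample(sigma, lam, rng):
    """Inverse-CDF draw: x = sigma * ln((1 + u^(1/lam)) / (1 - u^(1/lam)))."""
    v = rng.random() ** (1.0 / lam)
    return sigma * math.log((1.0 + v) / (1.0 - v))

def ltrc_sample(n, sigma, lam, tau_L, tau_R, p_trunc, rng):
    """Return triples (x_i, a_i, b_i): a_i = 0 for left-truncated units
    (which must outlive tau_L to be observable), b_i = 0 for units censored
    at the common Type I censoring time tau_R (x_i then records tau_R)."""
    data = []
    for _ in range(n):
        a = 0 if rng.random() < p_trunc else 1
        x = ehl_sample(sigma, lam, rng)
        while a == 0 and x <= tau_L:      # left truncation: redraw until x > tau_L
            x = ehl_sample(sigma, lam, rng)
        b = 1 if x < tau_R else 0
        data.append((min(x, tau_R), a, b))
    return data

rng = random.Random(2022)
data = ltrc_sample(200, 1.0, 2.0, tau_L=0.5, tau_R=4.0, p_trunc=0.5, rng=rng)
```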

2.1. Point Estimates

In this paper, we assume that the lifetimes follow the exponentiated half-logistic distribution $EHL(\sigma,\lambda)$. Based on the observations $\{(x_i,a_i,b_i):i=1,2,\ldots,n\}$, the PDF and the CDF, the likelihood function can be expressed as:
$$L(\boldsymbol{x}\mid\sigma,\lambda)=\prod_{i=1}^{n}\left[f(x_i)\right]^{a_ib_i}\left[1-F(x_i)\right]^{a_i(1-b_i)}\left[\frac{f(x_i)}{1-F(\tau_i^L)}\right]^{(1-a_i)b_i}\left[\frac{1-F(x_i)}{1-F(\tau_i^L)}\right]^{(1-b_i)(1-a_i)}=\prod_{i=1}^{n}\left[f(x_i)\right]^{b_i}\left[1-F(x_i)\right]^{1-b_i}\left[\frac{1}{1-F(\tau_i^L)}\right]^{1-a_i},$$
where x denotes data ( x 1 ,   x 2 ,     ,   x n ) .
The PDF and CDF have been displayed in Functions (1) and (2), respectively. Therefore, the likelihood function is transformed into:
$$\begin{aligned}L(\boldsymbol{x}\mid\sigma,\lambda)&=\prod_{i=1}^{n}\left[\frac{2\lambda e^{x_i/\sigma}}{\sigma\left(1+e^{x_i/\sigma}\right)^{2}}\left(1-\frac{2}{1+e^{x_i/\sigma}}\right)^{\lambda-1}\right]^{b_i}\left[1-\left(1-\frac{2}{1+e^{x_i/\sigma}}\right)^{\lambda}\right]^{1-b_i}\left[1-\left(1-\frac{2}{1+e^{\tau_i^L/\sigma}}\right)^{\lambda}\right]^{a_i-1}\\&=\left(\frac{2\lambda}{\sigma}\right)^{\sum_{i=1}^{n}b_i}e^{\lambda\sum_{i=1}^{n}b_i\ln\left(1-\frac{2}{1+e^{x_i/\sigma}}\right)-\frac{1}{\sigma}\sum_{i=1}^{n}b_ix_i}\prod_{i=1}^{n}\left(1-e^{-2x_i/\sigma}\right)^{-b_i}\\&\quad\times\prod_{i=1}^{n}\left[1-\left(1-\frac{2}{1+e^{x_i/\sigma}}\right)^{\lambda}\right]^{1-b_i}\prod_{i=1}^{n}\left[1-\left(1-\frac{2}{1+e^{\tau_i^L/\sigma}}\right)^{\lambda}\right]^{a_i-1}.\end{aligned}$$
The corresponding log-likelihood function can be written as:
$$\begin{aligned}l(\boldsymbol{x}\mid\sigma,\lambda)&=\sum_{i=1}^{n}\left\{b_i\ln f(x_i)+(1-b_i)\ln\left[1-F(x_i)\right]+(a_i-1)\ln\left[1-F(\tau_i^L)\right]\right\}\\&=\sum_{i=1}^{n}\Bigg\{b_i\left[\ln 2+\ln\lambda-\ln\sigma+\lambda\ln\left(1-\frac{2}{1+e^{x_i/\sigma}}\right)-\frac{x_i}{\sigma}-\ln\left(1-e^{-2x_i/\sigma}\right)\right]\\&\quad+(1-b_i)\ln\left[1-\left(1-\frac{2}{1+e^{x_i/\sigma}}\right)^{\lambda}\right]+(a_i-1)\ln\left[1-\left(1-\frac{2}{1+e^{\tau_i^L/\sigma}}\right)^{\lambda}\right]\Bigg\}.\end{aligned}$$
To compute the maximum likelihood estimates (MLEs) of $\sigma$ and $\lambda$, the first partial derivatives are set to zero:
$$\frac{\partial l}{\partial\sigma}=\frac{1}{\sigma}\sum_{i=1}^{n}\left\{-b_i\left[1+\left(1-\frac{1}{\lambda}\right)\zeta(x_i)x_i-\frac{1}{\sigma}F(x_i)^{1/\lambda}x_i\right]+(1-b_i)\eta(x_i)x_i-(1-a_i)\eta(\tau_i^L)\tau_i^L\right\}=0,$$
$$\frac{\partial l}{\partial\lambda}=\frac{1}{\lambda}\sum_{i=1}^{n}\left\{b_i\left[1+\ln F(x_i)\right]-(1-b_i)G(x_i)F(x_i)+(1-a_i)G(\tau_i^L)F(\tau_i^L)\right\}=0,$$
where $\zeta(x)=\frac{f(x)}{F(x)}$, $\eta(x)=\frac{f(x)}{1-F(x)}$, and $G(x)=\frac{\ln F(x)}{1-F(x)}$.
Then the MLEs of $\sigma$ and $\lambda$, written as $\hat\sigma$ and $\hat\lambda$, are the roots of Equations (8) and (9). However, analytic roots are hard to find due to the nonlinearity and non-closed forms of the equations. Thus, the Newton–Raphson method, a standard numerical method, is applied to solve the equations approximately and obtain the MLEs, which can be implemented in R.
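The paper solves the score equations with Newton–Raphson in R. An equivalent route, sketched below in Python under assumed simulation settings, is to minimize the negative of the log-likelihood in Equation (7) directly with a derivative-free optimizer; the data-generation step and all names are illustrative, not the authors' code:

```python
import math
import random
from scipy.optimize import minimize

def neg_loglik(theta, data):
    """Negative of the log-likelihood in Equation (7):
    sum of b*ln f(x) + (1-b)*ln(1-F(x)) + (a-1)*ln(1-F(tau_L))."""
    sigma, lam = theta
    if sigma < 0.05 or lam <= 0:   # crude bound keeping exp() in range (sketch only)
        return math.inf
    ll = 0.0
    for x, a, b, tau_L in data:
        F0 = 1.0 - 2.0 / (1.0 + math.exp(x / sigma))
        if b == 1:      # observed failure: ln f(x)
            ll += (math.log(2.0 * lam / sigma) + lam * math.log(F0)
                   - x / sigma - math.log(1.0 - math.exp(-2.0 * x / sigma)))
        else:           # right-censored at tau_R: ln(1 - F(x))
            ll += math.log(1.0 - F0 ** lam)
        if a == 0:      # left-truncated: condition on survival past tau_L
            F0L = 1.0 - 2.0 / (1.0 + math.exp(tau_L / sigma))
            ll -= math.log(1.0 - F0L ** lam)
    return -ll

# illustrative LTRC data from EHL(sigma=1, lambda=2) via the inverse CDF
rng = random.Random(7)
def draw(sigma, lam):
    v = rng.random() ** (1.0 / lam)
    return sigma * math.log((1.0 + v) / (1.0 - v))

data = []
for i in range(300):
    a, tau_L, tau_R = (0 if i % 2 == 0 else 1), 0.5, 4.0
    x = draw(1.0, 2.0)
    while a == 0 and x <= tau_L:          # truncated units must outlive tau_L
        x = draw(1.0, 2.0)
    data.append((min(x, tau_R), a, 1 if x < tau_R else 0, tau_L))

res = minimize(neg_loglik, x0=[1.0, 2.0], args=(data,), method="Nelder-Mead")
sigma_hat, lam_hat = res.x
```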

2.2. Asymptotic Confidence Intervals

To derive the asymptotic confidence intervals of $\lambda$ and $\sigma$, we further discuss the variance–covariance matrix. When the sample size is large, the MLEs $\hat\sigma$ and $\hat\lambda$ asymptotically follow normal distributions. In this way, we can obtain asymptotic confidence intervals from the variance–covariance matrix, which is the inverse of the Fisher information matrix. The Fisher information matrix $I(\sigma,\lambda)$ is the matrix generalization of Fisher information; it measures the average amount of information about the unknown parameters that a sample of random variables carries.

2.2.1. Expected Fisher Information Matrix

The expected Fisher information matrix can be calculated by taking the expectation of the negative second derivative of the log-likelihood function:
$$I(\sigma,\lambda)=-E\begin{pmatrix}\frac{\partial^{2}l(\sigma,\lambda)}{\partial\sigma^{2}}&\frac{\partial^{2}l(\sigma,\lambda)}{\partial\sigma\partial\lambda}\\[4pt]\frac{\partial^{2}l(\sigma,\lambda)}{\partial\lambda\partial\sigma}&\frac{\partial^{2}l(\sigma,\lambda)}{\partial\lambda^{2}}\end{pmatrix}.$$
Here, the second partial derivatives are as follows:
$$\frac{\partial^{2}l}{\partial\sigma^{2}}=\frac{1}{\sigma^{2}}\sum_{i=1}^{n}\left\{b_iA_{1,1}^{(i)}(x_i)+(1-b_i)A_{1,2}^{(i)}(x_i)-(1-a_i)\left[\eta(\tau_i^L)^{2}+\left(H(\tau_i^L)-1\right)\eta(\tau_i^L)\right]\tau_i^L\right\},$$
$$\frac{\partial^{2}l}{\partial\lambda\partial\sigma}=\frac{\partial^{2}l}{\partial\sigma\partial\lambda}=\frac{1}{\lambda\sigma}\sum_{i=1}^{n}\left\{-b_i\zeta(x_i)x_i+(1-b_i)A_{2,1}^{(i)}(x_i)-(1-a_i)\eta(\tau_i^L)\tau_i^L\left[1+G(\tau_i^L)\right]\right\},$$
$$\frac{\partial^{2}l}{\partial\lambda^{2}}=-\frac{1}{\lambda^{2}}\sum_{i=1}^{n}\left[b_i+(1-b_i)G(x_i)^{2}F(x_i)-(1-a_i)G(\tau_i^L)^{2}F(\tau_i^L)\right],$$
where
$$\begin{aligned}A_{1,1}^{(i)}(x_i)&=1+\left(1-\frac{1}{\lambda}\right)\left[\left(1-H(x_i)\right)\zeta(x_i)-\zeta(x_i)^{2}x_i\right]x_i-\frac{x_i}{\sigma}F(x_i)^{1/\lambda}\left[2+\frac{1}{\lambda}\zeta(x_i)x_i\right],\\A_{1,2}^{(i)}(x_i)&=\left[\eta(x_i)^{2}+\left(H(x_i)-1\right)\eta(x_i)\right]x_i,\\A_{2,1}^{(i)}(x_i)&=\eta(x_i)\left[1+G(x_i)\right]x_i,\end{aligned}$$
and $H(x)=1+\frac{x}{\sigma}F(x)^{1/\lambda}+\left(1-\frac{1}{\lambda}\right)x\zeta(x)$.

2.2.2. Observed Fisher Information Matrix

To simplify the calculation, the observed Fisher information matrix computed from the sample is employed to approximate the expected Fisher information matrix. The observed Fisher information matrix is the value of the negative Hessian of the log-likelihood at the point $(\hat\sigma,\hat\lambda)$:
$$I(\hat\sigma,\hat\lambda)=-\begin{pmatrix}\frac{\partial^{2}l(\sigma,\lambda)}{\partial\sigma^{2}}&\frac{\partial^{2}l(\sigma,\lambda)}{\partial\sigma\partial\lambda}\\[4pt]\frac{\partial^{2}l(\sigma,\lambda)}{\partial\sigma\partial\lambda}&\frac{\partial^{2}l(\sigma,\lambda)}{\partial\lambda^{2}}\end{pmatrix}\Bigg|_{(\sigma,\lambda)=(\hat\sigma,\hat\lambda)},$$
where $\hat\sigma$ and $\hat\lambda$ are the MLEs of $\sigma$ and $\lambda$, respectively.
The variances of σ and λ can be obtained by computing the inverse matrix of the observed Fisher information matrix:
$$\begin{pmatrix}\mathrm{Var}(\hat\sigma)&\mathrm{Cov}(\hat\sigma,\hat\lambda)\\\mathrm{Cov}(\hat\lambda,\hat\sigma)&\mathrm{Var}(\hat\lambda)\end{pmatrix}=\left[-\begin{pmatrix}\frac{\partial^{2}l(\sigma,\lambda)}{\partial\sigma^{2}}&\frac{\partial^{2}l(\sigma,\lambda)}{\partial\sigma\partial\lambda}\\[4pt]\frac{\partial^{2}l(\sigma,\lambda)}{\partial\sigma\partial\lambda}&\frac{\partial^{2}l(\sigma,\lambda)}{\partial\lambda^{2}}\end{pmatrix}\right]^{-1}\Bigg|_{(\sigma,\lambda)=(\hat\sigma,\hat\lambda)},$$
where $\mathrm{Var}(\hat\sigma)$ and $\mathrm{Var}(\hat\lambda)$ are the variances of $\hat\sigma$ and $\hat\lambda$, and $\mathrm{Cov}(\hat\sigma,\hat\lambda)=\mathrm{Cov}(\hat\lambda,\hat\sigma)$ is their covariance.
Thus, $100(1-p)\%$ asymptotic confidence intervals of the two parameters can be constructed as
$$\left(\hat\sigma-\mu_{p/2}\sqrt{\mathrm{Var}(\hat\sigma)},\ \hat\sigma+\mu_{p/2}\sqrt{\mathrm{Var}(\hat\sigma)}\right),$$
and
$$\left(\hat\lambda-\mu_{p/2}\sqrt{\mathrm{Var}(\hat\lambda)},\ \hat\lambda+\mu_{p/2}\sqrt{\mathrm{Var}(\hat\lambda)}\right),$$
where $\mu_{p/2}$ denotes the upper $p/2$ quantile of the standard normal distribution.
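The invert-the-observed-Hessian recipe can be checked on a model whose observed information is known exactly. The sketch below uses a normal likelihood (a stand-in, not the EHL likelihood) and a finite-difference Hessian, so the numerical variance–covariance matrix can be validated against the closed forms $\mathrm{Var}(\hat\mu)=\hat\sigma^2/n$ and $\mathrm{Var}(\hat\sigma)=\hat\sigma^2/(2n)$; names are illustrative:

```python
import math
import random
import numpy as np

def neg_loglik(theta, xs):
    mu, sig = theta  # negative log-likelihood of N(mu, sig^2), constants dropped
    return len(xs) * math.log(sig) + sum((x - mu) ** 2 for x in xs) / (2.0 * sig ** 2)

def hessian(f, theta, eps=1e-4):
    """Central-difference Hessian of the scalar function f at theta."""
    t = [float(v) for v in theta]
    k = len(t)
    H = np.empty((k, k))
    for i in range(k):
        for j in range(k):
            def shifted(si, sj):
                u = list(t)
                u[i] += si
                u[j] += sj
                return f(u)
            H[i, j] = (shifted(eps, eps) - shifted(eps, -eps)
                       - shifted(-eps, eps) + shifted(-eps, -eps)) / (4.0 * eps ** 2)
    return H

rng = random.Random(1)
xs = [rng.gauss(5.0, 2.0) for _ in range(400)]
mu_hat = sum(xs) / len(xs)
sig_hat = math.sqrt(sum((x - mu_hat) ** 2 for x in xs) / len(xs))

# variance-covariance matrix = inverse of the observed information matrix
cov = np.linalg.inv(hessian(lambda th: neg_loglik(th, xs), [mu_hat, sig_hat]))
z = 1.96  # upper p/2 quantile of N(0, 1) for p = 0.05
ci_mu = (mu_hat - z * math.sqrt(cov[0, 0]), mu_hat + z * math.sqrt(cov[0, 0]))
```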

3. Bootstrap Confidence Intervals

When the sample size is small, the classical theory for constructing confidence intervals may not work well. Bootstrap methods deal with small samples by resampling. Two widely used bootstrap methods are the percentile bootstrap method (boot-p) and the bootstrap-t method (boot-t). The boot-p method uses the empirical distribution of the bootstrap estimates, rather than that of the original sample, to establish confidence intervals. The boot-t method converts the bootstrap estimates into corresponding t-statistics. The algorithms are demonstrated as follows:

3.1. Percentile Bootstrap Confidence Intervals

From Algorithm 1, we can obtain the ascending sequences of MLEs $(\hat\sigma^*_{(1)},\hat\sigma^*_{(2)},\ldots,\hat\sigma^*_{(N_{boot})})$ and $(\hat\lambda^*_{(1)},\hat\lambda^*_{(2)},\ldots,\hat\lambda^*_{(N_{boot})})$. Thus, $100(1-p)\%$ percentile bootstrap confidence intervals of $\sigma$ and $\lambda$ are $(\hat\sigma^*_{(l)},\hat\sigma^*_{(r)})$ and $(\hat\lambda^*_{(l)},\hat\lambda^*_{(r)})$, where $l=\lfloor\frac{p}{2}N_{boot}\rfloor$, $r=\lfloor(1-\frac{p}{2})N_{boot}\rfloor$, and the operator $\lfloor\cdot\rfloor$ rounds down to the nearest integer.
Algorithm 1 Establishing percentile bootstrap confidence intervals.
Step I
Set the number of simulations as $N_{boot}$.
Step II
Generate the original left-truncated and right-censored sample $\{x_1,x_2,\ldots,x_n\}$ from $EHL(\sigma,\lambda)$ (skip this step if the original sample already exists).
Step III
Compute the initial MLEs of the original samples as σ ^ and λ ^ (the roots of Equations (8) and (9)).
Step IV
Regenerate a new left-truncated and right-censored sample $\{x_1^*,x_2^*,\ldots,x_n^*\}$ from $EHL(\hat\sigma,\hat\lambda)$. Compute the MLEs of the new sample as $\hat\sigma_1^*$ and $\hat\lambda_1^*$.
Step V
Repeat Steps III–IV $N_{boot}$ times; a pair of regenerated MLEs $(\hat\sigma_i^*,\hat\lambda_i^*)$ is obtained in the $i$-th iteration. Rearrange the $\hat\sigma_i^*$ and $\hat\lambda_i^*$ separately in ascending order as $(\hat\sigma^*_{(1)},\hat\sigma^*_{(2)},\ldots,\hat\sigma^*_{(N_{boot})})$ and $(\hat\lambda^*_{(1)},\hat\lambda^*_{(2)},\ldots,\hat\lambda^*_{(N_{boot})})$.
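Algorithm 1 can be sketched compactly. The toy below bootstraps the MLE of an exponential scale (the sample mean) instead of the two EHL parameters, since the percentile mechanics are identical; all names and settings are illustrative:

```python
import random

# Parametric percentile bootstrap (Algorithm 1) on a stand-in model:
# the MLE of an exponential scale is the sample mean.

def mle_scale(xs):
    return sum(xs) / len(xs)

rng = random.Random(3)
original = [rng.expovariate(1.0 / 2.0) for _ in range(100)]   # true scale 2
theta_hat = mle_scale(original)                                # Step III

N_boot, p = 1000, 0.05                                         # Step I
boot = []
for _ in range(N_boot):                                        # Steps IV-V
    resample = [rng.expovariate(1.0 / theta_hat) for _ in range(100)]
    boot.append(mle_scale(resample))
boot.sort()

l, r = int(p / 2 * N_boot), int((1 - p / 2) * N_boot)          # floor indices
ci = (boot[l], boot[r - 1])                                    # 95% boot-p interval
```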

3.2. Bootstrap-t Confidence Intervals

From Algorithm 2, we can obtain ascending sequences of t-statistics for $\hat\sigma_i^*$ and $\hat\lambda_i^*$, respectively. Thus, $100(1-p)\%$ bootstrap-t confidence intervals of $\sigma$ and $\lambda$ can be constructed as
$$\left(\hat\sigma-T^{*}_{(l)}(\sigma)\sqrt{\mathrm{Var}(\hat\sigma)},\ \hat\sigma+T^{*}_{(r)}(\sigma)\sqrt{\mathrm{Var}(\hat\sigma)}\right)$$
and
$$\left(\hat\lambda-T^{*}_{(l)}(\lambda)\sqrt{\mathrm{Var}(\hat\lambda)},\ \hat\lambda+T^{*}_{(r)}(\lambda)\sqrt{\mathrm{Var}(\hat\lambda)}\right),$$
where $l=\lfloor\frac{p}{2}N_{boot}\rfloor$ and $r=\lfloor(1-\frac{p}{2})N_{boot}\rfloor$.
Algorithm 2 Establishing bootstrap-t confidence intervals.
Step I
Set the number of simulations as $N_{boot}$.
Step II
Generate original left-truncated and right-censored samples { x 1 ,   x 2 ,     ,   x n } from E H L ( σ ,   λ ) (skip this step if the original sample already exists).
Step III
Compute the initial MLEs of the original samples as σ ^ and λ ^ .
Step IV
Regenerate a new left-truncated and right-censored sample $\{x_1^*,x_2^*,\ldots,x_n^*\}$ from $EHL(\hat\sigma,\hat\lambda)$. Compute the MLEs of the new sample as $\hat\sigma_1^*$ and $\hat\lambda_1^*$, and calculate their variances $\mathrm{Var}(\hat\sigma_1^*)$ and $\mathrm{Var}(\hat\lambda_1^*)$. Then the t-statistics for $\hat\sigma_1^*$ and $\hat\lambda_1^*$ are acquired: $T_1^*(\sigma)=\frac{\hat\sigma_1^*-\hat\sigma}{\sqrt{\mathrm{Var}(\hat\sigma_1^*)}}$ and $T_1^*(\lambda)=\frac{\hat\lambda_1^*-\hat\lambda}{\sqrt{\mathrm{Var}(\hat\lambda_1^*)}}$.
Step V
Repeat Steps III–IV $N_{boot}$ times; regenerated MLEs $(\hat\sigma_i^*,\hat\lambda_i^*)$ and the corresponding t-statistics $(T_i^*(\sigma),T_i^*(\lambda))$ are obtained in the $i$-th iteration. Rearrange the $T_i^*(\sigma)$ and $T_i^*(\lambda)$ separately in ascending order as $(T^*_{(1)}(\sigma),T^*_{(2)}(\sigma),\ldots,T^*_{(N_{boot})}(\sigma))$ and $(T^*_{(1)}(\lambda),T^*_{(2)}(\lambda),\ldots,T^*_{(N_{boot})}(\lambda))$.
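A matching sketch of Algorithm 2 for the same stand-in exponential model: each bootstrap estimate is studentized before quantiles are taken. Note the code uses the common textbook form of the bootstrap-t interval, $(\hat\theta-T^*_{(r)}\,\mathrm{se},\ \hat\theta-T^*_{(l)}\,\mathrm{se})$; endpoint sign conventions vary across presentations. All names are illustrative:

```python
import math
import random

# Bootstrap-t (Algorithm 2) on the exponential-scale toy model, where
# Var(theta_hat) = theta_hat**2 / n stands in for the observed-information
# variances of Step IV.

rng = random.Random(4)
n, N_boot, p = 100, 1000, 0.05
original = [rng.expovariate(1.0 / 2.0) for _ in range(n)]
theta_hat = sum(original) / n
se_hat = theta_hat / math.sqrt(n)

ts = []
for _ in range(N_boot):                                # Steps III-V
    est = sum(rng.expovariate(1.0 / theta_hat) for _ in range(n)) / n
    ts.append((est - theta_hat) / (est / math.sqrt(n)))
ts.sort()

l, r = int(p / 2 * N_boot), int((1 - p / 2) * N_boot)
ci = (theta_hat - ts[r - 1] * se_hat, theta_hat - ts[l] * se_hat)
```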

4. Bayesian Estimates

Classical statistical analysis provides statistics such as MLEs and confidence intervals based only on the information derived from samples. To promote the practical application of the analysis in real-life cases, we also compute Bayesian estimates, which combine prior distributions determined by real experience with the sample information.
The choice of prior distributions has a great influence on the results of Bayesian estimation. However, finding a conjugate prior distribution is difficult when the two parameters are unknown. The results in [15] show that the prior distributions adopted there work well under $EHL(\sigma,\lambda)$, so it is plausible to use the same prior distributions as Xiong et al. [15]: assume that $\sigma$ follows the inverse gamma distribution $IG(\gamma,\delta)$ and $\lambda$ follows the gamma distribution $Ga(\alpha,\beta)$, where $\gamma,\delta,\alpha,\beta$ are hyperparameters. We also suppose that the two parameters are independent for simplicity of calculation. The PDFs of the prior distributions of $\sigma$ and $\lambda$ can be written as:
$$\pi(\sigma)=\frac{\delta^{\gamma}}{\Gamma(\gamma)}\sigma^{-\gamma-1}e^{-\delta/\sigma},\quad\gamma>0,\ \delta>0,$$
$$\pi(\lambda)=\frac{\beta^{\alpha}}{\Gamma(\alpha)}\lambda^{\alpha-1}e^{-\beta\lambda},\quad\alpha>0,\ \beta>0.$$
Therefore, the PDF of the joint prior distribution is the product of the two prior densities:
$$\pi(\sigma,\lambda)=\frac{\delta^{\gamma}\beta^{\alpha}}{\Gamma(\gamma)\Gamma(\alpha)}\sigma^{-(\gamma+1)}\lambda^{\alpha-1}e^{-\left(\frac{\delta}{\sigma}+\beta\lambda\right)}.$$
By Bayes' theorem, the posterior distribution $\pi(\sigma,\lambda\mid\boldsymbol{x})$ is the ratio of the joint distribution of the samples and the unknown parameters to the marginal distribution of the samples:
$$\pi(\sigma,\lambda\mid\boldsymbol{x})=\frac{L(\boldsymbol{x}\mid\sigma,\lambda)\pi(\sigma,\lambda)}{\int_{0}^{\infty}\int_{0}^{\infty}L(\boldsymbol{x}\mid\sigma,\lambda)\pi(\sigma,\lambda)\,d\sigma\,d\lambda},$$
where x denotes the data.

4.1. Loss Functions

Loss functions are employed to assess the discrepancy between parameter estimates and the true values. Symmetric loss functions, for example, the squared error loss function (SELF), are classical: overestimation causes the same loss as underestimation. Asymmetric loss functions, such as the linex loss function (LLF), suit situations where the two kinds of error incur unequal losses. In this subsection, both the symmetric SELF and the asymmetric LLF are discussed.

4.1.1. Squared Error Loss Function

The squared error loss function was introduced in [16]. As the name suggests, it is the square of the distance between the predicted value and the target variable:
$$L_{SELF}(\theta,\tilde\theta)=\left(\tilde\theta-\theta\right)^{2},$$
where θ is the target variable and θ ˜ is the predicted value of θ .
The Bayesian estimate for θ under SELF can be described as:
$$\hat\theta=\arg\min_{\tilde\theta}\int_{\Theta}L_{SELF}(\theta,\tilde\theta)p(\theta\mid\boldsymbol{x})\,d\theta=E_{\theta}(\theta\mid\boldsymbol{x}),$$
where $p(\theta\mid\boldsymbol{x})$ is the posterior density of $\theta$. Thus, the Bayesian estimates of the unknown parameters $\lambda$ and $\sigma$ can be written as:
$$\hat\lambda_{SELF}=\int_{0}^{\infty}\int_{0}^{\infty}\lambda\,\pi(\sigma,\lambda\mid\boldsymbol{x})\,d\sigma\,d\lambda,$$
$$\hat\sigma_{SELF}=\int_{0}^{\infty}\int_{0}^{\infty}\sigma\,\pi(\sigma,\lambda\mid\boldsymbol{x})\,d\sigma\,d\lambda.$$

4.1.2. Linex Loss Function

The linex (linear exponential) loss function was originally introduced by Zellner [17]. The loss on one side of 0 increases exponentially while that on the other side increases linearly. The linex loss function corresponds to:
$$L_{LLF}(\theta,\tilde\theta)=e^{s(\tilde\theta-\theta)}-s\left(\tilde\theta-\theta\right)-1,$$
where $s$ is the linex parameter. The LLF grows exponentially in the negative direction and linearly in the positive direction when $s<0$, and the opposite holds when $s>0$. The larger $|s|$ is, the heavier the penalty. The function tends to be symmetric as the absolute value of $s$ approaches 0.
The Bayesian estimate for θ under LLF is:
$$\hat\theta=\arg\min_{\tilde\theta}\int_{\Theta}L_{LLF}(\theta,\tilde\theta)p(\theta\mid\boldsymbol{x})\,d\theta=-\frac{1}{s}\ln E_{\theta}\left(e^{-s\theta}\mid\boldsymbol{x}\right).$$
Thus, the Bayesian estimates of the unknown parameters $\sigma$ and $\lambda$ are given by:
$$\hat\sigma_{LLF}=-\frac{1}{s}\ln\int_{0}^{\infty}\int_{0}^{\infty}e^{-s\sigma}\pi(\sigma,\lambda\mid\boldsymbol{x})\,d\sigma\,d\lambda,$$
$$\hat\lambda_{LLF}=-\frac{1}{s}\ln\int_{0}^{\infty}\int_{0}^{\infty}e^{-s\lambda}\pi(\sigma,\lambda\mid\boldsymbol{x})\,d\sigma\,d\lambda.$$
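The LLF estimator $-\frac{1}{s}\ln E(e^{-s\theta}\mid\boldsymbol{x})$ is easy to check by Monte Carlo when the posterior is normal, since the estimate then has the closed form $\mu-s\tau^2/2$. A minimal Python sketch with illustrative names and settings (a $N(1,0.5^2)$ stand-in posterior, $s=1$, so the target value is 0.875):

```python
import math
import random

def linex_loss(d, s):
    """Linex loss as a function of the error d = theta_tilde - theta."""
    return math.exp(s * d) - s * d - 1.0

def llf_bayes_estimate(samples, s):
    """Bayes estimate under LLF: -(1/s) * ln E[exp(-s*theta) | x]."""
    m = sum(math.exp(-s * th) for th in samples) / len(samples)
    return -math.log(m) / s

rng = random.Random(10)
mu, tau, s = 1.0, 0.5, 1.0
post = [rng.gauss(mu, tau) for _ in range(100_000)]  # stand-in posterior draws

est = llf_bayes_estimate(post, s)
# closed form for a normal posterior: mu - s * tau**2 / 2 = 0.875
```

For $s>0$ the estimate is pulled below the posterior mean, reflecting the heavier penalty on overestimation.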

4.2. Tierney-Kadane’s Approximation

In this subsection, Tierney–Kadane's (TK) approximation is used to compute the Bayesian estimates of the unknown parameters. Let $g(\sigma,\lambda)$ denote the target parametric function of $\lambda$ and $\sigma$, which takes two different forms under SELF and LLF. According to the TK approximation, the associated Bayesian estimate, i.e., the posterior expectation of $g(\sigma,\lambda)$, can be written as:
$$\hat g(\sigma,\lambda)=E\left(g(\sigma,\lambda)\mid\boldsymbol{x}\right)=\sqrt{\frac{\Delta_{g^*}}{\Delta}}\,e^{n\left[h_{g^*}(\hat\sigma_{h_{g^*}},\hat\lambda_{h_{g^*}})-h(\hat\sigma_h,\hat\lambda_h)\right]},$$
where $h(\sigma,\lambda)=\frac{1}{n}\left[l(\boldsymbol{x}\mid\sigma,\lambda)+\ln\pi(\sigma,\lambda)\right]$ and $h_{g^*}(\sigma,\lambda)=h(\sigma,\lambda)+\frac{\ln g(\sigma,\lambda)}{n}$. Note that $l(\boldsymbol{x}\mid\sigma,\lambda)$ is the log-likelihood function and $\pi(\sigma,\lambda)$ is the joint prior distribution of the unknown parameters.
$\Delta$ and $\Delta_{g^*}$ are the determinants of the inverse Hessians of $h(\sigma,\lambda)$ and $h_{g^*}(\sigma,\lambda)$ at $(\hat\sigma_h,\hat\lambda_h)$ and $(\hat\sigma_{h_{g^*}},\hat\lambda_{h_{g^*}})$, respectively, where $(\hat\sigma_h,\hat\lambda_h)$ and $(\hat\sigma_{h_{g^*}},\hat\lambda_{h_{g^*}})$ are the corresponding maximizers of $h(\sigma,\lambda)$ and $h_{g^*}(\sigma,\lambda)$:
$$\Delta=\left[\frac{\partial^{2}h}{\partial\sigma^{2}}\times\frac{\partial^{2}h}{\partial\lambda^{2}}-\frac{\partial^{2}h}{\partial\sigma\partial\lambda}\times\frac{\partial^{2}h}{\partial\lambda\partial\sigma}\right]^{-1}_{(\sigma,\lambda)=(\hat\sigma_h,\hat\lambda_h)},$$
$$\Delta_{g^*}=\left[\frac{\partial^{2}h_{g^*}}{\partial\sigma^{2}}\times\frac{\partial^{2}h_{g^*}}{\partial\lambda^{2}}-\frac{\partial^{2}h_{g^*}}{\partial\sigma\partial\lambda}\times\frac{\partial^{2}h_{g^*}}{\partial\lambda\partial\sigma}\right]^{-1}_{(\sigma,\lambda)=(\hat\sigma_{h_{g^*}},\hat\lambda_{h_{g^*}})}.$$
It can be observed that $h(\sigma,\lambda)$, $(\hat\sigma_h,\hat\lambda_h)$ and $\Delta$ do not depend on $g(\sigma,\lambda)$. Using Expression (7), $h(\sigma,\lambda)$ transforms into:
$$\begin{aligned}h(\sigma,\lambda)&=\frac{1}{n}\left[l(\boldsymbol{x}\mid\sigma,\lambda)+\ln\pi(\sigma,\lambda)\right]\\&=\frac{1}{n}\Bigg\{\ln\left[2^{\sum_{i=1}^{n}b_i}\frac{\delta^{\gamma}\beta^{\alpha}}{\Gamma(\gamma)\Gamma(\alpha)}\right]+\left(\alpha+\sum_{i=1}^{n}b_i-1\right)\ln\lambda-\left(\gamma+\sum_{i=1}^{n}b_i+1\right)\ln\sigma-\frac{\sum_{i=1}^{n}b_ix_i+\delta}{\sigma}\\&\quad+\lambda\left[\sum_{i=1}^{n}b_i\ln\left(1-\frac{2}{1+e^{x_i/\sigma}}\right)-\beta\right]-\sum_{i=1}^{n}b_i\ln\left(1-e^{-2x_i/\sigma}\right)\\&\quad+\sum_{i=1}^{n}(1-b_i)\ln\left[1-F(x_i)\right]+\sum_{i=1}^{n}(a_i-1)\ln\left[1-F(\tau_i^L)\right]\Bigg\}.\end{aligned}$$
Then, $(\hat\sigma_h,\hat\lambda_h)$ can be computed by setting the first-order partial derivatives of $h(\sigma,\lambda)$ equal to zero and finding the roots of the equations:
$$\frac{\partial h}{\partial\sigma}=\frac{1}{n\sigma}\left\{-\left(\gamma+\sum_{i=1}^{n}b_i+1\right)-\sum_{i=1}^{n}b_i\left[\left(1-\frac{1}{\lambda}\right)\zeta(x_i)x_i-\frac{1}{\sigma}F(x_i)^{1/\lambda}x_i\right]+\frac{\delta}{\sigma}+\sum_{i=1}^{n}(1-b_i)\eta(x_i)x_i-\sum_{i=1}^{n}(1-a_i)\eta(\tau_i^L)\tau_i^L\right\}=0,$$
$$\frac{\partial h}{\partial\lambda}=\frac{1}{n\lambda}\left\{-\beta\lambda+\alpha+\sum_{i=1}^{n}b_i-1+\sum_{i=1}^{n}b_i\ln F(x_i)-\sum_{i=1}^{n}(1-b_i)G(x_i)F(x_i)+\sum_{i=1}^{n}(1-a_i)G(\tau_i^L)F(\tau_i^L)\right\}=0,$$
where $\zeta(x)=\frac{f(x)}{F(x)}$, $\eta(x)=\frac{f(x)}{1-F(x)}$, and $G(x)=\frac{\ln F(x)}{1-F(x)}$, as before.
With $(\hat\sigma_h,\hat\lambda_h)$, the value $h(\hat\sigma_h,\hat\lambda_h)$ can be obtained easily. As $\Delta=\left[\frac{\partial^{2}h}{\partial\sigma^{2}}\times\frac{\partial^{2}h}{\partial\lambda^{2}}-\frac{\partial^{2}h}{\partial\sigma\partial\lambda}\times\frac{\partial^{2}h}{\partial\lambda\partial\sigma}\right]^{-1}_{(\sigma,\lambda)=(\hat\sigma_h,\hat\lambda_h)}$, we need the second-order partial derivatives of $h(\sigma,\lambda)$ at the point $(\hat\sigma_h,\hat\lambda_h)$. They can be described as:
$$\begin{aligned}\frac{\partial^{2}h}{\partial\sigma^{2}}&=\frac{1}{n\sigma^{2}}\Bigg\{\gamma+\sum_{i=1}^{n}b_i+1+\sum_{i=1}^{n}b_i\left(1-\frac{1}{\lambda}\right)\left[\left(1-H(x_i)\right)\zeta(x_i)-\zeta(x_i)^{2}x_i\right]x_i\\&\quad-\frac{1}{\sigma}\sum_{i=1}^{n}b_iF(x_i)^{1/\lambda}x_i\left[2+\frac{1}{\lambda}\zeta(x_i)x_i\right]-\frac{2\delta}{\sigma}+\sum_{i=1}^{n}(1-b_i)\left[\eta(x_i)^{2}+\left(H(x_i)-1\right)\eta(x_i)\right]x_i\\&\quad-\sum_{i=1}^{n}(1-a_i)\left[\eta(\tau_i^L)^{2}+\left(H(\tau_i^L)-1\right)\eta(\tau_i^L)\right]\tau_i^L\Bigg\},\end{aligned}$$
$$\frac{\partial^{2}h}{\partial\sigma\partial\lambda}=\frac{\partial^{2}h}{\partial\lambda\partial\sigma}=\frac{1}{n\lambda\sigma}\sum_{i=1}^{n}\left\{-b_i\zeta(x_i)x_i+(1-b_i)\eta(x_i)x_i\left[1+G(x_i)\right]-(1-a_i)\eta(\tau_i^L)\tau_i^L\left[1+G(\tau_i^L)\right]\right\},$$
$$\frac{\partial^{2}h}{\partial\lambda^{2}}=-\frac{1}{n\lambda^{2}}\left[\alpha+\sum_{i=1}^{n}b_i-1+\sum_{i=1}^{n}(1-b_i)G(x_i)^{2}F(x_i)-\sum_{i=1}^{n}(1-a_i)G(\tau_i^L)^{2}F(\tau_i^L)\right],$$
where $H(x)=1+\frac{x}{\sigma}F(x)^{1/\lambda}+\left(1-\frac{1}{\lambda}\right)x\zeta(x)$, as in Section 2.2.
Then, $\Delta$ is obtained by substituting $(\hat\sigma_h,\hat\lambda_h)$ into Expressions (34)–(36). On the other hand, $(\hat\sigma_{h_{g^*}},\hat\lambda_{h_{g^*}})$, $h_{g^*}(\sigma,\lambda)$ and $\Delta_{g^*}$ are related to $g(\sigma,\lambda)$. The different forms of $g(\sigma,\lambda)$ under SELF and LLF are discussed in detail as follows.
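For one parameter, the TK machinery reduces to a ratio of Laplace approximations, which can be sanity-checked on a Gamma$(a,b)$ posterior whose mean $a/b$ is known exactly. A minimal one-dimensional Python sketch (our own construction, with $n=1$, $g(\theta)=\theta$, and finite-difference second derivatives standing in for the Hessian determinants):

```python
import math
from scipy.optimize import minimize_scalar

# One-dimensional Tierney-Kadane check on a Gamma(a, b) posterior kernel.
a, b = 20.0, 2.0
h = lambda th: (a - 1.0) * math.log(th) - b * th        # log posterior kernel
h_g = lambda th: h(th) + math.log(th)                    # h + ln g, g(theta) = theta

def mode_and_var(f):
    """Maximizer of f and -1/f'' there (finite-difference second derivative)."""
    res = minimize_scalar(lambda t: -f(t), bounds=(1e-6, 100.0), method="bounded")
    m, eps = res.x, 1e-4
    f2 = (f(m + eps) - 2.0 * f(m) + f(m - eps)) / eps ** 2
    return m, -1.0 / f2

m0, v0 = mode_and_var(h)        # mode of h:   (a - 1) / b = 9.5
m1, v1 = mode_and_var(h_g)      # mode of h_g: a / b = 10.0
tk_mean = math.sqrt(v1 / v0) * math.exp(h_g(m1) - h(m0))   # approximates a/b
```

The approximation error shrinks rapidly as the posterior concentrates, which is why TK works well for moderate-to-large samples.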

4.2.1. Squared Error Loss Function

Here, we take $\sigma$ as an example. According to Function (21), the Bayesian estimate of $\sigma$ is the conditional expectation of $\sigma$ given the sample data. Thus, let $g(\sigma,\lambda)=\sigma$, so that $h_{g^*}(\sigma,\lambda)$ takes the form:
$$h_{\sigma}^{*}(\sigma,\lambda)=h(\sigma,\lambda)+\frac{\ln\sigma}{n}.$$
$(\hat\sigma_{h_{\sigma}^{*}},\hat\lambda_{h_{\sigma}^{*}})$ are the roots of the first-order partial derivatives of $h_{\sigma}^{*}$. With Equations (32) and (33), $(\hat\sigma_{h_{\sigma}^{*}},\hat\lambda_{h_{\sigma}^{*}})$ is yielded from:
$$\frac{\partial h_{\sigma}^{*}}{\partial\sigma}=\frac{\partial h}{\partial\sigma}+\frac{1}{n\sigma}=0,$$
$$\frac{\partial h_{\sigma}^{*}}{\partial\lambda}=\frac{\partial h}{\partial\lambda}=0.$$
Thus,
$$h_{\sigma}^{*}(\hat\sigma_{h_{\sigma}^{*}},\hat\lambda_{h_{\sigma}^{*}})=h(\hat\sigma_{h_{\sigma}^{*}},\hat\lambda_{h_{\sigma}^{*}})+\frac{\ln\hat\sigma_{h_{\sigma}^{*}}}{n}.$$
Similar to $\Delta$, $\Delta_{\sigma^{*}}=\left[\frac{\partial^{2}h_{\sigma}^{*}}{\partial\sigma^{2}}\times\frac{\partial^{2}h_{\sigma}^{*}}{\partial\lambda^{2}}-\frac{\partial^{2}h_{\sigma}^{*}}{\partial\sigma\partial\lambda}\times\frac{\partial^{2}h_{\sigma}^{*}}{\partial\lambda\partial\sigma}\right]^{-1}_{(\sigma,\lambda)=(\hat\sigma_{h_{\sigma}^{*}},\hat\lambda_{h_{\sigma}^{*}})}$, and the second-order partial derivatives of $h_{\sigma}^{*}$ are as below:
$$\frac{\partial^{2}h_{\sigma}^{*}}{\partial\sigma^{2}}=\frac{\partial^{2}h}{\partial\sigma^{2}}-\frac{1}{n\sigma^{2}},$$
$$\frac{\partial^{2}h_{\sigma}^{*}}{\partial\sigma\partial\lambda}=\frac{\partial^{2}h_{\sigma}^{*}}{\partial\lambda\partial\sigma}=\frac{\partial^{2}h}{\partial\lambda\partial\sigma},$$
$$\frac{\partial^{2}h_{\sigma}^{*}}{\partial\lambda^{2}}=\frac{\partial^{2}h}{\partial\lambda^{2}},$$
where $\frac{\partial^{2}h}{\partial\sigma^{2}}$, $\frac{\partial^{2}h}{\partial\lambda\partial\sigma}$ and $\frac{\partial^{2}h}{\partial\lambda^{2}}$ are the same as Expressions (34)–(36).
Thus, the Bayesian estimate of σ under SELF can be derived:
$$\hat\sigma_{SELF}=\sqrt{\frac{\Delta_{\sigma^{*}}}{\Delta}}\,e^{n\left[h_{\sigma}^{*}(\hat\sigma_{h_{\sigma}^{*}},\hat\lambda_{h_{\sigma}^{*}})-h(\hat\sigma_{h},\hat\lambda_{h})\right]}.$$
For $\lambda$, we let $g(\sigma,\lambda)=\lambda$, and the following steps are similar to those for $\sigma$. The Bayesian estimate of $\lambda$ under SELF is given by:
$$\hat\lambda_{SELF}=\sqrt{\frac{\Delta_{\lambda^{*}}}{\Delta}}\,e^{n\left[h_{\lambda}^{*}(\hat\sigma_{h_{\lambda}^{*}},\hat\lambda_{h_{\lambda}^{*}})-h(\hat\sigma_{h},\hat\lambda_{h})\right]}.$$

4.2.2. Linex Loss Function

Additionally, we take $\sigma$ as an example. According to Function (25), the Bayesian estimate of $\sigma$ involves the conditional expectation of $e^{-s\sigma}$ given the sample data. Thus, we take $g(\sigma,\lambda)=e^{-s\sigma}$. Hence, $h_{g^*}(\sigma,\lambda)$ takes the form:
$$h_{\sigma}^{*}(\sigma,\lambda)=h(\sigma,\lambda)-\frac{s\sigma}{n},$$
where s is the parameter of LLF.
Then, ( σ ^ h σ * , λ ^ h σ * ) is computed by finding the roots of:
$$\frac{\partial h_{\sigma}^{*}}{\partial\sigma}=\frac{\partial h}{\partial\sigma}-\frac{s}{n}=0,$$
$$\frac{\partial h_{\sigma}^{*}}{\partial\lambda}=\frac{\partial h}{\partial\lambda}=0.$$
Thus,
$$h_{\sigma}^{*}(\hat\sigma_{h_{\sigma}^{*}},\hat\lambda_{h_{\sigma}^{*}})=h(\hat\sigma_{h_{\sigma}^{*}},\hat\lambda_{h_{\sigma}^{*}})-\frac{s\hat\sigma_{h_{\sigma}^{*}}}{n}.$$
The second-order partial derivatives of $h_{\sigma}^{*}$ turn out to be the same as those of $h$; then, $\Delta_{\sigma^{*}}$ has the form:
$$\Delta_{\sigma^{*}}=\left[\frac{\partial^{2}h_{\sigma}^{*}}{\partial\sigma^{2}}\times\frac{\partial^{2}h_{\sigma}^{*}}{\partial\lambda^{2}}-\frac{\partial^{2}h_{\sigma}^{*}}{\partial\sigma\partial\lambda}\times\frac{\partial^{2}h_{\sigma}^{*}}{\partial\lambda\partial\sigma}\right]^{-1}_{(\sigma,\lambda)=(\hat\sigma_{h_{\sigma}^{*}},\hat\lambda_{h_{\sigma}^{*}})}=\left[\frac{\partial^{2}h}{\partial\sigma^{2}}\times\frac{\partial^{2}h}{\partial\lambda^{2}}-\frac{\partial^{2}h}{\partial\sigma\partial\lambda}\times\frac{\partial^{2}h}{\partial\lambda\partial\sigma}\right]^{-1}_{(\sigma,\lambda)=(\hat\sigma_{h_{\sigma}^{*}},\hat\lambda_{h_{\sigma}^{*}})}.$$
Thus, the Bayesian estimate of σ under LLF is yielded:
$$\hat\sigma_{LLF}=-\frac{1}{s}\ln\left\{\sqrt{\frac{\Delta_{\sigma^{*}}}{\Delta}}\,e^{n\left[h_{\sigma}^{*}(\hat\sigma_{h_{\sigma}^{*}},\hat\lambda_{h_{\sigma}^{*}})-h(\hat\sigma_{h},\hat\lambda_{h})\right]}\right\}.$$
Similarly, with $g(\sigma,\lambda)=e^{-s\lambda}$, the Bayesian estimate of $\lambda$ under LLF can be expressed as:
$$\hat\lambda_{LLF}=-\frac{1}{s}\ln\left\{\sqrt{\frac{\Delta_{\lambda^{*}}}{\Delta}}\,e^{n\left[h_{\lambda}^{*}(\hat\sigma_{h_{\lambda}^{*}},\hat\lambda_{h_{\lambda}^{*}})-h(\hat\sigma_{h},\hat\lambda_{h})\right]}\right\}.$$

4.3. Importance Sampling Procedure

Importance sampling is a significant technique among Monte Carlo methods which can greatly reduce the number of sample points drawn in a simulation. It is widely used in computing point estimates and interval estimates. According to (19), the joint posterior distribution can be decomposed as:
$$\begin{aligned}\pi(\sigma,\lambda\mid\boldsymbol{x})&\propto L(\boldsymbol{x}\mid\sigma,\lambda)\pi(\sigma,\lambda)\\&\propto\frac{\delta^{\gamma}\beta^{\alpha}}{\Gamma(\gamma)\Gamma(\alpha)}\sigma^{-(\gamma+1)}\lambda^{\alpha-1}e^{-\frac{\delta}{\sigma}-\beta\lambda}\times\left(\frac{2\lambda}{\sigma}\right)^{\sum_{i=1}^{n}b_i}e^{\lambda\sum_{i=1}^{n}b_i\ln\left(1-\frac{2}{1+e^{x_i/\sigma}}\right)-\frac{1}{\sigma}\sum_{i=1}^{n}b_ix_i}\prod_{i=1}^{n}\left(1-e^{-2x_i/\sigma}\right)^{-b_i}\\&\quad\times\prod_{i=1}^{n}\left[1-\left(1-\frac{2}{1+e^{x_i/\sigma}}\right)^{\lambda}\right]^{1-b_i}\prod_{i=1}^{n}\left[1-\left(1-\frac{2}{1+e^{\tau_i^L/\sigma}}\right)^{\lambda}\right]^{a_i-1}\\&\propto h_1(\sigma)\,h_2(\lambda\mid\sigma)\,h_3(\sigma,\lambda),\end{aligned}$$
where
$$h_1(\sigma)=\frac{\left(\sum_{i=1}^{n}b_ix_i+\delta\right)^{\sum_{i=1}^{n}b_i+\gamma}}{\Gamma\left(\sum_{i=1}^{n}b_i+\gamma\right)}\sigma^{-\left(\sum_{i=1}^{n}b_i+\gamma+1\right)}e^{-\frac{\sum_{i=1}^{n}b_ix_i+\delta}{\sigma}},$$
$$h_2(\lambda\mid\sigma)=\frac{\left[\beta-\sum_{i=1}^{n}b_i\ln\left(1-\frac{2}{1+e^{x_i/\sigma}}\right)\right]^{\sum_{i=1}^{n}b_i+\alpha}}{\Gamma\left(\sum_{i=1}^{n}b_i+\alpha\right)}\lambda^{\sum_{i=1}^{n}b_i+\alpha-1}e^{-\lambda\left[\beta-\sum_{i=1}^{n}b_i\ln\left(1-\frac{2}{1+e^{x_i/\sigma}}\right)\right]},$$
$$h_3(\sigma,\lambda)=\left[\beta-\sum_{i=1}^{n}b_i\ln\left(1-\frac{2}{1+e^{x_i/\sigma}}\right)\right]^{-\left(\sum_{i=1}^{n}b_i+\alpha\right)}\prod_{i=1}^{n}\left(1-e^{-2x_i/\sigma}\right)^{-b_i}\times\prod_{i=1}^{n}\left[1-\left(1-\frac{2}{1+e^{x_i/\sigma}}\right)^{\lambda}\right]^{1-b_i}\prod_{i=1}^{n}\left[1-\left(1-\frac{2}{1+e^{\tau_i^L/\sigma}}\right)^{\lambda}\right]^{a_i-1}.$$
It can be seen that $h_1(\sigma)$ and $h_2(\lambda\mid\sigma)$ are the PDFs of $IG\left(\sum_{i=1}^{n}b_i+\gamma,\ \sum_{i=1}^{n}b_ix_i+\delta\right)$ and $Ga\left(\sum_{i=1}^{n}b_i+\alpha,\ \beta-\sum_{i=1}^{n}b_i\ln\left(1-\frac{2}{1+e^{x_i/\sigma}}\right)\right)$, respectively.
To obtain point estimates and interval estimates by importance sampling procedure, samples need to be generated first. By executing Algorithm 3, ( σ 1 ,   σ 2 ,     ,   σ N i m p o ) and ( λ 1 ,   λ 2 ,     ,   λ N i m p o ) are generated.
Algorithm 3 Generating importance sampling procedure samples.
Step I
Set the number of iterations as N i m p o .
Step II
Generate $\sigma_1$ from $IG\left(\sum_{i=1}^{n}b_i+\gamma,\ \sum_{i=1}^{n}b_ix_i+\delta\right)$.
Step III
With $\sigma_1$ from Step II, generate $\lambda_1$ from $Ga\left(\sum_{i=1}^{n}b_i+\alpha,\ \beta-\sum_{i=1}^{n}b_i\ln\left(1-\frac{2}{1+e^{x_i/\sigma_1}}\right)\right)$.
Step IV
Repeat Steps II–III $N_{impo}$ times to produce a series of samples; $\sigma_i$ and $\lambda_i$ are generated during the $i$-th iteration.
Then, point estimates and interval estimates can be acquired as follows.

4.3.1. Point Estimates

According to the principles of the importance sampling procedure, the Bayesian estimates of σ and λ under SELF can be written as:
$$\hat{\sigma}_{SELF}=\frac{\sum_{i=1}^{N_{impo}}\sigma_i\,h_3(\sigma_i,\lambda_i)}{\sum_{i=1}^{N_{impo}}h_3(\sigma_i,\lambda_i)},\qquad\hat{\lambda}_{SELF}=\frac{\sum_{i=1}^{N_{impo}}\lambda_i\,h_3(\sigma_i,\lambda_i)}{\sum_{i=1}^{N_{impo}}h_3(\sigma_i,\lambda_i)}.$$
Similarly, the Bayesian estimates of σ and λ under LLF can be written as:
$$\hat{\sigma}_{LLF}=-\frac{1}{s}\ln\frac{\sum_{i=1}^{N_{impo}}e^{-s\sigma_i}\,h_3(\sigma_i,\lambda_i)}{\sum_{i=1}^{N_{impo}}h_3(\sigma_i,\lambda_i)},\qquad\hat{\lambda}_{LLF}=-\frac{1}{s}\ln\frac{\sum_{i=1}^{N_{impo}}e^{-s\lambda_i}\,h_3(\sigma_i,\lambda_i)}{\sum_{i=1}^{N_{impo}}h_3(\sigma_i,\lambda_i)}.$$
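A minimal sketch of these weighted estimates, under the reconstruction above (the helper name `bayes_point_estimates` is hypothetical; $h_3(\sigma_i,\lambda_i)$ values are passed in as weights):

```python
import numpy as np

def bayes_point_estimates(sigmas, lambdas, h3_vals, s=1.0):
    """Importance-sampling estimates under SELF and LLF (sketch)."""
    w = np.asarray(h3_vals, float)
    w = w / w.sum()                        # normalised weights
    sig_self = np.sum(np.asarray(sigmas) * w)   # weighted posterior mean
    lam_self = np.sum(np.asarray(lambdas) * w)
    # LINEX: theta_hat = -(1/s) * ln E[exp(-s * theta)]
    sig_llf = -np.log(np.sum(np.exp(-s * np.asarray(sigmas)) * w)) / s
    lam_llf = -np.log(np.sum(np.exp(-s * np.asarray(lambdas)) * w)) / s
    return (sig_self, lam_self), (sig_llf, lam_llf)
```

By Jensen's inequality, the LLF estimate with $s>0$ never exceeds the SELF estimate, reflecting the heavier penalty on overestimation.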

4.3.2. Interval Estimates

Highest posterior density (HPD) regions are especially useful in Bayesian statistics [18]. The HPD credible interval is the interval that contains the required mass such that all points within the interval have a higher probability density than points outside the interval. Chen et al. [19] developed a Monte Carlo method to compute HPD credible intervals. In this paper, HPD credible intervals are obtained from samples generated by Algorithm 3.
Suppose that
$$q_i=\frac{h_3(\sigma_i,\lambda_i)}{\sum_{j=1}^{N_{impo}}h_3(\sigma_j,\lambda_j)};$$
it is obvious that $\sum_{i=1}^{N_{impo}}q_i=1$. Pair the draws with their weights as $(\sigma_1,q_1),(\sigma_2,q_2),\ldots,(\sigma_{N_{impo}},q_{N_{impo}})$, and rearrange them in ascending order of $\sigma$ into $(\sigma_{(1)},q_{(1)}),(\sigma_{(2)},q_{(2)}),\ldots,(\sigma_{(N_{impo})},q_{(N_{impo})})$. Note that $\sigma_{(i)}$ and $q_{(i)}$ are produced in the same round of iteration, so the $q_{(i)}$ themselves are not ordered. For $0<\zeta<1$, define $k_\zeta$ as the integer satisfying:
$$\sum_{i=1}^{k_\zeta}q_{(i)}\le\zeta<\sum_{i=1}^{k_\zeta+1}q_{(i)}.$$
To establish a $100(1-p)\%$ HPD credible interval, candidate intervals $(\sigma_{(k_\xi)},\sigma_{(k_{\xi+1-p})})$ are formed for $\xi=q_{(1)},\,q_{(1)}+q_{(2)},\,\ldots,\,\sum_{i=1}^{k_p}q_{(i)}$; each such interval carries posterior mass close to $1-p$. Finally, the candidate with the shortest length is taken as the $100(1-p)\%$ HPD credible interval $(\sigma_{(k_{\xi^*})},\sigma_{(k_{\xi^*+1-p})})$; that is, $\sigma_{(k_{\xi^*+1-p})}-\sigma_{(k_{\xi^*})}\le\sigma_{(k_{\xi+1-p})}-\sigma_{(k_\xi)}$ for all $\xi$.
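The shortest-interval search over weighted, ordered draws can be sketched as follows (hypothetical function name; with equal weights this reduces to the usual Chen–Shao sample-quantile construction):

```python
import numpy as np

def hpd_interval(samples, h3_vals, p=0.10):
    """Shortest 100(1-p)% interval from weighted draws (sketch)."""
    samples = np.asarray(samples, float)
    q = np.asarray(h3_vals, float)
    q = q / q.sum()                       # normalised weights q_i
    order = np.argsort(samples)
    s_sorted, q_sorted = samples[order], q[order]
    cum = np.cumsum(q_sorted)             # cumulative mass of ordered draws
    best, best_len = None, np.inf
    for lo in range(len(s_sorted)):
        # smallest hi whose cumulative mass covers mass (1 - p) above lo
        target = (cum[lo - 1] if lo > 0 else 0.0) + (1 - p)
        hi = np.searchsorted(cum, target)
        if hi >= len(s_sorted):
            break
        if s_sorted[hi] - s_sorted[lo] < best_len:
            best, best_len = (s_sorted[lo], s_sorted[hi]), s_sorted[hi] - s_sorted[lo]
    return best
```

For a symmetric unimodal posterior the returned interval is close to the equal-tailed one; for skewed posteriors it is shorter.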

5. Simulation Study

In this section, we carry out a Monte Carlo simulation study designed to mimic the lifetime data of the electronic transformers described in [11]. Details of the transformers' data set can be found in Section 1.2. R software was employed to run all of the simulations. The evaluation indicators for point estimation were the mean squared error (MSE) and the estimated value, while for interval estimation they were the average length (AL) and the coverage rate (CR).
Left-truncated and right-censored data were generated as follows. The parameters of the lifetime distribution $EHL(\sigma,\lambda)$ were chosen as $\sigma=5$, $\lambda=2$. The sample size n was chosen as 80, 100, and 120. The left-truncation time $\tau^L$ for all units was set to the year 1980; items failing before 1980 could not be observed, so they were discarded and replaced by new observations. The proportion of truncation was fixed, and two truncation percentages (Trunc.p) of 10% and 30% were chosen for comparison, as in [13]. Truncated units were installed in the time range [1977, 1979], with equal probability (1/3) attached to each year. The remaining units were installed in [1980, 1984], with equal probability (25%) for each year.
The lifetimes of all units were simulated from $EHL(\sigma,\lambda)$. The year of failure was the installation year plus the lifetime. The censoring year for all units was set to 1998 or 2002 for comparison purposes. Observations that failed after the censoring year were right-censored.
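The generation scheme above can be sketched as follows. This is an illustrative translation, not the authors' R code; the EHL lifetimes are drawn by inverting the CDF, assumed here to be $F(x)=\left[(1-e^{-x/\sigma})/(1+e^{-x/\sigma})\right]^{\lambda}$ as in Section 1.1.

```python
import numpy as np

def ehl_rvs(sigma, lam, rng):
    """One EHL draw by inverse-CDF sampling (assumed CDF form)."""
    t = rng.uniform() ** (1.0 / lam)
    return sigma * np.log((1 + t) / (1 - t))

def simulate_ltrc(n, sigma, lam, trunc_p=0.10, cens_year=1998, seed=None):
    """LTRC sample: (observed lifetime, censoring ind., truncation ind., tau_L)."""
    rng = np.random.default_rng(seed)
    n_trunc = int(round(trunc_p * n))
    years_t = [1977, 1978, 1979]              # truncated units, equal probability
    years_o = [1980, 1981, 1982, 1983, 1984]  # remaining units, equal probability
    records = []
    for i in range(n):
        truncated = i < n_trunc
        while True:
            install = rng.choice(years_t if truncated else years_o)
            fail = install + ehl_rvs(sigma, lam, rng)
            if fail >= 1980:      # units failing before 1980 are never observed
                break
        b = 1 if fail <= cens_year else 0            # censoring indicator
        obs = (fail if b else cens_year) - install   # observed lifetime
        tau_l = (1980 - install) if truncated else 0.0
        records.append((obs, b, int(truncated), tau_l))
    return np.array(records)
```

Discard-and-replace keeps the truncation proportion fixed, as in the scheme described above.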
To compute the maximum likelihood estimates, the L-BFGS-B method, one of the most successful quasi-Newton algorithms, was applied; the relevant functions are provided in R software. Asymptotic confidence intervals and bootstrap confidence intervals at the 90% confidence level were also constructed. The bootstrap confidence intervals were computed with $N_{boot}=1000$ bootstrap samples. To calculate Bayesian estimates, both non-informative and informative priors were considered. The four hyperparameters $\gamma$, $\delta$, $\alpha$, and $\beta$ of the prior distributions for $\sigma$ and $\lambda$ were set differently in different settings. For non-informative priors, we set $\gamma=\delta=\alpha=\beta=0.0001$ to minimize the impact of the prior distribution. For informative priors, the hyperparameters were chosen as $\gamma=50$, $\delta=250$, $\alpha=50$, $\beta=25$. The TK method under the two loss functions was employed to compute Bayesian point estimates. For LLF, the LINEX parameter s was set to $s=0.5$ and $s=1$. The importance sampling procedure was used for Bayesian point estimates and for HPD intervals at the 90% credible level.
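As a sketch of the likelihood maximization, the LTRC log-likelihood (failures contribute the density $f$, censored units the survival $1-F$, and truncated units divide by $1-F(\tau_i^L)$) can be minimized with SciPy's L-BFGS-B in place of the R optimizer. The CDF form, written here as $F(x)=\left[1-2(1+e^{x/\sigma})^{-1}\right]^{\lambda}$, and the helper names are our assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(theta, x, b, a, tau):
    """Negative LTRC log-likelihood for EHL(sigma, lam) (sketch)."""
    sigma, lam = theta
    base_x = 1 - 2 / (1 + np.exp(x / sigma))      # F(x) ** (1 / lam)
    base_tau = 1 - 2 / (1 + np.exp(tau / sigma))
    log_f = (np.log(2 * lam / sigma) - x / sigma
             - np.log(1 - np.exp(-2 * x / sigma)) + lam * np.log(base_x))
    ll = np.sum(b * log_f)
    ll += np.sum(np.where(b == 1, 0.0, np.log(1 - base_x ** lam)))
    ll -= np.sum(a * np.log(1 - base_tau ** lam))  # left-truncation adjustment
    return -ll

def ehl_mle(x, b, a, tau, start=(1.0, 1.0)):
    """MLE via L-BFGS-B, mirroring the R routine used in the paper."""
    res = minimize(neg_log_lik, start,
                   args=(np.asarray(x, float), np.asarray(b, float),
                         np.asarray(a, float), np.asarray(tau, float)),
                   method="L-BFGS-B", bounds=[(0.1, 50.0), (0.1, 50.0)])
    return res.x
```

With complete data (all $b_i=1$, no truncation) this reduces to ordinary EHL maximum likelihood, which gives a quick sanity check of the implementation.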
For point estimates, the results of MLEs of σ and λ are shown in Table 1, while the results of non-informative and informative Bayesian estimates are presented in Table A1, Table A2, Table A3 and Table A4 in Appendix A. For interval estimates at the 90 % confidence level, the results of ACI, boot-p, boot-t and HPD intervals are provided in Table A5 and Table A6 in Appendix B.
From Table 1, the following conclusions can be drawn:
(1)
As the sample size n increases, the mean squared errors (MSEs) of both σ and λ decline.
(2)
The smaller the truncation percentage Trunc.p is, the lower the MSEs of both parameters are. Censoring at a later year (2002) also leads to a lower MSE. This is reasonable because data under lower truncation and censoring percentages are closer to the underlying complete data.
(3)
The MLEs of λ perform better than those of σ in terms of MSE, but the estimated values of the two parameters are about the same distance from the true values.
From Table A1, Table A2, Table A3 and Table A4 in Appendix A, we can observe that:
(1)
Bayesian estimates with informative priors are more accurate than MLEs, while non-informative Bayesian estimates resemble MLEs. This shows that a suitable choice of prior distribution and hyperparameters yields a substantial gain in accuracy.
(2)
For non-informative priors, the point estimates obtained by both Tierney–Kadane's approximation and the importance sampling procedure resemble MLEs. For informative priors, the importance sampling method outperforms TK approximation: the MSEs of both parameters stay low no matter how the scheme varies, and the estimated values are closer to the true values.
(3)
As for loss functions, the estimates of σ under LLF perform better than under SELF, while for λ, SELF is slightly better. For σ, setting the LINEX parameter s = 1 is better than s = 0.5, indicating that positive errors should be penalized heavily. For λ, there is little difference between s = 1 and s = 0.5. Estimates obtained by TK approximation are more sensitive to the choice of loss function, whereas the choice makes little difference for importance sampling.
(4)
The results under the 10% truncation percentage with censoring at 2002 perform the best for both σ and λ.
From Table A5 and Table A6 in Appendix B, we can conclude that:
(1)
The average lengths of the intervals become narrower, and the coverage rates rise, as n increases.
(2)
Boot-p, boot-t, and HPD intervals all outperform asymptotic confidence intervals. Among all methods, informative HPD credible intervals have the shortest ALs, while boot-t intervals have the largest CRs. The ALs of boot-p intervals are shorter, and their CRs larger, than those of ACIs. The results of non-informative HPD credible intervals resemble ACIs. Although informative HPD credible intervals have the shortest ALs, their CRs are the lowest. Among all of the interval estimates, boot-t intervals perform the best.
(3)
For σ, the performance of the intervals seems to have no clear relationship with Trunc.p. However, for λ, the CRs of all intervals under the 30% truncation percentage are noticeably smaller than in the other cases. Presumably, an excessive truncation percentage prevents these methods from capturing the structure of the data. As for the censoring year, it appears unrelated to the CRs for both σ and λ, but the later the censoring year is, the smaller the ALs of the intervals are.

6. Real Data Analysis

The famous Channing House data set is analyzed in this section. The data set is available in R software (the `channing' data frame in the package `boot' [20]). The study followed 97 men and 365 women from the opening of the Channing House retirement community (1964) in Palo Alto, California, until their death or the end of the study (1 July 1975). The data set provides information on gender, age (in months) at entry, age (in months) at death or exit, and a censoring indicator. Here, the age at death or exit is the variable we analyze. Residents are above 60 years old (720 months) when they enter the community, so the lifetime data in this study are left-truncated at 720 months. A person's lifetime is right-censored if he or she lives until the end of the study or leaves the center before it ends. The censoring indicator b shows whether an observation is right-censored. Some characteristics of the data are shown in Table 2.
We first consider whether the distribution $EHL(\sigma,\lambda)$ fits the data set well. As many scholars have noted, a complete data set can be examined with goodness-of-fit criteria such as the Kolmogorov–Smirnov (K-S) statistic with its p-value, the Bayesian information criterion (BIC), and the Akaike information criterion (AIC). However, the Channing House data are already truncated and censored. Therefore, we use a simulated data set to show the goodness of fit.

6.1. Fitness Test

The data set is simulated in a manner that mimics the data collection process described in Section 5. Shang et al. [21] discussed parameter estimation for the generalized gamma distribution based on left-truncated and right-censored data; we use the same data generation method as in [21]. The sample size n is 200, and the truncation percentage is 20%. The left-truncation time is set to 1980. The year of installation ($Y_{in}$) is simulated from the following distributions:
For truncated observations,
$$P(Y_{in}=y)=\begin{cases}0.15, & 1960\le y\le 1964,\\ 0.25/15, & 1965\le y\le 1979.\end{cases}$$
For not truncated observations,
$$P(Y_{in}=y)=\begin{cases}0.1, & 1980\le y\le 1985,\\ 0.04, & 1986\le y\le 1995.\end{cases}$$
The lifetimes of the 200 units are simulated from $EHL(\sigma,\lambda)$ with $\sigma=7$ and $\lambda=4$. The right-censoring year is set to 2014.
MLEs of σ and λ are computed with the L-BFGS-B method. We plot the true PDF (the PDF of E H L 7 ,   4 ), the fitted PDF (the PDF with parameters set to MLEs) and the density of a simulated data set in Figure 3. The values of the log-likelihood for the simulated data set are also presented.
Figure 3 shows that the true and fitted PDFs are similar, indicating that exponentiated half-logistic distribution fits left-truncated and right-censored data well.

6.2. Data Processing and Analysis

For the lifetime data in the Channing House data set, we subtract 720 or 620 from each unit's lifetime and then divide by 10, 30, or 50 to obtain different truncation and censoring schemes. This operation does not affect the inference [14]. The schemes that subtract 720 and 620 are labeled subtraction types I and II, respectively, while the schemes that divide by 10, 30, and 50 are labeled division types i, ii, and iii.
Point estimates are calculated using the maximum likelihood method, TK approximation, and the importance sampling procedure. Since an informative prior cannot be obtained, a non-informative prior is applied, and two loss functions, SELF and LLF, are considered for Bayesian estimates. The parameter s of LLF is set to 0.5 and 1. ACI, boot-p, boot-t, and HPD intervals are established at the 90 % confidence/credible level. The results are presented in Table A7, Table A8, Table A9, Table A10 and Table A11 in Appendix C.
Seo and Kang [4] proposed methods to derive moment estimates of the unknown parameters of an EHL distribution. The first and second moments can be obtained as
$$E[X]=2\lambda\sigma\sum_{j=1}^{\infty}\frac{1}{(2j-1)(2j-1+\lambda)},$$
$$E[X^2]=2\lambda\sigma^{2}\left[\sum_{j=1}^{\infty}\frac{1+(-1)^{j+1}}{(j+1)(j+1+\lambda)}\sum_{i=1}^{j}\frac{1}{i}+\sum_{j=1}^{\infty}\frac{1}{j(2j+1)}\sum_{i=1}^{2j-1}\frac{(-1)^{i+1}}{i}\right],$$
and the variance can be derived as $Var[X]=E[X^2]-(E[X])^2$. To approximate the infinite sums in Formulas (63) and (64), we ignore terms smaller than a precision value ($eps=10^{-10}$), since the terms are monotonically decreasing.
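The truncation rule for the series can be sketched as follows (hypothetical helper `ehl_mean`; a convenient check is that for $\lambda=2$ the series in Formula (63) telescopes to $1/2$, so $E[X]=2\sigma$):

```python
def ehl_mean(sigma, lam, eps=1e-10):
    """Approximate E[X] of EHL(sigma, lam) by truncating the series in (63).

    Terms are monotonically decreasing, so summation stops once a term
    drops below the precision value eps.
    """
    total, j = 0.0, 1
    while True:
        term = 1.0 / ((2 * j - 1) * (2 * j - 1 + lam))
        if term < eps:
            break
        total += term
        j += 1
    return 2 * lam * sigma * total
```

The same stopping rule applies to the double series in Formula (64) for $E[X^2]$.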
In the Channing House data set scenario, the expectation and variance estimate the average lifespan and dispersion of the population lifetime sample. Expectations and variances obtained by MLEs in Table A7 are presented in Table A12 in Appendix C.
From Table A7, Table A8, Table A9, Table A10, Table A11 and Table A12 in Appendix C, we can conclude that:
(1)
The estimates of parameter σ for males tend to be slightly larger than those for females, while the estimates of parameter λ for males turn out to be much smaller than those for females.
(2)
The Bayesian estimates for both σ and λ gained by the TK method are generally larger and closer to MLEs than the ones gained by the importance sampling procedure.
(3)
The estimates of both parameters decrease as the divisor increases from 10 to 50. As the subtracted value declines from 720 to 620, the estimates of σ slightly decrease, while those of λ increase sharply.
(4)
The average lengths of the interval estimates of σ are larger for males than for females, whereas for λ the opposite holds.
(5)
The ALs of boot-p, boot-t, and HPD credible intervals are all shorter than ALs of asymptotic confidence intervals. Among them, boot-t intervals have the shortest ALs.
(6)
For parameter σ, the ALs become smaller as the divisor increases from 10 to 50. A larger subtracted value also leads to a decrease in the ALs. However, for parameter λ, the interval estimates show no apparent connection with the different truncation and censoring schemes.
(7)
The expectations and variances of the distribution differ only slightly between the two subtraction types.
(8)
The approximate average lifespan of males is 1079 months, and that of females is 1089.5 months. The variances are larger for males than for females, indicating that the lifespans of females are more concentrated than those of males.

7. Conclusions

This paper investigates both classical and Bayesian inference for the parameters of the exponentiated half-logistic distribution based on left-truncated and right-censored data. Point estimates are first obtained by the maximum likelihood method. Two loss functions, SELF and LLF, are considered for Bayesian estimation, and Tierney–Kadane's approximation and the importance sampling procedure are employed to compute the Bayesian point estimates. Confidence and credible intervals for the unknown parameters are also established: asymptotic confidence intervals are constructed with the help of the observed Fisher information matrix, bootstrap methods are applied to handle small sample sizes, and HPD credible intervals are established via the importance sampling procedure.
Two data sets, a simulated one and a real one, are analyzed to compare the performance of the different methods. Mean squared errors and estimated values are reported for point estimation; average lengths and coverage rates are calculated for interval estimation. In the simulations, Bayesian estimates with suitable informative priors outperform MLEs, and the results obtained by the importance sampling procedure are better than those obtained by TK approximation. Bayesian estimates under LLF perform better than under SELF for σ, while the difference between loss functions for λ is not apparent. Regarding interval estimates, boot-p, boot-t, and HPD credible intervals are preferable to asymptotic confidence intervals. Informative HPD credible intervals have the shortest average lengths, while boot-t intervals have the largest coverage rates; among all of the interval estimates, boot-t intervals perform the best.
In reality, the parameter estimation of exponentiated half-logistic distribution for left-truncated and right-censored data is practical for survival analysis. For future research, competing risks can be combined with the model. Furthermore, more flexible truncation and censoring schemes can also be explored.

Author Contributions

Investigation, X.S. and Z.X.; Supervision, W.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Project 202210004001 of the National Training Program of Innovation and Entrepreneurship for Undergraduates. Wenhao Gui's work was partially supported by the Fund of China Academy of Railway Sciences Corporation Limited (No. 2020YJ120).

Data Availability Statement

The data presented in this study are openly available in R software (the `channing’ data frame in package ‘boot’ [20]).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. The Results of Bayesian Estimates Based on Simulated Data Set

Table A1. The results of non-informative Bayesian estimates for σ using TK approximation and importance sampling.

n | Trunc.p | Method | SELF VALUE | SELF MSE | LLF (s=0.5) VALUE | LLF (s=0.5) MSE | LLF (s=1) VALUE | LLF (s=1) MSE
censored at 1998
80 | 10 | TK | 5.1237 | 0.3908 | 5.0310 | 0.3380 | 4.9506 | 0.3104
 | | Importance sampling | 5.0766 | 0.3346 | 5.0428 | 0.3501 | 5.0144 | 0.3547
 | 30 | TK | 5.2744 | 0.4886 | 5.1696 | 0.3967 | 5.0797 | 0.3407
 | | Importance sampling | 5.0433 | 0.3591 | 5.0060 | 0.3619 | 4.9748 | 0.3810
100 | 10 | TK | 5.1435 | 0.3373 | 5.0689 | 0.2947 | 5.0024 | 0.2688
 | | Importance sampling | 5.0990 | 0.3283 | 5.0764 | 0.3210 | 5.0571 | 0.3189
 | 30 | TK | 5.2288 | 0.3607 | 5.1478 | 0.3033 | 5.0761 | 0.2661
 | | Importance sampling | 5.1195 | 0.3195 | 5.0956 | 0.3026 | 5.0753 | 0.2957
120 | 10 | TK | 5.1129 | 0.2483 | 5.0523 | 0.2223 | 4.9972 | 0.2063
 | | Importance sampling | 5.2318 | 0.2518 | 5.2159 | 0.2409 | 5.2019 | 0.2489
 | 30 | TK | 5.2154 | 0.2984 | 5.1489 | 0.2563 | 5.0889 | 0.2273
 | | Importance sampling | 5.1845 | 0.2478 | 5.1674 | 0.2417 | 5.1525 | 0.2491
censored at 2002
80 | 10 | TK | 5.0847 | 0.3190 | 5.0038 | 0.2862 | 4.9321 | 0.2704
 | | Importance sampling | 5.4596 | 0.3241 | 5.4439 | 0.3178 | 5.4307 | 0.2948
 | 30 | TK | 5.1773 | 0.3606 | 5.0879 | 0.3065 | 5.0094 | 0.2754
 | | Importance sampling | 5.3999 | 0.3409 | 5.3814 | 0.3226 | 5.3659 | 0.3082
100 | 10 | TK | 5.0702 | 0.2331 | 5.0063 | 0.2131 | 4.9484 | 0.2031
 | | Importance sampling | 5.5900 | 0.2691 | 5.5809 | 0.2418 | 5.5731 | 0.2405
 | 30 | TK | 5.1811 | 0.3077 | 5.1100 | 0.2672 | 5.0460 | 0.2411
 | | Importance sampling | 5.5086 | 0.2716 | 5.4971 | 0.2616 | 5.4874 | 0.2606
120 | 10 | TK | 5.0537 | 0.2014 | 5.0014 | 0.1877 | 4.9532 | 0.1805
 | | Importance sampling | 5.7301 | 0.1780 | 5.7241 | 0.1937 | 5.7188 | 0.1964
 | 30 | TK | 5.1637 | 0.2379 | 5.1056 | 0.2099 | 5.0524 | 0.1911
 | | Importance sampling | 5.6487 | 0.2080 | 5.6416 | 0.2104 | 5.6355 | 0.2297
Table A2. The results of non-informative Bayesian estimates for λ using TK approximation and importance sampling.

n | Trunc.p | Method | SELF VALUE | SELF MSE | LLF (s=0.5) VALUE | LLF (s=0.5) MSE | LLF (s=1) VALUE | LLF (s=1) MSE
censored at 1998
80 | 10 | TK | 1.8331 | 0.1383 | 1.8065 | 0.1398 | 1.7815 | 0.1431
 | | Importance sampling | 1.7091 | 0.1213 | 1.7009 | 0.1259 | 1.6924 | 0.1308
 | 30 | TK | 1.5525 | 0.2733 | 1.5316 | 0.2878 | 1.5118 | 0.3025
 | | Importance sampling | 1.5524 | 0.2546 | 1.5424 | 0.2634 | 1.5322 | 0.2725
100 | 10 | TK | 1.7936 | 0.1212 | 1.7735 | 0.1256 | 1.7544 | 0.1307
 | | Importance sampling | 1.5285 | 0.1184 | 1.5217 | 0.1246 | 1.5149 | 0.1311
 | 30 | TK | 1.5469 | 0.2639 | 1.5304 | 0.2761 | 1.5146 | 0.2884
 | | Importance sampling | 1.5134 | 0.2616 | 1.5066 | 0.2682 | 1.4996 | 0.2750
120 | 10 | TK | 1.7822 | 0.1082 | 1.7658 | 0.1129 | 1.7501 | 0.1180
 | | Importance sampling | 1.6107 | 0.1183 | 1.6073 | 0.1109 | 1.6038 | 0.1136
 | 30 | TK | 1.5359 | 0.2588 | 1.5225 | 0.2696 | 1.5096 | 0.2804
 | | Importance sampling | 1.4771 | 0.2423 | 1.4721 | 0.2476 | 1.4669 | 0.2730
censored at 2002
80 | 10 | TK | 1.8327 | 0.1246 | 1.8075 | 0.1268 | 1.7838 | 0.1305
 | | Importance sampling | 1.5791 | 0.1247 | 1.5734 | 0.1395 | 1.5674 | 0.1344
 | 30 | TK | 1.5747 | 0.2470 | 1.5542 | 0.2608 | 1.5347 | 0.2750
 | | Importance sampling | 1.4516 | 0.2483 | 1.4447 | 0.2657 | 1.4376 | 0.2733
100 | 10 | TK | 1.8013 | 0.1112 | 1.7821 | 0.1153 | 1.7638 | 0.1201
 | | Importance sampling | 1.5466 | 0.1080 | 1.5429 | 0.1114 | 1.5392 | 0.1148
 | 30 | TK | 1.5487 | 0.2619 | 1.5330 | 0.2735 | 1.5178 | 0.2852
 | | Importance sampling | 1.4279 | 0.2599 | 1.4228 | 0.2657 | 1.4177 | 0.2715
120 | 10 | TK | 1.8110 | 0.0892 | 1.7950 | 0.0932 | 1.7795 | 0.0978
 | | Importance sampling | 1.4952 | 0.0810 | 1.4925 | 0.0887 | 1.4898 | 0.1065
 | 30 | TK | 1.5557 | 0.2432 | 1.5425 | 0.2532 | 1.5298 | 0.2633
 | | Importance sampling | 1.3931 | 0.2559 | 1.3898 | 0.2598 | 1.3865 | 0.2639
Table A3. The results of informative Bayesian estimates for σ using TK approximation and importance sampling.

n | Trunc.p | Method | SELF VALUE | SELF MSE | LLF (s=0.5) VALUE | LLF (s=0.5) MSE | LLF (s=1) VALUE | LLF (s=1) MSE
censored at 1998
80 | 10 | TK | 5.0933 | 0.3426 | 5.0023 | 0.3000 | 4.9255 | 0.2800
 | | Importance sampling | 4.8494 | 0.0896 | 4.8317 | 0.0931 | 4.8155 | 0.0969
 | 30 | TK | 5.2073 | 0.4247 | 5.1083 | 0.3525 | 5.0243 | 0.3101
 | | Importance sampling | 4.7967 | 0.1112 | 4.7805 | 0.1163 | 4.7659 | 0.1216
100 | 10 | TK | 5.0933 | 0.2886 | 5.0218 | 0.2544 | 4.9602 | 0.2345
 | | Importance sampling | 5.2850 | 0.1499 | 5.2718 | 0.1416 | 5.2599 | 0.1344
 | 30 | TK | 5.1708 | 0.3032 | 5.0924 | 0.2591 | 5.0253 | 0.2168
 | | Importance sampling | 5.2452 | 0.1268 | 5.2329 | 0.1201 | 5.2219 | 0.1145
120 | 10 | TK | 5.0983 | 0.2331 | 5.0385 | 0.2109 | 4.9872 | 0.1947
 | | Importance sampling | 5.1783 | 0.1261 | 5.1658 | 0.1198 | 5.1547 | 0.1147
 | 30 | TK | 5.1636 | 0.2430 | 5.0987 | 0.2110 | 5.0431 | 0.1896
 | | Importance sampling | 5.0976 | 0.0958 | 5.0847 | 0.0917 | 5.0733 | 0.0885
censored at 2002
80 | 10 | TK | 5.0630 | 0.3023 | 4.9826 | 0.2742 | 4.9158 | 0.2615
 | | Importance sampling | 5.1148 | 0.0976 | 5.1070 | 0.0955 | 5.1001 | 0.0937
 | 30 | TK | 5.1418 | 0.3164 | 5.0565 | 0.2721 | 4.9812 | 0.2506
 | | Importance sampling | 5.0316 | 0.0871 | 5.0230 | 0.0863 | 5.0153 | 0.0858
100 | 10 | TK | 5.0844 | 0.2260 | 5.0237 | 0.2081 | 4.9662 | 0.1990
 | | Importance sampling | 5.2561 | 0.1509 | 5.2511 | 0.1479 | 5.2466 | 0.1453
 | 30 | TK | 5.1311 | 0.2581 | 5.0615 | 0.2288 | 5.0010 | 0.2104
 | | Importance sampling | 5.2013 | 0.1217 | 5.1965 | 0.1194 | 5.1922 | 0.1173
120 | 10 | TK | 5.0244 | 0.1833 | 4.9726 | 0.1740 | 4.9271 | 0.1699
 | | Importance sampling | 5.1305 | 0.0945 | 5.1274 | 0.0936 | 5.1246 | 0.0928
 | 30 | TK | 5.1270 | 0.2074 | 5.0710 | 0.1862 | 5.0203 | 0.1716
 | | Importance sampling | 5.0651 | 0.0781 | 5.0619 | 0.0776 | 5.0590 | 0.0771
Table A4. The results of informative Bayesian estimates for λ using TK approximation and importance sampling.

n | Trunc.p | Method | SELF VALUE | SELF MSE | LLF (s=0.5) VALUE | LLF (s=0.5) MSE | LLF (s=1) VALUE | LLF (s=1) MSE
censored at 1998
80 | 10 | TK | 1.8272 | 0.1201 | 1.7997 | 0.1234 | 1.7759 | 0.1279
 | | Importance sampling | 1.8788 | 0.0326 | 1.8728 | 0.0339 | 1.8666 | 0.0352
 | 30 | TK | 1.5725 | 0.2532 | 1.5517 | 0.2638 | 1.5331 | 0.2757
 | | Importance sampling | 1.7702 | 0.0691 | 1.7640 | 0.0718 | 1.7577 | 0.0745
100 | 10 | TK | 1.8053 | 0.1111 | 1.7827 | 0.1187 | 1.7641 | 0.1210
 | | Importance sampling | 1.7319 | 0.0867 | 1.7282 | 0.0886 | 1.7243 | 0.0905
 | 30 | TK | 1.5687 | 0.2396 | 1.5492 | 0.2502 | 1.5357 | 0.2614
 | | Importance sampling | 1.6337 | 0.1482 | 1.6295 | 0.1512 | 1.6253 | 0.1541
120 | 10 | TK | 1.7895 | 0.1023 | 1.7707 | 0.1057 | 1.7562 | 0.1095
 | | Importance sampling | 1.6605 | 0.1312 | 1.6573 | 0.1333 | 1.6541 | 0.1355
 | 30 | TK | 1.5620 | 0.2373 | 1.5482 | 0.2461 | 1.5360 | 0.2553
 | | Importance sampling | 1.5398 | 0.2316 | 1.5355 | 0.2354 | 1.5311 | 0.2394
censored at 2002
80 | 10 | TK | 1.8243 | 0.1171 | 1.7997 | 0.1205 | 1.7766 | 0.1249
 | | Importance sampling | 1.8066 | 0.0585 | 1.8021 | 0.0601 | 1.7976 | 0.0618
 | 30 | TK | 1.5877 | 0.2390 | 1.5663 | 0.2537 | 1.5419 | 0.2591
 | | Importance sampling | 1.6949 | 0.1154 | 1.6900 | 0.1183 | 1.6850 | 0.1212
100 | 10 | TK | 1.8101 | 0.1043 | 1.7918 | 0.1080 | 1.7736 | 0.1124
 | | Importance sampling | 1.7339 | 0.0878 | 1.7308 | 0.0894 | 1.7276 | 0.0911
 | 30 | TK | 1.5871 | 0.2273 | 1.5695 | 0.2365 | 1.5546 | 0.2470
 | | Importance sampling | 1.6360 | 0.1481 | 1.6325 | 0.1506 | 1.6290 | 0.1532
120 | 10 | TK | 1.8104 | 0.0887 | 1.7926 | 0.0921 | 1.7784 | 0.0962
 | | Importance sampling | 1.7429 | 0.0846 | 1.7402 | 0.0859 | 1.7374 | 0.0872
 | 30 | TK | 1.5628 | 0.2344 | 1.5501 | 0.2426 | 1.5381 | 0.2521
 | | Importance sampling | 1.6406 | 0.1478 | 1.6372 | 0.1502 | 1.6338 | 0.1526

Appendix B. The Results of Interval Estimates Based on Simulated Data Set

Table A5. The simulation results of five intervals for σ at the 90% confidence/credible level.

n | Trunc.p | ACI AL | ACI CR | boot-p AL | boot-p CR | boot-t AL | boot-t CR | Non-Info HPD AL | Non-Info HPD CR | Informative HPD AL | Informative HPD CR
censored at 1998
80 | 10 | 1.9175 | 0.9112 | 1.8633 | 0.9116 | 1.9125 | 0.9436 | 1.9133 | 0.9140 | 1.8384 | 0.8612
 | 30 | 2.0034 | 0.9211 | 1.8472 | 0.9285 | 1.9949 | 0.9317 | 2.0049 | 0.9237 | 1.9139 | 0.9042
100 | 10 | 1.7184 | 0.9075 | 1.6645 | 0.9155 | 1.7167 | 0.9071 | 1.7179 | 0.9044 | 1.6599 | 0.8577
 | 30 | 1.7905 | 0.9264 | 1.6537 | 0.9270 | 1.7848 | 0.9543 | 1.7898 | 0.9232 | 1.5698 | 0.8962
120 | 10 | 1.5791 | 0.9073 | 1.5157 | 0.9092 | 1.5617 | 0.9343 | 1.5788 | 0.9054 | 1.4818 | 0.8722
 | 30 | 1.6395 | 0.9249 | 1.5029 | 0.9253 | 1.6284 | 0.9407 | 1.6356 | 0.9250 | 1.4977 | 0.9149
censored at 2002
80 | 10 | 1.8040 | 0.9101 | 1.7546 | 0.9108 | 1.7906 | 0.9131 | 1.8019 | 0.9114 | 1.7286 | 0.8735
 | 30 | 1.8835 | 0.9192 | 1.7626 | 0.9261 | 1.8648 | 0.9373 | 1.8785 | 0.9214 | 1.6254 | 0.8929
100 | 10 | 1.6106 | 0.9020 | 1.5616 | 0.9068 | 1.6063 | 0.9137 | 1.6153 | 0.9017 | 1.5167 | 0.8992
 | 30 | 1.6794 | 0.9074 | 1.5642 | 0.9108 | 1.6604 | 0.9222 | 1.6794 | 0.9044 | 1.5498 | 0.8749
120 | 10 | 1.4767 | 0.9109 | 1.4306 | 0.9160 | 1.4679 | 0.9131 | 1.4760 | 0.9117 | 1.4256 | 0.8793
 | 30 | 1.5469 | 0.9107 | 1.4399 | 0.9144 | 1.5408 | 0.9288 | 1.5518 | 0.9136 | 1.4064 | 0.8701
Table A6. The simulation results of five intervals for λ at the 90% confidence/credible level.

n | Trunc.p | ACI AL | ACI CR | boot-p AL | boot-p CR | boot-t AL | boot-t CR | Non-Info HPD AL | Non-Info HPD CR | Informative HPD AL | Informative HPD CR
censored at 1998
80 | 10 | 1.0264 | 0.7718 | 1.0773 | 0.7775 | 1.0177 | 0.7823 | 1.0285 | 0.7688 | 0.9299 | 0.7597
 | 30 | 0.9375 | 0.6673 | 1.1971 | 0.6696 | 0.9198 | 0.6938 | 0.9328 | 0.6659 | 0.9062 | 0.6328
100 | 10 | 0.9058 | 0.7344 | 0.9508 | 0.7415 | 0.9047 | 0.7425 | 0.9087 | 0.7351 | 0.8167 | 0.7087
 | 30 | 1.0756 | 0.7181 | 1.0582 | 0.7238 | 1.0680 | 0.7331 | 1.0774 | 0.7175 | 1.0496 | 0.6770
120 | 10 | 0.8209 | 0.6994 | 0.8569 | 0.7073 | 0.8036 | 0.7267 | 0.8250 | 0.7012 | 0.7465 | 0.6526
 | 30 | 0.7481 | 0.6169 | 0.7124 | 0.6248 | 0.7359 | 0.6242 | 0.7474 | 0.6148 | 0.6945 | 0.6033
censored at 2002
80 | 10 | 1.0166 | 0.7794 | 1.0385 | 0.7797 | 1.0073 | 0.7959 | 1.0193 | 0.7813 | 0.9837 | 0.7775
 | 30 | 0.9169 | 0.6732 | 1.0983 | 0.6811 | 0.8988 | 0.6998 | 0.9199 | 0.6732 | 0.8701 | 0.6291
100 | 10 | 0.8959 | 0.7356 | 0.9700 | 0.7442 | 0.8802 | 0.7529 | 0.8910 | 0.7351 | 0.8584 | 0.7275
 | 30 | 0.8226 | 0.7192 | 0.9127 | 0.7263 | 0.8109 | 0.7267 | 0.8204 | 0.7211 | 0.7577 | 0.6966
120 | 10 | 0.8143 | 0.7206 | 0.8237 | 0.7294 | 0.8137 | 0.7337 | 0.8119 | 0.7237 | 0.7906 | 0.6766
 | 30 | 0.7415 | 0.6728 | 0.8677 | 0.6776 | 0.7222 | 0.6826 | 0.7438 | 0.6746 | 0.6419 | 0.6585

Appendix C. The Estimation Results Based on Real Data Set

Table A7. The results of MLEs for σ and λ.

Subtraction Type | Division Type | σ_M (Male) | λ_M (Male) | σ_F (Female) | λ_F (Female)
I | i | 12.4113 | 5.0859 | 10.6868 | 8.9206
 | ii | 4.1380 | 5.0832 | 3.5622 | 8.9206
 | iii | 2.4822 | 5.0859 | 2.1373 | 8.9206
II | i | 12.0584 | 12.6146 | 10.5961 | 23.5695
 | ii | 4.0194 | 12.6146 | 3.5320 | 23.5693
 | iii | 2.4116 | 12.6145 | 2.1192 | 23.5692
Table A8. The results of Bayesian estimates for σ using TK approximation and importance sampling.

Subtraction Type | Division Type | Method | Male SELF | Male LLF (s=0.5) | Male LLF (s=1) | Female SELF | Female LLF (s=0.5) | Female LLF (s=1)
I | i | TK | 12.5600 | 12.1079 | 11.7509 | 10.7044 | 10.5934 | 10.4898
 | | Importance sampling | 12.1078 | 12.0853 | 12.0654 | 10.4471 | 10.4455 | 10.4439
 | ii | TK | 4.1867 | 4.1310 | 4.0815 | 3.5682 | 3.5555 | 3.5431
 | | Importance sampling | 4.0531 | 4.0505 | 4.0481 | 3.4804 | 3.4802 | 3.4800
 | iii | TK | 2.5120 | 2.4912 | 2.4724 | 2.1409 | 2.1363 | 2.1318
 | | Importance sampling | 2.4291 | 2.4281 | 2.4272 | 2.0880 | 2.0879 | 2.0879
II | i | TK | 12.0646 | 11.6936 | 11.3897 | 10.5721 | 10.4703 | 10.3745
 | | Importance sampling | 12.2527 | 12.2487 | 12.2453 | 10.0916 | 10.0909 | 10.0902
 | ii | TK | 4.0216 | 3.9769 | 3.9361 | 3.5242 | 3.5125 | 3.5012
 | | Importance sampling | 4.0862 | 4.0858 | 4.0854 | 3.3661 | 3.3660 | 3.3660
 | iii | TK | 2.4129 | 2.3965 | 2.3811 | 2.1145 | 2.1103 | 2.1061
 | | Importance sampling | 2.4610 | 2.4608 | 2.4607 | 2.0197 | 2.0196 | 2.0196
Table A9. The results of Bayesian estimates for λ using TK approximation and importance sampling.

Subtraction Type | Division Type | Method | Male SELF | Male LLF (s=0.5) | Male LLF (s=1) | Female SELF | Female LLF (s=0.5) | Female LLF (s=1)
I | i | TK | 5.2321 | 4.9233 | 4.6730 | 9.0788 | 8.6490 | 8.2902
 | | Importance sampling | 5.1161 | 5.1096 | 5.1023 | 8.4163 | 8.4150 | 8.4138
 | ii | TK | 5.2323 | 4.9232 | 4.6730 | 9.0781 | 8.6490 | 8.2902
 | | Importance sampling | 5.0783 | 5.0720 | 5.0649 | 8.4251 | 8.4239 | 8.4226
 | iii | TK | 5.2323 | 4.9233 | 4.6730 | 9.0783 | 8.6490 | 8.2902
 | | Importance sampling | 5.0626 | 5.0561 | 5.0489 | 8.4335 | 8.4323 | 8.4310
II | i | TK | 13.5821 | 10.8797 | 9.4551 | 24.5404 | 20.2862 | 17.8998
 | | Importance sampling | 12.3958 | 12.3612 | 12.3200 | 23.7676 | 23.7580 | 23.7470
 | ii | TK | 13.5810 | 10.8796 | 9.4550 | 24.5401 | 20.2862 | 17.8999
 | | Importance sampling | 12.3872 | 12.3534 | 12.3125 | 23.7005 | 23.6914 | 23.6809
 | iii | TK | 13.5805 | 10.8793 | 9.4539 | 24.5404 | 20.2862 | 17.8999
 | | Importance sampling | 12.3609 | 12.3243 | 12.2795 | 23.7410 | 23.7321 | 23.7219
Table A10. The results of four intervals for σ at the 90% confidence/credible level.

Subtraction Type | Division Type | ACI Lower | ACI Upper | boot-p Lower | boot-p Upper | boot-t Lower | boot-t Upper | HPD Lower | HPD Upper
Male
I | i | 10.1601 | 14.6626 | 10.1634 | 14.6965 | 10.1668 | 14.6530 | 10.1643 | 14.6666
 | ii | 3.3867 | 4.8875 | 3.3902 | 4.9091 | 3.4018 | 4.8726 | 3.3869 | 4.8794
 | iii | 2.0320 | 2.9325 | 2.0363 | 2.9679 | 2.0424 | 2.9206 | 2.0368 | 2.9269
II | i | 9.9908 | 14.1260 | 9.9906 | 14.1410 | 9.9939 | 14.1167 | 10.0004 | 14.1241
 | ii | 3.3303 | 4.7087 | 3.3312 | 4.7272 | 3.3363 | 4.6997 | 3.3400 | 4.7118
 | iii | 1.9982 | 2.8252 | 2.0042 | 2.8593 | 2.0055 | 2.8137 | 2.0059 | 2.8238
Female
I | i | 9.5815 | 11.7921 | 9.5777 | 11.7956 | 9.5999 | 11.7726 | 9.5908 | 11.7860
 | ii | 3.1938 | 3.9307 | 3.1989 | 3.9677 | 3.2147 | 3.9251 | 3.2051 | 3.9276
 | iii | 1.9163 | 2.3584 | 1.9117 | 2.3977 | 1.9312 | 2.3474 | 1.9262 | 2.3597
II | i | 9.5308 | 11.6615 | 9.5293 | 11.7048 | 9.5432 | 11.6419 | 9.5371 | 11.6565
 | ii | 3.1769 | 3.8872 | 3.1749 | 3.9281 | 3.1936 | 3.8762 | 3.1830 | 3.8794
 | iii | 1.9062 | 2.3323 | 1.9028 | 2.3683 | 1.9142 | 2.3271 | 1.9168 | 2.3242
Table A11. The results of four intervals for λ at the 90% confidence/credible level.

Subtraction Type | Division Type | ACI Lower | ACI Upper | boot-p Lower | boot-p Upper | boot-t Lower | boot-t Upper | HPD Lower | HPD Upper
Male
I | i | 3.2317 | 6.9401 | 3.2272 | 6.9843 | 3.2513 | 6.9321 | 3.2383 | 6.9350
 | ii | 3.2319 | 6.9401 | 3.2312 | 6.9803 | 3.2421 | 6.9335 | 3.2337 | 6.9354
 | iii | 3.2319 | 6.9401 | 3.2354 | 6.9563 | 3.2511 | 6.9327 | 3.2356 | 6.9452
II | i | 6.5424 | 18.6869 | 6.5416 | 18.7190 | 6.5541 | 18.6863 | 6.5540 | 18.6902
 | ii | 6.5423 | 18.6870 | 6.5430 | 18.7159 | 6.5605 | 18.6777 | 6.5429 | 18.6886
 | iii | 6.5424 | 18.6868 | 6.5439 | 18.6946 | 6.5623 | 18.6776 | 6.5525 | 18.6869
Female
I | i | 6.7172 | 11.1242 | 6.7231 | 11.1330 | 6.7406 | 11.1159 | 6.7222 | 11.1282
 | ii | 6.7172 | 11.1242 | 6.7206 | 11.1567 | 6.7418 | 11.1055 | 6.7187 | 11.1286
 | iii | 6.7172 | 11.1242 | 6.7131 | 11.1314 | 6.7182 | 11.1175 | 6.7228 | 11.1325
II | i | 15.7819 | 31.3572 | 15.7838 | 31.3858 | 15.8027 | 31.3522 | 15.7925 | 31.3622
 | ii | 15.7819 | 31.3569 | 15.7847 | 31.3579 | 15.7874 | 31.3424 | 15.7884 | 31.3522
 | iii | 15.7818 | 31.3567 | 15.7796 | 31.3721 | 15.8064 | 31.3411 | 15.7896 | 31.3637
Table A12. The expectations and variances of EHL distributions obtained by the MLEs in Table A7.

Subtraction Type | Division Type | Male E[X] | Male Var[X] | Female E[X] | Female Var[X]
I | i | 1080.2536 | 154,839.6810 | 1089.7549 | 18,721.4480
 | ii | 1080.2724 | 154,857.3300 | 1089.7551 | 18,721.4760
 | iii | 1080.2532 | 154,831.9750 | 1089.7551 | 18,721.4925
II | i | 1078.8222 | 234,413.5690 | 1089.2238 | 18,524.9650
 | ii | 1078.8227 | 234,413.9550 | 1089.2240 | 18,525.0240
 | iii | 1078.8225 | 234,413.9075 | 1089.2240 | 18,525.0675

Figure 1. PDF (left) and CDF (right) of the exponentiated half-logistic distribution.
Figure 2. Failure rate function of the exponentiated half-logistic distribution.
Figure 3. The true and fitted PDFs, and the density of a simulated data set.
Table 1. The simulation results of MLEs for σ and λ.

                              σ                    λ
n      Trunc. p (%)     VALUE     MSE        VALUE     MSE
Censored at 1998
80         10           5.0701    0.3477     1.7790    0.1334
80         30           5.1232    0.3728     1.5507    0.2766
100        10           5.2005    0.3117     1.7850    0.1269
100        30           5.1324    0.3003     1.5216    0.2826
120        10           5.0873    0.2433     1.7545    0.1156
120        30           5.1252    0.2435     1.5242    0.2699
Censored at 2002
80         10           4.9796    0.2955     1.8134    0.1240
80         30           5.0669    0.3180     1.5649    0.2593
100        10           4.9992    0.2284     1.7950    0.1126
100        30           5.1001    0.2637     1.5371    0.2624
120        10           5.0116    0.1991     1.7902    0.0994
120        30           5.0763    0.2170     1.5365    0.2574
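A Monte Carlo study of this shape can be reproduced with a short sketch. The snippet below is a simplified illustration, not the paper's exact scheme: it draws complete (untruncated, uncensored) samples from the exponentiated half-logistic distribution with CDF F(x) = [(1 − e^(−x/σ))/(1 + e^(−x/σ))]^λ via the inverse-CDF method, fits σ and λ by maximizing the log-likelihood, and reports the average estimate and MSE over replications; the replication count and starting values are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def ehl_sample(sigma, lam, size, rng):
    # Inverse-CDF sampling: solve u = [(1 - e^(-x/sigma)) / (1 + e^(-x/sigma))]^lam for x.
    u = rng.uniform(size=size)
    v = u ** (1.0 / lam)
    return sigma * np.log((1.0 + v) / (1.0 - v))

def ehl_negloglik(params, x):
    # log f(x) = log(2*lam/sigma) - x/sigma + (lam-1)*log(1-e^(-x/sigma)) - (lam+1)*log(1+e^(-x/sigma))
    sigma, lam = params
    if sigma <= 0 or lam <= 0:
        return np.inf
    t = x / sigma
    e = np.exp(-t)
    return -np.sum(np.log(2.0 * lam / sigma) - t
                   + (lam - 1.0) * np.log1p(-e)
                   - (lam + 1.0) * np.log1p(e))

rng = np.random.default_rng(0)
sigma0, lam0 = 5.0, 1.5          # true values, matching the simulation setting
reps, n = 200, 100               # illustrative replication count and sample size
est = np.array([
    minimize(ehl_negloglik, x0=[1.0, 1.0],
             args=(ehl_sample(sigma0, lam0, n, rng),),
             method="Nelder-Mead").x
    for _ in range(reps)
])
mse = ((est - [sigma0, lam0]) ** 2).mean(axis=0)
print("avg estimates:", est.mean(axis=0), "MSE:", mse)
```

Because truncation and censoring are omitted here, the resulting MSEs will not match Table 1 exactly; the sketch only shows the estimation-and-averaging loop behind each table cell.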
Table 2. Basic statistics of the Channing House data (ages in months).

Gender     Sample Size    Complete Lifetimes    AVG Age Entry    AVG Age Death/Exit
Male            97                51                917.9               991.6
Female         365               235                902.7               984.5
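Summaries of this kind can be tabulated directly from the raw records. A minimal sketch follows, assuming a hypothetical record layout of (gender, entry age, exit age, death indicator) with ages in months; the sample rows are invented for illustration and are not the Channing House data.

```python
import numpy as np

# Hypothetical Channing-House-style records: (gender, entry age, exit age, died?).
records = [
    ("Male",   1042, 1172, 1), ("Male",    921, 1040, 0),
    ("Female",  885, 1001, 1), ("Female", 1013, 1070, 1),
    ("Female",  902,  955, 0),
]

def summarize(records, gender):
    rows = [r for r in records if r[0] == gender]
    entry = np.array([r[1] for r in rows], dtype=float)
    exit_ = np.array([r[2] for r in rows], dtype=float)
    died = np.array([r[3] for r in rows])
    return {
        "sample size": len(rows),
        "complete lifetimes": int(died.sum()),  # observed deaths, i.e. uncensored lifetimes
        "avg entry age": entry.mean(),
        "avg exit age": exit_.mean(),
    }

print(summarize(records, "Female"))
```

Applying `summarize` per gender to the full data set yields the columns of Table 2; exit ages of survivors enter the average as right-censored observation times.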
Share and Cite

Song, X.; Xiong, Z.; Gui, W. Parameter Estimation of Exponentiated Half-Logistic Distribution for Left-Truncated and Right-Censored Data. Mathematics 2022, 10, 3838. https://doi.org/10.3390/math10203838