Article

Different Estimation Methods for Type I Half-Logistic Topp–Leone Distribution

by Ramadan A. ZeinEldin 1,2, Christophe Chesneau 3,*, Farrukh Jamal 4 and Mohammed Elgarhy 5
1 Deanship of Scientific Research, King AbdulAziz University, Jeddah 21442, Saudi Arabia
2 Faculty of Graduate Studies for Statistical Research, Cairo University, Cairo 11865, Egypt
3 Department of Mathematics, Université de Caen, LMNO, Campus II, Science 3, Caen 14032, France
4 Department of Statistics, Govt. S.A Postgraduate College Dera Nawab Sahib, Bahawalpur, Punjab 63100, Pakistan
5 Valley High Institute for Management Finance and Information Systems, Obour, Qaliubia 11828, Egypt
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(10), 985; https://doi.org/10.3390/math7100985
Submission received: 20 September 2019 / Revised: 11 October 2019 / Accepted: 12 October 2019 / Published: 17 October 2019

Abstract:
In this study, we propose a new flexible two-parameter continuous distribution with support on the unit interval. It can be identified as a special member of the so-called type I half-logistic-G family of distributions, defined with the Topp–Leone distribution as baseline. Among its features, the corresponding probability density function can be left-skewed, right-skewed, approximately symmetric, J-shaped, as well as reverse J-shaped, making it suitable for modeling a wide variety of data sets. It thus provides an alternative to the so-called beta and Kumaraswamy distributions. The mathematical properties of the new distribution are determined, deriving the asymptotes, shapes, quantile function, skewness, kurtosis, some power series expansions, ordinary moments, incomplete moments, moment-generating function, stress strength parameter, and order statistics. Then, a statistical treatment of the related model is proposed. The estimation of the unknown parameters is investigated through seven methods, all described in detail and compared by means of a simulation study. Two practical data sets are analyzed, showing the usefulness of the proposed model.

1. Introduction

Over the past several decades, motivated by the growing demand of statistical models in many applied areas, numerous general families of continuous distributions have been introduced. Most of them consist of adding extra parameters to well-established continuous distributions in order to give them new interesting features. The most notable of these families are the exponentiated-G family [1], the beta-G family [2], and the gamma-G family [3]. Recent promising families include the Kumaraswamy-G family [4], the type I half-logistic-G family [5], the generalized odd log-logistic family [6], the odd power Cauchy family [7], the exponentiated generalized Topp–Leone-G family [8], and the type II general inverse exponential-G family [9].
The foundation of our study is the type I half-logistic-G (TIHL-G) family [5]. This general family is characterized by the cumulative distribution function (cdf) given by
F(x; λ, ζ) = \frac{1 − [1 − G(x; ζ)]^{λ}}{1 + [1 − G(x; ζ)]^{λ}}, x ∈ ℝ,
where λ > 0 and G ( x ; ζ ) is a baseline cdf of a continuous distribution which may depend on a parameter vector ζ . The corresponding probability density function (pdf) is given by
f(x; λ, ζ) = \frac{2 λ g(x; ζ) [1 − G(x; ζ)]^{λ − 1}}{\{1 + [1 − G(x; ζ)]^{λ}\}^{2}}, x ∈ ℝ,
where g ( x ; ζ ) denotes the pdf associated to G ( x ; ζ ) . Also, the corresponding hazard rate function (hrf) is given by
h(x; λ, ζ) = \frac{λ g(x; ζ)}{[1 − G(x; ζ)] \{1 + [1 − G(x; ζ)]^{λ}\}}, x ∈ ℝ.
In a former work, Cordeiro et al. [5] studied the main mathematical and statistical properties of the type I half-logistic-G family, with an emphasis on special members with support on the semi-infinite interval ( 0 , + ∞ ). In particular, it is shown that the additional parameter λ plays a significant role, allowing new features to be reached in terms of flexibility in comparison to the baseline distribution. Applications to two practical data sets are given for the type I half-logistic-G family defined with the Weibull distribution as baseline. Numerical results show a good fit of the proposed model to both data sets, with the maximum likelihood method used to estimate the model parameters.
In this study, we investigate the potential of a new special member of the type I half-logistic-G family with support over the unit interval ( 0 , 1 ) , defined with the Topp–Leone distribution as baseline. The Topp–Leone distribution was introduced by [10] and has been the object of attention in many studies, mainly thanks to its tractability and various favorable mathematical properties. We refer the interested reader to [11,12], among others. We would also like to mention that the Topp–Leone distribution has inspired some general families, such as [13] for the Topp–Leone-G family, [14] for the Topp–Leone-G exponential power series family, and [15] for the type II Topp–Leone-G family. Here, we aim to profit from the combined features of the type I half-logistic-G family and the Topp–Leone distribution by introducing a new distribution called the type I half-logistic Topp–Leone distribution. As initial motivation, we would like to mention that the corresponding probability density function shows a great flexibility in terms of shape; it can be left-skewed, right-skewed, approximately symmetrical, J-shaped, as well as reverse J-shaped. This is an undeniable plus from a data fitting point of view. Thus, with the same support, the type I half-logistic Topp–Leone distribution provides an alternative to other well-established two-parameter continuous distributions with support on ( 0 , 1 ) , such as the beta distribution introduced by [16] and the Kumaraswamy distribution introduced by [17]. This study is devoted to the complete mathematical and statistical treatments of this new distribution. An important part presents the estimation of the model parameters, where the following seven methods are investigated: maximum likelihood estimation, least squares and weighted least squares estimation, Cramer–von Mises minimum distance estimation, percentile estimation, as well as Anderson–Darling and right-tail Anderson–Darling estimation. Applications to two practical data sets show the potential of the type I half-logistic Topp–Leone model, with favorable goodness-of-fit results in comparison to the beta and Kumaraswamy models, among others.
The rest of this paper is outlined as follows. In Section 2, we present the type I half-logistic Topp–Leone distribution. Section 3 is devoted to its mathematical properties. The model parameter estimations are investigated in Section 4. The applications are given in Section 5. Concluding remarks are formulated in Section 6.

2. The Type I Half-Logistic Topp–Leone Distribution

As previously mentioned, the type I half-logistic Topp–Leone (TIHLTL) distribution is defined as the special member of the type I half-logistic-G (TIHL-G) family with the Topp–Leone distribution as baseline. Let us recall that the Topp–Leone distribution is characterized by the cdf and pdf given by, respectively,
G(x; α) = x^{α} (2 − x)^{α}, g(x; α) = 2 α x^{α − 1} (1 − x) (2 − x)^{α − 1}, x ∈ (0, 1),
where α > 0 . The crucial functions of the TIHLTL distribution are presented below.
The cdf of the TIHLTL distribution is given by
F(x; λ, α) = \frac{1 − [1 − x^{α} (2 − x)^{α}]^{λ}}{1 + [1 − x^{α} (2 − x)^{α}]^{λ}}, x ∈ (0, 1),
where λ , α > 0 .
The corresponding probability density function (pdf) is given by
f(x; λ, α) = \frac{4 λ α x^{α − 1} (1 − x) (2 − x)^{α − 1} [1 − x^{α} (2 − x)^{α}]^{λ − 1}}{\{1 + [1 − x^{α} (2 − x)^{α}]^{λ}\}^{2}}, x ∈ (0, 1).
The corresponding hazard rate function (hrf) and cumulative hazard rate function (chrf) are respectively given by:
h(x; λ, α) = \frac{2 λ α x^{α − 1} (1 − x) (2 − x)^{α − 1}}{[1 − x^{α} (2 − x)^{α}] \{1 + [1 − x^{α} (2 − x)^{α}]^{λ}\}}, x ∈ (0, 1)
and
H(x; λ, α) = −log(2) − λ log[1 − x^{α} (2 − x)^{α}] + log\{1 + [1 − x^{α} (2 − x)^{α}]^{λ}\}, x ∈ (0, 1).
Some plots of the pdf and hrf of the TIHLTL distribution are displayed in Figure 1 and Figure 2, respectively. One can observe that the pdf can be left-skewed, right-skewed, approximately symmetric, J-shaped, as well as reverse J-shaped, whereas the hrf can be increasing or bathtub-shaped. This shows the great flexibility of the TIHLTL distribution and its potential in terms of data fitting.
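For readers who want to reproduce such plots, the crucial functions above translate directly into code. The following is a minimal R sketch; the function names dtihltl and ptihltl are our own illustrative names, not part of any package.

```r
# pdf and cdf of the TIHLTL distribution (sketch; function names are ours)
dtihltl <- function(x, lambda, alpha) {
  t <- x^alpha * (2 - x)^alpha
  4 * lambda * alpha * x^(alpha - 1) * (1 - x) * (2 - x)^(alpha - 1) *
    (1 - t)^(lambda - 1) / (1 + (1 - t)^lambda)^2
}
ptihltl <- function(x, lambda, alpha) {
  t <- x^alpha * (2 - x)^alpha
  (1 - (1 - t)^lambda) / (1 + (1 - t)^lambda)
}
# example: plot the pdf for lambda = 2 and alpha = 0.5
curve(dtihltl(x, lambda = 2, alpha = 0.5), from = 0.001, to = 0.999)
```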

3. Mathematical Properties

In this section, some properties of the TIHLTL distribution are discussed.

3.1. Asymptotes and Shapes

Here, we investigate the influence of the parameters λ and α on the asymptotes of the main functions of the TIHLTL distribution. When x → 0, we have
F(x; λ, α) ∼ λ 2^{α − 1} x^{α}, f(x; λ, α) ∼ λ 2^{α − 1} α x^{α − 1}, h(x; λ, α) ∼ λ 2^{α − 1} α x^{α − 1}.
Thus, when α ∈ (0, 1), we obtain f(x; λ, α) → +∞; when α = 1, we get f(x; λ, α) → λ; and when α > 1, we have f(x; λ, α) → 0. The same holds for h(x; λ, α), under the same conditions.
When x → 1, we have
1 − F(x; λ, α) ∼ 2 α^{λ} (1 − x)^{2λ}, f(x; λ, α) ∼ 4 λ α^{λ} (1 − x)^{2λ − 1}, h(x; λ, α) ∼ 2 λ (1 − x)^{−1}.
Thus, when λ ∈ (0, 1/2), we have f(x; λ, α) → +∞; when λ = 1/2, we obtain f(x; λ, α) → 2 α^{1/2}; and when λ > 1/2, we get f(x; λ, α) → 0. We always have h(x; λ, α) → +∞.
The shapes of the pdf and hrf of the TIHLTL distribution can be described analytically. By adopting a common approach, the critical point(s) for f(x; λ, α) is (are) the root(s) of the following equation: {log[f(x; λ, α)]}′ = 0, that is,
\frac{2 α (x^{2} − 2x + 1) − x^{2} + 2x − 2}{x (1 − x) (2 − x)} − (λ − 1) \frac{2 α x^{α − 1} (1 − x) (2 − x)^{α − 1}}{1 − x^{α} (2 − x)^{α}} + \frac{4 λ α x^{α − 1} (1 − x) (2 − x)^{α − 1} [1 − x^{α} (2 − x)^{α}]^{λ − 1}}{1 + [1 − x^{α} (2 − x)^{α}]^{λ}} = 0.
The analytical expression for the solution(s) is clearly not available. However, for given α and λ, numerical evaluation of this (these) solution(s) is possible by the use of any mathematical software (e.g., Mathematica, R, etc.). As usual, if x = x* is a root of this equation, then the study of υ* = {log[f(x; λ, α)]}″ evaluated at x = x* is useful to identify its nature; it is a local maximum if υ* < 0, a local minimum if υ* > 0, and it is a point of inflection if υ* = 0.
Adopting a similar methodology, the critical point(s) for h(x; λ, α) is (are) the root(s) of the following equation: {log[h(x; λ, α)]}′ = 0, that is,
\frac{2 α (x^{2} − 2x + 1) − x^{2} + 2x − 2}{x (1 − x) (2 − x)} + \frac{2 α x^{α − 1} (1 − x) (2 − x)^{α − 1}}{1 − x^{α} (2 − x)^{α}} + \frac{2 λ α x^{α − 1} (1 − x) (2 − x)^{α − 1} [1 − x^{α} (2 − x)^{α}]^{λ − 1}}{1 + [1 − x^{α} (2 − x)^{α}]^{λ}} = 0.
As usual, if x = x_o is a root of this equation, then the study of τ_o = {log[h(x; λ, α)]}″ evaluated at x = x_o is useful to identify its nature.
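In practice, when the pdf is unimodal, its critical point can also be located by direct numerical optimization instead of root finding. A minimal R sketch, reusing the dtihltl function from Section 2 (the name mode_tihltl is ours), is given below; it is only meaningful under the unimodality assumption.

```r
# numerically locate the mode of the pdf when it is unimodal (sketch)
mode_tihltl <- function(lambda, alpha) {
  optimize(function(x) dtihltl(x, lambda, alpha),
           interval = c(1e-6, 1 - 1e-6), maximum = TRUE)$maximum
}
mode_tihltl(lambda = 2, alpha = 2)
```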
Hereafter, we consider a random variable (rv) X following the TIHLTL distribution, that is, having the cdf given by (1).

3.2. Quantile Function

The quantile function (qf) of X is given by the function Q(y; λ, α) satisfying the equations F(Q(y; λ, α); λ, α) = Q(F(y; λ, α); λ, α) = y, y ∈ (0, 1). After some algebraic manipulations, we obtain
Q(y; λ, α) = 1 − \sqrt{1 − [1 − (\frac{1 − y}{1 + y})^{1/λ}]^{1/α}}, y ∈ (0, 1).
In particular, the median of X is given by
M = Q(1/2; λ, α) = 1 − \sqrt{1 − [1 − (1/3)^{1/λ}]^{1/α}}.
Thanks to the qf, the TIHLTL distribution can be simulated as follows. For any rv U following the uniform distribution U(0, 1), the rv X_U = Q(U; λ, α) follows the TIHLTL distribution. Thus, with this formula, realizations of U give realizations of X_U. This aspect will be developed in Section 4.
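A minimal R version of this inverse transform scheme is sketched below; qtihltl and rtihltl are our own illustrative names.

```r
# quantile function and random generation via the inverse transform (sketch)
qtihltl <- function(y, lambda, alpha) {
  1 - sqrt(1 - (1 - ((1 - y) / (1 + y))^(1 / lambda))^(1 / alpha))
}
rtihltl <- function(n, lambda, alpha) qtihltl(runif(n), lambda, alpha)
# example: five simulated values from the TIHLTL distribution with lambda = 1.5, alpha = 2
set.seed(123)
rtihltl(5, lambda = 1.5, alpha = 2)
```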
Finally, upon differentiation of Q(y; λ, α) with respect to y, we obtain the quantile density function given by
q(y; λ, α) = \frac{[\frac{1 − y}{(1 + y)^{2}} + \frac{1}{1 + y}] (\frac{1 − y}{1 + y})^{1/λ − 1} [1 − (\frac{1 − y}{1 + y})^{1/λ}]^{1/α − 1}}{2 λ α \sqrt{1 − [1 − (\frac{1 − y}{1 + y})^{1/λ}]^{1/α}}}, y ∈ (0, 1).
This function plays an important role in defining some useful statistical tools (see [18]).

3.3. Skewness and Kurtosis

The skewness and kurtosis of the TIHLTL distribution naturally depend on the values of λ and α . The precise effect of these parameters can be measured by the Bowley skewness and the Moors kurtosis, introduced by [19,20], respectively. They are both defined with the qf as, for the Bowley skewness,
S(λ, α) = \frac{Q(1/4; λ, α) + Q(3/4; λ, α) − 2 Q(1/2; λ, α)}{Q(3/4; λ, α) − Q(1/4; λ, α)}
and, for the Moors kurtosis,
K(λ, α) = \frac{Q(7/8; λ, α) − Q(5/8; λ, α) + Q(3/8; λ, α) − Q(1/8; λ, α)}{Q(6/8; λ, α) − Q(2/8; λ, α)}.
The Bowley skewness and Moors kurtosis are well known to be less sensitive to possible outliers than other measures of skewness and kurtosis. As a benchmark, for the standard normal distribution, the Bowley skewness is equal to 0 and the Moors kurtosis is equal to 1.233.
The plots of S ( λ , α ) and K ( λ , α ) are displayed in Figure 3 and Figure 4 for λ , α ( 1 , 5 ) , respectively. In particular, we see that the effects of α and λ on S ( λ , α ) are significant, showing negative and positive values, indicating negative and positive skewness properties, respectively. Also, we observe that the Moors kurtosis increases as λ and α increase.
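Plots such as those in Figures 3 and 4 only require the qf. A short R sketch of the two measures follows; bowley and moors are our own names, and qtihltl is the function sketched in Section 3.2.

```r
# Bowley skewness and Moors kurtosis computed from the quantile function (sketch)
bowley <- function(lambda, alpha) {
  q <- function(p) qtihltl(p, lambda, alpha)
  (q(1/4) + q(3/4) - 2 * q(1/2)) / (q(3/4) - q(1/4))
}
moors <- function(lambda, alpha) {
  q <- function(p) qtihltl(p, lambda, alpha)
  (q(7/8) - q(5/8) + q(3/8) - q(1/8)) / (q(6/8) - q(2/8))
}
bowley(2, 3); moors(2, 3)
```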

3.4. Power Series Expansion

The following result presents a useful power series expansion for f ( x ; λ , α ) .
Proposition 1.
For any x ∈ (0, 1), we have the following power series expansion:
f(x; λ, α) = \sum_{k, ℓ, m = 0}^{+∞} a_{k, ℓ, m} x^{α ℓ + m − 1},
where
a_{k, ℓ, m} = \binom{λ k}{ℓ} \binom{α ℓ}{m} (−1)^{k + ℓ + m} 2^{α ℓ − m + 1} (α ℓ + m)
(with a_{k, 0, 0} = 0, such that the sums can be considered without the conjoint value (ℓ, m) = (0, 0)) and \binom{v}{u} denotes the generalized binomial coefficient defined by \binom{v}{u} = v (v − 1) ⋯ (v − u + 1) / u!.
Proof. 
Since [1 − x^{α} (2 − x)^{α}]^{λ} ∈ (0, 1), by using the standard geometric formula, we have
F(x; λ, α) = −1 + 2 \frac{1}{1 + [1 − x^{α} (2 − x)^{α}]^{λ}} = −1 + 2 \sum_{k = 0}^{+∞} (−1)^{k} [1 − x^{α} (2 − x)^{α}]^{λ k}.
Since x^{α} (2 − x)^{α} ∈ (0, 1) and x/2 ∈ (0, 1), by applying the generalized binomial formula two times in a row, we obtain
[1 − x^{α} (2 − x)^{α}]^{λ k} = \sum_{ℓ = 0}^{+∞} \binom{λ k}{ℓ} (−1)^{ℓ} x^{α ℓ} (2 − x)^{α ℓ} = \sum_{ℓ, m = 0}^{+∞} \binom{λ k}{ℓ} \binom{α ℓ}{m} (−1)^{ℓ + m} 2^{α ℓ − m} x^{α ℓ + m}.
By combining the above equalities, we get
F(x; λ, α) = −1 + \sum_{k, ℓ, m = 0}^{+∞} \binom{λ k}{ℓ} \binom{α ℓ}{m} (−1)^{k + ℓ + m} 2^{α ℓ − m + 1} x^{α ℓ + m}.
Upon differentiation, we obtain the desired result. This ends the proof of Proposition 1. □
The above result will be used to determine some structural properties of the TIHLTL distribution.

3.5. Ordinary Moments

In view of the asymptotes of f ( x ; λ , α ) , for any positive integer r, the r-th ordinary moment of X exists. By using Proposition 1, we can express it as
μ′_r = E(X^{r}) = \int_0^1 x^{r} f(x; λ, α) dx = \sum_{k, ℓ, m = 0}^{+∞} a_{k, ℓ, m} \frac{1}{r + α ℓ + m}.
Owing to this expression, the mean of X can be expressed as
μ = μ′_1 = \sum_{k, ℓ, m = 0}^{+∞} a_{k, ℓ, m} \frac{1}{1 + α ℓ + m}.
Similarly, the variance of X is given by σ^{2} = μ′_2 − μ^{2}. Other important quantities can be deduced, such as the r-th central moment of X given by
μ_r = E[(X − μ)^{r}] = \sum_{k = 0}^{r} \binom{r}{k} (−1)^{k} μ^{k} μ′_{r − k},
as well as the skewness and kurtosis coefficients of X based on central moments given by, respectively,
S(X) = \frac{μ_3}{μ_2^{3/2}}, K(X) = \frac{μ_4}{μ_2^{2}}.
Like the Bowley skewness and the Moors kurtosis, these two coefficients are of importance because they measure the asymmetry and the peakedness of the TIHLTL distribution, respectively.
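Since the series above involve three infinite sums, a convenient numerical check is to compute the moments by direct integration of the pdf. The R sketch below does this with dtihltl as defined earlier (moment_tihltl is our own name); the standard central-moment identities are used for the skewness and kurtosis coefficients.

```r
# ordinary moments by numerical integration, plus the moment skewness/kurtosis (sketch)
moment_tihltl <- function(r, lambda, alpha) {
  integrate(function(x) x^r * dtihltl(x, lambda, alpha), 0, 1)$value
}
mu   <- moment_tihltl(1, lambda = 2, alpha = 1.5)      # mean
sig2 <- moment_tihltl(2, 2, 1.5) - mu^2                # variance
mu3  <- moment_tihltl(3, 2, 1.5) - 3 * mu * moment_tihltl(2, 2, 1.5) + 2 * mu^3
mu4  <- moment_tihltl(4, 2, 1.5) - 4 * mu * moment_tihltl(3, 2, 1.5) +
        6 * mu^2 * moment_tihltl(2, 2, 1.5) - 3 * mu^4
c(skewness = mu3 / sig2^1.5, kurtosis = mu4 / sig2^2)
```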

3.6. Incomplete Moments

Let A be an event and 1_A be the rv such that 1_A = 1 if the event A is realized, and 1_A = 0 otherwise. Then, for any t ∈ (0, 1), the r-th incomplete moment of X is given by
μ′_r(t) = E(X^{r} 1_{\{X ≤ t\}}) = \int_0^t x^{r} f(x; λ, α) dx = \sum_{k, ℓ, m = 0}^{+∞} a_{k, ℓ, m} \frac{t^{r + α ℓ + m}}{r + α ℓ + m}.
In particular, the incomplete mean of X is given by
μ′_1(t) = \sum_{k, ℓ, m = 0}^{+∞} a_{k, ℓ, m} \frac{t^{1 + α ℓ + m}}{1 + α ℓ + m}.
Other important probabilistic quantities include the mean deviation of X about the mean and the mean deviation of X about the median, respectively given by
δ_1 = E(|X − μ|) = 2 μ F(μ; λ, α) − 2 μ′_1(μ), δ_2 = E(|X − M|) = μ − 2 μ′_1(M).

3.7. Moment-Generating Function

The moment-generating function of X is obtained as, for t ∈ ℝ,
M(t) = E(e^{t X}) = \int_0^1 e^{t x} f(x; λ, α) dx = \sum_{k, ℓ, m = 0}^{+∞} a_{k, ℓ, m} \int_0^1 x^{α ℓ + m − 1} e^{t x} dx.
The last integral term can be calculated in different manners according to the value of t. For instance, if t < 0, one can remark that \int_0^1 x^{α ℓ + m − 1} e^{t x} dx = (−t)^{−(α ℓ + m)} γ(α ℓ + m, −t), where γ(s, x) denotes the lower incomplete gamma function defined by γ(s, x) = \int_0^x u^{s − 1} e^{−u} du. Another point of view is to consider the power series of the exponential function: for any t ∈ ℝ, we have
\int_0^1 x^{α ℓ + m − 1} e^{t x} dx = \sum_{k = 0}^{+∞} \frac{t^{k}}{k! (k + α ℓ + m)}.

3.8. Stress Strength Parameter

Here, we derive the stress strength parameter defined by R = P(X_2 < X_1), when X_1 and X_2 are independent rvs following the TIHLTL distribution with the parameters (λ_1, α_1) and (λ_2, α_2), respectively. The parameter R is central in reliability theory and deserves special treatment in our context. Further details can be found in [21]. First of all, let μ′^{(1)}_0 = 1 and, for any r > 0,
μ′^{(1)}_r = E(X_1^{r}) = \sum_{u, v, w = 0}^{+∞} a^{(1)}_{u, v, w} \frac{1}{r + α_1 v + w},
where a^{(1)}_{u, v, w} = \binom{λ_1 u}{v} \binom{α_1 v}{w} (−1)^{u + v + w} 2^{α_1 v − w + 1} (α_1 v + w). Then, by virtue of the expression of F(x; λ_2, α_2) given by (4), we have
R = \int_0^1 F(x; λ_2, α_2) f(x; λ_1, α_1) dx = −1 + \sum_{k, ℓ, m = 0}^{+∞} \binom{λ_2 k}{ℓ} \binom{α_2 ℓ}{m} (−1)^{k + ℓ + m} 2^{α_2 ℓ − m + 1} μ′^{(1)}_{α_2 ℓ + m}.
When λ 1 = λ 2 and α 1 = α 2 , we have R = 1 / 2 .
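The value of R can also be approximated by simulation. A minimal Monte Carlo sketch in R, with rtihltl from Section 3.2, is:

```r
# Monte Carlo approximation of R = P(X2 < X1) (sketch)
set.seed(1)
x1 <- rtihltl(1e5, lambda = 1.5, alpha = 2)
x2 <- rtihltl(1e5, lambda = 0.5, alpha = 3)
mean(x2 < x1)
# sanity check: identical parameters must give a value close to 1/2
mean(rtihltl(1e5, 1.5, 2) < rtihltl(1e5, 1.5, 2))
```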

3.9. Order Statistics

Many natural phenomena can be modeled by the so-called order statistics, explaining their importance for statisticians. The complete theory can be found in [22]. Here, we provide a contribution to the subject by investigating some properties of the order statistics for the TIHLTL distribution. Let X_1, …, X_n be n independent and identically distributed rvs following the TIHLTL distribution. Then, the i-th order statistic is defined by the i-th rv X_{(i)} such that, after reordering of X_1, …, X_n in an ascending manner, X_{(1)} ≤ X_{(2)} ≤ ⋯ ≤ X_{(n)}. Then, a well-established result states that the pdf of the i-th order statistic is given by
f_{(i)}(x; λ, α) = c_{i,n} f(x; λ, α) F(x; λ, α)^{i − 1} [1 − F(x; λ, α)]^{n − i}, x ∈ (0, 1),
where c_{i,n} = n! / [(i − 1)! (n − i)!]. As an alternative expression, the binomial formula gives
f_{(i)}(x; λ, α) = c_{i,n} \sum_{j = 0}^{n − i} \binom{n − i}{j} (−1)^{j} f(x; λ, α) F(x; λ, α)^{j + i − 1}.
The following result provides a series expansion for the exponentiated cdf F ( x ; λ , α ) .
Proposition 2.
Let τ be a positive integer. Then, we have the following power series expansion:
F(x; λ, α)^{τ} = \sum_{u = 0}^{τ} \sum_{v, w, z = 0}^{+∞} b^{(τ)}_{u, v, w, z} x^{α w + z},
where
b^{(τ)}_{u, v, w, z} = \binom{τ}{u} \binom{−u}{v} \binom{λ v}{w} \binom{α w}{z} (−1)^{w + z + τ − u} 2^{u + α w − z}.
Proof. 
We adopt a different strategy from that of the proof of Proposition 1. Since [1 − x^{α} (2 − x)^{α}]^{λ} ∈ (0, 1), it follows from the (standard and generalized) binomial formula that
F(x; λ, α)^{τ} = [−1 + 2 \frac{1}{1 + [1 − x^{α} (2 − x)^{α}]^{λ}}]^{τ} = \sum_{u = 0}^{τ} \binom{τ}{u} (−1)^{τ − u} 2^{u} \{1 + [1 − x^{α} (2 − x)^{α}]^{λ}\}^{−u} = \sum_{u = 0}^{τ} \sum_{v = 0}^{+∞} \binom{τ}{u} \binom{−u}{v} (−1)^{τ − u} 2^{u} [1 − x^{α} (2 − x)^{α}]^{λ v}.
Since x^{α} (2 − x)^{α} ∈ (0, 1) and x/2 ∈ (0, 1), by applying the binomial formula two times in a row, we obtain
[1 − x^{α} (2 − x)^{α}]^{λ v} = \sum_{w = 0}^{+∞} \binom{λ v}{w} (−1)^{w} x^{α w} (2 − x)^{α w} = \sum_{w, z = 0}^{+∞} \binom{λ v}{w} \binom{α w}{z} (−1)^{w + z} 2^{α w − z} x^{α w + z}.
By combining the above equalities, we obtain the desired result. This ends the proof of Proposition 2. □
By Proposition 2, upon differentiation of F(x; λ, α)^{j + i}, we obtain the following series expansion:
f(x; λ, α) F(x; λ, α)^{j + i − 1} = \frac{1}{j + i} \sum_{u = 0}^{j + i} \sum_{v, w, z = 0}^{+∞} b^{(j + i)}_{u, v, w, z} (α w + z) x^{α w + z − 1}.
Therefore, we can write
f_{(i)}(x; λ, α) = \sum_{j = 0}^{n − i} \sum_{u = 0}^{j + i} \sum_{v, w, z = 0}^{+∞} d_{i, j, n, u, v, w, z} x^{α w + z − 1},
where
d_{i, j, n, u, v, w, z} = c_{i,n} \binom{n − i}{j} \frac{1}{j + i} (−1)^{j} b^{(j + i)}_{u, v, w, z} (α w + z).
From this expression, one can derive several interesting structural properties on the i-th order statistic, such as the r-th ordinary moment, that is,
μ′_{(i), r} = E(X_{(i)}^{r}) = \sum_{j = 0}^{n − i} \sum_{u = 0}^{j + i} \sum_{v, w, z = 0}^{+∞} d_{i, j, n, u, v, w, z} \frac{1}{r + α w + z}.
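As for the stress strength parameter, these series can be checked by simulation. A short Monte Carlo sketch of the r-th moment of the i-th order statistic, using rtihltl from Section 3.2 (os_moment is our own name), is:

```r
# Monte Carlo approximation of E(X_(i)^r) for samples of size n (sketch)
os_moment <- function(i, r, n, lambda, alpha, nsim = 1e4) {
  mean(replicate(nsim, sort(rtihltl(n, lambda, alpha))[i]^r))
}
os_moment(i = 3, r = 1, n = 10, lambda = 1.5, alpha = 2)
```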

4. Parameter Estimation

We now explore the statistical aspect of the TIHLTL distribution and investigate the estimation of the unknown parameters λ and α by seven methods. Hereafter, x_1, …, x_n denote realizations from a random sample of size n from X, and x_{(1)}, …, x_{(n)} their values in ascending order.

4.1. Method of Maximum Likelihood Estimation

The most useful parametric estimation method is the maximum likelihood method. The reason for its popularity is explained by the theoretical guarantees of the resulting estimators; they enjoy tractable properties such as consistency and asymptotic normality, allowing the construction of reliable objects such as confidence intervals or statistical tests. See, for instance, [23]. The essentials of the method of maximum likelihood estimation in the context of the TIHLTL model are described below. Here, the maximum likelihood estimates (MLEs) of λ and α can be obtained by maximizing, with respect to λ and α, the log-likelihood function for (λ, α) given by
ℓ(λ, α) = \sum_{i = 1}^{n} log[f(x_i; λ, α)] = n log(4) + n log(λ) + n log(α) + (α − 1) \sum_{i = 1}^{n} log(x_i) + \sum_{i = 1}^{n} log(1 − x_i) + (α − 1) \sum_{i = 1}^{n} log(2 − x_i) + (λ − 1) \sum_{i = 1}^{n} log[1 − x_i^{α} (2 − x_i)^{α}] − 2 \sum_{i = 1}^{n} log\{1 + [1 − x_i^{α} (2 − x_i)^{α}]^{λ}\}.
Thus, the maximum likelihood estimates can be determined by solving the following equations simultaneously: ∂ℓ(λ, α)/∂λ = 0 and ∂ℓ(λ, α)/∂α = 0, where
\frac{∂ℓ(λ, α)}{∂λ} = \frac{n}{λ} + \sum_{i = 1}^{n} log[1 − x_i^{α} (2 − x_i)^{α}] − 2 \sum_{i = 1}^{n} \frac{[1 − x_i^{α} (2 − x_i)^{α}]^{λ} log[1 − x_i^{α} (2 − x_i)^{α}]}{1 + [1 − x_i^{α} (2 − x_i)^{α}]^{λ}}
and
\frac{∂ℓ(λ, α)}{∂α} = \frac{n}{α} + \sum_{i = 1}^{n} log(x_i) + \sum_{i = 1}^{n} log(2 − x_i) − (λ − 1) \sum_{i = 1}^{n} \frac{x_i^{α} (2 − x_i)^{α} log[x_i (2 − x_i)]}{1 − x_i^{α} (2 − x_i)^{α}} + 2 λ \sum_{i = 1}^{n} \frac{x_i^{α} (2 − x_i)^{α} log[x_i (2 − x_i)] [1 − x_i^{α} (2 − x_i)^{α}]^{λ − 1}}{1 + [1 − x_i^{α} (2 − x_i)^{α}]^{λ}}.
Owing to the complexity of these equations, the MLEs do not have analytical expressions. However, one can use standard statistical software to solve them (e.g., Mathematica, R, etc.). In this study, R was used.
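In R, the maximization can be carried out by applying optim to the negative log-likelihood, which also returns a numerical Hessian for approximate standard errors. The sketch below reuses dtihltl and rtihltl from the previous sections; negloglik and the simulated sample x are our own illustrative choices.

```r
# maximum likelihood estimation of (lambda, alpha) with optim (sketch)
negloglik <- function(par, x) -sum(log(dtihltl(x, par[1], par[2])))
set.seed(2)
x <- rtihltl(100, lambda = 1.5, alpha = 2)
fit <- optim(c(1, 1), negloglik, x = x, method = "L-BFGS-B",
             lower = c(1e-4, 1e-4), hessian = TRUE)
fit$par                         # MLEs of (lambda, alpha)
sqrt(diag(solve(fit$hessian)))  # approximate standard errors
```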

4.2. Methods of Least Squares and Weighted Least Squares Estimation

We now consider the methods of least squares and weighted least squares estimation introduced by [24]. The least square estimates (LSEs) of λ and α can be determined by minimizing, with respect to λ and α , the least square function defined by
LS(λ, α) = \sum_{i = 1}^{n} [F(x_{(i)}; λ, α) − \frac{i}{n + 1}]^{2} = \sum_{i = 1}^{n} [\frac{1 − [1 − x_{(i)}^{α} (2 − x_{(i)})^{α}]^{λ}}{1 + [1 − x_{(i)}^{α} (2 − x_{(i)})^{α}]^{λ}} − \frac{i}{n + 1}]^{2}.
Thus, the least square estimates can be obtained by solving the following equations simultaneously: ∂LS(λ, α)/∂λ = 0 and ∂LS(λ, α)/∂α = 0, where
\frac{∂LS(λ, α)}{∂λ} = 2 \sum_{i = 1}^{n} η_i^{(1)}(λ, α) [\frac{1 − [1 − x_{(i)}^{α} (2 − x_{(i)})^{α}]^{λ}}{1 + [1 − x_{(i)}^{α} (2 − x_{(i)})^{α}]^{λ}} − \frac{i}{n + 1}]
and
\frac{∂LS(λ, α)}{∂α} = 2 \sum_{i = 1}^{n} η_i^{(2)}(λ, α) [\frac{1 − [1 − x_{(i)}^{α} (2 − x_{(i)})^{α}]^{λ}}{1 + [1 − x_{(i)}^{α} (2 − x_{(i)})^{α}]^{λ}} − \frac{i}{n + 1}],
where
η_i^{(1)}(λ, α) = −\frac{2 [1 − x_{(i)}^{α} (2 − x_{(i)})^{α}]^{λ} log[1 − x_{(i)}^{α} (2 − x_{(i)})^{α}]}{\{1 + [1 − x_{(i)}^{α} (2 − x_{(i)})^{α}]^{λ}\}^{2}}
and
η_i^{(2)}(λ, α) = \frac{2 λ x_{(i)}^{α} (2 − x_{(i)})^{α} log[x_{(i)} (2 − x_{(i)})] [1 − x_{(i)}^{α} (2 − x_{(i)})^{α}]^{λ − 1}}{\{1 + [1 − x_{(i)}^{α} (2 − x_{(i)})^{α}]^{λ}\}^{2}}.
The weighted least square estimates (WLSEs) of λ and α can be obtained by minimizing, with respect to λ and α , the weighted least square function defined by
LS_W(λ, α) = \sum_{i = 1}^{n} \frac{(n + 1)^{2} (n + 2)}{i (n − i + 1)} [F(x_{(i)}; λ, α) − \frac{i}{n + 1}]^{2} = \sum_{i = 1}^{n} \frac{(n + 1)^{2} (n + 2)}{i (n − i + 1)} [\frac{1 − [1 − x_{(i)}^{α} (2 − x_{(i)})^{α}]^{λ}}{1 + [1 − x_{(i)}^{α} (2 − x_{(i)})^{α}]^{λ}} − \frac{i}{n + 1}]^{2}.
Then, one can derive equations similar to those developed for LS(λ, α).
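Rather than solving the normal equations, one can also minimize LS(λ, α) and LS_W(λ, α) directly. A compact R sketch, using ptihltl from Section 2 and the simulated sample x from the maximum likelihood sketch (ls_obj is our own name), is:

```r
# least squares and weighted least squares estimation by direct minimization (sketch)
ls_obj <- function(par, x, weighted = FALSE) {
  n <- length(x); i <- seq_len(n)
  Fi <- ptihltl(sort(x), par[1], par[2])
  w <- if (weighted) (n + 1)^2 * (n + 2) / (i * (n - i + 1)) else 1
  sum(w * (Fi - i / (n + 1))^2)
}
lse  <- optim(c(1, 1), ls_obj, x = x, method = "L-BFGS-B", lower = c(1e-4, 1e-4))$par
wlse <- optim(c(1, 1), ls_obj, x = x, weighted = TRUE,
              method = "L-BFGS-B", lower = c(1e-4, 1e-4))$par
```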

4.3. Method of Cramer–von Mises Minimum Distance Estimation

The Cramer–von Mises minimum distance estimates (CVEs) of λ and α can be determined by minimizing, with respect to λ and α , the Cramer–von Mises minimum distance function defined by
C(λ, α) = \frac{1}{12 n} + \sum_{i = 1}^{n} [F(x_{(i)}; λ, α) − \frac{2 i − 1}{2 n}]^{2} = \frac{1}{12 n} + \sum_{i = 1}^{n} [\frac{1 − [1 − x_{(i)}^{α} (2 − x_{(i)})^{α}]^{λ}}{1 + [1 − x_{(i)}^{α} (2 − x_{(i)})^{α}]^{λ}} − \frac{2 i − 1}{2 n}]^{2}.
Thus, the Cramer–von Mises minimum distance estimates can be obtained by solving the following equations simultaneously: ∂C(λ, α)/∂λ = 0 and ∂C(λ, α)/∂α = 0, where
\frac{∂C(λ, α)}{∂λ} = 2 \sum_{i = 1}^{n} η_i^{(1)}(λ, α) [\frac{1 − [1 − x_{(i)}^{α} (2 − x_{(i)})^{α}]^{λ}}{1 + [1 − x_{(i)}^{α} (2 − x_{(i)})^{α}]^{λ}} − \frac{2 i − 1}{2 n}]
and
\frac{∂C(λ, α)}{∂α} = 2 \sum_{i = 1}^{n} η_i^{(2)}(λ, α) [\frac{1 − [1 − x_{(i)}^{α} (2 − x_{(i)})^{α}]^{λ}}{1 + [1 − x_{(i)}^{α} (2 − x_{(i)})^{α}]^{λ}} − \frac{2 i − 1}{2 n}],
and η_i^{(1)}(λ, α), η_i^{(2)}(λ, α) are given by (5) and (6), respectively. Further details on this method can be found in [25].
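The same direct-minimization approach applies here; a short R sketch of the Cramer–von Mises objective, applied to the same simulated sample x (cv_obj is our own name), is:

```r
# Cramer-von Mises minimum distance estimation (sketch)
cv_obj <- function(par, x) {
  n <- length(x); i <- seq_len(n)
  Fi <- ptihltl(sort(x), par[1], par[2])
  1 / (12 * n) + sum((Fi - (2 * i - 1) / (2 * n))^2)
}
cve <- optim(c(1, 1), cv_obj, x = x, method = "L-BFGS-B", lower = c(1e-4, 1e-4))$par
```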

4.4. Method of Percentile Estimation

The method of percentile estimation was introduced by [26,27] in the context of the Weibull distribution, and has been generalized to other distributions. Here, the percentile estimates (PCEs) of λ and α can be obtained by minimizing, with respect to λ and α , the function defined by
U(λ, α) = \sum_{i = 1}^{n} [x_{(i)} − Q(p_i; λ, α)]^{2} = \sum_{i = 1}^{n} [x_{(i)} − 1 + \sqrt{1 − [1 − (\frac{1 − p_i}{1 + p_i})^{1/λ}]^{1/α}}]^{2},
where p_i = i / (n + 1). Thus, the percentile estimates can be determined by solving the following equations simultaneously: ∂U(λ, α)/∂λ = 0 and ∂U(λ, α)/∂α = 0, where
\frac{∂U(λ, α)}{∂λ} = 2 \sum_{i = 1}^{n} υ_i^{(1)}(λ, α) [x_{(i)} − 1 + \sqrt{1 − [1 − (\frac{1 − p_i}{1 + p_i})^{1/λ}]^{1/α}}]
and
\frac{∂U(λ, α)}{∂α} = 2 \sum_{i = 1}^{n} υ_i^{(2)}(λ, α) [x_{(i)} − 1 + \sqrt{1 − [1 − (\frac{1 − p_i}{1 + p_i})^{1/λ}]^{1/α}}],
where
υ_i^{(1)}(λ, α) = −\frac{(\frac{1 − p_i}{1 + p_i})^{1/λ} log(\frac{1 − p_i}{1 + p_i}) [1 − (\frac{1 − p_i}{1 + p_i})^{1/λ}]^{1/α − 1}}{2 α λ^{2} \sqrt{1 − [1 − (\frac{1 − p_i}{1 + p_i})^{1/λ}]^{1/α}}}
and
υ_i^{(2)}(λ, α) = \frac{[1 − (\frac{1 − p_i}{1 + p_i})^{1/λ}]^{1/α} log[1 − (\frac{1 − p_i}{1 + p_i})^{1/λ}]}{2 α^{2} \sqrt{1 − [1 − (\frac{1 − p_i}{1 + p_i})^{1/λ}]^{1/α}}}.
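Again, U(λ, α) can be minimized numerically instead of solving the above equations. A minimal R sketch, with qtihltl from Section 3.2, the sample x from the maximum likelihood sketch, and the plotting positions p_i = i/(n + 1) (pce_obj is our own name), is:

```r
# percentile estimation: distance between order statistics and model quantiles (sketch)
pce_obj <- function(par, x) {
  n <- length(x); p <- seq_len(n) / (n + 1)
  sum((sort(x) - qtihltl(p, par[1], par[2]))^2)
}
pce <- optim(c(1, 1), pce_obj, x = x, method = "L-BFGS-B", lower = c(1e-4, 1e-4))$par
```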

4.5. Methods of Anderson–Darling and Right-Tail Anderson–Darling Estimation

The method of Anderson–Darling estimation was introduced by [28] in the context of statistical tests. By adapting it to the TIHLTL model, the Anderson–Darling estimates (ADEs) of λ and α can be determined by minimizing, with respect to λ and α , the function given by
A(λ, α) = −n − \frac{1}{n} \sum_{i = 1}^{n} (2 i − 1) \{log[F(x_{(i)}; λ, α)] + log[1 − F(x_{(n + 1 − i)}; λ, α)]\}.
Thus, the Anderson–Darling estimates can be obtained by solving the following equations simultaneously: ∂A(λ, α)/∂λ = 0 and ∂A(λ, α)/∂α = 0, where
\frac{∂A(λ, α)}{∂λ} = −\frac{1}{n} \sum_{i = 1}^{n} (2 i − 1) [\frac{η_i^{(1)}(λ, α)}{F(x_{(i)}; λ, α)} − \frac{η_{n + 1 − i}^{(1)}(λ, α)}{1 − F(x_{(n + 1 − i)}; λ, α)}]
and
\frac{∂A(λ, α)}{∂α} = −\frac{1}{n} \sum_{i = 1}^{n} (2 i − 1) [\frac{η_i^{(2)}(λ, α)}{F(x_{(i)}; λ, α)} − \frac{η_{n + 1 − i}^{(2)}(λ, α)}{1 − F(x_{(n + 1 − i)}; λ, α)}],
and η_i^{(1)}(λ, α), η_i^{(2)}(λ, α) are given by (5) and (6), respectively. Similarly, the right-tail Anderson–Darling estimates (RTADEs) of λ and α can be determined by minimizing, with respect to λ and α, the function given by
R(λ, α) = \frac{n}{2} − 2 \sum_{i = 1}^{n} log[F(x_{(i)}; λ, α)] − \frac{1}{n} \sum_{i = 1}^{n} (2 i − 1) log[1 − F(x_{(n + 1 − i)}; λ, α)].
Thus, the right-tail Anderson–Darling estimates can be obtained by solving the following equations simultaneously: ∂R(λ, α)/∂λ = 0 and ∂R(λ, α)/∂α = 0, where
\frac{∂R(λ, α)}{∂λ} = −2 \sum_{i = 1}^{n} \frac{η_i^{(1)}(λ, α)}{F(x_{(i)}; λ, α)} + \frac{1}{n} \sum_{i = 1}^{n} (2 i − 1) \frac{η_{n + 1 − i}^{(1)}(λ, α)}{1 − F(x_{(n + 1 − i)}; λ, α)}
and
\frac{∂R(λ, α)}{∂α} = −2 \sum_{i = 1}^{n} \frac{η_i^{(2)}(λ, α)}{F(x_{(i)}; λ, α)} + \frac{1}{n} \sum_{i = 1}^{n} (2 i − 1) \frac{η_{n + 1 − i}^{(2)}(λ, α)}{1 − F(x_{(n + 1 − i)}; λ, α)}.
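Both objectives can be coded in a few lines and minimized with optim, as for the previous methods. A sketch, with ptihltl and the sample x as before (ad_obj and rtad_obj are our own names), is:

```r
# Anderson-Darling and right-tail Anderson-Darling estimation (sketch)
ad_obj <- function(par, x) {
  n <- length(x); i <- seq_len(n)
  Fi <- ptihltl(sort(x), par[1], par[2])
  -n - mean((2 * i - 1) * (log(Fi) + log(1 - rev(Fi))))
}
rtad_obj <- function(par, x) {
  n <- length(x); i <- seq_len(n)
  Fi <- ptihltl(sort(x), par[1], par[2])
  n / 2 - 2 * sum(log(Fi)) - mean((2 * i - 1) * log(1 - rev(Fi)))
}
ade   <- optim(c(1, 1), ad_obj, x = x, method = "L-BFGS-B", lower = c(1e-4, 1e-4))$par
rtade <- optim(c(1, 1), rtad_obj, x = x, method = "L-BFGS-B", lower = c(1e-4, 1e-4))$par
```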

4.6. Simulation

Here, we carried out a numerical study to compare the behavior of the previously introduced estimates. We generated N = 1000 random samples of sizes n = 50, 100, 200, and 1000 from the TIHLTL distribution. Six sets of the parameters were assigned as (i): (α = 2, λ = 0.5), (ii): (α = 2, λ = 1.5), (iii): (α = 2, λ = 2), (iv): (α = 3, λ = 0.5), (v): (α = 3, λ = 1.5), and (vi): (α = 3, λ = 2). The MLE, LSE, WLSE, PCE, CVE, ADE, and RTADE of α and λ were determined for each sample, allowing the calculation of the mean estimates (Est.) for these methods. We also evaluated the mean square errors (MSEs) defined by
MSE = \frac{1}{N} \sum_{i = 1}^{N} (\hat{ϵ}_i − ϵ)^{2},
where ϵ is α or λ , and ϵ ^ i denotes the corresponding estimates constructed with the i-th random sample. The obtained values are collected in Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6: Table 1 for i, Table 2 for i i , Table 3 for i i i , Table 4 for i v , Table 5 for v, and Table 6 for v i .
From Table 7, and for all the parameter combinations considered, we can conclude that the ML estimation method outperformed all the other estimation methods (with the smallest sum of ranks, equal to 43.5). Therefore, based on our study, we can consider that the ML estimation method is the best for the TIHLTL distribution.

5. Applications

The TIHLTL model aims to provide a suitable alternative to other models available in the literature, in view of deep data analyses. Here, we compare the TIHLTL model with the following:
  • The Kumaraswamy (Kum) model with pdf given by
    f_{Kum}(x; a, b) = a b x^{a − 1} (1 − x^{a})^{b − 1}, x ∈ (0, 1),
    where a , b > 0 ;
  • The Topp–Leone (TL) model with pdf given by
    f_{TL}(x; α) = 2 α x^{α − 1} (1 − x) (2 − x)^{α − 1}, x ∈ (0, 1),
    where α > 0 ;
  • The beta (B) model with pdf given by
    f_B(x; a, b) = \frac{Γ(a + b)}{Γ(a) Γ(b)} x^{a − 1} (1 − x)^{b − 1}, x ∈ (0, 1),
    where a, b > 0 and Γ(s) denotes the gamma function defined by Γ(s) = \int_0^{+∞} u^{s − 1} e^{−u} du;
  • The Topp–Leone exponential (TLE) model with pdf given by
    f_{TLE}(x; α, λ) = 2 α λ e^{−2 λ x} (1 − e^{−2 λ x})^{α − 1}, x > 0,
    where α , λ > 0 . Further details on this distribution can be found in [29].
As a consequence of the simulation study performed in Section 4, we chose to estimate the model parameters by the method of maximum likelihood estimation. Another argument is based on its tractable theoretical guarantees in comparison to the other estimation methods. The following goodness-of-fit measures were computed: Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), Anderson–Darling (A*), and Cramer–von Mises (W*). The lower the values of these criteria, the better the fit. The minus log-likelihood (−ℓ̂) is also indicated. We also report the value of the Kolmogorov–Smirnov (KS) statistic and its p-value to analyze the goodness of fit of the estimated models to the data. R software was employed for all the implementations.
The first data set. We considered the data from [30] about the ordered failure of components. The data are given as follows:
0.0009, 0.004, 0.0142, 0.0221, 0.0261, 0.0418, 0.0473, 0.0834, 0.1091, 0.1252, 0.1404, 0.1498, 0.175, 0.2031, 0.2099, 0.2168, 0.2918, 0.3465, 0.4035, 0.6143
The second data set. The second data set refers to 50 observations on burr (in millimeters), with hole diameter and sheet thickness of 12 mm and 3.15 mm, respectively. We refer to [31] for the technical details behind these measures. The data are given as follows:
0.04, 0.02, 0.06, 0.12, 0.14, 0.08, 0.22, 0.12, 0.08, 0.26, 0.24, 0.04, 0.14, 0.16, 0.08, 0.26, 0.32, 0.28, 0.14, 0.16, 0.24, 0.22, 0.12, 0.18, 0.24, 0.32, 0.16, 0.14, 0.08, 0.16, 0.24, 0.16, 0.32, 0.18, 0.24, 0.22, 0.16, 0.12, 0.24, 0.06, 0.02, 0.18, 0.22, 0.14, 0.06, 0.04, 0.14, 0.26, 0.18, 0.16
Table 8 provides a basic statistical description of the two data sets. One can remark that data set 1 is highly right skewed, contrary to data set 2. Figure 5 and Figure 6 show the total time on test (TTT) plots for data sets 1 and 2, respectively. In particular, the observed curves in the TTT plots are (more or less) concave, implying that a (more or less) monotonic hrf is adequate, giving first signs that the TIHLTL model is appropriate for such data. For more detail on the TTT plot, see [32]. As a complement to this, Figure 7 and Figure 8 present the boxplots for data sets 1 and 2, respectively. The MLEs of the considered models along with their standard errors are given in Table 9 and Table 10 for data sets 1 and 2, respectively. Table 11 and Table 12 contain the values of the goodness-of-fit measures for the considered models. Plots of the estimated pdfs corresponding to data sets 1 and 2 can be observed in Figure 9 and Figure 10, respectively. Plots of the estimated cdfs corresponding to data sets 1 and 2 can be observed in Figure 11 and Figure 12, respectively. Finally, confidence intervals (CIs) for the TIHLTL model parameters are given in Table 13.
From Table 11 and Table 12, we observe that the AIC, BIC, A*, and W* of the TIHLTL model had minimum values compared to the other competitive models, so we conclude that the TIHLTL model provided the best fit on these data sets. Figure 9, Figure 11, Figure 10 and Figure 12 support this claim.
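The goodness-of-fit computations behind Tables 11 and 12 follow a simple pattern. The sketch below illustrates it for data set 1 and the TIHLTL model only, reusing negloglik and ptihltl from the previous sections; the AIC/BIC expressions are the standard ones, and the use of ks.test with estimated parameters mirrors the approach described above.

```r
# fit the TIHLTL model to data set 1 and compute AIC, BIC, and the KS test (sketch)
y <- c(0.0009, 0.004, 0.0142, 0.0221, 0.0261, 0.0418, 0.0473, 0.0834, 0.1091, 0.1252,
       0.1404, 0.1498, 0.175, 0.2031, 0.2099, 0.2168, 0.2918, 0.3465, 0.4035, 0.6143)
fit <- optim(c(1, 1), negloglik, x = y, method = "L-BFGS-B",
             lower = c(1e-4, 1e-4), hessian = TRUE)
ll <- -fit$value   # maximized log-likelihood
k  <- 2            # number of estimated parameters
c(AIC = 2 * k - 2 * ll, BIC = k * log(length(y)) - 2 * ll)
ks.test(y, ptihltl, lambda = fit$par[1], alpha = fit$par[2])
```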

6. Concluding Remarks

In this paper, we studied a special member of the type I half-logistic-G family based on the Topp–Leone distribution, constituting a new two-parameter continuous distribution with support in ( 0 , 1 ) , called the TIHLTL distribution. Among its advantages, a wide variety of shapes are observed for the corresponding pdf and hrf. Some properties of the TIHLTL distribution, such as asymptotes, shapes, quantile function, skewness, kurtosis, some power series expansions, ordinary moments, incomplete moments, moment-generating function, stress strength parameter, and order statistics, were derived. Then, the statistical aspects of the TIHLTL model were explored, assuming that the parameters λ and α were unknown. A simulation study was performed to compare the model performance of seven well-established estimation methods. An application of the TIHLTL model to two practical data sets showed that it is a serious competitor to other well-established models, such as the beta and Kumaraswamy models.

Author Contributions

R.A.Z., C.C., F.J., and M.E. contributed equally to this work.

Funding

This work was funded by the Deanship of Scientific Research (DSR), King AbdulAziz University, Jeddah, under grant No. (DF-283-305-1441).

Acknowledgments

The authors would like to thank the two reviewers for thorough and precise comments which have significantly improved the presentation of the paper. This work was funded by the Deanship of Scientific Research (DSR), King AbdulAziz University, Jeddah, under grant No. (DF-283-305-1441). The authors gratefully acknowledge the DSR technical and financial support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gupta, R.D.; Kundu, D. Exponentiated exponential family: An alternative to Gamma and Weibull distributions. Biom. J. 2001, 43, 117–130. [Google Scholar] [CrossRef]
  2. Eugene, N.; Lee, C.; Famoye, F. Beta-normal distribution and its applications. Commun. Stat.-Theory Methods 2002, 31, 497–512. [Google Scholar] [CrossRef]
  3. Zografos, K.; Balakrishnan, N. On families of beta- and generalized gamma-generated distributions and associated inference. Stat. Methodol. 2009, 6, 344–362. [Google Scholar] [CrossRef]
  4. Alizadeh, M.; Emadiz, M.; Doostparast, M.; Cordeiro, G.M.; Ortega, E.M.M.; Pescim, R.R. A new family of distributions: The Kumaraswamy odd log-logistic, properties and applications. Hacet. J. Math. Stat. 2015, 44, 1491–1512. [Google Scholar] [CrossRef]
  5. Cordeiro, G.M.; Alizadeh, M.; Marinho, E.P.R.D. The type I half-logistic family of distributions. J. Stat. Comput. Simul. 2015, 86, 707–728. [Google Scholar] [CrossRef]
  6. Cordeiro, G.M.; Alizadeh, M.; Ozel, G. The generalized odd log-logistic family of distributions: Properties, regression models and applications. J. Stat. Comput. Simul. 2017, 87, 908–932. [Google Scholar] [CrossRef]
  7. Alizadeh, M.; Altun, E.; Cordeiro, G.M.; Rasekhi, M. The odd power Cauchy family of distributions: Properties, regression models and applications. J. Stat. Comput. Simul. 2018, 88, 785–805. [Google Scholar] [CrossRef]
  8. Reyad, H.M.; Alizadeh, M.; Jamal, F.; Othman, S.; Hamedani, G.G. The Exponentiated Generalized Topp Leone-G Family of Distributions: Properties and Applications. Pak. J. Stat. Oper. Res. 2019, 15, 1–24. [Google Scholar] [CrossRef]
  9. Jamal, F.; Chesneau, C.; Elgarhy, M. Type II general inverse exponential family of distributions. J. Stat. Manag. Syst. 2019. Available online: https://www.tandfonline.com/toc/tsms20/current (accessed on 15 August 2019).
  10. Topp, C.W.; Leone, F.C. A family of J-shaped frequency functions. J. Am. Stat. Assoc. 1955, 50, 209–219. [Google Scholar] [CrossRef]
  11. Nadarajah, S.; Kotz, S. Moments of some J-shaped distributions. J. Appl. Stat. 2003, 30, 311–317. [Google Scholar] [CrossRef]
  12. Ghitany, M.E.; Kotz, S.; Xie, M. On some reliability measures and their stochastic orderings for the Topp-Leone distribution. J. Appl. Stat. 2005, 32, 715–722. [Google Scholar] [CrossRef]
  13. Sangsanit, Y.; Bodhisuwan, W. The Topp-Leone generator of distributions: Properties and inferences. Songklanakarin J. Sci. Technol. 2016, 38, 537–548. [Google Scholar]
  14. Kunjiratanachot, N.; Bodhisuwan, W.; Volodin, A. The Topp-Leone generalized exponential power series distribution with applications. J. Probab. Stat. Sci. 2018, 16, 197–208. [Google Scholar]
  15. Elgarhy, M.; Nasir, M.A.; Jamal, F.; Ozel, G. The type II Topp-Leone generated family of distributions: Properties and applications. J. Stat. Manag. Syst. 2018, 21, 1529–1551. [Google Scholar] [CrossRef]
  16. Mauldon, J.M. A Generalization of the Beta-distribution. Ann. Math. Stat. 1959, 30, 509–520. [Google Scholar] [CrossRef]
  17. Kumaraswamy, P. A generalized probability density function for double-bounded random processes. J. Hydrol. 1980, 46, 79–88. [Google Scholar] [CrossRef]
  18. Abramowitz, M.; Stegun, I.A. Handbook of Mathematical Functions, National Bureau of Standards; Applied Math. Series 55; Dover Publications: Mineola, NY, USA, 1965. [Google Scholar]
  19. Galton, F. Inquiries Into Human Faculty and Its Development; Macmillan and Company: London, UK, 1883. [Google Scholar]
  20. Moors, J.J.A. A quantile alternative for Kurtosis. J. R. Stat. Soc. Ser. D (Stat.) 1988, 37, 25–32. [Google Scholar] [CrossRef]
  21. Kotz, S.; Lumelskii, Y.; Penskey, M. The Stress-Strength Model and Its Generalizations and Applications; World Scientific: Singapore, 2003. [Google Scholar]
  22. David, H.A.; Nagaraja, H.N. Order Statistics; John Wiley and Sons: Hoboken, NJ, USA, 2003. [Google Scholar]
  23. Casella, G.; Berger, R.L. Statistical Inference; Brooks/Cole Publishing Company: Bel Air, CA, USA, 1990. [Google Scholar]
  24. Swain, J.J.; Venkatraman, S.; Wilson, J.R. Least-squares estimation of distribution functions in johnson’s translation system. J. Stat. Comput. Simul. 1988, 29, 271–297. [Google Scholar] [CrossRef]
  25. Macdonald, P.D.M. Comment on ‘An estimation procedure for mixtures of distributions’ by Choi and Bulgren. J. R. Stat. Soc. B 1971, 33, 326–329. [Google Scholar]
  26. Kao, J. Computer methods for estimating Weibull parameters in reliability studies. IRE Trans. Reliab. Qual. Control 1958, 13, 15–22. [Google Scholar] [CrossRef]
  27. Kao, J. A graphical estimation of mixed Weibull parameters in life testing electron tube. Technometrics 1959, 1, 389–407. [Google Scholar] [CrossRef]
  28. Anderson, T.W.; Darling, D.A. Asymptotic theory of certain ‘goodness-of- fit’ criteria based on stochastic processes. Ann. Math. Stat. 1952, 23, 193–212. [Google Scholar] [CrossRef]
  29. Al-Shomrani, A.; Arif, O.; Shawky, K.; Hanif, S.; Shahbaz, M.Q. Topp-Leone family of distributions: Some properties and application. Pak. J. Stat. Oper. Res. 2016, 12, 443–451. [Google Scholar] [CrossRef]
  30. Nigm, A.M.; AL-Hussaini, E.K.; Jaheen, Z.F. Bayesian one sample prediction of future observations under Pareto distribution. Statistics 2003, 37, 527–536. [Google Scholar] [CrossRef]
  31. Dasgupta, R. On the distribution of Burr with applications. Sankhya 2011, 73, 1–19. [Google Scholar] [CrossRef]
  32. Aarset, M.V. How to identify bathtub hazard rate. IEEE Trans. Reliab. 1987, 36, 106–108. [Google Scholar] [CrossRef]
Figure 1. Plots for probability density functions (pdfs) of the type I half-logistic Topp–Leone (TIHLTL) distribution.
Figure 2. Plots for hazard rate functions (hrfs) of the type I half-logistic Topp–Leone (TIHLTL) distribution.
Figure 3. Plots for the Bowley skewness of the TIHLTL distribution.
Figure 4. Plots for the Moors kurtosis of the TIHLTL distribution.
Figure 5. TTT plot for data set 1.
Figure 6. TTT plot for data set 2.
Figure 7. Boxplot for data set 1.
Figure 8. Boxplot for data set 2.
Figure 9. Plots of estimated probability density functions (pdfs) for data set 1.
Figure 10. Plots of estimated probability density functions (pdfs) for data set 2.
Figure 11. Plots of estimated cumulative distribution functions (cdfs) for data set 1.
Figure 12. Plots of estimated cumulative distribution functions (cdfs) for data set 2.
Table 1. Simulations with the seven methods of estimation for α = 2 and λ = 0.5.

| n | Par. | MLE Est. | MLE MSE | LSE Est. | LSE MSE | WLSE Est. | WLSE MSE | PCE Est. | PCE MSE | CVE Est. | CVE MSE | ADE Est. | ADE MSE | RTADE Est. | RTADE MSE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 50 | α | 2.172 | 0.628 | 2.038 | 0.473 | 2.058 | 0.508 | 1.808 | 0.583 | 2.148 | 3.377 | 2.067 | 2.974 | 2.194 | 3.811 |
| | λ | 0.507 | 7.116 * | 0.499 | 5.885 * | 0.498 | 6.713 * | 0.472 | 0.013 | 0.499 | 6.241 * | 0.499 | 6.682 * | 0.498 | 7.554 * |
| 100 | α | 2.164 | 0.271 | 2.011 | 0.206 | 2.062 | 0.340 | 1.852 | 0.272 | 2.159 | 3.139 | 2.064 | 2.766 | 2.131 | 3.288 |
| | λ | 0.515 | 2.525 * | 0.499 | 2.756 * | 0.495 | 2.766 * | 0.473 | 7.531 * | 0.515 | 3.873 * | 0.496 | 2.697 * | 0.501 | 3.809 * |
| 200 | α | 2.106 | 0.098 | 2.023 | 0.157 | 2.029 | 0.107 | 1.912 | 0.130 | 2.057 | 2.587 | 2.015 | 2.379 | 2.030 | 2.532 |
| | λ | 0.503 | 1.184 * | 0.500 | 1.891 * | 0.502 | 1.514 * | 0.488 | 4.459 * | 0.501 | 1.827 * | 0.501 | 1.220 * | 0.501 | 1.848 * |
| 1000 | α | 1.997 | 0.057 | 2.012 | 0.024 | 2.012 | 0.019 | 1.998 | 0.039 | 2.020 | 2.334 | 2.010 | 2.298 | 2.027 | 2.365 |
| | λ | 0.499 | 1.381 * | 0.501 | 0.313 * | 0.501 | 0.247 * | 0.503 | 1.741 * | 0.502 | 0.317 * | 0.501 | 0.251 * | 0.502 | 0.324 * |

* Indicates that the value is multiplied by 10^−3. ADE: Anderson–Darling estimate; CVE: Cramer–von Mises minimum distance estimate; Est.: mean estimate; LSE: least square estimate; MLE: maximum likelihood estimate; MSE: mean square error; PCE: percentile estimate; RTADE: right-tail ADE; WLSE: weighted LSE.
Table 2. Simulations with the seven methods of estimation for α = 2 and λ = 1.5.

| n | Par. | MLE Est. | MLE MSE | LSE Est. | LSE MSE | WLSE Est. | WLSE MSE | PCE Est. | PCE MSE | CVE Est. | CVE MSE | ADE Est. | ADE MSE | RTADE Est. | RTADE MSE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 50 | α | 2.136 | 0.213 | 1.949 | 0.188 | 2.049 | 0.210 | 1.794 | 0.272 | 2.092 | 0.614 | 2.043 | 0.456 | 2.130 | 0.679 |
| | λ | 1.593 | 0.106 | 1.465 | 0.064 | 1.538 | 0.114 | 1.350 | 0.153 | 1.588 | 0.135 | 1.552 | 0.081 | 1.573 | 0.136 |
| 100 | α | 2.063 | 0.077 | 2.051 | 0.130 | 2.008 | 0.118 | 1.780 | 0.181 | 2.079 | 0.438 | 2.012 | 0.346 | 2.069 | 0.470 |
| | λ | 1.535 | 0.033 | 1.539 | 0.051 | 1.502 | 0.046 | 1.333 | 0.167 | 1.565 | 0.050 | 1.504 | 0.030 | 1.535 | 0.050 |
| 200 | α | 2.026 | 0.047 | 2.016 | 0.057 | 2.021 | 0.047 | 1.850 | 0.085 | 2.055 | 0.370 | 2.011 | 0.312 | 2.042 | 0.340 |
| | λ | 1.529 | 0.018 | 1.516 | 0.026 | 1.528 | 0.022 | 1.387 | 0.069 | 1.543 | 0.030 | 1.503 | 0.022 | 1.540 | 0.021 |
| 1000 | α | 2.009 | 7.847 * | 2.007 | 0.010 | 2.008 | 8.377 * | 1.961 | 0.016 | 2.012 | 0.272 | 2.006 | 0.265 | 2.014 | 0.276 |
| | λ | 1.507 | 3.286 * | 1.505 | 4.591 * | 1.506 | 3.595 * | 1.465 | 0.013 | 1.509 | 4.675 * | 1.505 | 3.699 * | 1.510 | 4.574 * |

* Indicates that the value is multiplied by 10^−3.
Table 3. Simulations with the seven methods of estimation for α = 2 and λ = 2.

| n | Par. | MLE Est. | MLE MSE | LSE Est. | LSE MSE | WLSE Est. | WLSE MSE | PCE Est. | PCE MSE | CVE Est. | CVE MSE | ADE Est. | ADE MSE | RTADE Est. | RTADE MSE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 50 | α | 2.065 | 0.164 | 2.034 | 0.199 | 2.060 | 0.204 | 1.781 | 0.274 | 2.104 | 0.238 | 2.012 | 0.157 | 2.078 | 0.230 |
| | λ | 2.089 | 0.161 | 2.047 | 0.230 | 2.071 | 0.226 | 1.790 | 0.474 | 2.144 | 0.332 | 2.042 | 0.153 | 2.105 | 0.255 |
| 100 | α | 2.028 | 0.071 | 1.984 | 0.083 | 2.035 | 0.105 | 1.861 | 0.139 | 2.058 | 0.092 | 2.032 | 0.071 | 2.047 | 0.101 |
| | λ | 2.033 | 0.088 | 2.002 | 0.084 | 2.068 | 0.122 | 1.856 | 0.212 | 2.094 | 0.122 | 2.051 | 0.075 | 2.074 | 0.102 |
| 200 | α | 2.023 | 0.043 | 2.014 | 0.051 | 2.014 | 0.034 | 1.932 | 0.060 | 2.016 | 0.052 | 2.026 | 0.038 | 1.995 | 0.049 |
| | λ | 2.038 | 0.043 | 2.005 | 0.063 | 2.017 | 0.031 | 1.920 | 0.117 | 2.013 | 0.067 | 2.037 | 0.038 | 1.993 | 0.039 |
| 1000 | α | 2.008 | 6.831 * | 2.007 | 8.687 * | 2.008 | 7.203 * | 1.966 | 0.014 | 2.011 | 8.796 * | 2.006 | 7.359 * | 2.012 | 0.010 |
| | λ | 2.010 | 6.858 * | 2.008 | 9.561 * | 2.010 | 7.461 * | 1.955 | 0.028 | 2.013 | 9.735 * | 2.008 | 7.691 * | 2.014 | 9.411 * |

* Indicates that the value is multiplied by 10^−3.
Table 4. Simulations with the seven methods of estimation for α = 3 and λ = 0.5.

| n | Par. | MLE Est. | MLE MSE | LSE Est. | LSE MSE | WLSE Est. | WLSE MSE | PCE Est. | PCE MSE | CVE Est. | CVE MSE | ADE Est. | ADE MSE | RTADE Est. | RTADE MSE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 50 | α | 3.392 | 1.117 | 3.173 | 1.883 | 3.285 | 1.993 | 2.624 | 1.241 | 3.460 | 10.441 | 3.183 | 8.504 | 3.435 | 11.453 |
| | λ | 0.521 | 6.864 * | 0.505 | 7.689 * | 0.514 | 9.365 * | 0.470 | 0.016 | 0.523 | 8.203 * | 0.507 | 6.952 * | 0.513 | 7.557 * |
| 100 | α | 3.163 | 0.486 | 3.186 | 0.794 | 3.043 | 0.579 | 2.720 | 0.735 | 3.373 | 9.084 | 3.102 | 7.179 | 3.163 | 7.792 |
| | λ | 0.507 | 2.750 * | 0.506 | 4.799 * | 0.502 | 2.578 * | 0.470 | 8.690 * | 0.518 | 5.421 * | 0.508 | 2.955 * | 0.505 | 2.814 * |
| 200 | α | 3.047 | 0.204 | 3.121 | 0.485 | 3.059 | 0.208 | 2.890 | 0.357 | 3.133 | 7.383 | 3.027 | 6.630 | 3.159 | 7.470 |
| | λ | 0.506 | 1.541 * | 0.505 | 1.617 * | 0.503 | 0.950 * | 0.478 | 5.915 * | 0.506 | 1.689 * | 0.499 | 1.462 * | 0.503 | 1.766 * |
| 1000 | α | 3.026 | 0.040 | 3.018 | 0.054 | 3.018 | 0.042 | 2.950 | 0.064 | 3.030 | 6.453 | 3.015 | 6.366 | 3.041 | 6.529 |
| | λ | 0.502 | 0.227 * | 0.501 | 0.313 * | 0.501 | 0.246 * | 0.496 | 0.976 * | 0.502 | 0.317 * | 0.501 | 0.251 * | 0.502 | 0.324 * |

* Indicates that the value is multiplied by 10^−3.
Table 5. Simulations with the seven methods of estimation for α = 3 and λ = 1.5.

| n | Par. | MLE Est. | MLE MSE | LSE Est. | LSE MSE | WLSE Est. | WLSE MSE | PCE Est. | PCE MSE | CVE Est. | CVE MSE | ADE Est. | ADE MSE | RTADE Est. | RTADE MSE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 50 | α | 3.008 | 0.367 | 3.058 | 0.490 | 3.070 | 0.463 | 2.771 | 0.654 | 3.224 | 3.577 | 3.105 | 2.921 | 3.049 | 2.902 |
| | λ | 1.567 | 0.090 | 1.551 | 0.110 | 1.495 | 0.069 | 1.451 | 0.282 | 1.639 | 0.149 | 1.538 | 0.068 | 1.518 | 0.078 |
| 100 | α | 3.190 | 0.256 | 3.103 | 0.350 | 3.111 | 0.206 | 2.753 | 0.323 | 3.191 | 3.182 | 3.016 | 2.522 | 3.020 | 2.630 |
| | λ | 1.582 | 0.059 | 1.554 | 0.076 | 1.536 | 0.041 | 1.374 | 0.127 | 1.601 | 0.089 | 1.506 | 0.036 | 1.518 | 0.058 |
| 200 | α | 3.050 | 0.081 | 2.946 | 0.157 | 3.073 | 0.144 | 2.777 | 0.199 | 3.001 | 2.408 | 3.002 | 2.379 | 3.034 | 2.476 |
| | λ | 1.530 | 0.018 | 1.485 | 0.030 | 1.523 | 0.027 | 1.378 | 0.069 | 1.510 | 0.033 | 1.498 | 0.023 | 1.521 | 0.023 |
| 1000 | α | 3.014 | 0.018 | 3.011 | 0.023 | 3.014 | 0.019 | 2.942 | 0.036 | 3.018 | 2.329 | 3.010 | 2.298 | 3.021 | 2.340 |
| | λ | 1.507 | 3.265 * | 1.505 | 4.599 * | 1.507 | 3.676 * | 1.465 | 0.013 | 1.509 | 4.686 * | 1.505 | 3.698 * | 1.510 | 4.575 * |

* Indicates that the value is multiplied by 10^−3.
Table 6. Simulations with the seven methods of estimation for α = 3 and λ = 2.

| n | Par. | MLE Est. | MLE MSE | LSE Est. | LSE MSE | WLSE Est. | WLSE MSE | PCE Est. | PCE MSE | CVE Est. | CVE MSE | ADE Est. | ADE MSE | RTADE Est. | RTADE MSE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 50 | α | 3.165 | 0.341 | 2.996 | 0.360 | 3.003 | 0.335 | 2.723 | 0.747 | 3.125 | 1.686 | 3.040 | 1.500 | 3.109 | 1.676 |
| | λ | 2.101 | 0.204 | 2.001 | 0.183 | 2.049 | 0.154 | 1.861 | 0.448 | 2.125 | 0.227 | 2.093 | 0.200 | 2.081 | 0.212 |
| 100 | α | 3.085 | 0.177 | 3.022 | 0.226 | 3.113 | 0.209 | 2.692 | 0.317 | 3.048 | 1.318 | 3.029 | 1.253 | 3.059 | 1.324 |
| | λ | 2.048 | 0.076 | 2.045 | 0.173 | 2.099 | 0.110 | 1.765 | 0.239 | 2.054 | 0.147 | 2.030 | 0.088 | 2.037 | 0.086 |
| 200 | α | 2.993 | 0.073 | 3.032 | 0.100 | 2.988 | 0.101 | 2.825 | 0.193 | 3.102 | 1.321 | 3.025 | 1.118 | 3.004 | 1.145 |
| | λ | 1.998 | 0.036 | 2.031 | 0.045 | 1.997 | 0.048 | 1.866 | 0.128 | 2.078 | 0.060 | 2.015 | 0.034 | 2.030 | 0.062 |
| 1000 | α | 3.012 | 0.015 | 3.010 | 0.020 | 3.010 | 0.016 | 2.942 | 0.034 | 3.017 | 1.054 | 3.009 | 1.034 | 3.018 | 1.060 |
| | λ | 2.010 | 6.858 * | 2.008 | 9.564 * | 2.009 | 7.425 * | 1.947 | 0.029 | 2.013 | 9.728 * | 2.008 | 7.689 * | 2.014 | 9.413 * |

* Indicates that the value is multiplied by 10^−3.
Table 7. Partial and overall ranks of the seven methods of estimation for various combinations of the parameters.

| Parameters | n | MLE | LSE | WLSE | PCE | CVE | ADE | RTADE |
|---|---|---|---|---|---|---|---|---|
| α = 2, λ = 0.5 | 50 | 4.5 | 1.0 | 2.0 | 6.0 | 3.0 | 4.5 | 7.0 |
| | 100 | 1.0 | 2.0 | 3.5 | 5.0 | 6.5 | 4.5 | 6.5 |
| | 200 | 1.0 | 4.5 | 2.0 | 4.5 | 6.5 | 3.0 | 6.5 |
| | 1000 | 5.5 | 2.0 | 1.0 | 5.5 | 3.5 | 3.5 | 7.0 |
| α = 2, λ = 1.5 | 50 | 2.5 | 1.0 | 2.5 | 5.0 | 6.5 | 4.0 | 6.5 |
| | 100 | 1.0 | 4.0 | 2.0 | 6.0 | 5.0 | 3.0 | 7.0 |
| | 200 | 1.0 | 3.0 | 2.0 | 6.0 | 7.0 | 5.0 | 4.0 |
| | 1000 | 1.0 | 3.5 | 2.0 | 5.5 | 7.0 | 3.5 | 5.5 |
| α = 2, λ = 2 | 50 | 2.0 | 3.5 | 3.5 | 7.0 | 6.0 | 1.0 | 5.0 |
| | 100 | 2.0 | 3.0 | 6.0 | 7.0 | 5.0 | 1.0 | 4.0 |
| | 200 | 3.5 | 5.0 | 1.0 | 7.0 | 6.0 | 2.0 | 3.5 |
| | 1000 | 1.0 | 4.0 | 2.0 | 7.0 | 5.0 | 7.0 | 6.0 |
| α = 3, λ = 0.5 | 50 | 1.0 | 2.5 | 5.5 | 4.0 | 7.0 | 2.5 | 5.5 |
| | 100 | 1.5 | 3.0 | 1.5 | 6.0 | 7.0 | 3.0 | 3.0 |
| | 200 | 2.0 | 4.0 | 1.0 | 5.0 | 6.0 | 3.0 | 7.0 |
| | 1000 | 1.0 | 3.0 | 2.0 | 5.5 | 5.5 | 4.0 | 7.0 |
| α = 3, λ = 1.5 | 50 | 1.5 | 4.5 | 1.5 | 6.0 | 7.0 | 3.0 | 4.5 |
| | 100 | 2.5 | 4.5 | 1.0 | 6.0 | 7.0 | 2.5 | 4.5 |
| | 200 | 1.0 | 4.0 | 2.0 | 6.0 | 7.0 | 3.0 | 5.0 |
| | 1000 | 1.0 | 3.5 | 2.0 | 5.5 | 7.0 | 3.5 | 5.5 |
| α = 3, λ = 2 | 50 | 2.5 | 1.0 | 2.5 | 7.0 | 6.0 | 4.0 | 5.0 |
| | 100 | 1.5 | 5.5 | 3.0 | 7.5 | 1.5 | 7.5 | 4.0 |
| | 200 | 1.0 | 2.5 | 4.0 | 5.5 | 5.5 | 2.5 | 7.0 |
| | 1000 | 1.0 | 3.5 | 2.0 | 5.5 | 7.0 | 3.5 | 5.5 |
| Sum of ranks | | 43.5 | 78 | 57.5 | 141 | 140.5 | 84 | 132 |
| Overall rank | | 1 | 3 | 2 | 7 | 6 | 4 | 5 |
Table 8. First statistical approach of data sets 1 and 2.

| | n | Mean | Median | Standard Deviation | Skewness | Kurtosis |
|---|---|---|---|---|---|---|
| Data set 1 | 20 | 0.16 | 0.13 | 0.16 | 1.23 | 1.07 |
| Data set 2 | 50 | 0.16 | 0.16 | 0.08 | 0.07 | −0.87 |
Table 9. MLEs and their standard errors (in parentheses) for data set 1. B: beta model; Kum: Kumaraswamy model; TL: Topp–Leone model; TLE: Topp–Leone exponential model.

| Model | λ | α | a | b |
|---|---|---|---|---|
| TIHLTL | 2.1342 (0.6336) | 0.6028 (0.1706) | - | - |
| Kum | - | - | 0.7639 (0.1751) | 3.4341 (1.3113) |
| TL | - | 0.5112 (0.1143) | - | - |
| B | - | - | 0.7134 (0.1931) | 3.7459 (1.3085) |
| TLE | 0.7925 (0.2208) | 2.6644 (0.8240) | - | - |
Table 10. MLEs along with their standard errors (in parentheses) for data set 2.

| Model | λ | α | a | b |
|---|---|---|---|---|
| TIHLTL | 10.6094 (1.1160) | 1.8752 (0.2643) | - | - |
| Kum | - | - | 2.0750 (0.2542) | 33.0041 (3.8351) |
| TL | - | 0.7247 (0.1024) | - | - |
| B | - | - | 2.6799 (0.5066) | 13.8502 (2.8249) |
| TLE | 3.1718 (0.7067) | 5.6800 (0.7841) | - | - |
Table 11. Statistical measures for data set 1. A*: Anderson–Darling criterion; AIC: Akaike Information Criterion; BIC: Bayesian Information Criterion; KS: Kolmogorov–Smirnov statistic; W*: Cramer–von Mises criterion.

| Model | −ℓ̂ | AIC | BIC | W* | A* | KS | p-Value (KS) |
|---|---|---|---|---|---|---|---|
| TIHLTL | −17.3028 | −30.6057 | −28.6143 | 0.0208 | 0.1348 | 0.0900 | 0.9912 |
| Kum | −17.2047 | −30.4094 | −28.4180 | 0.0289 | 0.1691 | 0.1026 | 0.9700 |
| TL | −15.6166 | −29.2333 | −28.2376 | 0.0300 | 0.1754 | 0.1848 | 0.4481 |
| B | −17.2532 | −30.5064 | −28.5150 | 0.0263 | 0.1565 | 0.0980 | 0.9782 |
| TLE | −16.8608 | −29.7216 | −27.7301 | 0.0375 | 0.2124 | 0.1225 | 0.8899 |
Table 12. Statistical measures for data set 2.

| Model | −ℓ̂ | AIC | BIC | W* | A* | KS | p-Value (KS) |
|---|---|---|---|---|---|---|---|
| TIHLTL | −56.4261 | −108.8522 | −105.0282 | 0.0893 | 0.5496 | 0.1031 | 0.6623 |
| Kum | −56.0686 | −108.1373 | −104.3132 | 0.1024 | 0.6248 | 0.1104 | 0.5755 |
| TL | −28.4078 | −54.8156 | −52.9035 | 0.1653 | 0.9919 | 0.3622 | 0.000003 |
| B | −54.6066 | −105.2133 | −101.3892 | 0.1479 | 0.8926 | 0.1414 | 0.2697 |
| TLE | −52.2862 | −100.5725 | −96.7484 | 0.2120 | 1.2634 | 0.1652 | 0.1302 |
Table 13. Confidence intervals of the TIHLTL model parameters for data sets 1 and 2, respectively.

Data set 1:

| CI | λ | α |
|---|---|---|
| 95% | [0.8923, 3.3760] | [0.2684, 0.9371] |
| 99% | [0.4995, 3.7688] | [0.1626, 1.0429] |

Data set 2:

| CI | λ | α |
|---|---|---|
| 95% | [8.4220, 12.7967] | [1.3571, 2.3932] |
| 99% | [7.7301, 13.4886] | [1.1933, 2.5570] |
