Article

A New Inference Approach for Type-II Generalized Birnbaum-Saunders Distribution

Naijun Sha
Department of Mathematical Sciences, University of Texas at El Paso, El Paso, TX 79968, USA
Current address: 500 W University Ave., El Paso, TX 79968, USA.
Stats 2019, 2(1), 148-163; https://doi.org/10.3390/stats2010011
Submission received: 7 January 2019 / Revised: 5 February 2019 / Accepted: 17 February 2019 / Published: 19 February 2019

Abstract

The Birnbaum-Saunders (BS) distribution, together with its generalizations, has been successfully applied in a wide variety of fields. One generalization, the type-II generalized BS (denoted GBS-II), has been developed and has attracted considerable attention in recent years. In this article, we propose a new, simple, and convenient inference procedure for the GBS-II distribution. An extensive simulation study is carried out to assess the performance of the method under various parameter settings and sample sizes. A real data set is analyzed for illustrative purposes to demonstrate the efficiency of the proposed method.

1. Introduction

Over the past decades, numerous flexible distributions have been developed and studied to model the fatigue failure (or life) times of products for reliability/survival analysis in engineering and other fields; see [1,2,3,4], just to name a few. Because the Birnbaum-Saunders (BS) distribution [5] has been successful in modeling fatigue failure times, this model and its extensions have attracted considerable attention in recent years. The distribution was developed to model failures due to fatigue under cyclical stress on materials, where the failure follows from the development and growth of a dominant crack in the product. The BS distribution can be widely applied to describe fatigue life in general, and its application has been extended to other fields where accumulated damage forces a quantity to exceed a critical threshold. Thus, the model has become one of the most versatile of the popular distributions for failure times due to fatigue and cumulative damage phenomena. The distribution function of the failure time T is expressed as
$$F(t) = \Phi\left(\frac{1}{\alpha}\left[\left(\frac{t}{\beta}\right)^{1/2} - \left(\frac{\beta}{t}\right)^{1/2}\right]\right), \quad t > 0, \qquad (1)$$
where α > 0 and β > 0 are the shape and scale parameters, respectively, and Φ(·) is the distribution function of a standard normal variate. Over the years, various approaches to parameter inference, generalizations, and applications of the distribution have been introduced and developed by many authors (see, for example, [6,7,8]). A comprehensive review of the statistical theory, methodology, and applications of the BS distribution can be found in [9].
In the past decade, much research has been dedicated to generalizations of the distribution and their applications. Using elliptically contoured distributions in place of the normal function Φ(·), Ref. [10] extended the BS to a very broad family of life distributions. This generalized BS (GBS) is a highly flexible life distribution that admits different degrees of kurtosis and asymmetry and possesses both unimodality and bimodality. Like the BS distribution, the GBS distribution can be widely applied to problems involving cumulative damage, which occur commonly in engineering, environmental, and medical studies. The theory and applications of the GBS have been studied in [10,11,12,13], among others. Based on a multivariate elliptically symmetric distribution, a generalized multivariate BS distribution was introduced in [14], where its general properties were discussed and the statistical inference of its parameters presented. Most recently, Ref. [15] presented moment-type estimation methods for the parameters of the generalized bivariate BS distribution and showed the asymptotic normality of the estimators. However, the use of symmetric distributions as a generalization of the normal model is based neither on empirical arguments nor on physical laws. One reasonable generalization was first proposed in [16] by allowing the exponent (set to 1/2 in the BS) to take on other values. Recently, starting from the fact that the original BS distribution arises from a homogeneous Poisson process, Ref. [17] derived the same generalized BS from a non-homogeneous Poisson process. To distinguish it from the GBS developed in [18], this GBS is referred to as the Type-II GBS, denoted GBS-II(m, α, β), whose distribution and density functions are given by
$$F(t) = \Phi\left(\frac{1}{\alpha}\left[\left(\frac{t}{\beta}\right)^{m} - \left(\frac{\beta}{t}\right)^{m}\right]\right), \qquad (2)$$
$$f(t) = \frac{m}{\alpha t}\left[\left(\frac{t}{\beta}\right)^{m} + \left(\frac{\beta}{t}\right)^{m}\right]\phi\left(\frac{1}{\alpha}\left[\left(\frac{t}{\beta}\right)^{m} - \left(\frac{\beta}{t}\right)^{m}\right]\right), \quad t > 0, \qquad (3)$$
where m > 0 and α > 0 are both shape-type parameters, β > 0 is a scale parameter, and ϕ(·) is the density function of the standard normal distribution. As with the BS distribution, the transformation Z = [(T/β)^m − (β/T)^m]/α leads to a standard normal variate, and it is useful for random value generation and integer moment derivation, as well as for the development of the estimation procedure presented in this article.
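For concreteness, this transformation can be inverted to simulate from the model: solving Z = [(T/β)^m − (β/T)^m]/α for T gives T = β[(αZ + √(α²Z² + 4))/2]^{1/m}. Below is a minimal sketch, assuming NumPy and SciPy are available; the helper names rgbs2 and pgbs2 are ours, not from the paper.

```python
import numpy as np
from scipy import stats

def rgbs2(n, m, alpha, beta, seed=None):
    """Simulate n GBS-II(m, alpha, beta) variates via T = beta*[(alpha*Z + sqrt(alpha^2 Z^2 + 4))/2]^(1/m)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    return beta * ((alpha * z + np.sqrt(alpha**2 * z**2 + 4.0)) / 2.0) ** (1.0 / m)

def pgbs2(t, m, alpha, beta):
    """GBS-II distribution function, Equation (2)."""
    eps = (t / beta) ** m - (beta / t) ** m
    return stats.norm.cdf(eps / alpha)

# quick check: the empirical CDF at beta should be close to 0.5 (beta is the median)
t = rgbs2(100_000, m=0.75, alpha=1.0, beta=2.0, seed=1)
print(np.mean(t <= 2.0), pgbs2(2.0, m=0.75, alpha=1.0, beta=2.0))
```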
So far, little research has addressed the analysis of the GBS-II distribution in Equation (2). Ref. [19] discussed the likelihood-based estimation of the parameters and provided interval estimation based on the "observed" Fisher information. In this article, to contribute to this relatively new body of research, we propose a new inference method for parameter estimation and hypothesis testing in the GBS-II model. Our method provides explicit expressions and easier computations for the estimates. The rest of the article is arranged as follows. Section 2 presents some interesting properties of the GBS-II distribution. The inference methodology is presented in Section 3. Subsequently, we carry out simulation studies to investigate the performance of the proposed methods in Section 4. For illustrative purposes, one real data set is analyzed in Section 5, followed by some concluding remarks in Section 6.

2. Properties of GBS-II

The three-parameter GBS-II distribution in Equation (2) is a flexible family of distributions, and the shape of the density varies widely with different values of the parameters. Specifically (a detailed proof is provided in Appendix A), (i) when 0 < m ≤ 1, the density is unimodal or upside-down bathtub shaped; (ii) when m > 1 and α² ≤ m/(m − 1), the density is also unimodal (upside-down bathtub); (iii) if m > 1 and α² > m/(m − 1), then the density is either unimodal or bimodal. Figure 1 shows various graphs of the density function for different values of m and α with the scale parameter β fixed at unity. Additionally, Figure 2 presents the failure rate function λ(t) = f(t)/(1 − F(t)) for various values of m and α, showing that it can have increasing, decreasing, bathtub, and upside-down bathtub shapes. Hence, the distribution appears flexible enough to model various situations of product life.
The GBS-II shares some interesting properties with the BS distribution. For example, β remains the median of the distribution. The reciprocal property is also preserved: if T ∼ GBS-II(m, α, β), then T⁻¹ ∼ GBS-II(m, α, β⁻¹). In fact, the GBS-II describes the family of power transformations of a BS random variable: if T ∼ BS(α, β), then for any nonzero real-valued constant r, T^r ∼ GBS-II(0.5|r|⁻¹, α, β^r), where |·| denotes the absolute value. Conversely, if T ∼ GBS-II(m, α, β), then T^{2m} ∼ BS(α, β^{2m}). In addition, just as the BS distribution can be written as an equal mixture of an inverse normal distribution and the distribution of the reciprocal of an inverse normal random variable [20], the GBS-II distribution can also be expressed as a mixture of power inverse normal-type distributions [19].
Regarding the numerical characteristics of the GBS-II, there is generally no analytic form for the moments except in some special cases. For example, from the fact that T^m = β^m(αZ + √(α²Z² + 4))/2 with a standard normal variate Z, one may easily obtain expressions for the moments E(T^{km}) with an even number k, such as E(T^{2m}) = β^{2m}(1 + α²/2), E(T^{4m}) = β^{4m}(1 + 2α² + 3α⁴/2), etc. One general moment expression was obtained in [21], using the relationship between the GBS-II and the three-parameter sinh-normal distribution described in [22]:
$$E(T^{r}) = \frac{\beta^{r}\exp(\alpha^{-2})}{\alpha\sqrt{2\pi}}\left[K_{(r/m+1)/2}(\alpha^{-2}) + K_{(r/m-1)/2}(\alpha^{-2})\right], \qquad (4)$$
where K_ω(x) is the modified Bessel function of the third kind of order ω, which can be expressed in integral form as K_ω(x) = ∫₀^∞ exp{−x cosh(s)} cosh(ωs) ds, with the hyperbolic cosine function cosh(x) = [exp(x) + exp(−x)]/2. Numerous software packages can be used to evaluate K_ω(x) for specific values of ω and x when calculating moments such as the mean and variance. The finiteness of the moments guarantees the soundness of the moment-based estimation methods provided in the following section.
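As a numerical illustration (not part of the original paper), Equation (4) can be evaluated with the modified Bessel routine in SciPy and checked against the closed form E(T^{2m}) = β^{2m}(1 + α²/2); a short sketch:

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the third kind, K_omega(x)

def gbs2_moment(r, m, alpha, beta):
    """E(T^r) for the GBS-II(m, alpha, beta) distribution, Equation (4)."""
    x = alpha ** (-2)
    return beta**r * np.exp(x) / (alpha * np.sqrt(2.0 * np.pi)) * (
        kv((r / m + 1.0) / 2.0, x) + kv((r / m - 1.0) / 2.0, x))

m, alpha, beta = 0.8, 1.2, 1.5
print(gbs2_moment(2 * m, m, alpha, beta))        # Bessel-function formula
print(beta ** (2 * m) * (1 + alpha**2 / 2.0))    # closed form E(T^{2m}); the two should agree
```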

3. Inference Approach

Throughout this section, we denote by (t₁, t₂, …, t_n) the observed random sample of size n from the GBS-II distribution. To keep the notation simple, let ε(t) = (t/β)^m − (β/t)^m and δ(t) = (t/β)^m + (β/t)^m. Then the log-likelihood function based on the density function in Equation (3) is given by
$$\ell = -\frac{n}{2}\log(2\pi) + n\log m - n\log\alpha - \sum_{i=1}^{n}\log t_{i} + \sum_{i=1}^{n}\log\delta(t_{i}) - \frac{1}{2\alpha^{2}}\sum_{i=1}^{n}\varepsilon^{2}(t_{i}), \qquad (5)$$
and the score functions for the parameters are as follows:
$$\frac{\partial\ell}{\partial\alpha} = -\frac{n}{\alpha} + \frac{1}{\alpha^{3}}\sum_{i=1}^{n}\varepsilon^{2}(t_{i}), \qquad \frac{\partial\ell}{\partial\beta} = -\frac{m}{\beta}\left[\sum_{i=1}^{n}\frac{\varepsilon(t_{i})}{\delta(t_{i})} - \frac{1}{\alpha^{2}}\sum_{i=1}^{n}\varepsilon(t_{i})\,\delta(t_{i})\right], \qquad (6)$$
$$\frac{\partial\ell}{\partial m} = \frac{n}{m} + \sum_{i=1}^{n}\log(\beta^{-1}t_{i})\,\frac{\varepsilon(t_{i})}{\delta(t_{i})} - \frac{1}{\alpha^{2}}\sum_{i=1}^{n}\log(\beta^{-1}t_{i})\,\varepsilon(t_{i})\,\delta(t_{i}). \qquad (7)$$
Due to the complexity of the expressions above, there are no tractable forms for the maximum likelihood estimates (MLEs). Powerful computational techniques, such as general-purpose optimization methods, the EM algorithm, or its extensions, can be applied to obtain the MLEs m̂, α̂, β̂ by solving the equations ∂ℓ/∂m = 0, ∂ℓ/∂α = 0, and ∂ℓ/∂β = 0 simultaneously. Since there is no analytic form of the Fisher information matrix, Ref. [19] used the "observed" information to obtain large-sample interval estimates of the parameters. We propose an alternative and comparatively simple procedure for practical inference in the following.
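A minimal sketch of such a numerical ML fit is given below, assuming SciPy; optimizing on the log scale to keep the parameters positive is our choice, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

def negloglik(log_params, t):
    """Negative GBS-II log-likelihood, Equation (5); (m, alpha, beta) = exp(log_params) stay positive."""
    m, alpha, beta = np.exp(log_params)
    eps = (t / beta) ** m - (beta / t) ** m
    delta = (t / beta) ** m + (beta / t) ** m
    n = t.size
    ll = (-0.5 * n * np.log(2 * np.pi) + n * np.log(m) - n * np.log(alpha)
          - np.sum(np.log(t)) + np.sum(np.log(delta)) - np.sum(eps**2) / (2 * alpha**2))
    return -ll

# simulated example with true (m, alpha, beta) = (1.0, 1.0, 1.0)
rng = np.random.default_rng(0)
z = rng.standard_normal(50)
m0, a0, b0 = 1.0, 1.0, 1.0
t = b0 * ((a0 * z + np.sqrt(a0**2 * z**2 + 4.0)) / 2.0) ** (1.0 / m0)
res = minimize(negloglik, x0=np.log([0.5, 0.5, np.median(t)]), args=(t,), method="Nelder-Mead")
print(np.exp(res.x))  # numerical MLEs (m_hat, alpha_hat, beta_hat)
```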

3.1. New Estimation Method

First, since the scale parameter β is also the median of the GBS-II, one simple estimate of β is the sample median below
$$\hat{\beta}_{M} = \mathrm{median}(t_{1}, t_{2}, \ldots, t_{n}). \qquad (8)$$
Secondly, for T ∼ GBS-II(m, α, β), the transformed random variable W = m(Y − μ), with Y = log T and μ = log β, has a sinh-normal distribution (see, for example, [22]), whose distribution and density functions are given by
$$F_{W}(w) = \Phi\left(\frac{1}{\alpha}\left[\exp(w) - \exp(-w)\right]\right), \qquad (9)$$
$$f_{W}(w) = \frac{1}{\alpha}\left[\exp(w) + \exp(-w)\right]\phi\left(\frac{1}{\alpha}\left[\exp(w) - \exp(-w)\right]\right), \quad -\infty < w < \infty. \qquad (10)$$
The moments of this distribution are given by
$$E(W^{k}) = \begin{cases} \displaystyle\int_{-\infty}^{\infty}\left[\log\left(\frac{\alpha z + \sqrt{\alpha^{2}z^{2}+4}}{2}\right)\right]^{k}\phi(z)\,dz, & k \text{ even},\\[1ex] 0, & k \text{ odd}. \end{cases} \qquad (11)$$
Since E(Y) = μ follows from E(W) = 0, we equate the first moment E(Y) to the sample mean ȳ = Σᵢ yᵢ/n of the transformed sample yᵢ = log tᵢ and obtain the method-of-moments estimate of β, which is the geometric mean of the data:
$$\hat{\beta}_{G} = \left(\prod_{i=1}^{n} t_{i}\right)^{1/n}. \qquad (12)$$
Further, E(W²) = m² E(Y − μ)² and E(W⁴) = m⁴ E(Y − μ)⁴ imply that W and Y have the same kurtosis, that is, E(W⁴)/[E(W²)]² = E(Y − μ)⁴/[E(Y − μ)²]². By equating the sample kurtosis to the theoretical one, the moment estimate α̌ can be obtained numerically from the following equation:
$$\frac{E(W^{4})}{[E(W^{2})]^{2}} = \kappa, \qquad (13)$$
where the sample kurtosis of Y is κ = m₄/m₂² with m_k = Σᵢ(yᵢ − ȳ)^k/n, k = 2, 4. Although there is no analytic form, the uniqueness of α̌ is justified in Appendix A. Finally, the estimate of m is
$$\check{m} = \left(\frac{\widehat{E(W^{2})}}{m_{2}}\right)^{1/2} = \left(\frac{1}{m_{2}}\int_{-\infty}^{\infty}\left[\log\left(\frac{\check{\alpha} z + \sqrt{\check{\alpha}^{2}z^{2}+4}}{2}\right)\right]^{2}\phi(z)\,dz\right)^{1/2}. \qquad (14)$$
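A numerical sketch of the whole moment procedure (assuming NumPy/SciPy; helper names are ours): Equation (11) is evaluated by Gauss-Hermite quadrature, Equation (13) is solved by root bracketing, and Equation (14) then gives m̌.

```python
import numpy as np
from scipy.optimize import brentq

def EW(k, alpha, n_nodes=80):
    """E(W^k) in Equation (11), computed by Gauss-Hermite quadrature (Z = sqrt(2)*x)."""
    x, w = np.polynomial.hermite.hermgauss(n_nodes)
    z = np.sqrt(2.0) * x
    g = np.log((alpha * z + np.sqrt(alpha**2 * z**2 + 4.0)) / 2.0)
    return np.sum(w * g**k) / np.sqrt(np.pi)

def moment_estimates(t):
    """Moment-type estimates (m_check, alpha_check, beta_hat_G) from Equations (12)-(14)."""
    y = np.log(t)
    beta_hat = np.exp(y.mean())                    # geometric mean, Equation (12)
    m2 = np.mean((y - y.mean()) ** 2)
    m4 = np.mean((y - y.mean()) ** 4)
    kappa = m4 / m2**2                             # sample kurtosis of Y = log(T)
    # G(alpha) = E(W^4)/[E(W^2)]^2 decreases from 3 to 1 (Appendix A), so a root
    # exists whenever 1 < kappa < 3; enlarge the bracket if kappa is close to 1
    alpha_chk = brentq(lambda a: EW(4, a) / EW(2, a) ** 2 - kappa, 1e-3, 100.0)
    m_chk = np.sqrt(EW(2, alpha_chk) / m2)         # Equation (14)
    return m_chk, alpha_chk, beta_hat

# simulated example with true (m, alpha, beta) = (1.0, 1.0, 1.0)
rng = np.random.default_rng(1)
z = rng.standard_normal(200)
t = (z + np.sqrt(z**2 + 4.0)) / 2.0
print(moment_estimates(t))
```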
Additionally, by the following Taylor expansions
$$\exp[m(y-\mu)] = 1 + m(y-\mu) + \frac{m^{2}(y-\mu)^{2}}{2!} + \frac{m^{3}(y-\mu)^{3}}{3!} + \cdots,$$
$$\exp[-m(y-\mu)] = 1 - m(y-\mu) + \frac{m^{2}(y-\mu)^{2}}{2!} - \frac{m^{3}(y-\mu)^{3}}{3!} + \cdots,$$
the transformed random variable Y = log ( T ) has the approximate distribution function given by
$$F_{Y}(y) = \Phi\left(\frac{1}{\alpha}\left\{\exp[m(y-\mu)] - \exp[-m(y-\mu)]\right\}\right) = \Phi\left(\frac{1}{\alpha}\left[2m(y-\mu) + \frac{2m^{3}(y-\mu)^{3}}{3!} + \cdots\right]\right) = \Phi\left(\frac{2m(y-\mu)}{\alpha}\right) + \Psi(y) \approx \Phi\left(\frac{2m(y-\mu)}{\alpha}\right), \quad -\infty < y < \infty,$$
where
$$\Psi(y) = \phi\left(\frac{2m(y-\mu)}{\alpha}\right)\left[\frac{2m^{3}(y-\mu)^{3}}{3!\,\alpha} + O\big((y-\mu)^{5}\big)\right] = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{2m^{2}(y-\mu)^{2}}{\alpha^{2}}\right)\left[\frac{2m^{3}(y-\mu)^{3}}{3!\,\alpha} + O\big((y-\mu)^{5}\big)\right].$$
It is easily seen that (i) Ψ(y) is close to zero if y is close to μ; and (ii) if y is away from μ, Ψ(y) is also close to zero, since the exponential factor with power (y − μ)² decays much faster than the polynomial terms in (y − μ)^k, k = 1, 2, …, grow. Hence, Y is approximately N(μ, σ²) distributed with σ = α/(2m). From this distributional approximation, a moment estimate of m is given by
$$\breve{m} = \frac{\check{\alpha}}{2 S_{Y}},$$
with the sample variance S_Y² = Σᵢ(yᵢ − ȳ)²/(n − 1). This estimate is, in fact, an approximation of the estimate in Equation (14) in which the function [log((α̌z + √(α̌²z² + 4))/2)]² is approximated by the first term of its Taylor series and m₂ is replaced by S_Y².
Finally, from the well-known facts that (Ȳ − μ)/(S_Y/√n) is approximately t_{n−1} distributed and (n − 1)S_Y²/σ² is approximately χ²_{n−1} distributed, the approximate 100(1 − γ)% confidence intervals (CIs) of β and σ² are, respectively, given by
$$\left(\exp\left[\bar{y} - t_{\gamma/2,\,n-1}\frac{S_{Y}}{\sqrt{n}}\right],\ \exp\left[\bar{y} + t_{\gamma/2,\,n-1}\frac{S_{Y}}{\sqrt{n}}\right]\right), \qquad \left(\frac{(n-1)S_{Y}^{2}}{\chi^{2}_{\gamma/2,\,n-1}},\ \frac{(n-1)S_{Y}^{2}}{\chi^{2}_{1-\gamma/2,\,n-1}}\right),$$
where χ²_{γ/2,n−1} and t_{γ/2,n−1} are the upper 100 × γ/2-th percentiles of the χ² and t distributions with n − 1 degrees of freedom. Accordingly, the approximate 100(1 − γ)% CIs of m and α, obtained from the relation σ = α/(2m), are given by
$$\left(\frac{\check{\alpha}}{2}\sqrt{\frac{\chi^{2}_{1-\gamma/2,\,n-1}}{(n-1)S_{Y}^{2}}},\ \frac{\check{\alpha}}{2}\sqrt{\frac{\chi^{2}_{\gamma/2,\,n-1}}{(n-1)S_{Y}^{2}}}\right), \qquad \left(2\check{m}\sqrt{\frac{(n-1)S_{Y}^{2}}{\chi^{2}_{\gamma/2,\,n-1}}},\ 2\check{m}\sqrt{\frac{(n-1)S_{Y}^{2}}{\chi^{2}_{1-\gamma/2,\,n-1}}}\right).$$
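These intervals only require ȳ, S_Y, and standard t and χ² quantiles; a short sketch (assuming SciPy; the example data and the values of α̌ and m̌ passed at the end are placeholders, not results from the paper):

```python
import numpy as np
from scipy import stats

def approx_cis(t, alpha_chk, m_chk, gamma=0.05):
    """Approximate 100(1-gamma)% CIs for beta, m, and alpha based on Y = log(T) ~ N(mu, sigma^2)."""
    y = np.log(t)
    n, ybar, s = y.size, y.mean(), y.std(ddof=1)
    tq = stats.t.ppf(1 - gamma / 2, n - 1)          # upper gamma/2 percentile of t_{n-1}
    chi_lo = stats.chi2.ppf(gamma / 2, n - 1)       # chi^2_{1-gamma/2, n-1} in the paper's notation
    chi_hi = stats.chi2.ppf(1 - gamma / 2, n - 1)   # chi^2_{gamma/2, n-1} in the paper's notation
    ss = (n - 1) * s**2
    ci_beta = (np.exp(ybar - tq * s / np.sqrt(n)), np.exp(ybar + tq * s / np.sqrt(n)))
    ci_m = (alpha_chk / 2 * np.sqrt(chi_lo / ss), alpha_chk / 2 * np.sqrt(chi_hi / ss))
    ci_alpha = (2 * m_chk * np.sqrt(ss / chi_hi), 2 * m_chk * np.sqrt(ss / chi_lo))
    return {"beta": ci_beta, "m": ci_m, "alpha": ci_alpha}

# illustrative call with placeholder moment estimates
t = np.array([0.9, 1.3, 0.7, 2.1, 1.1, 1.8, 0.6, 1.5, 1.0, 2.4])
print(approx_cis(t, alpha_chk=1.0, m_chk=0.5))
```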
It is worthwhile to point out that the presented method can be extended to censored observations, which are a common scenario for real-life data in engineering. As an illustrative example, we briefly describe the estimation procedure for right-censored data. Suppose that the failure time T is right-censored at c, with c being a pre-specified censoring value. Then the transformed time Y = log(T) may be regarded as a mixture of a point mass at c* = log(c) and an approximately right-truncated N(μ, σ²) distribution. The moment-type estimates of the parameters can be obtained numerically from the moment equations, where the theoretical moments, provided in [23], are given below:
$$E(Y) = [1-\Phi(d)]\,c^{*} + \Phi(d)\,[\mu - \lambda\sigma], \qquad \mathrm{Var}(Y) = \Phi(d)\left[1 - d\lambda - \lambda^{2} + (d+\lambda)^{2}\big(1-\Phi(d)\big)\right]\sigma^{2},$$
where d = (c* − μ)/σ and λ = ϕ(d)/Φ(d). Additionally, interval estimates can be constructed by the method of [24], which provides formulas for confidence intervals around truncated moments. Thus, in the censoring case the computational complexity increases because there are no explicit forms.
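For reference, the censored-moment formulas above are straightforward to code and to verify by simulation; a small sketch (assuming NumPy/SciPy; the helper name is ours):

```python
import numpy as np
from scipy import stats

def censored_normal_moments(mu, sigma, c_star):
    """Mean and variance of Y = min(log T, c*) when log T ~ N(mu, sigma^2)."""
    d = (c_star - mu) / sigma
    Phi = stats.norm.cdf(d)
    lam = stats.norm.pdf(d) / Phi
    mean = (1 - Phi) * c_star + Phi * (mu - lam * sigma)
    var = Phi * (1 - d * lam - lam**2 + (d + lam) ** 2 * (1 - Phi)) * sigma**2
    return mean, var

# Monte Carlo check of the two formulas
rng = np.random.default_rng(0)
y = np.minimum(rng.normal(0.3, 0.8, 1_000_000), 1.0)
print(censored_normal_moments(0.3, 0.8, 1.0))
print(y.mean(), y.var())
```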

3.2. Hypothesis Tests

Here we specifically consider the gradient test [25]; for comparison purposes, the likelihood ratio test is also presented. Generally, let ℓ(θ) and U(θ) = ∂ℓ(θ)/∂θᵀ be the log-likelihood and score functions with the p-vector parameter θ. Consider a partition θ = (θ₁ᵀ, θ₂ᵀ)ᵀ, where θ₁ and θ₂ are q-dimensional and (p − q)-dimensional parameter vectors, respectively. Suppose the interest lies in testing the composite null hypothesis H₀: θ₁ = θ₁₀ versus H₁: θ₁ ≠ θ₁₀, where θ₁₀ is a specified q-dimensional vector and θ₂ is the (p − q)-vector of nuisance parameters. The partition of θ induces the corresponding partition of the score function U(θ) = (U₁(θ)ᵀ, U₂(θ)ᵀ)ᵀ with Uᵢ(θ) = ∂ℓ(θ)/∂θᵢᵀ, i = 1, 2. The likelihood ratio and gradient statistics for H₀ versus H₁ are given by
$$S_{LR} = -2\left[\ell(\tilde{\theta}) - \ell(\hat{\theta})\right], \qquad S_{G} = U_{1}(\tilde{\theta})^{T}\left(\hat{\theta}_{1} - \theta_{10}\right),$$
where θ̂ = (θ̂₁ᵀ, θ̂₂ᵀ)ᵀ and θ̃ = (θ₁₀ᵀ, θ̃₂ᵀ)ᵀ denote the MLEs of θ = (θ₁ᵀ, θ₂ᵀ)ᵀ under H₁ and H₀, respectively. The limiting distribution of both S_LR and S_G is χ²_q, i.e., chi-square with q degrees of freedom. In practice, the simplicity of the gradient statistic is always an attraction, since the score function is quite a bit simpler than the log-likelihood itself in many cases. Also, unlike the Wald and score statistics, it does not require one to obtain, estimate, or invert an information matrix [26]. Hence, the gradient statistic makes testing quite convenient, especially for the GBS-II distribution, which possesses a complicated likelihood function and an intractable information matrix.
For the GBS-II distribution, the log-likelihood function ( θ ) and the score function U ( θ ) = ( U m ( θ ) , U α ( θ ) , U β ( θ ) ) T are given in Equations (5)–(7), respectively, with the parameter vector θ = ( m , α , β ) T and the MLEs θ ^ = ( m ^ , α ^ , β ^ ) T . The interest lies in testing the three null hypotheses:
H_{m0}: m = m₀,  H_{α0}: α = α₀,  H_{β0}: β = β₀,
against H_{m1}: m ≠ m₀, H_{α1}: α ≠ α₀, and H_{β1}: β ≠ β₀, respectively, where m₀, α₀, and β₀ are specified positive scalars. Let m̃, α̃, and β̃ be the restricted MLEs under each null hypothesis; then the likelihood ratio and gradient statistics are, respectively,
$$S_{LR}(m) = -2\left[\ell(m_{0},\tilde{\alpha},\tilde{\beta}) - \ell(\hat{m},\hat{\alpha},\hat{\beta})\right], \qquad S_{G}(m) = U_{m}(m_{0},\tilde{\alpha},\tilde{\beta})\,(\hat{m} - m_{0}),$$
$$S_{LR}(\alpha) = -2\left[\ell(\tilde{m},\alpha_{0},\tilde{\beta}) - \ell(\hat{m},\hat{\alpha},\hat{\beta})\right], \qquad S_{G}(\alpha) = U_{\alpha}(\tilde{m},\alpha_{0},\tilde{\beta})\,(\hat{\alpha} - \alpha_{0}),$$
$$S_{LR}(\beta) = -2\left[\ell(\tilde{m},\tilde{\alpha},\beta_{0}) - \ell(\hat{m},\hat{\alpha},\hat{\beta})\right], \qquad S_{G}(\beta) = U_{\beta}(\tilde{m},\tilde{\alpha},\beta_{0})\,(\hat{\beta} - \beta_{0}).$$
The asymptotic distribution of these statistics is χ²₁ under the respective null hypotheses, and the test is significant if the test statistic exceeds the upper 100 × γ% percentile χ²_{γ,1}. In the following section, we conduct an extensive simulation study to evaluate and compare the performance of the proposed estimation and the test statistics.
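A sketch of the gradient test of H_{m0}: m = m₀ is given below (assuming SciPy; the restricted MLE of (α, β) is obtained by numerical optimization, and U_m is Equation (7)). This is an illustration of the procedure, not code from the paper.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def negloglik(params, t, m=None):
    """Negative GBS-II log-likelihood; if m is fixed, params = (alpha, beta) gives the restricted fit."""
    if m is None:
        m, alpha, beta = params
    else:
        alpha, beta = params
    if min(m, alpha, beta) <= 0:
        return np.inf
    eps = (t / beta) ** m - (beta / t) ** m
    delta = (t / beta) ** m + (beta / t) ** m
    n = t.size
    return -(-0.5 * n * np.log(2 * np.pi) + n * np.log(m) - n * np.log(alpha)
             - np.sum(np.log(t)) + np.sum(np.log(delta)) - np.sum(eps**2) / (2 * alpha**2))

def score_m(m, alpha, beta, t):
    """U_m, the score with respect to m, Equation (7)."""
    eps = (t / beta) ** m - (beta / t) ** m
    delta = (t / beta) ** m + (beta / t) ** m
    lg = np.log(t / beta)
    return t.size / m + np.sum(lg * eps / delta) - np.sum(lg * eps * delta) / alpha**2

def gradient_test_m(t, m0):
    """Gradient statistic S_G(m) = U_m(m0, alpha~, beta~)(m^ - m0) and its chi^2_1 p-value."""
    full = minimize(negloglik, x0=[0.5, 1.0, np.median(t)], args=(t,), method="Nelder-Mead")
    restr = minimize(negloglik, x0=[1.0, np.median(t)], args=(t, m0), method="Nelder-Mead")
    alpha_t, beta_t = restr.x
    sg = score_m(m0, alpha_t, beta_t, t) * (full.x[0] - m0)
    return sg, stats.chi2.sf(sg, df=1)

# example: test H0: m = 0.5 (the BS submodel) on simulated GBS-II data with true m = 1
rng = np.random.default_rng(2)
z = rng.standard_normal(50)
t = (z + np.sqrt(z**2 + 4.0)) / 2.0
print(gradient_test_m(t, m0=0.5))
```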

4. Simulation Study

First, we carry out a simulation study to investigate the performance of the two estimates of β in Equations (8) and (12). We fix the scale parameter β = 1 and set m = 0.25, 0.75 and α = 0.25, 1.00. Under each combination of parameter values, we generate 10,000 data sets of GBS-II random observations for each sample size n = 10, 15, 20, 25, 30 and calculate the two estimates as well as their mean square errors (MSEs). The comparison is shown in Figure 3, where the panels correspond, from top to bottom, to the parameter settings (m, α) = (0.25, 0.25), (0.25, 1.00), (0.75, 0.25), (0.75, 1.00), and the left and right panels display the averaged estimates and their MSEs, respectively. From these plots, one may see that the geometric mean β̂_G is the better estimate, since it has both smaller bias and smaller MSE than the sample median β̂_M, especially when the sample size is small.
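A condensed version of this comparison can be scripted directly; the sketch below (assuming NumPy; 10,000 replications as in the text) reports the empirical bias and MSE of the two estimators for one parameter setting.

```python
import numpy as np

def rgbs2(n, m, alpha, beta, rng):
    z = rng.standard_normal(n)
    return beta * ((alpha * z + np.sqrt(alpha**2 * z**2 + 4.0)) / 2.0) ** (1.0 / m)

rng = np.random.default_rng(2019)
m, alpha, beta, n, reps = 0.25, 1.0, 1.0, 20, 10_000
bG, bM = np.empty(reps), np.empty(reps)
for i in range(reps):
    t = rgbs2(n, m, alpha, beta, rng)
    bG[i] = np.exp(np.log(t).mean())   # geometric mean, Equation (12)
    bM[i] = np.median(t)               # sample median, Equation (8)
for name, b in [("geometric mean", bG), ("sample median", bM)]:
    print(name, " bias:", round(b.mean() - beta, 4), " MSE:", round(np.mean((b - beta) ** 2), 4))
```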
Next, we conduct another simulation study to assess the performance of the parameter estimation by the new method, where β is estimated by β̂_G, m by m̌ in Equation (14), and α by α̌ in Equation (13). Fixing the scale parameter β = 1, we take five settings of the other two parameters: (m, α) = (0.25, 0.50), (0.50, 0.50), (1.00, 1.00), (1.50, 2.00), (2.00, 2.50). For each parameter setting and the three sample sizes n = 20, 30, 50, we generate 10,000 data sets to compute the averaged biases and MSEs of the estimates, as well as the average lengths (AL) of the 95% confidence intervals (CIs) and the coverage probabilities (CP). The results are summarized in Table 1, along with the corresponding estimates from the ML method for comparison purposes. The main features are as follows: (i) as expected, the bias, MSE, and length of the 95% CI decrease, and the CP moves closer to the nominal level, as the sample size n increases in all cases; (ii) the estimation of all parameters by the new method is much better than by the ML approach in terms of smaller biases and MSEs, narrower CIs, and higher CPs; (iii) comparatively, for both methods, the estimation of β is much more accurate (especially for the new method), whereas the estimation of m is less precise and that of α is the least accurate. In particular, the MLE α̂ does not perform well for small to moderate sample sizes (n = 20, 30). With the larger sample size (n = 50), the performance of the β estimates is similar for both methods; (iv) it appears that the estimate of α has a smaller MSE when the true value of α is small, whereas the estimate of m has a smaller MSE when the true value of m is large. Overall, the numerical results favor the new inference method for parameter estimation, especially when the sample size is small.
Finally, we evaluate and compare the performance of the likelihood ratio and gradient tests for the hypotheses on the parameters. With 10,000 Monte Carlo replications, Table 2 presents the null rejection rates of the two statistics S_LR and S_G under various parameter settings, sample sizes, and the nominal levels γ = 10% and 5%. Table 3 contains the powers obtained under the significance level γ = 5% and the alternative hypotheses H_{m1}: m = m₁, H_{α1}: α = α₁, and H_{β1}: β = β₁ for various values of m₁, α₁, and β₁. Our main findings are as follows. First, the S_G test is less size-distorted than the S_LR test. In fact, the gradient test produces null rejection rates close to the nominal levels in all cases. For example, for n = 30, m₀ = 1.0 with α = 0.5, β = 1.0, and γ = 5%, the null rejection rates are 6.28% (S_LR(m)) and 5.39% (S_G(m)). In the test of α for values α₀ > 1, the likelihood ratio test is oversized whereas the gradient test becomes undersized. All the tests become less size-distorted as the sample size increases, as expected. Additionally, S_G is more powerful than S_LR in all cases. For example, when n = 50, m = 1.5, α = 1.0, β = 2.0 in the hypothesis H_{α0}: α = 1.0 vs. H_{α1}: α = 1.15, the powers are 61.76% (S_LR(α)) and 64.16% (S_G(α)). It is also clear that the powers of the two tests increase with the sample size and with the alternative values m₁, α₁, and β₁. In summary, the simulation studies imply that the new estimation method and the gradient test perform better than the ML approach and the likelihood ratio test, respectively, for all parameter settings, particularly for small and moderate sample sizes.

5. Real Data Analysis

To further demonstrate the usefulness of our method for parameter inference in the GBS-II, we consider a real data set presented in [27] on active repair times (in hours) for an airborne communications transceiver. To illustrate the estimation performance for a small sample size, we randomly select 20 repair times out of the total 46 observations, giving the following data: 0.2, 0.5, 0.5, 0.6, 0.7, 0.7, 0.8, 1.0, 1.0, 1.1, 1.5, 2.0, 2.2, 3.0, 4.0, 4.5, 5.4, 7.5, 8.8, 10.3. Fitting the GBS-II distribution to the data by the ML and the new methods, the estimation results are summarized in Table 4, where the standard errors (SE) and 95% CIs produced by the new approach are much smaller and narrower, especially the intervals for m and α. In addition, one hypothesis of interest is H₀: m = 0.5, since the GBS-II reduces to the BS under this null. The likelihood ratio and gradient tests yield the statistics S_LR(m) = 3.64 and S_G(m) = 1.87, a clearly insignificant result for the gradient test, which is consistent with the CI for m. Finally, the chi-squared goodness-of-fit statistic and the BIC value of the model fitted by the new method are smaller than those of the ML method, indicating the greater accuracy of the proposed method. Figure 4 shows that the fitted GBS-II density curve estimated by the new method is closer to the histogram than the one obtained by the ML approach. These outcomes demonstrate that the proposed method produces more accurate inference for this small sample size.
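The fitted curves in Figure 4 can be redrawn from the Table 4 point estimates; a plotting sketch (assuming Matplotlib; the number of histogram bins is our choice):

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.array([0.2, 0.5, 0.5, 0.6, 0.7, 0.7, 0.8, 1.0, 1.0, 1.1,
              1.5, 2.0, 2.2, 3.0, 4.0, 4.5, 5.4, 7.5, 8.8, 10.3])

def dgbs2(x, m, alpha, beta):
    """GBS-II density, Equation (3)."""
    eps = (x / beta) ** m - (beta / x) ** m
    delta = (x / beta) ** m + (beta / x) ** m
    return m / (alpha * x) * delta * np.exp(-eps**2 / (2 * alpha**2)) / np.sqrt(2 * np.pi)

x = np.linspace(0.05, 11.0, 400)
plt.hist(t, bins=8, density=True, alpha=0.3, label="repair times")
plt.plot(x, dgbs2(x, 0.8326, 1.6820, 2.6093), label="ML fit (Table 4)")
plt.plot(x, dgbs2(x, 0.4657, 1.0322, 1.7727), label="new-method fit (Table 4)")
plt.xlabel("repair time (hours)")
plt.ylabel("density")
plt.legend()
plt.show()
```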

6. Concluding Remarks

We presented parameter inference for a generalized three-parameter Birnbaum-Saunders (GBS-II) distribution, which is very flexible for modeling various lifetime behaviors of products. To circumvent the cumbersome likelihood expressions, a new method of analysis was proposed to make inference more applicable and convenient. Simulation studies suggest that, compared with the likelihood-based approach, the new method produces reasonable estimation results and provides a feasible inference procedure for the GBS-II distribution. We have also illustrated, with one real data set, that our method can be readily applied for convenient, practical, and reliable inference.

Funding

This research was funded by NSF CMMI-0654417 and NIMHD-2G12MD007592.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

1. A detailed presentation of the shape of the GBS-II density curve is provided here. Without loss of generality, and to keep the notation simple, we fix β = 1 and let y = t^{2m}. Then the derivative of the density function can be written as
$$f'(y) = \frac{m^{2}}{\alpha^{3}}\, y^{-(1/m + 3/2)}\,\phi\left(\frac{1}{\alpha}\left[y^{1/2} - y^{-1/2}\right]\right) g(y), \quad y > 0,$$
where g(y) = −y³ − [1 − α²(1 − 1/m)]y² + [1 − α²(1 + 1/m)]y + 1. Since the sign of f′(y) depends only on the sign of g(y), we consider the behavior of g(y). First, we have lim_{y→0} g(y) = 1 > 0 and lim_{y→∞} g(y) = −∞. Let g₁(y) = g′(y) = −3y² − 2[1 − α²(1 − 1/m)]y + 1 − α²(1 + 1/m); then lim_{y→0} g₁(y) = 1 − α²(1 + 1/m) and lim_{y→∞} g₁(y) = −∞. Further, let g₂(y) = g₁′(y) = −6y − 2[1 − α²(1 − 1/m)]; then lim_{y→0} g₂(y) = −2[1 − α²(1 − 1/m)] and lim_{y→∞} g₂(y) = −∞. Now we consider the three cases below.
(1)
0 < m ≤ 1.
In this case, g₁′(y) = g₂(y) < 0 for y > 0. (i) If 1 − α²(1 + 1/m) < 0, then lim_{y→0} g₁(y) < 0. Since g₁ is decreasing, g′(y) = g₁(y) < 0 for y > 0. Thus, there exists a unique root y₀ of g(y), so that f′(y₀) = 0, and when 0 < y < y₀, f′(y) > 0 (since g(y) > 0); when y₀ < y < ∞, f′(y) < 0 (since g(y) < 0). Therefore f(y) is unimodal. (ii) If 1 − α²(1 + 1/m) > 0, then lim_{y→0} g₁(y) > 0. Thus there is a unique root y₁₀ of the quadratic function g₁(y) with g′(y₁₀) = g₁(y₁₀) = 0, and when 0 < y < y₁₀, g′(y) = g₁(y) > 0; when y₁₀ < y < ∞, g′(y) = g₁(y) < 0. Therefore there is one root y₀ of g(y) such that f′(y₀) = 0, and when 0 < y < y₀, f′(y) > 0; when y₀ < y < ∞, f′(y) < 0. It follows that f(y) is unimodal.
(2)
m > 1 and 1 − α²(1 − 1/m) ≥ 0.
(i) If 1 − α²(1 + 1/m) < 0, then lim_{y→0} g₁(y) < 0. Also g₁′(y) = g₂(y) < 0, and so g′(y) = g₁(y) < 0 for y > 0. Hence there is a unique root y₀ of g(y) with f′(y₀) = 0, and when 0 < y < y₀, f′(y) > 0; when y₀ < y < ∞, f′(y) < 0. This indicates that f(y) is unimodal. (ii) If 1 − α²(1 + 1/m) > 0, then lim_{y→0} g₁(y) > 0. Hence there is a unique root y₁₀ of g₁(y) with g′(y₁₀) = g₁(y₁₀) = 0, and when 0 < y < y₁₀, g′(y) = g₁(y) > 0; when y₁₀ < y < ∞, g′(y) = g₁(y) < 0. Thus there exists a unique root y₀ of g(y) with f′(y₀) = 0, and when 0 < y < y₀, f′(y) > 0; when y₀ < y < ∞, f′(y) < 0. Therefore f(y) is unimodal.
(3)
m > 1 and 1 − α²(1 − 1/m) < 0.
In this case lim_{y→0} g₂(y) > 0, and so there is a unique root y₂₀ of g₂(y) such that when 0 < y < y₂₀, g₁′(y) = g₂(y) > 0, and when y₂₀ < y < ∞, g₁′(y) = g₂(y) < 0. In addition, lim_{y→0} g₁(y) = 1 − α²(1 + 1/m) < 1 − α²(1 − 1/m) < 0, and so two cases need to be discussed. (i) If g₁(y) has only one root y₁₀, then g′(y) = g₁(y) ≤ 0 for 0 < y < ∞. Hence the cubic function g(y) has a unique root y₀ with f′(y₀) = 0, and when 0 < y < y₀, f′(y) > 0; when y₀ < y < ∞, f′(y) < 0. So f(y) is unimodal. (ii) If g₁(y) has two distinct roots y₁₀ < y₁₁, then: when 0 < y < y₁₀, g′(y) = g₁(y) < 0; when y₁₀ < y < y₁₁, g′(y) = g₁(y) > 0; and when y₁₁ < y < ∞, g′(y) = g₁(y) < 0. There are then two possibilities: (a) if the cubic function g(y) has one or two distinct roots, then one root, say y₀, gives f′(y₀) = 0, with f′(y) > 0 on 0 < y < y₀ and f′(y) < 0 on y₀ < y < ∞; hence f(y) is unimodal; (b) if g(y) has three distinct real roots y₀ < y₁ < y₂, then f′(y_j) = 0, j = 0, 1, 2, and f′(y) > 0 on 0 < y < y₀, f′(y) < 0 on y₀ < y < y₁, f′(y) > 0 on y₁ < y < y₂, and f′(y) < 0 on y₂ < y < ∞. This indicates that f(y) is bimodal.
2. We provide the proof of uniqueness of the root α̌ of Equation (13). Let g(z, α) = log((αz + √(α²z² + 4))/2); then it is an odd function of z, that is, g(−z, α) = −g(z, α). We denote
$$G(\alpha) = \frac{E(W^{4})}{[E(W^{2})]^{2}} = \frac{\int_{-\infty}^{\infty}[g(z,\alpha)]^{4}\phi(z)\,dz}{\left[\int_{-\infty}^{\infty}[g(z,\alpha)]^{2}\phi(z)\,dz\right]^{2}} = \frac{\int_{0}^{\infty}[g(z,\alpha)]^{4}\phi(z)\,dz}{2\left[\int_{0}^{\infty}[g(z,\alpha)]^{2}\phi(z)\,dz\right]^{2}}.$$
In the following, we show that the function G(α) is monotone decreasing for α > 0. Since ∂g(z, α)/∂α = z/√(α²z² + 4),
$$G'(\alpha) = \frac{2\int_{0}^{\infty}[g(z,\alpha)]^{3}\,\frac{z}{\sqrt{\alpha^{2}z^{2}+4}}\,\phi(z)\,dz}{\left[\int_{0}^{\infty}[g(z,\alpha)]^{2}\phi(z)\,dz\right]^{2}} - \frac{2\int_{0}^{\infty}[g(z,\alpha)]^{4}\phi(z)\,dz\int_{0}^{\infty}g(z,\alpha)\,\frac{z}{\sqrt{\alpha^{2}z^{2}+4}}\,\phi(z)\,dz}{\left[\int_{0}^{\infty}[g(z,\alpha)]^{2}\phi(z)\,dz\right]^{3}} = \frac{\displaystyle\int_{0}^{\infty}\!\!\int_{0}^{\infty}2\left\{[g(y,\alpha)]^{2}[g(z,\alpha)]^{3}\,\frac{z}{\sqrt{\alpha^{2}z^{2}+4}} - g(y,\alpha)[g(z,\alpha)]^{4}\,\frac{y}{\sqrt{\alpha^{2}y^{2}+4}}\right\}\phi(y)\phi(z)\,dy\,dz}{\left[\int_{0}^{\infty}[g(z,\alpha)]^{2}\phi(z)\,dz\right]^{3}} = \frac{\displaystyle\int_{0}^{\infty}\!\!\int_{0}^{\infty}H(y,z,\alpha)\,\phi(y)\phi(z)\,dy\,dz}{\left[\int_{0}^{\infty}[g(z,\alpha)]^{2}\phi(z)\,dz\right]^{3}},$$
where, by switching the role of y and z, the function H ( y , z , α ) above can be written as
$$H(y,z,\alpha) = [g(y,\alpha)]^{2}[g(z,\alpha)]^{3}\,\frac{z}{\sqrt{\alpha^{2}z^{2}+4}} + [g(y,\alpha)]^{3}[g(z,\alpha)]^{2}\,\frac{y}{\sqrt{\alpha^{2}y^{2}+4}} - g(y,\alpha)[g(z,\alpha)]^{4}\,\frac{y}{\sqrt{\alpha^{2}y^{2}+4}} - [g(y,\alpha)]^{4}g(z,\alpha)\,\frac{z}{\sqrt{\alpha^{2}z^{2}+4}} = g(y,\alpha)[g(z,\alpha)]^{3}\left[\frac{z}{\sqrt{\alpha^{2}z^{2}+4}}\,g(y,\alpha) - \frac{y}{\sqrt{\alpha^{2}y^{2}+4}}\,g(z,\alpha)\right] - [g(y,\alpha)]^{3}g(z,\alpha)\left[\frac{z}{\sqrt{\alpha^{2}z^{2}+4}}\,g(y,\alpha) - \frac{y}{\sqrt{\alpha^{2}y^{2}+4}}\,g(z,\alpha)\right] = -\frac{yz}{\sqrt{(\alpha^{2}y^{2}+4)(\alpha^{2}z^{2}+4)}}\; g(y,\alpha)\,g(z,\alpha)\left\{[g(y,\alpha)]^{2} - [g(z,\alpha)]^{2}\right\}\left[\frac{\sqrt{\alpha^{2}y^{2}+4}}{y}\,g(y,\alpha) - \frac{\sqrt{\alpha^{2}z^{2}+4}}{z}\,g(z,\alpha)\right].$$
Let g₁(z, α) = g(z, α)√(α²z² + 4)/z; then ∂g₁(z, α)/∂z = [αz√(α²z² + 4) − 4g(z, α)]/(z²√(α²z² + 4)). Further, let g₂(z, α) = αz√(α²z² + 4) − 4g(z, α); then ∂g₂(z, α)/∂z = 2α³z²/√(α²z² + 4) > 0. Hence g₂(z, α) is increasing in z > 0. Also g₂(0, α) = −4g(0, α) = 0, thus g₂(z, α) > 0, and so ∂g₁(z, α)/∂z = g₂(z, α)/(z²√(α²z² + 4)) > 0; that is, g₁(z, α) is increasing in z for z > 0. In addition, g(z, α) is also an increasing function of z with g(z, α) > 0 for z > 0, since ∂g(z, α)/∂z = α/√(α²z² + 4) > 0 and g(0, α) = 0. Thus the differences g(y, α) − g(z, α) and g₁(y, α) − g₁(z, α) have the same sign for any y, z > 0, which leads to [g(y, α) − g(z, α)][g₁(y, α) − g₁(z, α)] > 0. Hence the function
$$H(y,z,\alpha) = -\frac{yz}{\sqrt{(\alpha^{2}y^{2}+4)(\alpha^{2}z^{2}+4)}}\; g(y,\alpha)\,g(z,\alpha)\,\big[g(y,\alpha)+g(z,\alpha)\big]\,\big[g(y,\alpha)-g(z,\alpha)\big]\,\big[g_{1}(y,\alpha)-g_{1}(z,\alpha)\big] < 0,$$
resulting in
$$G'(\alpha) = \frac{\int_{0}^{\infty}\int_{0}^{\infty}H(y,z,\alpha)\,\phi(y)\phi(z)\,dy\,dz}{\left[\int_{0}^{\infty}[g(z,\alpha)]^{2}\phi(z)\,dz\right]^{3}} < 0.$$
Additionally, by Taylor expansion, we have
$$\lim_{\alpha\to 0} G(\alpha) = \lim_{\alpha\to 0}\frac{\int_{-\infty}^{\infty} z^{4}\left(1 - \frac{\alpha^{2}z^{2}}{24} + \cdots\right)^{4}\phi(z)\,dz}{\left[\int_{-\infty}^{\infty} z^{2}\left(1 - \frac{\alpha^{2}z^{2}}{24} + \cdots\right)^{2}\phi(z)\,dz\right]^{2}} = \frac{\int_{-\infty}^{\infty} z^{4}\phi(z)\,dz}{\left[\int_{-\infty}^{\infty} z^{2}\phi(z)\,dz\right]^{2}} = 3,$$
along with
$$\lim_{\alpha\to\infty} G(\alpha) = \lim_{\alpha\to\infty}\frac{\left[\log\frac{\alpha}{2}\right]^{4}\displaystyle\int_{0}^{\infty}\left[1 + \frac{\log\left(z+\sqrt{z^{2}+4/\alpha^{2}}\right)}{\log(\alpha/2)}\right]^{4}\phi(z)\,dz}{2\left[\log\frac{\alpha}{2}\right]^{4}\left[\displaystyle\int_{0}^{\infty}\left[1 + \frac{\log\left(z+\sqrt{z^{2}+4/\alpha^{2}}\right)}{\log(\alpha/2)}\right]^{2}\phi(z)\,dz\right]^{2}} = \frac{\int_{0}^{\infty}\phi(z)\,dz}{2\left[\int_{0}^{\infty}\phi(z)\,dz\right]^{2}} = 1.$$
Figure A1 shows the plot of the function G ( α ) .
Figure A1. Kurtosis Function G ( α ) .
Finally, with m_k = Σᵢ(yᵢ − ȳ)^k/n, k = 2, 4, it is well known that for any sample y₁, y₂, …, y_n the sample kurtosis satisfies κ = m₄/m₂² > 1. Hence the equation E(W⁴)/[E(W²)]² = κ has a unique root for α.

References

  1. Stacy, E.W. A Generalization of the Gamma Distribution. Ann. Math. Stat. 1962, 33, 1187–1192. [Google Scholar] [CrossRef]
  2. Mudholkar, G.S.; Srivastava, D.K. Exponentiated Weibull family for analyzing bathtub failure-rate data. IEEE Trans. Reliab. 1993, 42, 299–302. [Google Scholar] [CrossRef]
  3. Marshall, A.W.; Olkin, I. A new method for adding a parameter to a family of distributions with application to the exponential and Weibull families. Biometrika 1997, 84, 641–652. [Google Scholar] [CrossRef]
  4. Rubio, F.J.; Hong, Y. Survival and lifetime data analysis with a flexible class of distributions. J. Appl. Stat. 2016, 43, 1794–1813. [Google Scholar] [CrossRef]
  5. Birnbaum, Z.W.; Saunders, S.C. A new family of life distributions. J. Appl. Probab. 1969, 6, 319–327. [Google Scholar] [CrossRef]
  6. Ng, H.K.T.; Kundu, D.; Balakrishnan, N. Modified moment estimation for the two-parameter Birnbaum–Saunders distribution. Comput. Stat. Data Anal. 2003, 43, 283–298. [Google Scholar] [CrossRef]
  7. Lemonte, A.J.; Cribari-Neto, F.; Vasconcellos, K.L.P. Improved statistical inference for the two-parameter Birnbaum-Saunders distribution. Comput. Stat. Data Anal. 2007, 51, 4656–4681. [Google Scholar] [CrossRef]
  8. Balakrishnan, N.; Zhu, X. An improved method of estimation for the parameters of the Birnbaum-Saunders distribution. J. Stat. Comput. Simul. 2014, 84, 2285–2294. [Google Scholar] [CrossRef]
  9. Leiva, V. The Birnbaum-Saunders Distribution; Academic Press: New York, NY, USA, 2016. [Google Scholar]
  10. Díaz-García, J.A.; Leiva, V. A new family of life distributions based on elliptically contoured distributions. J. Stat. Plan. Inference 2005, 128, 445–457, Erratum in 2007, 137, 1512–1513. [Google Scholar] [CrossRef]
  11. Leiva, V.; Riquelme, M.; Balakrishnan, N.; Sanhueza, A. Lifetime analysis based on the generalized Birnbaum-Saunders distribution. Comput. Stat. Data Anal. 2008, 52, 2079–2097. [Google Scholar] [CrossRef]
  12. Leiva, V.; Vilca-Labra, F.; Balakrishnan, N.; Sanhueza, A. A skewed sinh-normal distribution and its properties and application to air pollution. Commun. Stat. Theory Methods 2010, 39, 426–443. [Google Scholar] [CrossRef]
  13. Sanhueza, A.; Leiva, V.; Balakrishnan, N. The generalized Birnbaum-Saunders and its theory, methodology, and application. Commun. Stat. Theory Methods 2008, 37, 645–670. [Google Scholar] [CrossRef]
  14. Kundu, D.; Balakrishnan, N.; Jamalizadeh, A. Generalized multivariate Birnbaum-Saunders distributions and related inferential issues. J. Multivar. Anal. 2013, 116, 230–244. [Google Scholar] [CrossRef]
  15. Saulo, H.; Balakrishnan, N.; Zhu, X.; Gonzales, J.F.B.; Leao, J. Estimation in generalized bivariate Birnbaum-Saunders models. Metrika 2017, 80, 427–453. [Google Scholar] [CrossRef]
  16. Díaz-García, J.A.; Domínguez-Molina, J.R. Some generalizations of Birnbaum-Saunders and sinh-normal distributions. Int. Math. Forum 2006, 1, 1709–1727. [Google Scholar] [CrossRef]
  17. Fierro, R.; Leiva, V.; Ruggeri, F.; Sanhueza, A. On a Birnbaum–Saunders distribution arising from a non-homogeneous Poisson process. Stat. Probab. Lett. 2013, 83, 1233–1239. [Google Scholar] [CrossRef]
  18. Owen, W.J. A new three-parameter extension to the Birnbaum-Saunders distribution. IEEE Trans. Reliab. 2006, 55, 475–479. [Google Scholar] [CrossRef]
  19. Owen, W.J.; Ng, H.K.T. Revisit of relationships and models for the Birnbaum-Saunders and inverse-Gaussian distribution. J. Stat. Distrib. Appl. 2015, 2. [Google Scholar] [CrossRef]
  20. Desmond, A.F. On the relationship between two fatigue-life models. IEEE Trans. Reliab. 1986, 35, 167–169. [Google Scholar] [CrossRef]
  21. Rieck, J.R. A moment-generating function with application to the Birnbaum-Saunders distribution. Commun. Stat. Theory Methods 1999, 28, 2213–2222. [Google Scholar] [CrossRef]
  22. Johnson, N.L.; Kotz, S.; Balakrishnan, N. Continuous Univariate Distributions; John Wiley & Sons: New York, NY, USA, 1995. [Google Scholar]
  23. Greene, W.H. Econometric Analysis, 5th ed.; Prentice Hall: New York, NY, USA, 2003. [Google Scholar]
  24. Bebu, I.; Mathew, T. Confidence intervals for limited moments and truncated moments in normal and lognormal models. Stat. Probab. Lett. 2009, 79, 375–380. [Google Scholar] [CrossRef]
  25. Terrell, G.R. The gradient statistic. Comput. Sci. Stat. 2002, 34, 206–215. [Google Scholar]
  26. Sen, P.; Singer, J. Large Sample Methods in Statistics: An Introduction with Applications; Chapman & Hall: New York, NY, USA, 1993. [Google Scholar]
  27. Balakrishnan, N.; Leiva, V.; Sanhueza, A.; Cabrera, E. Mixture inverse Gaussian distribution and its transformations, moments and applications. Statistics 2009, 43, 91–104. [Google Scholar] [CrossRef]
Figure 1. Type-II generalized Birnbaum-Saunders (GBS-II) ( m , α , β ) density curves for various parameter values.
Figure 2. GBS-II ( m , α , β ) failure rate curves for various parameter values.
Figure 3. Comparisons of β estimates for various sample sizes: geometric mean β ^ G (solid line), sample median β ^ M (dashed line).
Figure 4. Repair Data: Histogram and fitted density curves by maximum likelihood estimate (MLE) and new estimation method.
Table 1. Type-II generalized Birnbaum-Saunders (GBS-II) Estimation Results for Simulated Data.
n    Par.   ML Method                           New Method
            Bias     MSE      AL       CP (%)   Bias     MSE      AL       CP (%)
m = 0.25, α = 0.5, β = 1.0
20   m      0.1237   0.2901   1.2266   92.39    0.1190   0.1882   0.7592   92.88
     α      0.1405   0.1582   0.4803   92.15    0.1354   0.1445   0.4618   92.75
     β      0.0318   0.0729   0.2894   93.79    0.0227   0.0638   0.2336   94.26
30   m      0.1103   0.2370   1.1059   93.14    0.1112   0.1684   0.5340   94.01
     α      0.1302   0.1366   0.4579   93.25    0.1161   0.1101   0.4089   94.57
     β      0.0251   0.0680   0.2169   94.24    0.0216   0.0512   0.1213   94.85
50   m      0.1050   0.1335   0.9773   94.43    0.0933   0.1120   0.2252   94.90
     α      0.1128   0.1160   0.4039   94.70    0.1116   0.0974   0.3330   95.10
     β      0.0120   0.0246   0.1358   95.12    0.0036   0.0116   0.1054   95.19
m = 0.5, α = 0.5, β = 1.0
20   m      0.1103   0.1310   1.0148   93.29    0.1081   0.1187   0.7357   93.71
     α      0.1232   0.1077   0.3602   93.33    0.1128   0.1029   0.3552   93.44
     β      0.0320   0.0948   0.3190   93.79    0.0228   0.0727   0.2311   94.55
30   m      0.1058   0.1240   0.9389   94.55    0.1023   0.1130   0.5694   94.80
     α      0.1190   0.1068   0.3466   94.43    0.1053   0.0910   0.3191   94.75
     β      0.0160   0.0679   0.2432   94.82    0.0103   0.0620   0.1650   95.10
50   m      0.0812   0.1161   0.5510   94.44    0.0211   0.0832   0.3794   95.20
     α      0.0505   0.0743   0.2130   94.50    0.0334   0.0586   0.1842   95.19
     β      0.0083   0.0350   0.1961   95.16    0.0035   0.0212   0.0823   95.23
m = 1.0, α = 1.0, β = 1.0
20   m      0.1801   0.1352   0.9770   91.90    0.1631   0.1227   0.7119   92.97
     α      0.2250   0.2437   0.6224   92.17    0.2041   0.1498   0.4870   92.80
     β      0.0374   0.1072   0.4733   93.48    0.0280   0.0840   0.2202   93.96
30   m      0.1625   0.1232   0.9025   93.54    0.1480   0.1112   0.6117   94.25
     α      0.1858   0.2043   0.5804   93.79    0.1503   0.1255   0.4040   94.31
     β      0.0229   0.0733   0.3484   94.25    0.0117   0.0629   0.1130   94.81
50   m      0.1151   0.1110   0.5139   93.42    0.0980   0.1051   0.4287   94.53
     α      0.1353   0.1670   0.4080   94.29    0.1143   0.1058   0.3090   94.74
     β      0.0137   0.0420   0.2602   95.02    0.0102   0.0319   0.0852   95.18
m = 1.5, α = 2.0, β = 1.0
20   m      0.2778   0.1342   0.9744   92.32    0.2180   0.1171   0.7069   93.35
     α      0.3490   0.2783   0.8688   93.17    0.2896   0.2225   0.7213   93.07
     β      0.0533   0.1440   0.5220   93.87    0.0432   0.0941   0.2860   94.10
30   m      0.2591   0.1301   0.8974   93.28    0.1744   0.1104   0.6785   94.03
     α      0.3016   0.2353   0.7740   93.77    0.1948   0.2056   0.6442   94.60
     β      0.0430   0.1075   0.4249   94.29    0.0337   0.0735   0.1670   94.77
50   m      0.1607   0.1095   0.7050   94.25    0.1157   0.0933   0.5690   95.01
     α      0.2060   0.1845   0.5217   94.66    0.1312   0.1560   0.4117   95.08
     β      0.0207   0.0681   0.2683   94.90    0.0111   0.0420   0.0790   95.24
m = 2.0, α = 2.5, β = 1.0
20   m      0.3756   0.2143   1.1208   90.67    0.2962   0.1745   1.1081   92.41
     α      0.4231   0.3277   1.1310   91.83    0.3480   0.2749   1.1109   92.20
     β      0.0811   0.1730   0.6542   92.71    0.0503   0.1121   0.3259   93.35
30   m      0.3184   0.1806   1.0915   91.88    0.2361   0.1487   1.0672   93.36
     α      0.3523   0.2751   1.1064   92.13    0.2782   0.2260   1.0798   93.22
     β      0.0729   0.1493   0.4849   93.18    0.0425   0.0986   0.2264   94.27
50   m      0.2597   0.1564   0.9188   93.84    0.1986   0.1204   0.8974   94.58
     α      0.2873   0.2109   0.9167   93.47    0.2319   0.1708   0.8887   94.13
     β      0.0519   0.0933   0.3225   94.29    0.0296   0.0507   0.1137   94.73
Table 2. Null rejection rates (%) for various parameter settings.
H_{m0}: m = m₀ with α = 0.5, β = 1.0
          m₀ = 0.5                   m₀ = 1.0                   m₀ = 1.5
          S_LR(m)       S_G(m)       S_LR(m)       S_G(m)       S_LR(m)       S_G(m)
 n        10%    5%     10%    5%    10%    5%     10%    5%    10%    5%     10%    5%
 10       12.78  6.74   9.38   5.75  12.69  6.70   11.09  5.68  12.73  6.80   11.50  5.89
 20       12.40  6.52   9.41   5.49  12.36  6.54   10.73  5.45  12.61  6.70   11.34  5.48
 30       11.82  6.30   9.64   5.42  11.56  6.28   10.54  5.39  12.38  6.54   11.19  5.39
 40       11.46  6.25   9.76   5.37  11.20  6.15   10.37  5.26  12.30  6.40   10.78  5.28
 50       11.12  6.05   9.88   5.23  10.98  5.90   10.23  5.14  11.86  6.22   10.55  5.19

H_{α0}: α = α₀ with m = 1.0, β = 1.0
          α₀ = 0.5                   α₀ = 1.0                   α₀ = 1.5
          S_LR(α)       S_G(α)       S_LR(α)       S_G(α)       S_LR(α)       S_G(α)
 n        10%    5%     10%    5%    10%    5%     10%    5%    10%    5%     10%    5%
 10       11.38  6.10   10.50  5.81  11.41  6.11   11.40  4.15  12.60  6.11   9.02   4.09
 20       11.15  6.04   10.32  5.62  11.27  6.03   11.22  4.38  12.36  6.03   9.10   4.24
 30       10.88  5.60   10.17  5.37  11.15  5.57   11.04  4.60  12.13  5.44   9.40   4.40
 40       10.74  5.49   10.52  5.29  10.84  5.40   10.80  4.80  11.82  5.32   9.55   4.62
 50       10.39  5.26   10.25  5.19  10.53  5.33   10.43  4.86  11.62  5.23   9.74   4.80

H_{β0}: β = β₀ with m = 1.5, α = 1.5
          β₀ = 0.5                   β₀ = 1.0                   β₀ = 2.0
          S_LR(β)       S_G(β)       S_LR(β)       S_G(β)       S_LR(β)       S_G(β)
 n        10%    5%     10%    5%    10%    5%     10%    5%    10%    5%     10%    5%
 10       11.45  6.87   11.42  5.74  12.48  6.63   11.51  4.59  12.56  6.38   12.37  4.55
 20       11.32  6.68   11.30  5.56  12.34  6.55   11.30  4.75  12.40  6.25   12.19  4.66
 30       10.89  6.50   10.79  5.39  12.25  6.38   11.19  4.83  12.20  6.21   11.68  4.75
 40       10.71  6.38   10.67  5.26  11.94  6.31   10.88  4.90  11.85  6.05   11.42  4.84
 50       10.37  6.15   10.25  5.13  11.30  6.10   10.48  4.94  11.51  5.89   10.91  4.90
Table 3. Power (%) under two parameter settings at significance level γ = 5 % .
m = 1.0, α = 0.5, β = 1.0
       H_{m0}: m = 1.0 vs. m = m₁     H_{α0}: α = 0.5 vs. α = α₁     H_{β0}: β = 1.0 vs. β = β₁
 n     m₁     S_LR(m)   S_G(m)        α₁     S_LR(α)   S_G(α)        β₁     S_LR(β)   S_G(β)
 20    1.00   5.43      5.28          0.50   5.41      5.30          1.00   5.34      5.22
       1.05   20.88     22.42         0.55   24.13     26.48         1.05   25.33     26.26
       1.10   31.30     32.58         0.60   34.37     38.72         1.10   40.54     41.64
       1.15   46.91     48.41         0.65   50.05     53.27         1.15   56.36     58.26
 50    1.00   5.21      5.18          0.50   5.25      5.23          1.00   5.28      5.15
       1.05   28.87     29.78         0.55   28.81     30.13         1.05   33.07     36.10
       1.10   45.45     48.57         0.60   49.30     50.78         1.10   54.44     55.85
       1.15   60.82     63.45         0.65   64.01     65.39         1.15   67.13     69.07
 100   1.00   5.08      5.05          0.50   5.11      5.04          1.00   5.06      5.03
       1.05   32.12     38.36         0.55   40.04     42.77         1.05   43.07     46.08
       1.10   71.28     77.23         0.60   76.24     79.62         1.10   79.77     83.45
       1.15   86.78     89.14         0.65   89.00     91.31         1.15   90.51     93.05

m = 1.5, α = 1.0, β = 2.0
       H_{m0}: m = 1.5 vs. m = m₁     H_{α0}: α = 1.0 vs. α = α₁     H_{β0}: β = 2.0 vs. β = β₁
 n     m₁     S_LR(m)   S_G(m)        α₁     S_LR(α)   S_G(α)        β₁     S_LR(β)   S_G(β)
 20    1.50   5.48      5.31          1.00   5.50      5.33          2.00   5.49      5.30
       1.55   22.80     25.67         1.05   25.36     27.08         2.05   25.05     27.37
       1.60   34.62     35.10         1.10   37.11     39.12         2.10   38.25     40.18
       1.65   47.31     49.00         1.15   48.50     50.85         2.15   50.08     51.72
 50    1.50   5.37      5.24          1.00   5.35      5.29          2.00   5.34      5.22
       1.55   29.20     30.50         1.05   28.15     30.74         2.05   30.18     32.85
       1.60   46.20     48.09         1.10   46.38     48.25         2.10   49.28     52.10
       1.65   60.72     63.74         1.15   61.76     64.16         2.15   65.50     68.07
 100   1.50   5.12      5.07          1.00   5.14      5.09          2.00   5.11      5.04
       1.55   33.03     36.15         1.05   38.05     38.20         2.05   40.17     42.20
       1.60   69.85     72.10         1.10   74.18     78.09         2.10   78.15     82.12
       1.65   85.86     88.85         1.15   87.15     91.24         2.15   88.27     91.11
Table 4. Repair Time: Estimation Results.
Method    m (SE)              α (SE)              β (SE)              χ², BIC
MLE       0.8326 (0.5792)     1.6820 (1.2643)     2.6093 (0.1828)     97.27, 103.40
  95% CI  (0.2170, 1.8444)    (0.3743, 8.5531)    (1.3854, 3.2163)
New       0.4657 (0.2478)     1.0322 (0.5366)     1.7727 (0.1439)     19.82, 93.71
  95% CI  (0.3229, 0.6436)    (0.7634, 1.6216)    (1.0545, 2.9831)

