Article

Goodness of Fit Tests for the Log-Logistic Distribution Based on Cumulative Entropy under Progressive Type II Censoring

Department of Mathematics, Beijing Jiaotong University, Beijing 100044, China
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(4), 361; https://doi.org/10.3390/math7040361
Submission received: 24 March 2019 / Revised: 11 April 2019 / Accepted: 16 April 2019 / Published: 20 April 2019

Abstract

In this paper, we propose two new goodness-of-fit tests for the log-logistic distribution under progressive Type II censoring, based on cumulative residual Kullback-Leibler (CRKL) information and cumulative Kullback-Leibler (CKL) information. Maximum likelihood estimation and the EM algorithm are used to estimate the unknown shape parameter. A Monte Carlo simulation is conducted to study the power of the tests against alternative distributions with monotonically increasing and decreasing hazard functions. Finally, we present illustrative examples to show the applicability of the proposed methods.

1. Introduction

The log-logistic distribution is a classical lifetime distribution in reliability and has been studied extensively. The works [1,2] mainly investigated the properties of the distribution and proposed a method for obtaining exact confidence intervals for its shape parameter based on maximum likelihood estimation. In [3,4], the log-logistic distribution was used for lifetime analysis, especially for the reliability characteristics of the models under study, while [5,6,7] examined applications of this distribution in the fields of economics, networks, and the environment.
The log-logistic distribution has two main advantages. First, its hazard function is not strictly monotonic, which gives some flexibility in representing product failure rates. Second, its cumulative distribution function has a closed form, which is convenient for lifetime analysis of censored data.
A random variable X is said to have the log-logistic distribution if its cumulative distribution function (cdf) is expressed as:
$$F_X(x) = 1 - (1 + x^\beta)^{-1}, \quad x \geq 0, \ \beta > 0. \tag{1}$$
The probability density function (pdf) is as follows:
$$f_X(x) = \beta x^{\beta-1} (1 + x^\beta)^{-2}, \quad x \geq 0, \ \beta > 0. \tag{2}$$
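As a concrete illustration, here is a minimal R sketch of this parameterization (our own, not part of the paper): density, cdf, quantile, and random generation via the inverse-cdf method, using the fact that the inverse of (1) is F^{-1}(u) = (u/(1-u))^{1/β}. The function names dloglog, ploglog, qloglog, and rloglog are our own.

    # Log-logistic distribution with unit scale, as in (1) and (2).
    dloglog <- function(x, beta) beta * x^(beta - 1) / (1 + x^beta)^2  # pdf (2)
    ploglog <- function(x, beta) 1 - 1 / (1 + x^beta)                  # cdf (1)
    qloglog <- function(u, beta) (u / (1 - u))^(1 / beta)              # inverse cdf
    rloglog <- function(n, beta) qloglog(runif(n), beta)               # inversion sampling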
In many lifetime analyses, observation is sometimes cut short in order to save time and resources, so the data obtained are incomplete. Such data are called censored data. Type I censoring, Type II censoring, and progressive censoring are the most common censoring schemes.
Progressive censoring, in contrast to censoring only at the end of the experiment, removes units repeatedly during the observation period. In progressive Type I censoring, the observation times are fixed in advance, and surviving units are withdrawn in predetermined proportions at those times. In progressive Type II censoring, a fixed number of units is withdrawn each time a failure occurs. This article mainly discusses progressive Type II censoring.
Progressive Type II censoring proceeds as follows: n individuals are put on test; when the first failure occurs, R_1 of the surviving n − 1 individuals are randomly withdrawn from observation. Observation continues until the next failure, after which R_2 of the surviving n − 2 − R_1 individuals are randomly withdrawn, and the process repeats until the m-th failure is observed, at which point all R_m remaining individuals are withdrawn. This yields the censoring scheme (R_1, R_2, ..., R_m). For example, with n = 13 and m = 5, the scheme (0, 3, 0, 0, 5) observes five failures, withdraws three units after the second failure, and removes the last five after the fifth failure.
There are many studies on statistical inference for censored data. In [8], different progressive censoring methods for samples in life testing were systematically studied. Confidence intervals for the unknown parameters under Type II censoring were provided in [9], which derived exact confidence intervals for key quantities such as quantiles, location, reliability, and scale; the method was applied to one- and two-parameter exponential models to verify its feasibility and practicality. An algorithm for generating progressive Type II censored samples was proposed in [10]; we use it in the Monte Carlo simulations below.
In this article, we develop goodness-of-fit tests for the log-logistic distribution based on cumulative entropy under progressive Type II censoring. The rest of the article is organized as follows: In Section 2, we introduce Kullback-Leibler (KL) information and its related properties. In Section 3, maximum likelihood estimation and the EM algorithm are used for inference on the unknown parameter. We construct two test statistics based on CRKL and CKL in Section 4. A Monte Carlo simulation studying the power against alternative distributions is presented in Section 5. Finally, we provide an illustrative example in Section 6 to demonstrate the applicability of the statistics.

2. Entropy and Kullback-Leibler Information

As a quantification of information, the average amount of information that remains after redundancy is excluded is called information entropy. Shannon first proposed the concept in 1948 to clarify the relationship between probability and information redundancy. Shannon entropy is defined for discrete random variables; extending the concept to continuous random variables yields differential entropy, defined for a non-negative random variable X as:
$$H(X) = \int_0^\infty f(x) \left[ -\ln f(x) \right] dx,$$
where ln(·) denotes the natural logarithm and f(x) the probability density function of the random variable X.
In recent years, information-theoretic measures have been used in many fields to assess distributional fit in statistical inference. For example, Vasicek [11] first proposed a goodness-of-fit test for the normal distribution using maximum entropy, a contribution that greatly advanced entropy-based goodness-of-fit testing. Many scholars have since studied this topic; see [12,13,14].
A goodness-of-fit test using Kullback-Leibler (KL) divergence was proposed in [15,16], and its definition is as follows:
$$\mathrm{KL}(F:G) = \int_0^\infty f(x) \ln \frac{f(x)}{g(x)} \, dx, \tag{4}$$
where f(x) and g(x) are the density functions of the distributions F(x) and G(x).
Lemma 1.
It follows from the inequality ln x ≤ x − 1 that KL(F:G) is non-negative.
Lemma 2.
F(x) and G(x) are identically distributed if and only if KL(F:G) = 0.
Proof. 
Since KL information is non-negative, Formula (4) shows that the smaller the distance between the two distributions, the smaller the corresponding KL divergence. Suppose KL(F:G) = 0. Writing

$$-\mathrm{KL}(F:G) = \int_0^\infty f(x) \ln \frac{g(x)}{f(x)} \, dx \leq \int_0^\infty f(x) \left( \frac{g(x)}{f(x)} - 1 \right) dx = 0,$$

the equality sign must hold, i.e., ln(g(x)/f(x)) = g(x)/f(x) − 1. By the inequality ln x ≤ x − 1, which is an equality only at x = 1, we get f(x) = g(x). Because f(x) and g(x) are the density functions of F(x) and G(x), the distributions F(x) and G(x) are identical. □
According to Lemmas 1 and 2, when G(x) is taken to be the log-logistic distribution, the empirical distribution function F_n(x) can be obtained from the collected data, and the value of KL(F_n : G) can be computed as a goodness-of-fit measure. If the value of the KL information is sufficiently close to zero, we may conclude that F(x) and G(x) are identically distributed.
A limitation of differential entropy is that it applies only to random variables that have a density. The authors of [17] introduced cumulative residual entropy (CRE) to extend the scope to random variables without a specific density, replacing the density with the survival function F̄(x) = 1 − F(x) used in reliability analysis. For a non-negative random variable, CRE is defined as:
$$\mathrm{CRE}(X) = \int_0^\infty \bar{F}(x) \left[ -\ln \bar{F}(x) \right] dx.$$
Baratpour [14] proposed a new CRE-based measure of the distance between two distributions, named the cumulative residual Kullback-Leibler (CRKL) divergence, and constructed exponentiality tests from it. CRKL can be shown to be non-negative by an argument similar to that for KL information:
$$\mathrm{CRKL}(F:G) = \int_0^\infty \bar{F}(x) \ln \frac{\bar{F}(x)}{\bar{G}(x)} \, dx - \left[ E(X) - E(Y) \right],$$
where F̄(x) and Ḡ(x) are the survival functions of the random variables X and Y, and E(X) and E(Y) are their expectations.
The authors of [18] proposed cumulative entropy (CE), a counterpart of classical differential entropy based on the distribution function. CE equals the expectation of the mean inactivity time of the random lifetime X, which makes it particularly well suited to aging characteristics based on past and inactivity times in reliability analysis. They defined CE as:
$$\mathrm{CE}(X) = \int_0^\infty F(x) \left[ -\ln F(x) \right] dx.$$
Another cumulative KL information (CKL) related to the cumulative distribution function was proposed in [19] and was expressed as follows:
$$\mathrm{CKL}(F:G) = \int_0^\infty F(x) \ln \frac{F(x)}{G(x)} \, dx - \left[ E(Y) - E(X) \right].$$
Using ln x ≤ x − 1 for x > 0, one can show that CKL(F:G) ≥ 0, with equality if and only if F(x) = G(x).
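The non-negativity of these divergences can also be checked numerically. The following hedged R sketch (our own illustration, not from the paper) evaluates CKL between two log-logistic distributions with R's integrate(), assuming shape parameters greater than 1 so that the means E(X) = ∫(1 − F(x))dx are finite; F_ll, mean_ll, and ckl are our own names.

    # Numerical check that CKL(F:G) >= 0 for two log-logistic cdfs.
    F_ll    <- function(x, beta) 1 - 1 / (1 + x^beta)
    mean_ll <- function(beta) integrate(function(x) 1 / (1 + x^beta), 0, Inf)$value
    ckl <- function(bF, bG) {
      term <- integrate(function(x) F_ll(x, bF) * log(F_ll(x, bF) / F_ll(x, bG)),
                        0, Inf)$value
      term - (mean_ll(bG) - mean_ll(bF))   # subtract E(Y) - E(X)
    }
    ckl(2, 3)   # strictly positive; ckl(2, 2) is zero up to integration error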

3. Parameter Estimations

3.1. Maximum Likelihood Estimation

We discuss maximum likelihood estimation for the log-logistic distribution under progressive Type II censoring in this section. Suppose X_(1), X_(2), ..., X_(m) are the failure times from m observations under a given censoring scheme (R_1, R_2, ..., R_m). In the following derivation, X_(i) is written as x_i. Then we have:
$$L(\beta) = c \prod_{i=1}^m \left[ 1 - F(x_i) \right]^{R_i} f(x_i) = c \prod_{i=1}^m \beta x_i^{\beta-1} (1 + x_i^\beta)^{-(2+R_i)}, \tag{10}$$
$$c = n (n - R_1 - 1)(n - R_1 - R_2 - 2) \cdots (n - R_1 - \cdots - R_{m-1} - m + 1),$$
where β is the shape parameter, c is a fixed coefficient, and the corresponding log-likelihood function is as follows:
$$\ln L(\beta) = \ln c + m \ln \beta + \sum_{i=1}^m \left[ (\beta - 1) \ln x_i - (2 + R_i) \ln(1 + x_i^\beta) \right].$$
The likelihood equation for the parameter β is:
$$\frac{d}{d\beta} \ln L(\beta) = \frac{m}{\beta} + \sum_{i=1}^m \frac{\ln x_i \left[ 1 - (1 + R_i) x_i^\beta \right]}{1 + x_i^\beta} = 0. \tag{11}$$
In principle, the maximum likelihood estimate β̂ of the parameter β can be obtained from Formula (11), but the likelihood equation is complicated and nonlinear, so no explicit solution for β is available; a numerical approximation can be computed, but the uniqueness of the solution cannot be established this way.
Given these difficulties with the direct maximum likelihood approach, in practice the EM algorithm, the bisection method, or the Newton-Raphson method can be chosen, according to convenience, to compute the estimate. Since the bisection and Newton-Raphson methods are well established, this paper elaborates only on the EM algorithm to complete the maximum likelihood estimation.
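For instance, the bisection route can be sketched in a few lines of R with the bracketing root finder uniroot() applied to the left-hand side of (11). This is our own illustration: it assumes x holds the m observed failure times, r the scheme (R_1, ..., R_m), and that the score changes sign on the chosen interval.

    # Solve the likelihood equation (11) for beta by bracketing.
    score <- function(beta, x, r)
      length(x) / beta + sum(log(x) * (1 - (1 + r) * x^beta) / (1 + x^beta))
    beta_mle <- function(x, r, lower = 0.01, upper = 100)
      uniroot(score, c(lower, upper), x = x, r = r)$root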

3.2. Expectation Maximization Algorithm

Dempster et al. [20] first formalized the expectation-maximization (EM) algorithm. Originating in ideas such as [21], it gradually matured and was modified, for example in the ECM algorithm [22] and the Monte Carlo EM algorithm.
The EM algorithm is an iterative algorithm for computing maximum likelihood estimates in incomplete-data problems. It transforms the optimization of a complicated likelihood function into a sequence of simpler optimization problems.
In general, the order statistics of the observed data are denoted by X = (x_1, x_2, ..., x_m). Z_i = (z_{i1}, z_{i2}, ..., z_{iR_i}) denotes the missing data: when x_i is observed, R_i individuals are randomly removed from the remaining n − i − Σ_{j=1}^{i−1} R_j individuals. The component lifetimes in Z_i are greater than x_i and are unordered.
X and Z are combined to form the complete data W = (X, Z) = (w_1, w_2, ..., w_n). The key to the EM algorithm is that the joint density of the completed data is easy to obtain: given the observed data X, the conditional density of the missing variable Z follows directly. Here β is the unknown parameter to be estimated, p(x, z | β) is the density of the complete data, and p(z | β, x) is the conditional distribution of the missing variable Z given the observed data.
The EM algorithm divides the problem into the following steps:
Step 1
Add the missing variable Z. Combining with the distribution function (1), it is easy to obtain:

$$p(z \mid \beta, x) = \prod_{j=1}^m \prod_{k=1}^{R_j} \frac{f(z_{jk} \mid \beta)}{1 - F(x_j; \beta)} = \prod_{j=1}^m \prod_{k=1}^{R_j} \frac{\beta z_{jk}^{\beta-1} (1 + x_j^\beta)}{(1 + z_{jk}^\beta)^2},$$

$$p(x, z \mid \beta) = p(z \mid \beta, x) \, p(x \mid \beta) = c \prod_{j=1}^m \Big[ f(x_j \mid \beta) \prod_{k=1}^{R_j} f(z_{jk} \mid \beta) \Big] = c \prod_{j=1}^m \Big[ \frac{\beta x_j^{\beta-1}}{(1 + x_j^\beta)^2} \prod_{k=1}^{R_j} \frac{\beta z_{jk}^{\beta-1}}{(1 + z_{jk}^\beta)^2} \Big],$$

where c is the same constant as in (10).
The log-likelihood function of full data l ( β | w ) = l ( β | x , z ) is as follows:
$$l(\beta \mid x, z) = \sum_{j=1}^m \Big[ \ln f(x_j \mid \beta) + \sum_{k=1}^{R_j} \ln f(z_{jk} \mid \beta) \Big] = \sum_{j=1}^m \Big[ \ln \frac{\beta x_j^{\beta-1}}{(1 + x_j^\beta)^2} + \sum_{k=1}^{R_j} \ln \frac{\beta z_{jk}^{\beta-1}}{(1 + z_{jk}^\beta)^2} \Big].$$
Step 2
E step (expectation step): Take the complete-data log-likelihood and compute its conditional expectation Q(β | β^{(i)}, w) given the observed data and the current iterate β^{(i)}:
$$Q(\beta \mid \beta^{(i)}, w) = E\big[ l(\beta \mid w) \mid X = x, \beta = \beta^{(i)} \big] = \int l(\beta \mid x, z) \, p(z \mid \beta^{(i)}, x) \, dz = \sum_{j=1}^m \Big[ (\beta - 1) \ln x_j + \ln \beta - 2 \ln(1 + x_j^\beta) + \sum_{k=1}^{R_j} \big( (\beta - 1) E(\ln z_{jk}) + \ln \beta - 2 E(\ln(1 + z_{jk}^\beta)) \big) \Big].$$
Step 3
M step (maximization step): Maximize the expected value and get the next iteration value  β ( i + 1 ) .
$$\frac{\partial Q}{\partial \beta} = \sum_{j=1}^m \Big[ \frac{1}{\beta} + \ln x_j - \frac{2 x_j^\beta \ln x_j}{1 + x_j^\beta} + \sum_{k=1}^{R_j} \Big( \frac{1}{\beta} + E(\ln z_{jk}) \Big) \Big] = 0,$$
which yields
$$\beta = \frac{\sum_{j=1}^m (1 + R_j)}{\sum_{j=1}^m \Big[ \dfrac{2 x_j^\beta \ln x_j}{1 + x_j^\beta} - \ln x_j - \sum_{k=1}^{R_j} E(\ln z_{jk}) \Big]},$$
where the conditional expectations are evaluated at β^{(i)}.
Let:
$$A_j = \frac{x_j^\beta \ln x_j}{1 + x_j^\beta}, \qquad B_j = E(\ln z_{jk}) = \beta (1 + x_j^\beta) \int_{x_j}^\infty \frac{z_{jk}^{\beta-1} \ln z_{jk}}{(1 + z_{jk}^\beta)^2} \, dz_{jk},$$
$$\beta^{(i+1)} = \frac{\sum_{j=1}^m (R_j + 1)}{\sum_{j=1}^m \Big[ 2 A_j - \ln x_j - \sum_{k=1}^{R_j} B_j \Big]}.$$
Step 4
Replace β^{(i)} with β^{(i+1)} in the E step and repeat the E and M steps. Stop the iteration when |β^{(i+1)} − β^{(i)}| or |Q(β^{(i+1)} | β^{(i)}, w) − Q(β^{(i)} | β^{(i)}, w)| is sufficiently small.
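A minimal R sketch of this iteration (our own, under the assumptions above: x holds the observed failure times and r the scheme) evaluates A_j directly and B_j with integrate(), stopping when successive iterates agree:

    # EM iteration for the log-logistic shape parameter beta.
    em_beta <- function(x, r, beta = 1, tol = 1e-6, maxit = 200) {
      for (it in 1:maxit) {
        A <- x^beta * log(x) / (1 + x^beta)
        B <- sapply(x, function(xj)        # B_j = E(ln Z | Z > x_j) at current beta
          beta * (1 + xj^beta) *
            integrate(function(z) z^(beta - 1) * log(z) / (1 + z^beta)^2,
                      xj, Inf)$value)
        beta_new <- sum(1 + r) / sum(2 * A - log(x) - r * B)
        if (abs(beta_new - beta) < tol) return(beta_new)
        beta <- beta_new
      }
      beta                                 # return the last iterate if maxit is reached
    }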

4. Distribution Fitting Test Statistics

EDF test statistics measure the distance between the empirical distribution function F_n(x) and the fitted distribution function F_{β̂}(x), where β̂ is the estimate of β obtained under the null hypothesis H_0. In this paper, we propose two EDF test statistics based on CRKL and CKL. We use these statistics to test whether an unknown distribution satisfies the log-logistic null hypothesis, against several alternative distributions.
Given a censoring scheme R = (R_1, R_2, ..., R_m), let x_{1:m:n} < x_{2:m:n} < ... < x_{m:m:n} be a progressively Type II censored sample from the distribution F_β(x). The log-logistic distribution function is F_β(x) = 1 − (1 + x^β)^{−1}, where β is an unknown shape parameter.
Based on CRKL, CKL, and the obtained censored sample, we consider the following hypothesis.
$$H_0: F(x) = F_\beta(x) \quad \text{vs.} \quad H_1: F(x) \neq F_\beta(x).$$
When the statistic is found to be large enough, we reject the null hypothesis.
The empirical distribution function of complete data can be derived using order statistics. In the same way, we can estimate the empirical distribution function F_{m:n}(x) of the censored data, which is given by:
$$F_{m:n}(x) = \begin{cases} 0, & x < x_{1:m:n}, \\ \alpha_{i:m:n}, & x_{i:m:n} \leq x < x_{i+1:m:n}, \ i = 1, \ldots, m-1, \\ \alpha_{m:m:n}, & x \geq x_{m:m:n}. \end{cases}$$
Balakrishnan [10] showed that α_{i:m:n} is the expectation of the i-th order statistic of uniformly distributed data under progressive Type II censoring, i.e., α_{i:m:n} = E(U_{i:m:n}).
Since the U_i are not mutually independent, we let 0 < V_1, V_2, ..., V_m < 1 with:
$$V_1 = \frac{1 - U_m}{1 - U_{m-1}}, \quad V_2 = \frac{1 - U_{m-1}}{1 - U_{m-2}}, \quad \ldots, \quad V_{m-1} = \frac{1 - U_2}{1 - U_1}, \quad V_m = 1 - U_1.$$
The joint density function of V 1 , V 2 , , V m is obtained by the Jacobian transform method.
$$f(v_1, v_2, \ldots, v_m) = c \prod_{i=1}^m v_i^{\, i - 1 + \sum_{j=m-i+1}^m R_j}.$$
According to the factorization theorem, the V_i are mutually independent with V_i ∼ Beta(i + Σ_{j=m−i+1}^m R_j, 1) for i = 1, ..., m. Then we have:
$$\alpha_{i:m:n} = 1 - \prod_{j=m-i+1}^{m} \frac{j + R_{m-j+1} + \cdots + R_m}{j + 1 + R_{m-j+1} + \cdots + R_m}. \tag{22}$$
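In R, formula (22) can be evaluated with a few lines (our own sketch; note that the sum R_{m−j+1} + ... + R_m changes with each factor j of the product):

    # alpha_{i:m:n} of (22) for a censoring scheme r = (R_1, ..., R_m).
    alpha_imn <- function(r) {
      m <- length(r)
      sapply(1:m, function(i) {
        j <- (m - i + 1):m
        a <- j + sapply(j, function(jj) sum(r[(m - jj + 1):m]))
        1 - prod(a / (a + 1))
      })
    }
    alpha_imn(rep(0, 4))   # complete sample: i/(m+1) = 0.2, 0.4, 0.6, 0.8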
When the sample follows the log-logistic distribution function (1), the censored CRKL reduces to:
$$\begin{aligned} \mathrm{CRKL}(F_{m:n} : F_\beta) ={} & \int_0^{x_{m:m:n}} (1 - F_{m:n}(x)) \ln \frac{1 - F_{m:n}(x)}{(1 + x^\beta)^{-1}} \, dx - \int_0^{x_{m:m:n}} (1 - F_{m:n}(x)) \, dx + \int_0^{x_{m:m:n}} (1 + x^\beta)^{-1} \, dx \\ ={} & \sum_{i=1}^{m-1} (1 - \alpha_{i:m:n}) \ln(1 - \alpha_{i:m:n}) (x_{i+1:m:n} - x_{i:m:n}) + \sum_{i=1}^{m-1} (1 - \alpha_{i:m:n}) \int_{x_{i:m:n}}^{x_{i+1:m:n}} \ln(1 + x^\beta) \, dx \\ & - \sum_{i=1}^{m-1} (1 - \alpha_{i:m:n})(x_{i+1:m:n} - x_{i:m:n}) + \left[ x_{m:m:n} - \frac{1}{\beta} x_{m:m:n} \ln(1 + x_{m:m:n}^\beta) + \frac{1}{\beta} \int_0^{x_{m:m:n}} \ln(1 + x^\beta) \, dx \right]. \end{aligned} \tag{23}$$
The unknown parameter β is replaced by its expectation-maximization estimate β̂_EM. To make the statistic unit-free, divide (23) by Σ_{i=1}^{m−1} (1 − α_{i:m:n})(x_{i+1:m:n} − x_{i:m:n}), which gives the statistic CRKL̂:
$$\widehat{\mathrm{CRKL}} = A + B + C - 1, \tag{24}$$
where:
$$A = \frac{\sum_{i=1}^{m-1} (1 - \alpha_{i:m:n})(x_{i+1:m:n} - x_{i:m:n}) \ln(1 - \alpha_{i:m:n})}{\sum_{i=1}^{m-1} (1 - \alpha_{i:m:n})(x_{i+1:m:n} - x_{i:m:n})}, \qquad B = \frac{\sum_{i=1}^{m-1} (1 - \alpha_{i:m:n}) \int_{x_{i:m:n}}^{x_{i+1:m:n}} \ln(1 + x^\beta) \, dx}{\sum_{i=1}^{m-1} (1 - \alpha_{i:m:n})(x_{i+1:m:n} - x_{i:m:n})},$$
$$C = \frac{x_{m:m:n} - \frac{1}{\beta} x_{m:m:n} \ln(1 + x_{m:m:n}^\beta) + \frac{1}{\beta} \int_0^{x_{m:m:n}} \ln(1 + x^\beta) \, dx}{\sum_{i=1}^{m-1} (1 - \alpha_{i:m:n})(x_{i+1:m:n} - x_{i:m:n})}.$$
Similarly, the statistic based on CKL is as follows:
$$\widehat{\mathrm{CKL}} = D - E - F + 1, \tag{25}$$
where:
$$D = \frac{\sum_{i=1}^{m-1} \alpha_{i:m:n} (x_{i+1:m:n} - x_{i:m:n}) \ln \alpha_{i:m:n}}{\sum_{i=1}^{m-1} (1 - \alpha_{i:m:n})(x_{i+1:m:n} - x_{i:m:n})}, \qquad E = \frac{\sum_{i=1}^{m-1} \alpha_{i:m:n} \int_{x_{i:m:n}}^{x_{i+1:m:n}} \ln\left[ 1 - (1 + x^\beta)^{-1} \right] dx}{\sum_{i=1}^{m-1} (1 - \alpha_{i:m:n})(x_{i+1:m:n} - x_{i:m:n})},$$
$$F = \frac{x_{m:m:n} - \frac{1}{\beta} x_{m:m:n} \ln(1 + x_{m:m:n}^\beta) + \frac{1}{\beta} \int_0^{x_{m:m:n}} \ln(1 + x^\beta) \, dx}{\sum_{i=1}^{m-1} (1 - \alpha_{i:m:n})(x_{i+1:m:n} - x_{i:m:n})}.$$
Neither CRKL̂ nor CKL̂ depends on the scale; therefore, both are suitable as goodness-of-fit statistics.
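A compact R sketch of the two statistics (again our own; it evaluates the ln(1 + x^β) and ln[1 − (1 + x^β)^{−1}] integrals numerically with integrate(), whereas Appendix A uses the authors' closed-form evaluation):

    # CRKL-hat (24) and CKL-hat (25) for a censored sample x with weights alpha.
    gof_stats <- function(x, alpha, beta) {
      m  <- length(x)
      i  <- 1:(m - 1)
      dx <- x[i + 1] - x[i]
      S  <- sum((1 - alpha[i]) * dx)                 # common denominator
      I1 <- sapply(i, function(k)                    # integrals of ln(1 + x^beta)
        integrate(function(t) log(1 + t^beta), x[k], x[k + 1])$value)
      I2 <- sapply(i, function(k)                    # integrals of ln F_beta(x)
        integrate(function(t) log(1 - 1 / (1 + t^beta)), x[k], x[k + 1])$value)
      A <- sum((1 - alpha[i]) * dx * log(1 - alpha[i])) / S
      B <- sum((1 - alpha[i]) * I1) / S
      C <- (x[m] - x[m] * log(1 + x[m]^beta) / beta +
            integrate(function(t) log(1 + t^beta), 0, x[m])$value / beta) / S
      D <- sum(alpha[i] * dx * log(alpha[i])) / S
      E <- sum(alpha[i] * I2) / S
      c(CRKL = A + B + C - 1, CKL = D - E - C + 1)   # F coincides with C
    }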

5. Monte Carlo Simulations

Monte Carlo simulation is a method of solving computational problems using random numbers: based on a probabilistic model, statistical simulation or sampling is performed on a computer to obtain an approximate solution of the problem described by the model. It is currently applied in many fields, such as the environment [23], computer science and technology [24], medicine [25], and finance [26].
For the CRKL̂ and CKL̂ tests, once the significance level α is fixed, we reject the null hypothesis when the statistic exceeds the critical value. Since the quantiles of the distributions of CRKL̂ (24) and CKL̂ (25) are not available in closed form, we use Monte Carlo simulation to obtain the critical values CRKL̂_{0.95}, CRKL̂_{0.90}, CKL̂_{0.95}, and CKL̂_{0.90}.
Lemma 3.
Suppose X follows the log-logistic distribution, and let Y = ln(1 + X^β); then Y follows the standard exponential distribution.
Proof. 
Suppose the distribution functions of the random variables X and Y are F_X(x) and F_Y(y), respectively.
$$F_X(x) = 1 - (1 + x^\beta)^{-1}, \quad x \geq 0, \ \beta > 0.$$
For y > 0:
$$F_Y(y) = P(Y \leq y) = P\big(\ln(1 + X^\beta) \leq y\big) = P\Big(X \leq (e^y - 1)^{1/\beta}\Big) = F_X\Big((e^y - 1)^{1/\beta}\Big) = 1 - e^{-y}. \quad \square$$
Let X_1 < X_2 < ... < X_m be a progressively Type II censored sample from the log-logistic distribution with unknown parameter β, and let Y_i = ln(1 + X_i^β). By Lemma 3, Y_1 < Y_2 < ... < Y_m are the order statistics of a progressively Type II censored sample from the standard exponential distribution. Following [10], consider the transformation:
$$S_1 = n Y_1, \quad S_2 = (n - R_1 - 1)(Y_2 - Y_1), \quad \ldots, \quad S_m = (n - R_1 - \cdots - R_{m-1} - m + 1)(Y_m - Y_{m-1}).$$
Thomas [27] proved that under this transformation S_1, S_2, ..., S_m are mutually independent standard exponential variables. Ten thousand random samples (S_1, S_2, ..., S_m) can therefore be generated in R (part of the code is shown in Appendix A), and the progressively Type II censored samples are recovered by inverting the transformation, as sketched below.
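A self-contained version of this generator (our sketch, consistent with the Appendix A code) rescales exponential spacings into the censored exponential sample Y and then inverts Lemma 3:

    # Progressive Type II censored sample from the log-logistic distribution.
    rprogll <- function(n, r, beta = 2) {
      m <- length(r)
      s <- rexp(m)                          # S_1, ..., S_m ~ Exp(1)
      denom <- n - c(0, cumsum(r[-m] + 1))  # n, n - R_1 - 1, n - R_1 - R_2 - 2, ...
      y <- cumsum(s / denom)                # censored exponential order statistics
      (exp(y) - 1)^(1 / beta)               # X = (e^Y - 1)^(1/beta)
    }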
Under an experiment of size n with censored number m, significance level α, censoring scheme (R_1, ..., R_m), and distribution (1) with β = 2, we randomly generated 10,000 sets of progressive Type II censored samples. By the law of large numbers, if a test is repeated many times under constant conditions, the probability of a random event can be approximated by its empirical frequency.
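Condensed to a few lines (a sketch using the helper functions sketched above, with 10,000 replications), the critical values are the empirical quantiles of the statistics under H_0:

    # Empirical 90%/95% quantiles of CRKL-hat and CKL-hat under H0.
    r <- c(0, 0, 0, 0, 0, 0, 10, 0, 0, 0); n <- 20
    stats <- replicate(10000, {
      x <- rprogll(n, r, beta = 2)
      gof_stats(x, alpha_imn(r), em_beta(x, r))
    })
    crit <- apply(stats, 1, quantile, probs = c(0.90, 0.95))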
Because the study of the exponential distribution in [14] indicated that the hazard function strongly influences the power of such statistics, we present the alternatives in Table 1, classified by the type of hazard function.
The power is the probability of rejecting the null hypothesis when the alternative hypothesis holds; studying it tests whether the proposed statistics are effective.
According to the classification method of the hazard function, the power study of this paper involved seven alternative hypotheses. To calculate the value of the test statistics, the parameters of the alternative distributions were estimated using the EM algorithm.
(a)
H_0: X ∼ log-logistic(2)  vs.  H_1: X ∼ Pareto(5):
$$f(x; \beta) = \beta (1 + x)^{-(\beta+1)}, \quad x \geq 0, \ \beta > 0, \qquad F(x; \beta) = 1 - (1 + x)^{-\beta}, \qquad h(x; \beta) = \frac{\beta}{1 + x},$$
$$L(\beta) = c \, \beta^m \prod_{i=1}^m (1 + x_i)^{-(\beta + 1 + \beta R_i)},$$
with c as in (10).
Here β is the shape parameter, and the failure rate decreases as x increases.
(b)
H_0: X ∼ log-logistic(2)  vs.  H_1: X ∼ Rayleigh(0.2):
$$f(x; \beta) = \frac{x}{\beta^2} e^{-x^2/(2\beta^2)}, \quad x \geq 0, \ \beta > 0, \qquad F(x; \beta) = 1 - e^{-x^2/(2\beta^2)}, \qquad h(x; \beta) = \frac{x}{\beta^2},$$
$$L(\beta) = c \, \beta^{-2m} \prod_{i=1}^m x_i \, e^{-x_i^2 (R_i + 1)/(2\beta^2)}.$$
Here β is a scale parameter, and the failure rate increases as x increases.
(c)
H_0: X ∼ log-logistic(2)  vs.  H_1: X ∼ Weibull(0.2,1):
$$f(x; k, \lambda) = \frac{k}{\lambda} \left( \frac{x}{\lambda} \right)^{k-1} e^{-(x/\lambda)^k}, \quad x \geq 0, \ k > 0, \ \lambda > 0, \qquad F(x; k, \lambda) = 1 - e^{-(x/\lambda)^k}, \qquad h(x; k, \lambda) = \frac{k}{\lambda} \left( \frac{x}{\lambda} \right)^{k-1},$$
$$L(k, \lambda) = c \left( \frac{k}{\lambda} \right)^m \prod_{i=1}^m \left( \frac{x_i}{\lambda} \right)^{k-1} e^{-(x_i/\lambda)^k (R_i + 1)},$$
where k > 0 and λ > 0 are the shape and scale parameters of the distribution, respectively. For k < 1, the failure rate decreases as the experiment time elapses; for k = 1, the failure rate is independent of time; for k > 1, the failure rate increases with time.
(d)
H_0: X ∼ log-logistic(2)  vs.  H_1: X ∼ Bathtub-shaped(2,15):
$$f(x; \beta, \lambda) = \lambda \beta x^{\beta-1} e^{\lambda (1 - e^{x^\beta}) + x^\beta}, \quad x \geq 0, \ \beta > 0, \ \lambda > 0, \qquad F(x; \beta, \lambda) = 1 - e^{\lambda (1 - e^{x^\beta})}, \qquad h(x; \beta, \lambda) = \lambda \beta x^{\beta-1} e^{x^\beta},$$
$$L(\beta, \lambda) = c \, \lambda^m \beta^m \prod_{i=1}^m x_i^{\beta-1} e^{x_i^\beta + \lambda (1 - e^{x_i^\beta})(R_i + 1)}.$$
The unknown shape and scale parameters of the distribution are β and λ; in fact, λ does not affect the monotonicity of h(x). The hazard function is increasing for β > 1; otherwise, h(x) is bathtub-shaped.
(e)
H_0: X ∼ log-logistic(2)  vs.  H_1: X ∼ Gexp(0.5,1):
$$f(x; \beta, \lambda) = \beta \lambda (1 - e^{-\lambda x})^{\beta-1} e^{-\lambda x}, \quad x \geq 0, \ \beta > 0, \ \lambda > 0, \qquad F(x; \beta, \lambda) = (1 - e^{-\lambda x})^\beta,$$
$$h(x; \beta, \lambda) = \frac{\beta \lambda e^{-\lambda x} (1 - e^{-\lambda x})^{\beta-1}}{1 - (1 - e^{-\lambda x})^\beta}, \qquad L(\beta, \lambda) = c \, \beta^m \lambda^m \prod_{i=1}^m (1 - e^{-\lambda x_i})^{\beta-1} e^{-\lambda x_i} \left( 1 - (1 - e^{-\lambda x_i})^\beta \right)^{R_i},$$
where β is the shape parameter and λ is the scale parameter. The hazard function of this distribution increases when β > 1, is constant over time when β = 1, and decreases when β < 1.
To analyze the power against these alternative distributions, we selected 12–18 censoring schemes for each of n = 10, 20, and 30, and performed 10,000 Monte Carlo simulations under each scheme. Table 2, Table 3 and Table 4 and Figure 1, Figure 2, Figure 3 and Figure 4 show the simulation results.
The average powers in Table 2, Table 3 and Table 4 were 0.575, 0.561, and 0.706, respectively; the proportions of powers above 0.65 in the tables reached 52.43%, 54.17%, and 66.32%.
The influence of the censoring scheme on the test statistics can be seen from Figure 1, Figure 2, Figure 3 and Figure 4:
  • From Figure 1, different censoring schemes had little effect on the power of the CRKL̂ test when the hazard function was monotonically decreasing.
  • From Figure 2, for the CKL̂ test, there was no obvious trend in power for monotonically decreasing hazard functions under different censoring schemes.
  • From Figure 3, when censoring happened only at the end, the power of the CRKL̂ test against monotonically increasing hazard functions was lower.
  • From Figure 4, when censoring happened only at the beginning, the power of the CKL̂ test against monotonically increasing hazard functions was lower.
Therefore, we reached the following conclusions: before performing the goodness-of-fit test, the test statistic should be selected by analyzing the hazard function of the relevant distributions. When the hazard function is monotonically decreasing, the statistic CKL̂ should be selected. When the hazard function is monotonically increasing, CRKL̂ should be used if censoring occurs only at the beginning of the experiment and CKL̂ if censoring occurs only at the end; in other cases, both test statistics are acceptable.

6. Illustrative Example

6.1. Application Prospect

The modern view of quality holds that product quality is the combination of characteristics that meet the requirements of use, that is, fitness for purpose. Improving quality during product realization is an important way to improve the final product. Therefore, in product reliability analysis, it is important to determine the product's probability distribution in order to analyze features such as the failure rate.
For example, quality fluctuations common in product formation include unacceptable parts and installation errors. To establish a reliability model under two kinds of quality fluctuation, it is first necessary to determine the product's probability distribution. When prior research suggests that the product lifetime may follow the log-logistic distribution, the method proposed in this paper can be used to test the goodness of fit.
Because the failure rate of the log-logistic distribution is not strictly monotonic, there is flexibility in building a reliability model. The proposed method therefore has practical value in controlling quality fluctuations and improving product reliability. Below, we use real data for a simulation study.

6.2. Real Data Application

We have shown through the power analysis that the proposed goodness-of-fit tests are discriminative. In this part, we introduce a set of real data to further illustrate the applicability of the method. The work in [9] presented a progressively censored sample of the logarithms of breakdown times of an insulating fluid tested at 34 kV. This censored sample of size m = 5, generated from n = 13 observations, is listed in Table 5. We apply the two test statistics to the censored data to analyze whether the sample follows the log-logistic distribution.
First, we calculate the test statistics according to (24) and (25), replacing the unknown parameter β by the estimate β̂ obtained by the method of Section 3. The resulting values are CRKL̂ = 1.221995 and CKL̂ = 0.025522. Using the Monte Carlo procedure of Section 5 to compute p-values in R, we obtained p-values of 0.898 and 0.605 for the two goodness-of-fit tests, respectively. These large p-values give no evidence against the null hypothesis, so the sample observed under progressive Type II censoring is consistent with the log-logistic distribution. This data example shows that the test statistics are applicable in practice.
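For concreteness, the computation on the Table 5 data can be sketched as follows (our own code; it maximizes the log-likelihood (10) with optimize() and reuses the helper functions sketched in Sections 4 and 5, so the values reported above come from the authors' implementation, not this sketch):

    # Goodness-of-fit computation for the insulating fluid data of Table 5.
    x <- c(0.2700, 1.0224, 1.5789, 1.8718, 1.9947)   # m = 5 observed log-lifetimes
    r <- c(0, 3, 0, 0, 5)                            # scheme, n = 13
    negll <- function(beta)
      -(length(x) * log(beta) + sum((beta - 1) * log(x) - (2 + r) * log(1 + x^beta)))
    beta_hat <- optimize(negll, c(0.01, 100))$minimum
    gof_stats(x, alpha_imn(r), beta_hat)   # compare with the simulated critical values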

7. Conclusions

In this paper, goodness-of-fit tests for the log-logistic distribution were discussed under progressive Type II censoring. We established the CRKL̂ and CKL̂ test statistics based on cumulative residual entropy and cumulative entropy, used maximum likelihood estimation and the EM algorithm for parameter estimation, and presented Monte Carlo simulations to analyze multiple alternative hypotheses. Trend analysis of the power under different censoring schemes demonstrated the feasibility of the test statistics.
We conclude that the test statistic should be chosen based on the monotonicity of the hazard function of the relevant distribution. When the hazard function is monotonically decreasing, we select the statistic CKL̂. When the hazard function is monotonically increasing, the statistic must be selected according to when the censoring occurs. The power analysis showed that the proposed goodness-of-fit tests can discriminate whether the null hypothesis is true, and the real data case illustrated the combination of the method with a practical application. The ideas of this paper can also be applied to other distributions in subsequent studies, such as the Weibull, Rayleigh, and Pareto distributions.

Author Contributions

Methodology and Writing, Y.D.; Supervision, W.G.

Funding

This research was supported by Project 201910004001 of the National Training Program of Innovation and Entrepreneurship for Undergraduates.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

### R code in the Monte Carlo simulation

### Randomly generate progressive Type II censored samples
n  <- 20
m  <- 10
aa <- 1000                        # number of Monte Carlo replications
r  <- c(0,0,0,0,0,0,10,0,0,0)     # censoring scheme (R_1, ..., R_m)

xx <- matrix(0, aa, m)
for (j in 1:aa) {
  s  <- rexp(m, 1)                # independent standard exponential spacings
  y  <- rep(0, m)
  rr <- rep(0, m)
  y[1]  <- s[1] / n
  rr[1] <- r[1]
  for (i in 1:(m - 1)) {
    rr[i + 1] <- rr[i] + r[i + 1]
    y[i + 1]  <- y[i] + s[i + 1] / (n - rr[i] - i)
  }
  ### H0: log-logistic(2); invert Y = ln(1 + X^beta) with beta = 2
  xx[j, ] <- (exp(y) - 1)^(1/2)   # X = (e^Y - 1)^(1/beta); parentheses around 1/2 are required
}

### Maximum likelihood estimate of beta for each sample, from (10)
log.like <- function(x, beta) {
  AA <- m * log(beta) + sum((beta - 1) * log(x) - (2 + r) * log(1 + x^beta))
  return(-AA)
}
beta1 <- rep(0, aa)
for (k in 1:aa) {
  res <- optim(c(1), log.like, method = "L-BFGS-B", lower = 0.01, x = xx[k, ],
               hessian = TRUE, control = list(trace = FALSE, maxit = 100))
  beta1[k] <- res$par
}

### Calculate alpha_{i:m:n} from (22); the sum R_{m-j+1} + ... + R_m varies with j
alpha <- rep(0, m)
for (i in 1:m) {
  jj <- seq(m - i + 1, m)
  a  <- jj + sapply(jj, function(j) sum(r[(m - j + 1):m]))
  alpha[i] <- 1 - prod(a / (a + 1))
}

### Find the critical values
A <- B <- C <- D <- E <- rep(0, aa)
for (j in 1:aa) {
  xxx  <- xx[j, ]
  beta <- beta1[j]
  i    <- 1:(m - 1)
  S    <- sum((1 - alpha[i]) * (xxx[i + 1] - xxx[i]))    # common denominator
  A[j] <- sum((1 - alpha[i]) * (xxx[i + 1] - xxx[i]) * log(1 - alpha[i])) / S
  # B, C and E evaluate the integral terms with the authors' closed-form expressions
  B[j] <- sum((1 - alpha[i]) * (log(xxx[i + 1] + xxx[i + 1]^(1 + beta) / (1 + beta))
                                - log(xxx[i] + xxx[i]^(1 + beta) / (1 + beta)))) / S
  C[j] <- (xxx[m] - xxx[m] * log(1 + xxx[m]^beta) / beta
           + log(xxx[m] + xxx[m]^(1 + beta) / (1 + beta)) / beta) / S
  D[j] <- sum(alpha[i] * (xxx[i + 1] - xxx[i]) * log(alpha[i])) / S
  E[j] <- sum(alpha[i] * (log(xxx[i]) - log(xxx[i + 1]))) / S
}
crkl <- A + B + C - 1      # statistic (24)
ckl  <- D - E - C + 1      # statistic (25); the term F coincides with C
crkll <- sort(crkl)
ckll  <- sort(ckl)
t1  <- crkll[aa * 0.95]    # 5% critical values
t2  <- ckll[aa * 0.95]
t11 <- crkll[aa * 0.9]     # 10% critical values
t22 <- ckll[aa * 0.9]

### Calculate power: regenerate xx under an alternative distribution first,
### then count how often each statistic exceeds its critical value
count1 <- as.numeric(crkl >= t1)
count2 <- as.numeric(ckl >= t2)
power1 <- sum(count1) / aa
power2 <- sum(count2) / aa

References

  1. Chen, Z. Estimating the shape parameter of the Log-logistic distribution. Int. J. Reliab. Qual. Saf. Eng. 2006, 13, 257–266. [Google Scholar] [CrossRef]
  2. Zhou, R.; Sivaganesan, S.; Longla, M. An objective Bayesian estimation of parameters in a log-binomial model. J. Stat. Plan. Inference 2014, 146, 113–121. [Google Scholar] [CrossRef]
  3. Granzotto, D.C.T.; Louzada, F. The Transmuted Log-Logistic Distribution: Modeling, Inference, and an Application to a Polled Tabapua Race Time up to First Calving Data. Commun. Stat. 2015, 44, 3387–3402. [Google Scholar] [CrossRef]
  4. Eryilmaz, S. Reliability Properties of Systems with Two Exchangeable Log-Logistic Components. Commun. Stat. 2012, 41, 3416–3427. [Google Scholar]
  5. Kantam, R.R.L.; Rao, G.S.; Sriram, B. An economic reliability test plan: Log-logistic distribution. J. Appl. Stat. 2006, 33, 291–296. [Google Scholar] [CrossRef]
  6. Wang, J.; Wu, Z.; Shu, Y.; Zhang, Z. An imperfect software debugging model considering log-logistic distribution fault content function. J. Syst. Softw. 2015, 100, 167–181. [Google Scholar] [CrossRef]
  7. Surendran, S.; Tota-Maharaj, K. Log logistic distribution to model water demand data. Procedia Eng. 2015, 119, 798–802. [Google Scholar] [CrossRef] [Green Version]
  8. Cohen, A.C. Progressively censored samples in the life testing. Technometrics 1963, 5, 327–339. [Google Scholar] [CrossRef]
  9. Viveros, R.; Balakrishnan, N. Interval Estimation of Parameters of Life From Progressively Censored Data. Technometrics 1994, 36, 84–91. [Google Scholar]
  10. Balakrishnan, N.; Sandhu, R.A. A Simple Simulational Algorithm for Generating Progressive Type-II Censored Samples. Am. Stat. 1995, 49, 229–230. [Google Scholar]
  11. Vasicek, O. A test for normality based on sample entropy. J. R. Stat. Soc. 1976, 38, 54–59. [Google Scholar] [CrossRef]
  12. Taufer, E. On entropy based tests for exponentiality. Commun. Stat. Simul. Comput. 2002, 31, 189–200. [Google Scholar] [CrossRef]
  13. Noughabi, H.A.; Arghami, N.R. Testing exponentiality based on characterizations of the exponential distribution. J. Stat. Comput. Simul. 2011, 81, 1641–1651. [Google Scholar] [CrossRef]
  14. Baratpour, S.; Rad, A.H. Exponentiality Test Based on the Progressive Type II Censoring via Cumulative Entropy. Commun. Stat. Simul. Comput. 2016, 45, 2625–2637. [Google Scholar] [CrossRef]
  15. Kullback, S.; Leibler, R.A. On Information and Sufficiency. Ann. Math. Stat. 1951, 22, 79–86. [Google Scholar] [CrossRef]
  16. Kullback, S. Information Theory and Statistics; Dover Publications, Inc.: New York, NY, USA, 1959. [Google Scholar]
  17. Wang, F.; Vemuri, B.C.; Rao, M.; Chen, Y. Cumulative residual entropy, a new measure of information & its application to image alignment. IEEE Trans. Inf. Theory 2004, 50, 1220–1228. [Google Scholar]
  18. Di Crescenzo, A.; Longobardi, M. On cumulative entropies. J. Stat. Plan. Inference 2009, 139, 4072–4087. [Google Scholar] [CrossRef]
  19. Park, S.; Rao, M.; Dong, W.S. On cumulative residual Kullback–Leibler information. Stat. Probab. Lett. 2012, 82, 2025–2032. [Google Scholar] [CrossRef]
  20. Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. 1977, 39, 1–38. [Google Scholar] [CrossRef]
  21. Sampford, M.R.; Taylor, J. Censored Observations in Randomized Block Experiments. J. R. Stat. Soc. 1959, 21, 214–237. [Google Scholar] [CrossRef]
  22. Meng, X.L.; Rubin, D.B. Maximum likelihood estimation via the ECM algorithm: A general framework. Biometrika 1993, 80, 267–278. [Google Scholar]
  23. Baldoncini, M.; Albéri, M.; Bottardi, C. Investigating the potentialities of Monte Carlo simulation for assessing soil water content via proximal gamma-ray spectroscopy. J. Environ. Radioact. 2018, 192, 105–116. [Google Scholar]
  24. Ma, Y.; Chen, X.; Biegler, L.T. Monte-Carlo-Simulation-based Optimization for Copolymerization Processes with Embedded Chemical Composition Distribution. Comput. Chem. Eng. 2018, 109, 261–275. [Google Scholar] [CrossRef]
  25. Ni, W.; Li, G.; Zhao, J. Use of Monte Carlo simulation to evaluate the efficacy of tigecycline and minocycline for the treatment of pneumonia due to carbapenemase-producing Klebsiella pneumoniae. Infect. Dis. 2018, 50, 507–513. [Google Scholar] [CrossRef] [PubMed]
  26. Bormetti, G.; Callegaro, G.; Livieri, G.; Pallavicini, A. A backward Monte Carlo approach to exotic option pricing. Eur. J. Appl. Math. 2018, 29, 146–187. [Google Scholar] [CrossRef]
  27. Thomas, D.; Wilson, W. Linear Order Statistic Estimation for the Two-Parameter Weibull and Extreme-Value Distributions from Type II Progressively Censored Samples. Technometrics 1972, 14, 679–691. [Google Scholar] [CrossRef]
Figure 1. Powers of different schemes for the monotone decreasing hazard function for the CRKL̂ test, in the case of n = 10, m = 5 and 7.
Figure 2. Powers of different schemes for the monotone decreasing hazard function for the CKL̂ test, in the case of n = 10, m = 5 and 7.
Figure 3. Powers of different schemes for the monotone increasing hazard function Bathtub-shaped(15) for the CRKL̂ test, in the case of n = 20 and 30.
Figure 4. Powers of different schemes for the monotone increasing hazard function Rayleigh(0.2) for the CKL̂ test, in the case of n = 20 and 30.
Table 1. Alternative hypotheses for the power study.

| # | Statistic | Hazard function | Distributions |
|---|-----------|-----------------|---------------|
| 1 | CRKL̂ | monotone decreasing | Pareto(5); Weibull W(0.2,1); generalized exponential Gexp(0.5,1) |
| 2 | CRKL̂ | monotone increasing | Bathtub-shaped B(2,15) |
| 3 | CKL̂ | monotone decreasing | Pareto(12); Weibull W(0.2,1); generalized exponential Gexp(0.5,1) |
| 4 | CKL̂ | monotone increasing | Rayleigh R(0.2); Bathtub-shaped B(2,200) |
Table 2. Powers of CRKL̂ for the monotone decreasing hazard at the 5% and 10% significance levels for the different censoring schemes, for sample sizes n = 10, 20, and 30.

| n | m | (R_1, ..., R_m) | Pareto(5), 5% | W(0.2), 5% | Gexp(0.5), 5% | Pareto(5), 10% | W(0.2), 10% | Gexp(0.5), 10% |
|---|---|-----------------|--------------|------------|---------------|----------------|-------------|----------------|
| 10 | 5 | 5,0,0,0,0 | 0.733 | 0.187 | 0.348 | 0.836 | 0.204 | 0.476 |
| | 5 | 0,5,0,0,0 | 0.716 | 0.200 | 0.282 | 0.827 | 0.208 | 0.410 |
| | 5 | 0,0,5,0,0 | 0.674 | 0.238 | 0.322 | 0.821 | 0.261 | 0.426 |
| | 5 | 0,0,0,5,0 | 0.722 | 0.283 | 0.285 | 0.824 | 0.316 | 0.384 |
| | 5 | 0,0,0,0,5 | 0.777 | 0.345 | 0.245 | 0.853 | 0.372 | 0.328 |
| | 5 | 1,1,1,1,1 | 0.711 | 0.378 | 0.336 | 0.835 | 0.399 | 0.410 |
| | 7 | 3,0,0,0,0,0,0 | 0.825 | 0.108 | 0.470 | 0.902 | 0.115 | 0.560 |
| | 7 | 0,3,0,0,0,0,0 | 0.766 | 0.107 | 0.421 | 0.887 | 0.114 | 0.547 |
| | 7 | 0,0,3,0,0,0,0 | 0.722 | 0.104 | 0.361 | 0.891 | 0.114 | 0.479 |
| | 7 | 0,0,0,0,0,3,0 | 0.672 | 0.247 | 0.353 | 0.817 | 0.266 | 0.491 |
| | 7 | 0,0,0,0,0,0,3 | 0.701 | 0.475 | 0.386 | 0.838 | 0.503 | 0.489 |
| | 7 | 1,0,0,1,0,0,1 | 0.770 | 0.265 | 0.379 | 0.884 | 0.290 | 0.520 |
| 20 | 10 | 10,0,0,0,0,0,0,0,0,0 | 0.918 | 0.068 | 0.585 | 0.969 | 0.074 | 0.704 |
| | 10 | 0,10,0,0,0,0,0,0,0,0 | 0.820 | 0.054 | 0.534 | 0.940 | 0.057 | 0.661 |
| | 10 | 0,0,0,0,0,10,0,0,0,0 | 0.866 | 0.106 | 0.419 | 0.937 | 0.114 | 0.580 |
| | 10 | 0,0,0,0,0,0,0,10,0,0 | 0.710 | 0.251 | 0.428 | 0.861 | 0.265 | 0.531 |
| | 10 | 0,0,0,0,0,0,0,0,0,10 | 0.823 | 0.627 | 0.454 | 0.901 | 0.645 | 0.526 |
| | 10 | 1,1,1,1,1,1,1,1,1,1 | 0.793 | 0.349 | 0.515 | 0.911 | 0.372 | 0.638 |
| | 15 | 5,0,0,...,0,0 | 0.980 | 0.026 | 0.714 | 0.992 | 0.029 | 0.880 |
| | 15 | 0,5,0,...,0,0 | 0.959 | 0.018 | 0.681 | 0.987 | 0.023 | 0.813 |
| | 15 | 0,...,0,5,0,...,0 | 0.963 | 0.038 | 0.663 | 0.983 | 0.045 | 0.735 |
| | 15 | 0,0,...,0,5,0 | 0.858 | 0.163 | 0.539 | 0.927 | 0.178 | 0.679 |
| | 15 | 0,0,0,...,0,5 | 0.860 | 0.453 | 0.647 | 0.931 | 0.475 | 0.761 |
| | 15 | 1,1,...,1,...,1,1 | 0.931 | 0.121 | 0.630 | 0.977 | 0.130 | 0.758 |
| | 18 | 2,0,0,...,0,0 | 0.995 | 0.004 | 0.785 | 0.998 | 0.004 | 0.906 |
| | 18 | 0,2,0,...,0,0 | 0.987 | 0.012 | 0.782 | 0.996 | 0.012 | 0.866 |
| | 18 | 0,...,0,2,0,...,0 | 0.987 | 0.017 | 0.738 | 0.994 | 0.018 | 0.861 |
| | 18 | 0,0,...,0,2,0 | 0.985 | 0.035 | 0.704 | 0.996 | 0.038 | 0.806 |
| | 18 | 0,0,0,...,0,2 | 0.967 | 0.082 | 0.694 | 0.994 | 0.092 | 0.803 |
| | 18 | 1,0,0,...,0,1 | 0.993 | 0.039 | 0.752 | 0.999 | 0.040 | 0.873 |
| 30 | 15 | 15,0,0,...,0,0 | 0.975 | 0.024 | 0.720 | 0.991 | 0.026 | 0.886 |
| | 15 | 0,15,0,...,0,0 | 0.930 | 0.020 | 0.651 | 0.978 | 0.024 | 0.795 |
| | 15 | 0,...,0,15,0,...,0 | 0.898 | 0.060 | 0.528 | 0.956 | 0.067 | 0.646 |
| | 15 | 0,0,...,0,15,0 | 0.761 | 0.379 | 0.454 | 0.850 | 0.398 | 0.567 |
| | 15 | 0,0,0,...,0,15 | 0.892 | 0.713 | 0.559 | 0.945 | 0.723 | 0.631 |
| | 15 | 1,1,...,1,...,1 | 0.876 | 0.287 | 0.653 | 0.945 | 0.300 | 0.401 |
| | 20 | 10,0,0,...,0,0 | 0.995 | 0.006 | 0.850 | 0.998 | 0.006 | 0.935 |
| | 20 | 0,10,0,...,0,0 | 0.982 | 0.006 | 0.792 | 0.998 | 0.008 | 0.913 |
| | 20 | 0,...,0,10,0,...,0 | 0.978 | 0.013 | 0.734 | 0.992 | 0.016 | 0.848 |
| | 20 | 0,0,...,0,10,0 | 0.843 | 0.282 | 0.566 | 0.913 | 0.297 | 0.694 |
| | 20 | 0,0,0,...,0,10 | 0.898 | 0.712 | 0.666 | 0.947 | 0.740 | 0.779 |
| | 20 | 1,0,1,0,...,0,1,0 | 0.971 | 0.035 | 0.660 | 0.990 | 0.045 | 0.790 |
| | 25 | 5,0,0,...,0,0 | 0.995 | 0.001 | 0.909 | 0.997 | 0.002 | 0.967 |
| | 25 | 0,5,0,...,0,0 | 0.996 | 0.000 | 0.902 | 0.998 | 0.000 | 0.966 |
| | 25 | 0,...,0,5,0,...,0 | 0.989 | 0.002 | 0.827 | 0.998 | 0.003 | 0.935 |
| | 25 | 0,0,...,0,5,0 | 0.955 | 0.054 | 0.718 | 0.986 | 0.066 | 0.813 |
| | 25 | 0,0,0,...,0,5 | 0.977 | 0.163 | 0.780 | 0.999 | 0.182 | 0.910 |
| | 25 | 1,1,...,1,...,1,1 | 0.997 | 0.024 | 0.852 | 0.999 | 0.027 | 0.937 |
Table 3. Powers of CKL̂ for the monotone decreasing hazard at the 5% and 10% significance levels for the different censoring schemes, for sample sizes n = 10, 20, and 30.

| n | m | (R_1, ..., R_m) | Pareto(12), 5% | W(0.2), 5% | Gexp(0.5), 5% | Pareto(12), 10% | W(0.2), 10% | Gexp(0.5), 10% |
|---|---|-----------------|----------------|------------|---------------|-----------------|-------------|----------------|
| 10 | 5 | 0,5,0,0,0 | 0.786 | 0.602 | 0.161 | 0.904 | 0.765 | 0.232 |
| | 5 | 0,0,5,0,0 | 0.678 | 0.424 | 0.195 | 0.866 | 0.660 | 0.302 |
| | 5 | 0,0,0,5,0 | 0.527 | 0.309 | 0.247 | 0.838 | 0.466 | 0.334 |
| | 5 | 0,0,0,0,5 | 0.084 | 0.703 | 0.366 | 0.236 | 0.744 | 0.478 |
| | 5 | 1,1,1,1,1 | 0.449 | 0.369 | 0.272 | 0.914 | 0.465 | 0.383 |
| | 7 | 3,0,0,0,0,0,0 | 0.590 | 0.708 | 0.051 | 0.792 | 0.826 | 0.080 |
| | 7 | 0,3,0,0,0,0,0 | 0.835 | 0.761 | 0.093 | 0.934 | 0.862 | 0.120 |
| | 7 | 0,0,3,0,0,0,0 | 0.828 | 0.712 | 0.077 | 0.914 | 0.850 | 0.107 |
| | 7 | 0,0,0,0,0,3,0 | 0.902 | 0.573 | 0.193 | 0.975 | 0.758 | 0.260 |
| | 7 | 0,0,0,0,0,0,3 | 0.851 | 0.429 | 0.385 | 0.989 | 0.549 | 0.518 |
| | 7 | 1,0,0,1,0,0,1 | 0.983 | 0.684 | 0.276 | 1.000 | 0.848 | 0.395 |
| 20 | 10 | 10,0,0,0,0,0,0,0,0,0 | 0.376 | 0.711 | 0.007 | 0.637 | 0.827 | 0.019 |
| | 10 | 0,10,0,0,0,0,0,0,0,0 | 0.855 | 0.870 | 0.053 | 0.920 | 0.914 | 0.073 |
| | 10 | 0,0,0,0,0,10,0,0,0,0 | 0.814 | 0.728 | 0.065 | 0.929 | 0.828 | 0.099 |
| | 10 | 0,0,0,0,0,0,0,10,0,0 | 0.928 | 0.679 | 0.185 | 0.969 | 0.793 | 0.228 |
| | 10 | 0,0,0,0,0,0,0,0,0,10 | 0.835 | 0.790 | 0.532 | 0.976 | 0.827 | 0.660 |
| | 10 | 1,1,1,1,1,1,1,1,1,1 | 0.998 | 0.707 | 0.370 | 1.000 | 0.859 | 0.506 |
| | 15 | 5,0,0,...,0,0 | 0.045 | 0.659 | 0.002 | 0.193 | 0.803 | 0.002 |
| | 15 | 0,5,0,...,0,0 | 0.494 | 0.883 | 0.004 | 0.717 | 0.927 | 0.005 |
| | 15 | 0,...,0,5,0,...,0 | 0.461 | 0.803 | 0.003 | 0.783 | 0.873 | 0.004 |
| | 15 | 0,0,...,0,5,0 | 0.974 | 0.746 | 0.081 | 0.992 | 0.849 | 0.135 |
| | 15 | 0,0,0,...,0,5 | 1.000 | 0.877 | 0.565 | 0.998 | 0.961 | 0.697 |
| | 15 | 1,1,...,1,...,1,1 | 1.000 | 0.965 | 0.131 | 1.000 | 0.989 | 0.216 |
| | 18 | 2,0,0,...,0,0 | 0.005 | 0.694 | 0.002 | 0.064 | 0.858 | 0.003 |
| | 18 | 0,2,0,...,0,0 | 0.155 | 0.850 | 0.003 | 0.337 | 0.916 | 0.005 |
| | 18 | 0,...,0,2,0,...,0 | 0.122 | 0.793 | 0.002 | 0.384 | 0.880 | 0.002 |
| | 18 | 0,0,...,0,2,0 | 0.547 | 0.793 | 0.001 | 0.848 | 0.896 | 0.004 |
| | 18 | 0,0,0,...,0,2 | 0.999 | 0.974 | 0.095 | 1.000 | 0.993 | 0.158 |
| | 18 | 1,0,0,...,0,1 | 0.913 | 0.937 | 0.005 | 0.980 | 0.966 | 0.011 |
| 30 | 15 | 15,0,0,...,0,0 | 0.054 | 0.670 | 0.002 | 0.191 | 0.829 | 0.002 |
| | 15 | 0,15,0,...,0,0 | 0.798 | 0.903 | 0.012 | 0.876 | 0.956 | 0.015 |
| | 15 | 0,...,0,15,0,...,0 | 0.730 | 0.763 | 0.024 | 0.856 | 0.854 | 0.033 |
| | 15 | 0,0,...,0,15,0 | 0.937 | 0.610 | 0.297 | 0.980 | 0.754 | 0.359 |
| | 15 | 0,0,0,...,0,15 | 0.945 | 0.885 | 0.631 | 1.000 | 0.905 | 0.762 |
| | 15 | 1,1,...,1,...,1 | 1.000 | 0.902 | 0.401 | 1.000 | 0.979 | 0.508 |
| | 20 | 10,0,0,...,0,0 | 0.004 | 0.631 | 0.002 | 0.026 | 0.852 | 0.001 |
| | 20 | 0,10,0,...,0,0 | 0.527 | 0.897 | 0.002 | 0.712 | 0.963 | 0.003 |
| | 20 | 0,...,0,10,0,...,0 | 0.448 | 0.815 | 0.003 | 0.666 | 0.914 | 0.004 |
| | 20 | 0,0,...,0,10,0 | 0.993 | 0.712 | 0.174 | 0.999 | 0.885 | 0.269 |
| | 20 | 0,0,0,...,0,10 | 1.000 | 0.632 | 0.669 | 1.000 | 0.802 | 0.779 |
| | 20 | 1,0,1,0,...,0,1,0 | 0.830 | 0.806 | 0.007 | 0.955 | 0.904 | 0.013 |
| | 25 | 5,0,0,...,0,0 | 0.002 | 0.692 | 0.001 | 0.003 | 0.829 | 0.001 |
| | 25 | 0,5,0,...,0,0 | 0.095 | 0.928 | 0.001 | 0.273 | 0.974 | 0.002 |
| | 25 | 0,...,0,5,0,...,0 | 0.054 | 0.853 | 0.000 | 0.221 | 0.931 | 0.001 |
| | 25 | 0,0,...,0,5,0 | 0.894 | 0.781 | 0.003 | 0.988 | 0.911 | 0.011 |
| | 25 | 0,0,0,...,0,5 | 1.000 | 1.000 | 0.600 | 1.000 | 1.000 | 0.722 |
| | 25 | 1,1,...,1,...,1,1 | 0.990 | 0.980 | 0.002 | 1.000 | 0.991 | 0.007 |
Table 4. Powers for the monotone increasing hazard at the 5% and 10% significance levels for the different censoring schemes, for sample sizes n = 10, 20, and 30.

| n | m | (R_1, ..., R_m) | CRKL̂ B(15), 5% | CKL̂ R(0.2), 5% | CKL̂ B(200), 5% | CRKL̂ B(15), 10% | CKL̂ R(0.2), 10% | CKL̂ B(200), 10% |
|---|---|-----------------|----------------|----------------|-----------------|------------------|------------------|------------------|
| 10 | 5 | 5,0,0,0,0 | 0.572 | 0.848 | 0.665 | 0.910 | 0.929 | 0.906 |
| | 5 | 0,5,0,0,0 | 0.631 | 0.916 | 0.863 | 0.934 | 0.963 | 0.990 |
| | 5 | 0,0,5,0,0 | 0.557 | 0.841 | 0.300 | 0.841 | 0.954 | 0.793 |
| | 5 | 0,0,0,5,0 | 0.443 | 0.786 | 0.027 | 0.739 | 0.936 | 0.401 |
| | 5 | 0,0,0,0,5 | 0.509 | 0.757 | 0.020 | 0.770 | 0.918 | 0.001 |
| | 5 | 1,1,1,1,1 | 0.420 | 0.854 | 0.041 | 0.688 | 0.968 | 0.405 |
| | 7 | 3,0,0,0,0,0,0 | 0.569 | 0.658 | 0.474 | 0.970 | 0.806 | 0.801 |
| | 7 | 0,3,0,0,0,0,0 | 0.824 | 0.874 | 0.908 | 0.991 | 0.927 | 0.981 |
| | 7 | 0,0,3,0,0,0,0 | 0.859 | 0.911 | 0.924 | 0.992 | 0.964 | 0.987 |
| | 7 | 0,0,0,0,0,3,0 | 0.389 | 0.952 | 0.699 | 0.709 | 0.979 | 0.958 |
| | 7 | 0,0,0,0,0,0,3 | 0.325 | 0.993 | 0.255 | 0.539 | 1.000 | 0.789 |
| | 7 | 1,0,0,1,0,0,1 | 0.404 | 0.992 | 0.966 | 0.729 | 1.000 | 0.998 |
| 20 | 10 | 10,0,0,0,0,0,0,0,0,0 | 0.860 | 0.358 | 0.127 | 1.000 | 0.504 | 0.488 |
| | 10 | 0,10,0,0,0,0,0,0,0,0 | 0.950 | 0.858 | 0.988 | 1.000 | 0.924 | 1.000 |
| | 10 | 0,0,0,0,0,10,0,0,0,0 | 0.868 | 0.884 | 0.962 | 0.997 | 0.935 | 1.000 |
| | 10 | 0,0,0,0,0,0,0,10,0,0 | 0.516 | 0.925 | 0.939 | 0.788 | 0.973 | 0.998 |
| | 10 | 0,0,0,0,0,0,0,0,0,10 | 0.117 | 0.990 | 0.002 | 0.370 | 0.999 | 0.019 |
| | 10 | 1,1,1,1,1,1,1,1,1,1 | 0.358 | 0.995 | 0.999 | 0.645 | 0.999 | 0.999 |
| | 15 | 5,0,0,...,0,0 | 1.000 | 0.042 | 0.008 | 0.999 | 0.123 | 0.067 |
| | 15 | 0,5,0,...,0,0 | 1.000 | 0.463 | 0.536 | 1.000 | 0.634 | 0.846 |
| | 15 | 0,...,0,5,0,...,0 | 0.999 | 0.531 | 0.448 | 1.000 | 0.742 | 0.879 |
| | 15 | 0,0,...,0,5,0 | 0.568 | 0.960 | 0.906 | 0.885 | 0.989 | 0.996 |
| | 15 | 0,0,0,...,0,5 | 0.266 | 1.000 | 0.999 | 0.526 | 1.000 | 1.000 |
| | 15 | 1,1,...,1,...,1,1 | 0.595 | 0.998 | 0.999 | 0.909 | 1.000 | 1.000 |
| | 18 | 2,0,0,...,0,0 | 0.997 | 0.010 | 0.001 | 0.999 | 0.042 | 0.002 |
| | 18 | 0,2,0,...,0,0 | 0.998 | 0.101 | 0.013 | 1.000 | 0.208 | 0.132 |
| | 18 | 0,...,0,2,0,...,0 | 1.000 | 0.137 | 0.014 | 0.996 | 0.334 | 0.160 |
| | 18 | 0,0,...,0,2,0 | 0.978 | 0.422 | 0.141 | 1.000 | 0.707 | 0.596 |
| | 18 | 0,0,0,...,0,2 | 0.679 | 1.000 | 0.996 | 0.955 | 1.000 | 1.000 |
| | 18 | 1,0,0,...,0,1 | 0.925 | 0.803 | 0.548 | 1.000 | 0.938 | 0.872 |
| 30 | 15 | 15,0,0,...,0,0 | 0.999 | 0.040 | 0.001 | 0.998 | 0.133 | 0.017 |
| | 15 | 0,15,0,...,0,0 | 1.000 | 0.750 | 0.922 | 1.000 | 0.861 | 0.990 |
| | 15 | 0,...,0,15,0,...,0 | 0.971 | 0.824 | 0.864 | 0.996 | 0.917 | 0.996 |
| | 15 | 0,0,...,0,15,0 | 0.289 | 0.943 | 0.746 | 0.568 | 0.985 | 0.990 |
| | 15 | 0,0,0,...,0,15 | 0.142 | 1.000 | 0.017 | 0.254 | 1.000 | 0.122 |
| | 15 | 1,1,...,1,...,1 | 0.511 | 1.000 | 1.000 | 0.804 | 1.000 | 1.000 |
| | 20 | 10,0,0,...,0,0 | 0.998 | 0.002 | 0.002 | 0.998 | 0.009 | 0.005 |
| | 20 | 0,10,0,...,0,0 | 0.997 | 0.370 | 0.527 | 0.995 | 0.535 | 0.794 |
| | 20 | 0,...,0,10,0,...,0 | 1.000 | 0.487 | 0.566 | 1.000 | 0.661 | 0.898 |
| | 20 | 0,0,...,0,10,0 | 0.501 | 0.977 | 0.987 | 0.734 | 0.995 | 1.000 |
| | 20 | 0,0,0,...,0,10 | 0.173 | 1.000 | 0.997 | 0.368 | 1.000 | 0.999 |
| | 20 | 1,0,1,0,...,0,1,0 | 0.987 | 0.764 | 0.738 | 1.000 | 0.905 | 0.996 |
| | 25 | 5,0,0,...,0,0 | 1.000 | 0.002 | 0.001 | 1.000 | 0.003 | 0.002 |
| | 25 | 0,5,0,...,0,0 | 1.000 | 0.044 | 0.023 | 1.000 | 0.094 | 0.110 |
| | 25 | 0,...,0,5,0,...,0 | 0.999 | 0.088 | 0.014 | 1.000 | 0.167 | 0.108 |
| | 25 | 0,0,...,0,5,0 | 0.857 | 0.658 | 0.309 | 0.999 | 0.865 | 0.886 |
| | 25 | 0,0,0,...,0,5 | 0.504 | 1.000 | 1.000 | 0.836 | 1.000 | 1.000 |
| | 25 | 1,1,...,1,...,1,1 | 0.997 | 0.925 | 0.940 | 1.000 | 0.987 | 0.998 |
Table 5. A progressively censored sample of the logarithmic lifetimes of the insulating fluid.

| i | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| x_{i:5:13} | 0.2700 | 1.0224 | 1.5789 | 1.8718 | 1.9947 |
| R_i | 0 | 3 | 0 | 0 | 5 |
