Article

Statistical Inference for the Information Entropy of the Log-Logistic Distribution under Progressive Type-I Interval Censoring Schemes

Department of Mathematics, Beijing Jiaotong University, Beijing 100044, China
* Author to whom correspondence should be addressed.
Symmetry 2018, 10(10), 445; https://doi.org/10.3390/sym10100445
Submission received: 9 September 2018 / Revised: 21 September 2018 / Accepted: 25 September 2018 / Published: 28 September 2018

Abstract

In recent years, information entropy has been studied and developed rapidly across disciplines as a measure of the value of information. In this article, maximum likelihood estimation and the EM algorithm are used to estimate the parameters of the log-logistic distribution for progressive type-I interval censored data, and a hypothesis testing algorithm for the information entropy is proposed. Finally, Monte Carlo numerical simulations are conducted to justify the feasibility of the algorithm.

1. Introduction

The log-logistic distribution, also known as the Fisk distribution in economics, is a continuous probability distribution for a non-negative random variable. The works in [1,2] first studied the characteristics of the log-logistic distribution in detail. The work in [3] introduced a new survival model based on the logistic and log-logistic distributions. The work in [4] considered the best linear unbiased estimators of the unknown parameters of the log-logistic distribution. When the product's failure density model is a log-logistic distribution, acceptance sampling based on life tests was studied in [5].
The log-logistic distribution is widely and extensively applied in survival analysis as a parametric model for events in which the rate initially increases and then decreases, such as cancer mortality after treatment in medical research, simple models for wealth or income distribution in economics, etc.
In order to improve the reliability of industrial products, many statisticians have begun to study various types of lifetime data. In industrial life testing and medical survival analysis, it is usually impossible for the experimenters to observe the lifetimes of all individuals due to time and cost. That is, a complete sample is impractical and unrealistic; censored samples are common in practice.
Typically, a unit fails or dies before it is lost or withdrawn, or the lifetime of a unit is only known to lie within a range. The most popular censoring schemes include type-I censoring, conventional type-II censoring and progressive censoring.
Under the type-I censoring scheme, the life test is terminated at a pre-planned time; see [6,7,8,9]. Under the conventional type-II censoring scheme, the life test is terminated once a pre-specified number of failures is reached; see [10,11,12]. Both type-I and type-II censoring schemes only allow test items to be withdrawn at the end of the life test, whereas the progressive censoring scheme allows test items to be removed at times before the end of the life test.
In this article, we mainly focus on inference for the entropy of the log-logistic distribution under the progressive type-I interval censoring scheme. Under this scheme, failures are only observed to occur between two consecutive predetermined inspection times, and items may be withdrawn only at those inspection times. Statistical analysis is then carried out on the data obtained under this censoring method.
Suppose n items are put on a life test at the initial time $t_0 = 0$ and they will be inspected at m predetermined times $0 < t_1 < t_2 < \cdots < t_m$, where $t_m$ is the pre-specified time to end the experiment. At the i-th inspection time $t_i$, the number of failures $X_i$ in $(t_{i-1}, t_i]$ is counted, and $R_i$ surviving items, $i = 1, \ldots, m$, are randomly removed from the life test. The number $S_i$ of surviving units at the predetermined time $t_i$ is a random variable, and $R_i$ is no larger than $S_i$. Furthermore, $R_i$ is determined by a predetermined percentage of the items surviving at $t_i$, $i = 1, \ldots, m$. For instance, if the pre-specified percentage values are $p_1, \ldots, p_{m-1}$ and $p_m = 1$, then $R_i$ can be determined by $R_i = \lfloor p_i S_i \rfloor$ at each inspection time $t_i$, where $\lfloor x \rfloor$ denotes the largest integer not exceeding x. Thus, progressive type-I interval censored data of size n can be expressed as $Y_i = (X_i, R_i, t_i)$, $i = 1, \ldots, m$. If $R_i = 0$ for $i = 1, 2, \ldots, m-1$ and $R_m = n - \sum_{i=1}^{m} X_i$, the progressive type-I interval censored data reduce to the usual type-I interval censored data.
Let $R = (R_1, R_2, \ldots, R_m)$ and $X = (X_1, X_2, \ldots, X_m)$, let $B(a, b)$ denote the binomial distribution with number of trials a and success probability b, and let $F(t; \beta)$ be the distribution function of the lifetimes. Let $v_1 = n$, $\delta_1(\beta) = F(t_1; \beta)$, and $v_i = n - \sum_{j=1}^{i-1}(X_j + R_j)$, $\delta_i(\beta) = \dfrac{F(t_i; \beta) - F(t_{i-1}; \beta)}{1 - F(t_{i-1}; \beta)}$, $i = 2, \ldots, m$.
It can be shown that:
$X_1 \sim B(v_1, \delta_1(\beta))$
$X_i \mid X_{i-1}, \ldots, X_1; R_{i-1}, \ldots, R_1 \sim B(v_i, \delta_i(\beta)), \quad i = 2, \ldots, m$
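To make this sampling mechanism concrete, the following R sketch simulates counts $(X_i, R_i)$ directly from the binomial decomposition above; the function name sim_censored and its argument layout are illustrative choices of ours, not part of the original scheme.

# Hedged sketch: simulate (X_i, R_i) from the binomial decomposition above,
# for a log-logistic lifetime with shape beta (scale 1) and removal
# percentages p_i applied to the survivors at each inspection time.
sim_censored <- function(n, beta, tim, p) {
  Fll <- function(u) 1 - 1 / (1 + u^beta)   # log-logistic cdf
  lo <- c(0, head(tim, -1))                 # interval left endpoints
  X <- R <- integer(length(tim))
  v <- n                                    # items still on test
  for (i in seq_along(tim)) {
    delta <- (Fll(tim[i]) - Fll(lo[i])) / (1 - Fll(lo[i]))
    X[i] <- rbinom(1, v, delta)             # failures in (t_{i-1}, t_i]
    R[i] <- floor(p[i] * (v - X[i]))        # R_i = floor(p_i * S_i)
    v <- v - X[i] - R[i]
  }
  data.frame(t = tim, X = X, R = R)
}
# e.g. sim_censored(60, 4, seq(0.2, 1.4, by = 0.2), c(rep(0.05, 6), 1))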
The rest of the article is organized as follows. In Section 2, the information entropy of the log-logistic distribution is introduced. In Section 3, maximum likelihood estimation and the EM algorithm are used to derive the parameter estimate and the Fisher information of the information entropy, and the asymptotic normal distribution is obtained. In Section 4, we develop a hypothesis testing algorithm for the information entropy to test whether the amount of information meets a given requirement, and we analyze the power function of the proposed algorithm. Monte Carlo simulations justify and illustrate the algorithm in Section 5. Finally, conclusions about the proposed algorithm are presented in Section 6.

2. Information Entropy

A random variable T is said to have the log-logistic distribution if its cumulative distribution function (cdf) is:
$F_T(t) = 1 - (1 + t^\beta)^{-1}, \quad t \geq 0, \; \beta > 0$
The probability density function (pdf) is:
$f_T(t) = \beta t^{\beta - 1} (1 + t^\beta)^{-2}, \quad t \geq 0, \; \beta > 0$
In 1948, the statistician Shannon proposed the definition of information entropy. He thereby solved the problem of quantitatively measuring information and first expressed the relationship between probability and information redundancy in mathematical language. In information theory, entropy is used to measure the uncertainty of objects, and the theory underpins modern digital communication.
The physical meaning of information entropy has the following three aspects:
(1)
Indicating the average amount of information provided by each message or symbol after the source is output;
(2)
Indicating the average uncertainty of the source after the source is output;
(3)
Measuring the uncertainty on a random variable T.
Let T be a random variable with distribution function $F(t) = P(T \leq t)$ and probability density function $f(t)$. Shannon's entropy is defined as:
$H(f) = \int_0^{\infty} f(t) \left[ -\ln f(t) \right] dt$
For the log-logistic distribution, $-\ln f_T(T) = -\ln\beta - (\beta - 1)\ln T + 2\ln(1 + T^\beta)$; since $E(\ln T) = 0$ (the distribution of $\ln T$ is a symmetric logistic centered at zero) and $E[\ln(1 + T^\beta)] = 1$ under the cdf above, the entropy is:
$H(f) = \int_0^{\infty} f(t) \left[ -\ln f(t) \right] dt = 2 - \ln \beta$
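As a quick numerical sanity check, $H(f)$ can be estimated by Monte Carlo and compared with $2 - \ln\beta$; the sketch below uses the inverse-cdf generator described in Section 5, with the sample size and seed chosen arbitrarily.

# Hedged sketch: Monte Carlo check of H(f) = 2 - ln(beta) for beta = 4.
set.seed(1)
beta <- 4
m <- runif(1e6)
u <- (m / (1 - m))^(1 / beta)   # inverse-cdf draw from the log-logistic
logf <- log(beta) + (beta - 1) * log(u) - 2 * log(1 + u^beta)
mean(-logf)                     # Monte Carlo estimate of the entropy
2 - log(beta)                   # closed form, approximately 0.6137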

3. Parameter Estimation

3.1. Maximum Likelihood Estimation

Consider the progressive type-I interval censored data $Y_i = (X_i, R_i, t_i)$, $i = 1, 2, \ldots, m$, from a continuous log-logistic distribution with cdf $F(t; \beta)$. The likelihood function is defined as (see [13]):
$L(Y \mid \beta) \propto [F(t_1; \beta)]^{X_1} [1 - F(t_1; \beta)]^{R_1} \times [F(t_2; \beta) - F(t_1; \beta)]^{X_2} [1 - F(t_2; \beta)]^{R_2} \times \cdots \times [F(t_m; \beta) - F(t_{m-1}; \beta)]^{X_m} [1 - F(t_m; \beta)]^{R_m} = \prod_{i=1}^{m} [F(t_i; \beta) - F(t_{i-1}; \beta)]^{X_i} [1 - F(t_i; \beta)]^{R_i}$
where $t_0 = 0$ and β is the unknown parameter. For the log-logistic distribution given above, the likelihood function becomes:
$L(\beta) = \prod_{i=1}^{m} [F_T(t_i) - F_T(t_{i-1})]^{X_i} [1 - F_T(t_i)]^{R_i} = \prod_{i=1}^{m} \left[ (1 + t_{i-1}^\beta)^{-1} - (1 + t_i^\beta)^{-1} \right]^{X_i} (1 + t_i^\beta)^{-R_i}$
The calculation of the maximum likelihood estimator for a single-parameter function is given in [13]. If $X_1 = n$, then $\ln L(\beta)$ is maximized as β approaches zero. The log-likelihood function is:
$\ln L(\beta) = \sum_{i=1}^{m} \left[ X_i \ln\left[ (1 + t_{i-1}^\beta)^{-1} - (1 + t_i^\beta)^{-1} \right] - R_i \ln(1 + t_i^\beta) \right]$
Setting the derivative of the log-likelihood with respect to β to zero, the maximum likelihood estimate (MLE) of β is the solution of Equation (7):
$\dfrac{d}{d\beta} \ln L(\beta) = \sum_{i=1}^{m} \left[ X_i \dfrac{(1 + t_i^\beta)^{-2} t_i^\beta \ln t_i - (1 + t_{i-1}^\beta)^{-2} t_{i-1}^\beta \ln t_{i-1}}{(1 + t_{i-1}^\beta)^{-1} - (1 + t_i^\beta)^{-1}} - R_i \dfrac{t_i^\beta \ln t_i}{1 + t_i^\beta} \right] = 0$
Under fairly weak regularity conditions, the maximum likelihood estimator is consistent, and under slightly stronger assumptions, it is asymptotically normal. The work in [14] discussed the asymptotic properties of the maximum likelihood estimator under left truncated and right censored data and proved that the MLE is asymptotically normal. The Fisher information is $I(\beta) = -E\left[ \frac{d^2 \ln L(\beta)}{d\beta^2} \right]$. From (7), we have:
$\dfrac{d^2}{d\beta^2} \ln L(\beta) = \sum_{i=1}^{m} \left[ X_i \dfrac{A'B - AB'}{\left[ (1 + t_{i-1}^\beta)^{-1} - (1 + t_i^\beta)^{-1} \right]^2} - R_i \dfrac{t_i^\beta \ln^2 t_i}{(1 + t_i^\beta)^2} \right]$
where:
$A = (1 + t_i^\beta)^{-2} t_i^\beta \ln t_i - (1 + t_{i-1}^\beta)^{-2} t_{i-1}^\beta \ln t_{i-1}$
$A' = \ln^2 t_i \, (t_i^\beta - t_i^{2\beta})(1 + t_i^\beta)^{-3} - \ln^2 t_{i-1} \, (t_{i-1}^\beta - t_{i-1}^{2\beta})(1 + t_{i-1}^\beta)^{-3}$
$B = (1 + t_{i-1}^\beta)^{-1} - (1 + t_i^\beta)^{-1}$
$B' = (1 + t_i^\beta)^{-2} t_i^\beta \ln t_i - (1 + t_{i-1}^\beta)^{-2} t_{i-1}^\beta \ln t_{i-1}$
$C = \dfrac{t_i^\beta \ln t_i}{1 + t_i^\beta}, \quad C' = \dfrac{t_i^\beta \ln^2 t_i}{(1 + t_i^\beta)^2}$
Starting from the initial n items put on the life test at time zero and the cdf of the log-logistic distribution, the following conclusions can be drawn:
$X_i \mid X_{i-1}, \ldots, X_1; R_{i-1}, \ldots, R_1 \sim \mathrm{Binomial}\left( n - \sum_{j=1}^{i-1} (X_j + R_j), \; \dfrac{F(t_i; \beta) - F(t_{i-1}; \beta)}{1 - F(t_{i-1}; \beta)} \right)$
Let $q_i = \dfrac{F(t_i; \beta) - F(t_{i-1}; \beta)}{1 - F(t_{i-1}; \beta)} = 1 - \dfrac{1 + t_{i-1}^\beta}{1 + t_i^\beta}$, $i = 1, \ldots, m$. From (10), the expected progress of the test is as follows: of the n items initially on test, on average $n(1 - q_1)$ survive the first interval and $n(1 - q_1)(1 - p_1) = n_1$ remain after censoring at $t_1$; of these, $n_1(1 - q_2)(1 - p_2) = n_2$ remain after $t_2$; and in general $n \prod_{j=1}^{i-1} (1 - q_j)(1 - p_j)$ items are on test at the start of the i-th interval. Hence:
$E(X_i) = E\left[ E(X_i \mid X_{i-1}, \ldots, X_1, R_{i-1}, \ldots, R_1) \right] = n q_i \prod_{j=1}^{i-1} (1 - q_j)(1 - p_j), \quad i = 1, \ldots, m$
$E(R_i) = n \left[ \prod_{j=1}^{i-1} (1 - q_j)(1 - p_j) \right] (1 - q_i) p_i$
The reduced Fisher information can be obtained from the above (8), (11) and (12):
$I(\beta) = -E\left[ \dfrac{d^2 \ln L(\beta)}{d\beta^2} \right] = -n \sum_{i=1}^{m} \left[ q_i \dfrac{A'B - AB'}{B^2} - p_i (1 - q_i) C' \right] \prod_{j=1}^{i-1} (1 - q_j)(1 - p_j)$
where $A, A', B, B'$ and $C'$ are given in (9). Therefore, the MLE of β is asymptotically normal:
$(\tilde{\beta} - \beta) \xrightarrow[m \to \infty]{d} N\left(0, I^{-1}(\beta)\right)$
Under progressive type-I censoring, the sample is observed between consecutive specified inspection times. Consider the special case of equal interval durations: letting $t_{i-1} = (i-1)t$ and $t_i = it$, monitoring and removal take place in cycles of length t. Then (7) becomes:
$\dfrac{d}{d\beta} \ln L(\beta) = \sum_{i=1}^{m} \left[ X_i \dfrac{(1 + (it)^\beta)^{-2} (it)^\beta \ln(it) - (1 + ((i-1)t)^\beta)^{-2} ((i-1)t)^\beta \ln((i-1)t)}{(1 + ((i-1)t)^\beta)^{-1} - (1 + (it)^\beta)^{-1}} - R_i \dfrac{(it)^\beta \ln(it)}{1 + (it)^\beta} \right] = 0$
The MLE $\tilde{\beta}$ of β is obtained from the above equation. Because the equation has no closed-form solution, the Newton-Raphson method, an iterative numerical search, is used to obtain the MLE.
Let $g(\beta) = \frac{d}{d\beta} \ln L(\beta)$. The iteration is defined as $\beta_i = \beta_{i-1} - \frac{g(\beta_{i-1})}{g'(\beta_{i-1})}$, where $\beta_i$ is the estimate of β at the i-th iteration and $\beta_0$ is the initial estimate. The iteration stops when $|\beta_i - \beta_{i-1}| < 10^{-4}$. If, in addition, the same censoring percentage p is used at each inspection time, (13) becomes:
$I(\beta) = -E\left[ \dfrac{d^2 \ln L(\beta)}{d\beta^2} \right] = -n \sum_{i=1}^{m} \left[ q_i \dfrac{A'B - AB'}{B^2} - p(1 - q_i) C' \right] (1 - p)^{i-1} \prod_{j=1}^{i-1} (1 - q_j)$
Then, the asymptotic variance of $\tilde{\beta}$ is $V(\tilde{\beta}) = I^{-1}(\beta)$. By the invariance of maximum likelihood estimation, $\tilde{H}_f$ is the MLE of $H_f$. Since $(\tilde{\beta} - \beta) \xrightarrow[m \to \infty]{d} N(0, I^{-1}(\beta))$, it follows that:
$\tilde{H}_f \xrightarrow{d} N\left(H_f, V(\tilde{H}_f)\right)$
The MLE $\tilde{H}_f = 2 - \ln\tilde{\beta}$ is an asymptotically unbiased estimator of $H_f$. By the delta method, with:
$g(\beta) = \dfrac{dH_f}{d\beta} = -\dfrac{1}{\beta}$
the estimate of the asymptotic variance of $\tilde{H}_f$ is:
$V(\tilde{H}_f) = I^{-1}(\beta) [g(\beta)]^2 = \dfrac{1}{\beta^2} I^{-1}(\beta) = \gamma(\beta)$
An explicit solution for the shape parameter β cannot be obtained from this equation, and when an approximate solution is found numerically, its uniqueness cannot be proven, so the MLE of the shape parameter β is not easy to obtain. We therefore also apply the Expectation-Maximization (EM) algorithm to estimate the parameter.
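For concreteness, the following R sketch implements the Newton-Raphson search described above. The function names, the numerically differentiated score, and the use of the observed information as a stand-in for the expected information (13) are our own illustrative choices, a sketch rather than the authors' implementation.

# Hedged sketch: Newton-Raphson MLE of beta from interval censored counts,
# with the score approximated by central differences of the log-likelihood.
loglik <- function(beta, X, R, tim) {
  Fll <- function(u) 1 - 1 / (1 + u^beta)           # log-logistic cdf
  lo <- c(0, head(tim, -1))
  sum(X * log(Fll(tim) - Fll(lo)) + R * log(1 - Fll(tim)))
}
mle_beta <- function(X, R, tim, beta0 = 1, tol = 1e-4, h = 1e-6) {
  b <- beta0
  repeat {
    g  <- (loglik(b + h, X, R, tim) - loglik(b - h, X, R, tim)) / (2 * h)
    g2 <- (loglik(b + h, X, R, tim) - 2 * loglik(b, X, R, tim) +
           loglik(b - h, X, R, tim)) / h^2          # approximates g'(beta)
    b_new <- b - g / g2                             # Newton step
    if (abs(b_new - b) < tol)
      return(list(beta = b_new, inv_I = -1 / g2))   # observed-information proxy
    b <- b_new
  }
}
# e.g., with the Example 1 data of Section 5:
# fit <- mle_beta(X = c(0, 2, 6, 11, 13, 8, 8), R = c(3, 3, 2, 2, 1, 0, 1),
#                 tim = seq(0.2, 1.4, by = 0.2), beta0 = 4)
# gamma <- fit$inv_I / fit$beta^2   # plug-in variance of H_f, as in Section 3.1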

3.2. EM Algorithm

The EM algorithm is an iterative method mainly used to find maximum likelihood estimates, here formulated through posterior densities. Each iteration includes two steps: the E-step (taking an expectation) and the M-step (maximizing). The posterior density function of β obtained from the observed data Y is denoted by $f(\beta \mid Y)$ and called the observed posterior density; $f(\beta \mid Y, Z)$ denotes the posterior density of β obtained after adding the data Z and is called the augmented posterior density; and $f(Z \mid \beta, Y)$ denotes the conditional density of the latent data Z given β and the observed data Y. The EM algorithm proceeds as follows. Let $\beta^{(i)}$ be the estimate of β at the beginning of the (i+1)-th iteration; the two steps of the (i+1)-th iteration are:
  • Step E: Compute the conditional expectation of $\ln f(\beta \mid Y, Z)$ with respect to Z, i.e.,
    $Q(\beta \mid \beta^{(i)}, Y) = E\left[ \ln f(\beta \mid Y, Z) \mid \beta^{(i)}, Y \right]$
  • Step M: Maximize $Q(\beta \mid \beta^{(i)}, Y)$, that is, find a point $\beta^{(i+1)}$ such that $Q(\beta^{(i+1)} \mid \beta^{(i)}, Y) = \max_\beta Q(\beta \mid \beta^{(i)}, Y)$.
This completes one iteration, $\beta^{(i)} \to \beta^{(i+1)}$. The E-step and M-step are iterated until $|\beta^{(i+1)} - \beta^{(i)}|$ or $|Q(\beta^{(i+1)} \mid \beta^{(i)}, Y) - Q(\beta^{(i)} \mid \beta^{(i)}, Y)|$ is sufficiently small. For the progressive type-I interval censoring test above, record the observed results $Y = \{(X_j, R_j, t_j), j = 1, 2, \ldots, k\}$, and let $Z_j$ denote a failure time falling into the interval $(t_{j-1}, t_j]$. By the conditional density formula, the conditional density function of $Z_j$ is:
$f_j(z \mid \beta^{(i)}, Y) = \dfrac{\beta^{(i)} z^{\beta^{(i)} - 1} (1 + z^{\beta^{(i)}})^{-2}}{\int_{t_{j-1}}^{t_j} \beta^{(i)} z^{\beta^{(i)} - 1} (1 + z^{\beta^{(i)}})^{-2} \, dz} = \dfrac{\beta^{(i)} z^{\beta^{(i)} - 1} (1 + z^{\beta^{(i)}})^{-2}}{(1 + t_{j-1}^{\beta^{(i)}})^{-1} - (1 + t_j^{\beta^{(i)}})^{-1}}$
  • Step E: From the form of the likelihood function:
    $f(\beta \mid Y, Z) = \prod_{j=1}^{k} \beta^{X_j} Z_j^{(\beta - 1) X_j} (1 + Z_j^\beta)^{-2 X_j} (1 + t_j^\beta)^{-R_j}$
    $\ln f(\beta \mid Y, Z) = \sum_{j=1}^{k} \left( X_j \ln\beta + (\beta - 1) X_j \ln Z_j - 2 X_j \ln(1 + Z_j^\beta) - R_j \ln(1 + t_j^\beta) \right)$
    $Q(\beta \mid \beta^{(i)}, Y) = E\left[ \ln f(\beta \mid Y, Z) \mid \beta^{(i)}, Y \right] = \sum_{j=1}^{k} \left( X_j \ln\beta + (\beta - 1) X_j E(\ln Z_j) - 2 X_j E\left(\ln(1 + Z_j^\beta)\right) - R_j \ln(1 + t_j^\beta) \right)$
  • Step M: Differentiate $Q(\beta \mid \beta^{(i)}, Y)$ with respect to β to find the maximum point $\beta^{(i+1)}$:
    $\dfrac{\partial Q}{\partial \beta} = \sum_{j=1}^{k} \left[ \dfrac{X_j}{\beta} + X_j E(\ln Z_j) - R_j \dfrac{t_j^\beta \ln t_j}{1 + t_j^\beta} \right] = \dfrac{1}{\beta} \sum_{j=1}^{k} X_j + \sum_{j=1}^{k} \left[ X_j E(\ln Z_j) - R_j \dfrac{t_j^\beta \ln t_j}{1 + t_j^\beta} \right]$
    Setting $\frac{\partial Q}{\partial \beta} = 0$, β can be calculated as:
    $\beta = -\dfrac{\sum_{j=1}^{k} X_j}{\sum_{j=1}^{k} \left[ X_j E(\ln Z_j) - R_j \dfrac{t_j^\beta \ln t_j}{1 + t_j^\beta} \right]}$
    where:
    $E(\ln Z_j) = \left[ (1 + t_{j-1}^{\beta^{(i)}})^{-1} - (1 + t_j^{\beta^{(i)}})^{-1} \right]^{-1} \int_{t_{j-1}}^{t_j} \beta^{(i)} z^{\beta^{(i)} - 1} \ln z \, (1 + z^{\beta^{(i)}})^{-2} \, dz = A_j$
    Let $B_j = \dfrac{t_j^\beta \ln t_j}{1 + t_j^\beta}$ evaluated at the current iterate, and replace β on the left-hand side with $\beta^{(i+1)}$; then $\beta^{(i+1)}$ is defined as:
    $\beta^{(i+1)} = -\dfrac{\sum_{j=1}^{k} X_j}{\sum_{j=1}^{k} \left[ X_j A_j - R_j B_j \right]}$
One iteration $\beta^{(i)} \to \beta^{(i+1)}$ is completed by Equation (26), and the estimate of β, denoted $\tilde{\beta}$, is obtained by applying Equation (26) repeatedly. The EM algorithm is an effective way to find parameter estimates; its biggest advantage is the stability of the estimates, especially for censored samples. The following two lemmas illustrate the convergence of the EM algorithm.
Lemma 1.
After each iteration, the EM algorithm increases the observed posterior density value, i.e., $f(\beta^{(i+1)} \mid Y) \geq f(\beta^{(i)} \mid Y)$.
Lemma 2.
1. If $f(\beta \mid Y)$ has an upper bound, then $L(\beta^{(i)} \mid Y)$ converges to some $L^*$.
2. If $Q(\beta \mid \varphi)$ is continuous in both β and φ, then the limit $\beta^*$ of the sequence $\beta^{(i)}$ produced by the EM algorithm is a stationary point of L under very general conditions on L.
Proofs of Lemmas 1 and 2 can be found in [15]. A small numerical sketch of the iteration follows.
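The R sketch below runs the EM iteration with the M-step carried out by numerically maximizing $Q(\beta \mid \beta^{(i)}, Y)$ over a bounded interval (via optimize) rather than through the closed-form update (26); the expectations are computed with integrate, and the function name and search bounds are our own illustrative assumptions.

# Hedged sketch: EM iteration for beta, maximizing Q numerically.
em_beta <- function(X, R, tim, beta0 = 1, tol = 1e-4) {
  lo <- c(0, head(tim, -1))
  b <- beta0
  repeat {
    # density of Z_j on (t_{j-1}, t_j] under the current iterate b
    fj <- function(z, l, u) b * z^(b - 1) / (1 + z^b)^2 / (1 / (1 + l^b) - 1 / (1 + u^b))
    Q <- function(beta) {
      sum(mapply(function(x, r, l, u) {
        # the singularity of log(z) at z = 0 is integrable for b > 0
        ElnZ  <- integrate(function(z) log(z) * fj(z, l, u), l, u)$value
        Eln1Z <- integrate(function(z) log(1 + z^beta) * fj(z, l, u), l, u)$value
        x * log(beta) + (beta - 1) * x * ElnZ - 2 * x * Eln1Z - r * log(1 + u^beta)
      }, X, R, lo, tim))
    }
    b_new <- optimize(Q, c(0.1, 20), maximum = TRUE)$maximum  # M-step
    if (abs(b_new - b) < tol) return(b_new)
    b <- b_new
  }
}

In line with Lemma 2, this iteration should converge to a stationary point of the observed-data likelihood.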

4. Hypothesis Testing Algorithm

In recent years, interdisciplinary research on "information entropy" has developed rapidly, penetrating many disciplines such as computer science and technology, systems science and geography, and generating research topics across multiple interdisciplinary subjects, such as:
(i)
The estimation of the (conditional) entropy or mutual information can be applied to independent component analysis in order to measure the dependency among random variables (see [16]).
(ii)
A discretization algorithm for continuous features and attributes in rough set theory can select cut points on the basis of information entropy, which is defined for every candidate cut point and treated as a measure of importance (see [17]).
(iii)
Information entropy can be applied in a multi-population genetic algorithm to narrow the search space. By defining the probabilities that the best solution appears in each population, information entropy is introduced into the evolution process, which enhances the ability of the evolutionary algorithm to search for optimal solutions (see [18]).
In summary, any process that decreases or increases the affirmation, organization, regularity or order of a set of random events can be measured on the unified scale of the change in information entropy, so research on information entropy has become especially important. From the perspective of information dissemination, information entropy can represent the value of information; that is, it gives a standard for measuring the value of information.
In this section, we propose an information entropy hypothesis testing algorithm based on the log-logistic distribution, assuming that the data are progressive type-I interval censored. Let h be the critical amount of information required to reduce the source input to a certain standard.
The physical meaning of entropy is the average uncertainty of the source before it is output. The hypothesis test on entropy is used to check whether the obtained source data meet the requirements. If the test is passed, no additional data are needed and the input of the source can be reduced, further reducing the human and material expenditure on data collection.
The null and alternative hypotheses are given as:
$H_0: H(f) \leq h \quad \text{vs.} \quad H_\beta: H(f) > h$
Adopting the MLE of $H(f)$ as the test statistic, the critical value $H_0$ of the right-tailed hypothesis test can be obtained from:
$\sup P(\tilde{H}_f > H_0) \leq \alpha$
$\Rightarrow P\left( \dfrac{\tilde{H}_f - H_f}{\sqrt{\gamma(\tilde{\beta})}} > \dfrac{H_0 - H_f}{\sqrt{\gamma(\tilde{\beta})}} \;\Big|\; H_f \leq h \right) \leq \alpha$
$\Rightarrow P\left( \dfrac{\tilde{H}_f - H_f}{\sqrt{\gamma(\tilde{\beta})}} > \dfrac{H_0 - H_f}{\sqrt{\gamma(\tilde{\beta})}} \;\Big|\; H_f = h \right) = \alpha$
$\Rightarrow P\left( \dfrac{\tilde{H}_f - h}{\sqrt{\gamma(\tilde{\beta})}} > \dfrac{H_0 - h}{\sqrt{\gamma(\tilde{\beta})}} \right) = \alpha$
$\Rightarrow P\left( \left( \dfrac{\tilde{H}_f - h}{\sqrt{\gamma(\tilde{\beta})}} \right)^2 > \left( \dfrac{H_0 - h}{\sqrt{\gamma(\tilde{\beta})}} \right)^2 \right) = \alpha$
$\Rightarrow P\left( \left( \dfrac{\tilde{H}_f - h}{\sqrt{\gamma(\tilde{\beta})}} \right)^2 \leq \left( \dfrac{H_0 - h}{\sqrt{\gamma(\tilde{\beta})}} \right)^2 \right) = 1 - \alpha$
where $\gamma(\tilde{\beta}) = I^{-1}(\tilde{\beta}) \left( g(\tilde{\beta}) \right)^2$ is the estimated variance of $\tilde{H}_f$, and under $H_f = h$, $\left( \frac{\tilde{H}_f - h}{\sqrt{\gamma(\tilde{\beta})}} \right)^2 \xrightarrow{D} \chi^2_{(1)}$. From (27), using the function $\mathrm{CHIINV}(1 - \alpha)$, which denotes the lower $100(1 - \alpha)$-th percentile of $\chi^2_{(1)}$:
$\left( \dfrac{H_0 - h}{\sqrt{\gamma(\tilde{\beta})}} \right)^2 = \mathrm{CHIINV}(1 - \alpha)$
can be obtained. Therefore, the critical value can be expressed as:
$H_0 = h + \sqrt{\gamma(\tilde{\beta}) \, \mathrm{CHIINV}(1 - \alpha)}$
where h and α denote the target value and the chosen significance level, respectively. To determine whether the obtained source data meet the requirements, managers can perform this one-sided hypothesis test on the information entropy; the testing algorithm for $H_f$ is summarized later in this section. The power function $w(h_1)$ of the test at a point $H_f = h_1 > h_0$ is:
$w(h_1) = P\left( \tilde{H}_f > H_0 \mid \beta_1 = e^{2 - h_1} \right)$
$= P\left( 2 - \ln\tilde{\beta} > h_0 + \sqrt{\gamma(\tilde{\beta}_0) \, \mathrm{CHIINV}(1 - \alpha)} \;\Big|\; \beta_0 = e^{2 - h_0}, \beta_1 = e^{2 - h_1} \right)$
$= P\left( \tilde{\beta} < e^{2 - h_0 - \sqrt{\gamma(\tilde{\beta}_0) \, \mathrm{CHIINV}(1 - \alpha)}} \;\Big|\; \beta_0 = e^{2 - h_0}, \beta_1 = e^{2 - h_1} \right)$
$= P\left( \dfrac{\tilde{\beta} - \beta_1}{\sqrt{I^{-1}(\beta_1)}} < \dfrac{e^{2 - h_0 - \sqrt{\gamma(\tilde{\beta}_0) \, \mathrm{CHIINV}(1 - \alpha)}} - \beta_1}{\sqrt{I^{-1}(\beta_1)}} \right)$
$= \Phi\left( \dfrac{e^{2 - h_0 - \sqrt{\gamma(\tilde{\beta}_0) \, \mathrm{CHIINV}(1 - \alpha)}} - \beta_1}{\sqrt{I^{-1}(\beta_1)}} \right)$
where $\Phi(\cdot)$ is the cdf of the standard normal distribution. The power function values of $w(h_1)$ for testing $H_0: H_f \leq 0.8$ are summarized in Table 1, Table 2 and Table 3 at $\alpha = 0.01, 0.05, 0.1$, respectively, for $h_1 = 0.65(0.15)2$, $m = 5(1)8$, $n = 60(20)100$ and $p = 0.05(0.025)0.1$, with $T = 0.5$. Figure 1, Figure 2, Figure 3 and Figure 4 show the power of the proposed algorithm for some common situations. From Table 1, Table 2 and Table 3, we conclude the following (a small computational sketch of $w(h_1)$ is given after this list):
  • For m = 5, p = 0.05 and α = 0.05, the power function $w(h_1)$ shows a non-decreasing trend with respect to n, as shown in Figure 1 (any combination of m, p and α shows the same trend).
  • For fixed n = 60, p = 0.05 and α = 0.05, the power $w(h_1)$ is a non-increasing function of m, as shown in Figure 2 (any combination of n, α and p shows a similar trend).
  • For fixed n = 60, α = 0.05 and m = 5, the power function $w(h_1)$ shows a non-increasing trend with respect to the deletion percentage p, as shown in Figure 3 (any combination of n, m and α shows a similar trend).
  • From Figure 1, Figure 2 and Figure 3, the power function $w(h_1)$ increases as $h_1$ increases for any combination of n, m, p and α.
  • From Figure 4, for any combination of n, m and p, the power function $w(h_1)$ shows a non-decreasing trend with respect to α.
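The closed form above is straightforward to evaluate numerically. In the hedged R sketch below, gamma0 (the plug-in value of $\gamma(\tilde{\beta}_0)$) and the Fisher information function Ifun are assumed to be supplied by the user, since both depend on the censoring design; the function name power_w is ours.

# Hedged sketch: power function w(h1) of the entropy test.
power_w <- function(h1, h0, gamma0, Ifun, alpha) {
  beta1 <- exp(2 - h1)                               # from H_f = 2 - ln(beta)
  crit  <- exp(2 - h0 - sqrt(gamma0 * qchisq(1 - alpha, 1)))
  pnorm((crit - beta1) / sqrt(1 / Ifun(beta1)))      # Phi(...)
}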
Algorithm:
Step 1
Observe the progressive type-I interval censored data $X_1, \ldots, X_m$ at the pre-scheduled times $t_1, \ldots, t_m$ with censoring scheme $R_1, \ldots, R_m$ from the log-logistic distribution.
Step 2
In practical applications, h represents the critical amount of information required to reduce the source input to a certain standard. The null hypothesis $H_0: H_f \leq h$ and the alternative hypothesis $H_\beta: H_f > h$ are then constructed.
Step 3
Calculate the maximum likelihood estimate $\tilde{\beta}$ of β and the value of the test statistic $\tilde{H}_f = 2 - \ln\tilde{\beta}$.
Step 4
Calculate the critical value for the significance level α as:
$H_0 = h + \sqrt{\gamma(\tilde{\beta}) \, \mathrm{CHIINV}(1 - \alpha)}$
Step 5
Compare $\tilde{H}_f$ with $H_0$; if $\tilde{H}_f > H_0$, reject the null hypothesis. This determines whether the obtained source data meet the requirements.
Based on the proposed algorithm, one can judge whether the amount of information contained in the data meets the requirements. Two numerical examples are given in the next section, and a small sketch of the computation follows.
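As a hedged illustration, Steps 2 through 5 can be wrapped as below; the function name entropy_test is our own, and the inverse Fisher information inv_I (for instance, the observed-information proxy from the sketch in Section 3.1) is assumed to be supplied.

# Hedged sketch: one-sided entropy test of H_0: H_f <= h.
entropy_test <- function(beta_hat, inv_I, h, alpha = 0.05) {
  H_stat <- 2 - log(beta_hat)                        # test statistic
  gamma  <- inv_I / beta_hat^2                       # Var of the statistic
  H_crit <- h + sqrt(gamma * qchisq(1 - alpha, 1))   # critical value
  list(statistic = H_stat, critical = H_crit, reject = H_stat > H_crit)
}
# e.g. entropy_test(4.725352, inv_I, h = 0.6) reproduces Steps 3-5 of
# Example 1 below, given the corresponding inv_I for that design.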

5. Monte Carlo Simulation

To demonstrate and illustrate the above test algorithm, two datasets are generated by Monte Carlo simulation.
Monte Carlo simulation is a computational method that realistically simulates actual physical processes based on probability and statistical theory. It is widely used in many fields, such as physical science, the estimation of microeconomic models, engineering and data analysis (see, e.g., the recent works [19,20,21]).
Example 1.
The R software is used to generate n = 60 failure time values that obey the log-logistic distribution with β = 4. The log-logistic variates are generated as follows: if M is a random number uniformly distributed on (0, 1), then $U = \left( \frac{M}{1 - M} \right)^{1/\beta}$ follows a log-logistic distribution.
0.3118375 , 0.3715598 , 0.4876330 , 0.5205967 , 0.5424458 , 0.5445681 , 0.5819012
0.5882983 , 0.6189600 , 0.6723682 , 0.6790795 , 0.6936371 , 0.7221577 , 0.7227560
0.7424159 , 0.7487004 , 0.7490484 , 0.7517022 , 0.7750581 , 0.8431784 , 0.8434412
0.8686997 , 0.8835853 , 0.8855607 , 0.8898366 , 0.9227385 , 0.9633077 , 0.9638518
0.9643345 , 0.9701875 , 0.9810045 , 0.9890877 , 1.0151853 , 1.0301795 , 1.0597236
1.0836868 , 1.0853866 , 1.1076366 , 1.1599594 , 1.1661035 , 1.2076103 , 1.2394994
1.2654396 , 1.2813285 , 1.3142395 , 1.3628084 , 1.3895883 , 1.3966131 , 1.4486672
1.4501228 , 1.4574971 , 1.5251246 , 1.6315132 , 1.6351908 , 1.7744221 , 2.0102572
2.1783054 , 2.5374440 , 3.1885393 , 6.4965022
According to these data, we can obtain a progressive type-I interval censored sample. The number of inspection times is m = 7; the inspection interval is t = 0.2 (h); and the predetermined deletion percentages of the surviving units are $(p_1, p_2, p_3, p_4, p_5, p_6, p_7) = (0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 1)$. Now, we can execute the test algorithm for $H_f$.
Step 1
Observe the progressive type-I interval censored data $(0, 2, 6, 11, 13, 8, 8)$ at the pre-set times $(t_1, t_2, t_3, t_4, t_5, t_6, t_7) = (0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4)$ with the censoring scheme $(R_1, R_2, R_3, R_4, R_5, R_6, R_7) = (3, 3, 2, 2, 1, 0, 1)$.
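For readers who wish to reproduce this step, the sketch below shows one way to turn the raw failure times listed above into the censored counts; the function name progressify and the random withdrawal of survivors are our own illustrative choices (so regenerated $R_i$ values may differ slightly from those listed).

# Hedged sketch: group raw failure times into progressive type-I interval
# censored counts, withdrawing floor(p_i * S_i) random survivors at each t_i.
progressify <- function(times, tim, p) {
  lo <- c(0, head(tim, -1))
  X <- R <- integer(length(tim))
  active <- times
  for (i in seq_along(tim)) {
    X[i] <- sum(active > lo[i] & active <= tim[i])  # failures in interval
    surv <- active[active > tim[i]]                 # survivors S_i
    R[i] <- floor(p[i] * length(surv))
    if (R[i] > 0) surv <- surv[-sample(seq_along(surv), R[i])]
    active <- surv
  }
  data.frame(t = tim, X = X, R = R)
}
# e.g. progressify(times, seq(0.2, 1.4, by = 0.2), c(rep(0.05, 6), 1))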
Step 2
Let the critical amount of information required be h = 0.6, and propose the null hypothesis $H_0: H_f \leq 0.6$ and the alternative hypothesis $H_\beta: H_f > 0.6$.
Step 3
Calculate the maximum likelihood estimate $\tilde{\beta} = 4.725352$ and the value of the test statistic $\tilde{H}_f = 2 - \ln\tilde{\beta} = 0.447058$.
Step 4
The critical value for the significance level α = 0.05 is: $H_0 = h + \sqrt{\gamma(\tilde{\beta}) \, \mathrm{CHIINV}(1 - \alpha)} = 1.212735$.
Step 5
Since $\tilde{H}_f = 0.447058 < H_0 = 1.212735$, we accept the null hypothesis $H_0: H_f \leq 0.6$. Thus, the obtained source data do not meet the requirements.
Example 2.
The R software is used to generate n = 60 failure time values that obey the log-logistic distribution with β = 6.
0.2517733 , 0.5072381 , 0.5850387 , 0.6208714 , 0.6625793 , 0.6954254 , 0.7241406
0.7372249 , 0.7407151 , 0.7610365 , 0.7614509 , 0.7616830 , 0.7782394 , 0.8252179
0.8414707 , 0.8571038 , 0.8703908 , 0.8931475 , 0.9007681 , 0.9008163 , 0.9077517
0.9100598 , 0.9200725 , 0.9301961 , 0.9513207 , 0.9545003 , 0.9609950 , 0.9635523
0.9856276 , 0.9955788 , 1.0130551 , 1.0324070 , 1.0612042 , 1.0750621 , 1.0762970
1.0783691 , 1.1241678 , 1.1481179 , 1.1530923 , 1.1605018 , 1.1649218 , 1.1785074
1.1838348 , 1.1876701 , 1.2244942 , 1.2383936 , 1.2795402 , 1.2886820 , 1.3056510
1.3163001 , 1.3368304 , 1.3685482 , 1.4333597 , 1.4493346 , 1.4811992 , 1.8535557
1.9116135 , 2.1423352 , 2.2983347 , 2.4595617
According to these data, we can obtain a progressive type-I interval censored sample. The number of inspection times is m = 8; the inspection interval is t = 0.2 (h); and the predetermined deletion percentages of the surviving units are $(p_1, p_2, p_3, p_4, p_5, p_6, p_7, p_8) = (0.02, 0.02, 0.02, 0.02, 0.02, 0.02, 0.02, 1)$. Now, we can execute the test algorithm for $H_f$.
Step 1
Record the progressive type-I interval censored sample $(0, 1, 2, 10, 17, 14, 8, 3)$ at the pre-set times $(t_1, t_2, t_3, t_4, t_5, t_6, t_7, t_8) = (0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6)$ with the censoring scheme $(R_1, R_2, \ldots, R_8) = (1, 1, 1, 1, 1, 0, 0, 0)$.
Step 2
Let the critical amount of information required be h = 0.1; we then propose the null hypothesis $H_0: H_f \leq 0.1$ and the alternative hypothesis $H_\beta: H_f > 0.1$.
Step 3
Calculate the maximum likelihood estimate $\tilde{\beta} = 6.530128$ and the value of the test statistic $\tilde{H}_f = 2 - \ln\tilde{\beta} = 0.123574$.
Step 4
The critical value for the significance level α = 0.05 is: $H_0 = h + \sqrt{\gamma(\tilde{\beta}) \, \mathrm{CHIINV}(1 - \alpha)} = 0.609365$.
Step 5
Since $\tilde{H}_f = 0.123574 < H_0 = 0.609365$, we accept the null hypothesis $H_0: H_f \leq 0.1$. Thus, the obtained source data do not meet the requirements.

6. Conclusions

In practice, for reasons such as time and cost, usually only incomplete data can be collected and observed. We consider data arising from the progressive type-I interval censoring scheme and use the MLE method and the EM algorithm to estimate the parameter and the information entropy. Combined with hypothesis testing, we establish an information entropy testing algorithm, using information entropy to measure the amount of uncertainty in an object in the sense of information theory. Extensive simulations confirm that the algorithm is feasible and effective. This information entropy algorithm based on censored data can be applied in interdisciplinary fields such as systems science, geography, and computer science and technology.

Author Contributions

Methodology and Writing, Y.D.; Simulation, Y.G.; Supervision, W.G.

Funding

This work was supported by Project 201910004001 of the National Training Program of Innovation and Entrepreneurship for Undergraduates.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shah, B.K.; Dave, P.H. A note on log-logistic distribution. J. MS Univ. Baroda 1963, 12, 15–20.
  2. Tadikamalla, P.R.; Johnson, N.L. Tables to facilitate fitting LU distributions. Commun. Stat. Simul. Comput. 1982, 11, 249–271.
  3. O'Quigley, J.; Struthers, L. Survival models based upon the logistic and log-logistic distributions. Comput. Progr. Biomed. 1982, 15, 3–11.
  4. Balakrishnan, N.; Malik, H.J. Moments of order statistics from truncated log-logistic distribution. J. Stat. Plan. Inference 1987, 17, 251–267.
  5. Kantam, R.R.L.; Rosaiah, K.; Rao, G.S. Acceptance sampling based on life tests: Log-logistic model. J. Appl. Stat. 2001, 28, 121–128.
  6. Chen, D.G.; Lio, Y.L.; Jiang, N. Lower confidence limits on the generalized exponential distribution percentiles under progressive type-I interval censoring. Commun. Stat. Simul. Comput. 2013, 42, 2106–2117.
  7. Singh, S.; Tripathi, Y.M. Estimating the parameters of an inverse Weibull distribution under progressive type-I interval censoring. Stat. Pap. 2016, 59, 21–56.
  8. Wu, S.F.; Lin, Y.T.; Chang, W.J.; Chang, C.W.; Lin, C. A computational algorithm for the evaluation of the lifetime performance index of products with Rayleigh distribution under progressive type-I interval censoring. J. Comput. Appl. Math. 2017, 328, 508–519.
  9. Wu, S.J.; Huang, S.R. Planning progressive type-I interval censoring life tests with competing risks. IEEE Trans. Reliab. 2014, 63, 511–522.
  10. Basak, I.; Balakrishnan, N. Prediction of censored exponential lifetimes in a simple step-stress model under progressive type-II censoring. Comput. Stat. 2017, 32, 1–23.
  11. El-Sagheer, R.M. Inferences in constant-partially accelerated life tests based on progressive type-II censoring. Bull. Malays. Math. Sci. Soc. 2016, 41, 609–626.
  12. Singh, S.K.; Singh, U.; Kumar, M. Bayesian estimation for Poisson-exponential model under progressive type-II censoring data with binomial removal and its application to ovarian cancer data. Commun. Stat. Simul. Comput. 2014, 45, 3457–3475.
  13. Aggarwala, R. Progressive interval censoring: Some mathematical results with applications to inference. Commun. Stat. 2001, 30, 1921–1935.
  14. Liu, H. Asymptotic normality of maximum likelihood estimators for truncated and censored data. J. Huzhou Teach. Coll. 1996, 5, 17–22.
  15. Mao, S.; Wang, J.; Pu, X. Advanced Mathematical Statistics; Higher Education Press: Beijing, China, 2006.
  16. Zhong, N.; Deng, X. Multimode non-Gaussian process monitoring based on local entropy independent component analysis. Can. J. Chem. Eng. 2017, 95, 319–330.
  17. Xie, H.; Cheng, H.Z.; Niu, D.X. Discretization of continuous attributes in rough set theory based on information entropy. Chin. J. Comput. 2005, 28, 1570.
  18. Li, C.L.; Wang, X.C.; Zhao, J.C. An information entropy-based multi-population genetic algorithm. J. Dalian Univ. Technol. 2004, 44, 589–593.
  19. Fishman, G.S. A Monte Carlo sampling plan for estimating network reliability. Oper. Res. 1986, 34, 581–594.
  20. Blevins, J.R. Sequential Monte Carlo methods for estimating dynamic microeconomic models. J. Appl. Econom. 2011, 31, 773–804.
  21. Graham, I.G.; Scheichl, R.; Ullmann, E. Mixed finite element analysis of lognormal diffusion and multilevel Monte Carlo methods. Stoch. Part. Differ. Equ. Anal. Comput. 2016, 4, 41–75.
Figure 1. Power function for testing at α = 0.05 under m = 5 and p = 0.05.
Figure 2. Power function for testing at α = 0.05 under n = 60 and p = 0.05.
Figure 3. Power function for testing at α = 0.1 under n = 60 and m = 5.
Figure 4. Power function for testing under n = 60, m = 5 and p = 0.05.
Table 1. The values of $w(h_1)$ at $\alpha = 0.01$ for $h_1 = 0.65(0.15)2$, $n = 60(20)100$, $m = 5(1)8$ and $p = 0.05(0.025)0.1$ under T = 0.5 and $h_0 = 0.8$.
m | n | p | h1=0.65 | 0.80 | 0.95 | 1.10 | 1.25 | 1.40 | 1.55 | 1.70 | 1.85 | 2.00
5 | 60 | 0.05 | 0.003800 | 0.017245 | 0.081148 | 0.276383 | 0.587226 | 0.836528 | 0.949660 | 0.985413 | 0.995253 | 0.998054
5 | 60 | 0.075 | 0.004032 | 0.017650 | 0.079868 | 0.265915 | 0.564973 | 0.815618 | 0.937988 | 0.980286 | 0.993032 | 0.996955
5 | 60 | 0.1 | 0.004284 | 0.018076 | 0.078677 | 0.255964 | 0.543147 | 0.793769 | 0.924717 | 0.973913 | 0.990040 | 0.995381
5 | 80 | 0.05 | 0.002183 | 0.015033 | 0.095768 | 0.363243 | 0.729005 | 0.932958 | 0.988235 | 0.998036 | 0.999598 | 0.999882
5 | 80 | 0.075 | 0.002336 | 0.015356 | 0.093514 | 0.347988 | 0.705013 | 0.918962 | 0.983776 | 0.996901 | 0.999286 | 0.999771
5 | 80 | 0.1 | 0.002504 | 0.015695 | 0.091385 | 0.333371 | 0.680742 | 0.903316 | 0.978118 | 0.995260 | 0.998781 | 0.999575
5 | 100 | 0.05 | 0.001342 | 0.013628 | 0.112273 | 0.451078 | 0.832886 | 0.975445 | 0.997651 | 0.999781 | 0.999972 | 0.999994
5 | 100 | 0.075 | 0.001449 | 0.013899 | 0.108974 | 0.431713 | 0.811215 | 0.967936 | 0.996335 | 0.999592 | 0.999940 | 0.999986
5 | 100 | 0.1 | 0.001566 | 0.014185 | 0.105847 | 0.412975 | 0.788483 | 0.958886 | 0.994453 | 0.999270 | 0.999876 | 0.999968
6 | 60 | 0.05 | 0.004883 | 0.017812 | 0.074107 | 0.250705 | 0.561279 | 0.833421 | 0.956602 | 0.990605 | 0.997878 | 0.999399
6 | 60 | 0.075 | 0.005045 | 0.018186 | 0.073611 | 0.243013 | 0.540752 | 0.811918 | 0.945089 | 0.986392 | 0.996484 | 0.998884
6 | 60 | 0.1 | 0.005230 | 0.018583 | 0.073132 | 0.235600 | 0.520662 | 0.789520 | 0.931792 | 0.980875 | 0.994414 | 0.998031
6 | 80 | 0.05 | 0.002900 | 0.015485 | 0.087078 | 0.334081 | 0.710792 | 0.935323 | 0.991557 | 0.999122 | 0.999898 | 0.999983
6 | 80 | 0.075 | 0.003011 | 0.015783 | 0.085822 | 0.321862 | 0.687350 | 0.921040 | 0.987709 | 0.998435 | 0.999778 | 0.999956
6 | 80 | 0.1 | 0.003137 | 0.016098 | 0.084597 | 0.310052 | 0.663750 | 0.905018 | 0.982626 | 0.997337 | 0.999549 | 0.999896
6 | 100 | 0.05 | 0.001839 | 0.014008 | 0.101885 | 0.420290 | 0.822110 | 0.978050 | 0.998653 | 0.999936 | 0.999996 | 1.000000
6 | 100 | 0.075 | 0.001918 | 0.014258 | 0.099808 | 0.403906 | 0.800241 | 0.970707 | 0.997707 | 0.999857 | 0.999989 | 0.999999
6 | 100 | 0.1 | 0.002010 | 0.014524 | 0.097781 | 0.387973 | 0.777431 | 0.961735 | 0.996257 | 0.999700 | 0.999971 | 0.999996
7 | 60 | 0.05 | 0.005761 | 0.018483 | 0.069396 | 0.227948 | 0.527358 | 0.817161 | 0.955804 | 0.992186 | 0.998704 | 0.999743
7 | 60 | 0.075 | 0.005817 | 0.018796 | 0.069612 | 0.223634 | 0.511146 | 0.796710 | 0.944284 | 0.988356 | 0.997680 | 0.999455
7 | 60 | 0.1 | 0.005910 | 0.019137 | 0.069752 | 0.219159 | 0.494940 | 0.775408 | 0.930965 | 0.983214 | 0.996057 | 0.998920
7 | 80 | 0.05 | 0.003498 | 0.016019 | 0.080815 | 0.305489 | 0.680614 | 0.928605 | 0.992033 | 0.999417 | 0.999959 | 0.999996
7 | 80 | 0.075 | 0.003538 | 0.016268 | 0.080510 | 0.297471 | 0.660316 | 0.914282 | 0.988308 | 0.998892 | 0.999897 | 0.999987
7 | 80 | 0.1 | 0.003603 | 0.016539 | 0.080111 | 0.289328 | 0.639687 | 0.898304 | 0.983342 | 0.998005 | 0.999762 | 0.999963
7 | 100 | 0.05 | 0.002264 | 0.014457 | 0.093988 | 0.387302 | 0.798857 | 0.975914 | 0.998856 | 0.999968 | 0.999999 | 1.000000
7 | 100 | 0.075 | 0.002295 | 0.014667 | 0.093118 | 0.375615 | 0.778690 | 0.968368 | 0.998005 | 0.999920 | 0.999997 | 1.000000
7 | 100 | 0.1 | 0.002343 | 0.014894 | 0.092138 | 0.363819 | 0.757631 | 0.959218 | 0.996669 | 0.999815 | 0.999989 | 0.999999
8 | 60 | 0.05 | 0.006372 | 0.019094 | 0.066647 | 0.211436 | 0.496750 | 0.797345 | 0.952000 | 0.992552 | 0.999033 | 0.999861
8 | 60 | 0.075 | 0.006325 | 0.019329 | 0.067386 | 0.210105 | 0.485710 | 0.779344 | 0.940671 | 0.988891 | 0.998195 | 0.999674
8 | 60 | 0.1 | 0.006334 | 0.019602 | 0.067964 | 0.208126 | 0.473880 | 0.760275 | 0.927569 | 0.983926 | 0.996808 | 0.999297
8 | 80 | 0.05 | 0.003922 | 0.016505 | 0.076893 | 0.283451 | 0.650493 | 0.918498 | 0.991494 | 0.999511 | 0.999977 | 0.999999
8 | 80 | 0.075 | 0.003890 | 0.016692 | 0.077317 | 0.279394 | 0.634761 | 0.904809 | 0.987749 | 0.999051 | 0.999937 | 0.999995
8 | 80 | 0.1 | 0.003897 | 0.016909 | 0.077527 | 0.274567 | 0.618106 | 0.889515 | 0.982776 | 0.998255 | 0.999843 | 0.999983
8 | 100 | 0.05 | 0.002571 | 0.014865 | 0.088823 | 0.360549 | 0.773440 | 0.971817 | 0.998821 | 0.999977 | 1.000000 | 1.000000
8 | 100 | 0.075 | 0.002549 | 0.015023 | 0.088903 | 0.353590 | 0.756498 | 0.964200 | 0.997973 | 0.999940 | 0.999999 | 1.000000
8 | 100 | 0.1 | 0.002556 | 0.015204 | 0.088720 | 0.345768 | 0.738370 | 0.955043 | 0.996647 | 0.999855 | 0.999994 | 1.000000
Table 2. The values of $w(h_1)$ at $\alpha = 0.05$ for $h_1 = 0.65(0.15)2$, $n = 60(20)100$, $m = 5(1)8$ and $p = 0.05(0.025)0.1$ under T = 0.5 and $h_0 = 0.8$.
m | n | p | h1=0.65 | 0.80 | 0.95 | 1.10 | 1.25 | 1.40 | 1.55 | 1.70 | 1.85 | 2.00
5 | 60 | 0.05 | 0.010636 | 0.046060 | 0.181003 | 0.474751 | 0.782816 | 0.940493 | 0.986955 | 0.997095 | 0.999202 | 0.999697
5 | 60 | 0.075 | 0.011153 | 0.046622 | 0.177388 | 0.459489 | 0.763251 | 0.928988 | 0.982639 | 0.995690 | 0.998700 | 0.999472
5 | 60 | 0.1 | 0.011703 | 0.047209 | 0.173927 | 0.444650 | 0.743263 | 0.916199 | 0.977317 | 0.993767 | 0.997949 | 0.999112
5 | 80 | 0.05 | 0.006827 | 0.042893 | 0.212941 | 0.581850 | 0.884698 | 0.982370 | 0.997991 | 0.999758 | 0.999960 | 0.999989
5 | 80 | 0.075 | 0.007220 | 0.043365 | 0.207659 | 0.563229 | 0.868646 | 0.977054 | 0.996940 | 0.999570 | 0.999918 | 0.999976
5 | 80 | 0.1 | 0.007639 | 0.043859 | 0.202583 | 0.544881 | 0.851533 | 0.970624 | 0.995462 | 0.999264 | 0.999842 | 0.999949
5 | 100 | 0.05 | 0.004598 | 0.040788 | 0.245816 | 0.674392 | 0.942291 | 0.995271 | 0.999729 | 0.999983 | 0.999998 | 1.000000
5 | 100 | 0.075 | 0.004902 | 0.041201 | 0.238886 | 0.654207 | 0.931014 | 0.993245 | 0.999524 | 0.999963 | 0.999996 | 0.999999
5 | 100 | 0.1 | 0.005228 | 0.041633 | 0.232212 | 0.634008 | 0.918416 | 0.990560 | 0.999193 | 0.999924 | 0.999990 | 0.999998
6 | 60 | 0.05 | 0.012925 | 0.046845 | 0.169828 | 0.451170 | 0.773750 | 0.945074 | 0.990947 | 0.998680 | 0.999777 | 0.999947
6 | 60 | 0.075 | 0.013271 | 0.047359 | 0.167452 | 0.438091 | 0.754239 | 0.933478 | 0.987263 | 0.997804 | 0.999564 | 0.999881
6 | 60 | 0.1 | 0.013660 | 0.047900 | 0.165119 | 0.425308 | 0.734376 | 0.920492 | 0.982529 | 0.996487 | 0.999194 | 0.999753
6 | 80 | 0.05 | 0.008534 | 0.043553 | 0.199632 | 0.559181 | 0.881627 | 0.985367 | 0.998941 | 0.999933 | 0.999995 | 0.999999
6 | 80 | 0.075 | 0.008808 | 0.043985 | 0.195841 | 0.542395 | 0.865285 | 0.980369 | 0.998228 | 0.999857 | 0.999985 | 0.999998
6 | 80 | 0.1 | 0.009114 | 0.044439 | 0.192121 | 0.525822 | 0.847913 | 0.974197 | 0.997145 | 0.999713 | 0.999964 | 0.999993
6 | 100 | 0.05 | 0.005900 | 0.041366 | 0.230547 | 0.654074 | 0.942088 | 0.996545 | 0.999896 | 0.999997 | 1.000000 | 1.000000
6 | 100 | 0.075 | 0.006120 | 0.041743 | 0.225333 | 0.635237 | 0.930590 | 0.994819 | 0.999790 | 0.999992 | 1.000000 | 1.000000
6 | 100 | 0.1 | 0.006367 | 0.042140 | 0.220219 | 0.616397 | 0.917752 | 0.992446 | 0.999597 | 0.999980 | 0.999999 | 1.000000
7 | 60 | 0.05 | 0.014708 | 0.047764 | 0.161024 | 0.425370 | 0.753648 | 0.941871 | 0.991909 | 0.999150 | 0.999907 | 0.999986
7 | 60 | 0.075 | 0.014830 | 0.048189 | 0.159959 | 0.415859 | 0.736027 | 0.930271 | 0.988426 | 0.998486 | 0.999793 | 0.999962
7 | 60 | 0.1 | 0.015025 | 0.048649 | 0.158767 | 0.406226 | 0.717977 | 0.917311 | 0.983883 | 0.997431 | 0.999569 | 0.999905
7 | 80 | 0.05 | 0.009903 | 0.044324 | 0.188439 | 0.531179 | 0.868523 | 0.984874 | 0.999179 | 0.999969 | 0.999999 | 1.000000
7 | 80 | 0.075 | 0.010008 | 0.044681 | 0.186311 | 0.518031 | 0.852898 | 0.979833 | 0.998572 | 0.999926 | 0.999996 | 1.000000
7 | 80 | 0.1 | 0.010168 | 0.045067 | 0.184040 | 0.504721 | 0.836307 | 0.973619 | 0.997619 | 0.999834 | 0.999987 | 0.999999
7 | 100 | 0.05 | 0.006973 | 0.042040 | 0.217055 | 0.626106 | 0.934584 | 0.996556 | 0.999932 | 0.999999 | 1.000000 | 1.000000
7 | 100 | 0.075 | 0.007064 | 0.042352 | 0.213841 | 0.610595 | 0.923141 | 0.994839 | 0.999853 | 0.999997 | 1.000000 | 1.000000
7 | 100 | 0.1 | 0.007198 | 0.042689 | 0.210472 | 0.594800 | 0.910443 | 0.992476 | 0.999702 | 0.999991 | 1.000000 | 1.000000
8 | 60 | 0.05 | 0.015915 | 0.048590 | 0.155104 | 0.404095 | 0.732049 | 0.935697 | 0.991801 | 0.999320 | 0.999949 | 0.999995
8 | 60 | 0.075 | 0.015833 | 0.048906 | 0.155119 | 0.398271 | 0.717478 | 0.924473 | 0.988366 | 0.998752 | 0.999874 | 0.999984
8 | 60 | 0.1 | 0.015859 | 0.049270 | 0.154831 | 0.391745 | 0.702122 | 0.911917 | 0.983877 | 0.997818 | 0.999716 | 0.999954
8 | 80 | 0.05 | 0.010848 | 0.045018 | 0.180545 | 0.506603 | 0.852691 | 0.982997 | 0.999216 | 0.999980 | 1.000000 | 1.000000
8 | 80 | 0.075 | 0.010793 | 0.045283 | 0.179838 | 0.497552 | 0.838775 | 0.977893 | 0.998636 | 0.999949 | 0.999998 | 1.000000
8 | 80 | 0.1 | 0.010823 | 0.045588 | 0.178759 | 0.487726 | 0.823790 | 0.971646 | 0.997719 | 0.999878 | 0.999994 | 1.000000
8 | 100 | 0.05 | 0.007728 | 0.042646 | 0.207223 | 0.600230 | 0.924505 | 0.996092 | 0.999940 | 1.000000 | 1.000000 | 1.000000
8 | 100 | 0.075 | 0.007692 | 0.042878 | 0.205765 | 0.588809 | 0.913737 | 0.994305 | 0.999869 | 0.999998 | 1.000000 | 1.000000
8 | 100 | 0.1 | 0.007722 | 0.043144 | 0.203870 | 0.576536 | 0.901746 | 0.991875 | 0.999730 | 0.999995 | 1.000000 | 1.000000
Table 3. The values of $w(h_1)$ at $\alpha = 0.1$ for $h_1 = 0.65(0.15)2$, $n = 60(20)100$, $m = 5(1)8$ and $p = 0.05(0.025)0.1$ under T = 0.5 and $h_0 = 0.8$.
m | n | p | h1=0.65 | 0.80 | 0.95 | 1.10 | 1.25 | 1.40 | 1.55 | 1.70 | 1.85 | 2.00
5 | 60 | 0.05 | 0.017792 | 0.073840 | 0.259702 | 0.590335 | 0.862265 | 0.969759 | 0.994548 | 0.998951 | 0.999736 | 0.999903
5 | 60 | 0.075 | 0.018563 | 0.074431 | 0.254462 | 0.574080 | 0.846532 | 0.962638 | 0.992401 | 0.998355 | 0.999543 | 0.999821
5 | 60 | 0.1 | 0.019376 | 0.075048 | 0.249400 | 0.558055 | 0.830049 | 0.954421 | 0.989623 | 0.997493 | 0.999237 | 0.999679
5 | 80 | 0.05 | 0.012008 | 0.070473 | 0.302042 | 0.694889 | 0.935704 | 0.992577 | 0.999337 | 0.999933 | 0.999990 | 0.999997
5 | 80 | 0.075 | 0.012631 | 0.070979 | 0.294897 | 0.676950 | 0.924651 | 0.989905 | 0.998930 | 0.999873 | 0.999978 | 0.999994
5 | 80 | 0.1 | 0.013290 | 0.071506 | 0.287976 | 0.658967 | 0.912494 | 0.986519 | 0.998322 | 0.999767 | 0.999954 | 0.999986
5 | 100 | 0.05 | 0.008430 | 0.068202 | 0.343607 | 0.777338 | 0.971524 | 0.998330 | 0.999928 | 0.999996 | 1.000000 | 1.000000
5 | 100 | 0.075 | 0.008936 | 0.068650 | 0.334695 | 0.759552 | 0.964778 | 0.997486 | 0.999865 | 0.999991 | 0.999999 | 1.000000
5 | 100 | 0.1 | 0.009476 | 0.069116 | 0.326041 | 0.741390 | 0.956959 | 0.996307 | 0.999755 | 0.999981 | 0.999998 | 0.999999
6 | 60 | 0.05 | 0.021057 | 0.074666 | 0.246967 | 0.571566 | 0.860041 | 0.974226 | 0.996759 | 0.999623 | 0.999946 | 0.999988
6 | 60 | 0.075 | 0.021569 | 0.075205 | 0.243110 | 0.556817 | 0.844067 | 0.967343 | 0.995130 | 0.999318 | 0.999883 | 0.999971
6 | 60 | 0.1 | 0.022139 | 0.075770 | 0.239312 | 0.542244 | 0.827367 | 0.959275 | 0.992894 | 0.998819 | 0.999763 | 0.999932
6 | 80 | 0.05 | 0.014582 | 0.071180 | 0.287321 | 0.678744 | 0.936457 | 0.994488 | 0.999716 | 0.999986 | 0.999999 | 1.000000
6 | 80 | 0.075 | 0.015014 | 0.071640 | 0.281772 | 0.661839 | 0.925225 | 0.992162 | 0.999483 | 0.999968 | 0.999997 | 1.000000
6 | 80 | 0.1 | 0.015494 | 0.072123 | 0.276311 | 0.644898 | 0.912863 | 0.989116 | 0.999100 | 0.999928 | 0.999992 | 0.999999
6 | 100 | 0.05 | 0.010486 | 0.068827 | 0.327209 | 0.764319 | 0.972851 | 0.998941 | 0.999979 | 1.000000 | 1.000000 | 1.000000
6 | 100 | 0.075 | 0.010850 | 0.069235 | 0.320061 | 0.747123 | 0.966101 | 0.998296 | 0.999952 | 0.999999 | 1.000000 | 1.000000
6 | 100 | 0.1 | 0.011255 | 0.069663 | 0.313024 | 0.729597 | 0.958241 | 0.997346 | 0.999900 | 0.999996 | 1.000000 | 1.000000
7 | 60 | 0.05 | 0.023565 | 0.075628 | 0.235989 | 0.547795 | 0.847890 | 0.973750 | 0.997383 | 0.999797 | 0.999983 | 0.999998
7 | 60 | 0.075 | 0.023759 | 0.076072 | 0.233736 | 0.536112 | 0.832763 | 0.966862 | 0.995940 | 0.999597 | 0.999955 | 0.999993
7 | 60 | 0.1 | 0.024051 | 0.076551 | 0.231339 | 0.524286 | 0.816928 | 0.958785 | 0.993905 | 0.999242 | 0.999895 | 0.999980
7 | 80 | 0.05 | 0.016619 | 0.072002 | 0.273811 | 0.655201 | 0.929912 | 0.994622 | 0.999809 | 0.999995 | 1.000000 | 1.000000
7 | 80 | 0.075 | 0.016797 | 0.072382 | 0.270223 | 0.641045 | 0.918816 | 0.992331 | 0.999631 | 0.999986 | 0.999999 | 1.000000
7 | 80 | 0.1 | 0.017055 | 0.072790 | 0.266476 | 0.626623 | 0.906652 | 0.989320 | 0.999322 | 0.999965 | 0.999998 | 1.000000
7 | 100 | 0.05 | 0.012156 | 0.069556 | 0.311419 | 0.742747 | 0.969715 | 0.999022 | 0.999988 | 1.000000 | 1.000000 | 1.000000
7 | 100 | 0.075 | 0.012318 | 0.069892 | 0.306544 | 0.727752 | 0.962851 | 0.998409 | 0.999971 | 1.000000 | 1.000000 | 1.000000
7 | 100 | 0.1 | 0.012545 | 0.070254 | 0.301497 | 0.712304 | 0.954917 | 0.997497 | 0.999935 | 0.999999 | 1.000000 | 1.000000
8 | 60 | 0.05 | 0.025253 | 0.076490 | 0.228102 | 0.526692 | 0.833155 | 0.971331 | 0.997504 | 0.999858 | 0.999992 | 0.999999
8 | 60 | 0.075 | 0.025160 | 0.076819 | 0.227254 | 0.518499 | 0.819777 | 0.964485 | 0.996118 | 0.999701 | 0.999977 | 0.999998
8 | 60 | 0.1 | 0.025216 | 0.077196 | 0.226038 | 0.509643 | 0.805548 | 0.956479 | 0.994149 | 0.999410 | 0.999940 | 0.999992
8 | 80 | 0.05 | 0.018018 | 0.072738 | 0.263705 | 0.632941 | 0.920841 | 0.994091 | 0.999833 | 0.999997 | 1.000000 | 1.000000
8 | 80 | 0.075 | 0.017960 | 0.073019 | 0.261895 | 0.622242 | 0.910430 | 0.991750 | 0.999672 | 0.999992 | 1.000000 | 1.000000
8 | 80 | 0.1 | 0.018024 | 0.073342 | 0.259645 | 0.610807 | 0.898963 | 0.988696 | 0.999387 | 0.999977 | 0.999999 | 1.000000
8 | 100 | 0.05 | 0.013326 | 0.070208 | 0.299260 | 0.721232 | 0.964803 | 0.998927 | 0.999991 | 1.000000 | 1.000000 | 1.000000
8 | 100 | 0.075 | 0.013292 | 0.070457 | 0.296503 | 0.709320 | 0.958059 | 0.998292 | 0.999977 | 1.000000 | 1.000000 | 1.000000
8 | 100 | 0.1 | 0.013359 | 0.070743 | 0.293244 | 0.696586 | 0.950294 | 0.997356 | 0.999945 | 0.999999 | 1.000000 | 1.000000
