Estimation of Unknown Parameters of Truncated Normal Distribution under Adaptive Progressive Type II Censoring Scheme

Abstract: In reality, estimation of the unknown parameters of a truncated distribution from censored data has wide application. The truncated normal distribution is more suitable than the normal distribution for fitting lifetime data. This article makes statistical inferences about the parameters of the truncated normal distribution using adaptive progressive type II censored data. First, the estimates are obtained by the maximum likelihood method. The observed and expected Fisher information matrices are derived to establish asymptotic confidence intervals. Second, Bayesian estimation under three loss functions is studied. The point estimates are calculated by the Lindley approximation. The importance sampling technique is applied to obtain the Bayes estimates and to build the associated highest posterior density credible intervals. Bootstrap confidence intervals are constructed for comparison. Monte Carlo simulations and data analysis are employed to assess the performance of the various methods. Finally, we obtain optimal censoring schemes under different criteria.


Truncated Normal Distribution
The normal distribution is one of the most significant probability distributions in statistics because it fits many natural phenomena. Owing to its satisfactory performance in fitting data and its nice properties, analysts often characterize data through the normal distribution. However, negative values seldom appear in many real settings, such as life test experiments. To address this, truncation is introduced to limit the values to a specific range, which fits such data better, so that the final analytical results after computation and estimation are more accurate. Thus the truncated normal distribution, which restricts the domain of the normal distribution by one or two bounds, has both theoretical and practical value.
Because of the complexity of its numerical characteristics in statistical inference, the truncated normal distribution did not receive extensive attention in academia until recent years. Based on this distribution, Ref. [1] compared the maximum likelihood estimates of the mean and variance under censored samples with those under complete samples. Moreover, using the progressive type-II censoring model, Ref. [2] estimated the parameters of the left truncated normal distribution. On the application side, Ref. [3] approximated the length of a queue with abandonment based on the truncated normal distribution. So far, studies related to this distribution still leave plenty of room for exploration.
In this article, a normal distribution that is left truncated at zero is investigated. We denote it by TN(µ, τ) for convenience, where µ and τ are the mean and variance parameters. The corresponding probability density function (pdf) is described as

$$f(x)=\frac{1}{\sqrt{2\pi\tau}}\exp\Bigl\{-\frac{(x-\mu)^{2}}{2\tau}\Bigr\}\Big/\Phi\Bigl(\frac{\mu}{\sqrt{\tau}}\Bigr),\qquad x\ge 0,\tag{1}$$

and the cumulative distribution function (cdf) is written as

$$F(x)=\frac{\Phi\bigl((x-\mu)/\sqrt{\tau}\bigr)-\Phi\bigl(-\mu/\sqrt{\tau}\bigr)}{\Phi\bigl(\mu/\sqrt{\tau}\bigr)},\qquad x\ge 0.\tag{2}$$

Here Φ(·) represents the cdf of the standard normal distribution. The survival function can be written as

$$S(x)=1-F(x)=\frac{1-\Phi\bigl((x-\mu)/\sqrt{\tau}\bigr)}{\Phi\bigl(\mu/\sqrt{\tau}\bigr)}.\tag{3}$$
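To make the definitions above concrete, a minimal Python sketch of the pdf, cdf and survival function of TN(µ, τ) is given below. It uses only the standard library (`math.erf` supplies Φ); the helper names are ours.

```python
import math

def std_pdf(z):
    """pdf of the standard normal distribution."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def std_cdf(z):
    """cdf of the standard normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def tn_pdf(x, mu, tau):
    """pdf of TN(mu, tau): normal(mu, tau) left truncated at zero (tau = variance)."""
    if x < 0.0:
        return 0.0
    s = math.sqrt(tau)
    return std_pdf((x - mu) / s) / (s * std_cdf(mu / s))

def tn_cdf(x, mu, tau):
    """cdf of TN(mu, tau)."""
    if x < 0.0:
        return 0.0
    s = math.sqrt(tau)
    return (std_cdf((x - mu) / s) - std_cdf(-mu / s)) / std_cdf(mu / s)

def tn_survival(x, mu, tau):
    """survival function S(x) = 1 - F(x) of TN(mu, tau)."""
    return 1.0 - tn_cdf(x, mu, tau)
```

Note that the common normalizing denominator Φ(µ/√τ) equals 1 − Φ(−µ/√τ), the mass of the untruncated normal above zero.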
Figure 1 reflects that the truncated normal pdf is unimodal. Similar to the normal distribution, as µ increases the density moves to the right, while the shape becomes narrower and sharper as τ decreases. However, because of the truncation, the change of location is no longer a pure translation when the variance is fixed: the peak of the pdf becomes lower as µ moves away from the truncation point in the positive direction. Figure 2 presents some characteristics of the cdf. The fastest-increasing point in the domain appears at the mean; the growth of the cdf becomes slower as τ decreases; and a larger µ leads to a lower rising rate. In general, we denote by X a random variable following the truncated normal distribution and assume that the distribution is left truncated at a. Unlike for a normal random variable, the expectation and variance of X are determined not only by the parameters µ and τ but also by the truncation point. According to [4], writing α = (a − µ)/√τ and letting φ(·) denote the standard normal pdf, the expectation is

$$E(X)=\mu+\sqrt{\tau}\,\frac{\phi(\alpha)}{1-\Phi(\alpha)}.$$

Meanwhile, the moment generating function is

$$M(t)=e^{\mu t+\frac{1}{2}\tau t^{2}}\,\frac{1-\Phi(\alpha-\sqrt{\tau}\,t)}{1-\Phi(\alpha)},$$

and the characteristic function follows as M(it); these are helpful for calculating the moments.
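The closed-form mean of a normal(µ, τ) distribution left truncated at a can be checked against direct numerical integration of x·f(x). A small sketch, with helper names of our choosing:

```python
import math

def phi(z):
    """standard normal pdf."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):
    """standard normal cdf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def tn_mean(mu, tau, a=0.0):
    """Closed-form mean of normal(mu, tau) left truncated at a."""
    s = math.sqrt(tau)
    alpha = (a - mu) / s
    return mu + s * phi(alpha) / (1.0 - Phi(alpha))

def tn_mean_numeric(mu, tau, a=0.0, hi=30.0, n=200000):
    """Midpoint-rule integration of x * pdf(x) over (a, hi) as a sanity check."""
    s = math.sqrt(tau)
    norm = 1.0 - Phi((a - mu) / s)          # truncated-normal normalizing mass
    dx = (hi - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * dx
        total += x * phi((x - mu) / s) / (s * norm) * dx
    return total
```

With a = 0, µ = 2 and τ = 1, both functions agree to within the quadrature error, and the mean exceeds µ because truncation removes the lower tail.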

Adaptive Progressive Type-II Censored Scheme
In survival analysis and reliability testing, incomplete observation is a common problem owing to limitations of time and cost, so censoring schemes are applied to improve the efficiency of experiments. Various censoring schemes have been put forward and studied by statisticians, among which type I and type II censoring have found the widest application. In type-I censoring, the life test stops at a prefixed time and the remaining units are right censored, while under type-II censoring the number of failures m is predetermined and the experiment stops once the m-th failure happens. However, a weakness shared by these two schemes is that surviving units cannot be removed during the test. To overcome this, the progressive type II censoring model was proposed. In this scheme there are n sample units in total. On the occurrence of the first failure, R_1 live units are removed at random from the n − 1 remaining live units. Similarly, when the j-th failure appears, R_j units are removed from the remaining $n - j - \sum_{i=1}^{j-1} R_i$ units. This process continues until the m-th failure happens. The failure times of the m observed units are denoted X_(1:m:n), · · · , X_(m−1:m:n), X_(m:m:n), and (R_1, · · · , R_{m−1}, R_m) represents the censoring scheme. Note that the number m and the censoring scheme are prescribed at the beginning. Readers interested in progressive censoring can consult [5,6] for further information.
However, the progressive type II censoring scheme lacks flexibility. For example, during a life test, researchers may wish to control the duration of the experiment according to practical conditions. The adaptive progressive type-II censoring scheme was proposed to address this problem and proceeds as follows. At the start of a life test with n units in total, m, the number of observed failures, and (R_1, · · · , R_{m−1}, R_m) are predetermined. In addition, T, the expected total testing time, is provided by the researchers. Before T, the test is carried out according to the prefixed censoring plan, as in progressive type II censoring. However, once the actual time runs over T, researchers tend to obtain more observations in a shorter time by leaving more live units: surviving units are no longer removed until the m-th failure appears. Assume that the test time first exceeds T right after the J-th failure, i.e., X_(J:m:n) < T < X_(J+1:m:n), with X_(m+1:m:n) ≡ ∞, X_(0:m:n) ≡ 0 and J = 0, 1, · · · , m. Therefore, once the actual time overruns T, the effective censoring scheme becomes $R_{J+1} = \cdots = R_{m-1} = 0$ and $R_m = n - m - \sum_{i=1}^{J} R_i$. In particular, if T = 0 this scheme reduces to the conventional type-II censoring scheme, while if the testing time is sufficient, so that T = ∞, the model reduces to the progressive type-II censoring model.
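The mechanics of the adaptive scheme can be sketched in a short simulation: lifetimes are drawn from TN(µ, τ), withdrawals follow the prefixed plan until a failure lands beyond T, after which no units are withdrawn until the m-th failure. This is an illustrative sketch, not the exact algorithm of [7]; function names are ours.

```python
import math
import random

def tn_draw(mu, tau, rng):
    """Draw from normal(mu, tau) truncated to (0, inf) by simple rejection."""
    while True:
        x = rng.gauss(mu, math.sqrt(tau))
        if x > 0.0:
            return x

def adaptive_progressive_sample(n, m, R, T, mu, tau, seed=1):
    """Simulate an adaptive progressive type-II censored sample from TN(mu, tau).

    Returns the m ordered failure times and J, the number of failures
    observed before T (so X_(J) < T < X_(J+1))."""
    rng = random.Random(seed)
    alive = sorted(tn_draw(mu, tau, rng) for _ in range(n))  # latent lifetimes
    failures, J = [], None
    for i in range(m):
        x = alive.pop(0)            # next failure = smallest surviving lifetime
        failures.append(x)
        if J is None and x > T:
            J = i                   # the test time has just passed T
        if i < m - 1:
            r = R[i] if J is None else 0          # adaptive rule: no withdrawals after T
            r = min(r, len(alive) - (m - 1 - i))  # keep enough units to reach m failures
            for _ in range(max(r, 0)):
                alive.pop(rng.randrange(len(alive)))  # withdraw a random survivor
    return failures, (m if J is None else J)
```

With T = 0 no withdrawals ever occur before the m-th failure (conventional type-II censoring), and with T = ∞ the plan R is followed throughout (progressive type-II censoring), matching the two limiting cases described above.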
Considering the limitations of time and efficiency in practice, the adaptive progressive type-II censoring scheme enables researchers to control the experiment flexibly as it proceeds. This useful property has attracted more and more scholars to related research. Ref. [7] first proposed the adaptive progressive type-II censoring model. Under the one- and two-parameter exponential distributions, Ref. [8] elaborated inferences for estimating the unknown parameters on the basis of this censoring scheme. Ref. [6] introduced the concepts of this censoring model and illustrated other related models in detail. Ref. [9] extended the model by considering competing risks under the exponential distribution and presented the corresponding statistical inferences. Based on this scheme, Ref. [10] estimated the unknown parameters of a distribution with bathtub-shaped failure rate function through maximum likelihood estimation and the Bayesian approach.
For estimation methods, classical statistics and Bayesian inference are the mainstream approaches. Maximum likelihood estimation is widely used as the principal classical method: the lifetimes of the experimental units are assumed to follow a certain lifetime distribution and to be independent and identically distributed (i.i.d.), and the maximum likelihood estimates are computed by maximizing the likelihood function. However, small sample sizes lead to inaccurate estimation under this method. Bayesian estimation tackles this disadvantage by incorporating prior knowledge. Given the probability density function of the sample and the prior distribution of the parameters, Bayes estimates are obtained by computing expectations under the posterior distribution. However, an improper choice of prior distribution may cause large errors, and the choice involves a certain degree of subjectivity, so careful consideration in choosing the prior distribution is very important. In addition, the integrals arising in Bayes estimation are sometimes difficult to compute; importance sampling is a suitable approach for performing the numerical calculations with an easily sampled distribution.
This paper studies estimation problems for the truncated normal distribution parameters under the adaptive progressive type II censoring model. Both point estimation and interval estimation are discussed using classical approaches and Bayesian methods. The same distribution as in Lodhi et al. (2019) is considered, but we mainly focus on enriching the Bayesian estimation part, calculating under three kinds of loss functions so that the results are comprehensive and comparable. Under the more flexible and complex censoring model, deriving the relevant functions, taking derivatives and writing simulation code are more difficult. In addition, several results needed to justify the methods employed are proved, making the article rigorous.
In Section 2, the maximum likelihood estimates of the truncated normal distribution parameters are obtained via the Newton-Raphson method. Meanwhile, the observed Fisher information matrix is computed to build the corresponding asymptotic confidence intervals. Furthermore, the distribution of the j-th failure and the expected Fisher information matrix are also discussed. Section 3 carries out Bayesian estimation under different loss functions, including the square error loss function (SELF), the Linex loss function (LLF) and the general entropy loss function (GELF). Lindley approximation and importance sampling are adopted to calculate the estimates of the parameters, and on that basis the highest posterior density credible intervals (HPD intervals) are established. Meanwhile, bootstrap confidence intervals are constructed in accordance with the algorithm steps in Section 4. In Section 5, the results obtained with different methods are compared and evaluated through simulations. To illustrate the effectiveness of the estimation methods, a real data set is analyzed in Section 6. Furthermore, Section 7 investigates the problem of selecting the optimal censoring scheme under the adaptive type II censoring model. Finally, Section 8 summarizes the article.

Maximum Likelihood Estimation
In this section the estimators are derived, for both point estimation and interval estimation, by the maximum likelihood method. The equations inferred from the likelihood function for obtaining the estimates are demonstrated. Through the observed Fisher information matrix, we also construct asymptotic confidence intervals, and some inferences about the expected Fisher information matrix are presented subsequently. Assume that X_(1:m:n), · · · , X_(m−1:m:n), X_(m:m:n) represent the censored sample under the prefixed censoring plan (R_1, · · · , R_{m−1}, R_m), and that the ideal total time is given as T. In particular, the last failure before T occurs at X_(J:m:n). For the sake of simplicity, X_(1:m:n), · · · , X_(m−1:m:n), X_(m:m:n) are denoted by X_(1), · · · , X_(m−1), X_(m).

Point Estimation
Suppose that the failure data x_(1), · · · , x_(m−1), x_(m) are observed in the experiment, and let x denote the observed data (x_(1), · · · , x_(m−1), x_(m)). Given the pdf f(x), the cdf F(x) and the value of J determined by the prefixed T, the likelihood function is described as

$$L(\mu,\tau\mid x)=C\prod_{i=1}^{m}f(x_{(i)})\prod_{i=1}^{J}\bigl[1-F(x_{(i)})\bigr]^{R_i}\bigl[1-F(x_{(m)})\bigr]^{R^{*}},$$

where C is a constant not depending on the parameters and $R^{*}=n-m-\sum_{i=1}^{J}R_i$. When the data are subject to the truncated normal distribution, f(x) and F(x) are the pdf and cdf of TN(µ, τ), respectively. Substituting (1) and (2), the log-likelihood function is, up to an additive constant,

$$\ell(\mu,\tau)=-\frac{m}{2}\ln\tau-\sum_{i=1}^{m}\frac{(x_{(i)}-\mu)^{2}}{2\tau}-m\ln\Phi\Bigl(\frac{\mu}{\sqrt{\tau}}\Bigr)+\sum_{i=1}^{J}R_i\ln S(x_{(i)})+R^{*}\ln S(x_{(m)}),\tag{10}$$

where S(·) is the survival function in (3) and φ(·) below is the pdf of the standard normal distribution. Let μ̂ and τ̂ denote the MLEs of µ and τ. To obtain μ̂ and τ̂, the partial derivatives of the log-likelihood with respect to µ and τ are set equal to zero; μ̂ and τ̂, which maximize the likelihood function, are calculated by solving these equations. Writing $u_i=(x_{(i)}-\mu)/\sqrt{\tau}$ and $v=\mu/\sqrt{\tau}$, the equations are

$$\frac{\partial\ell}{\partial\mu}=\frac{1}{\tau}\sum_{i=1}^{m}(x_{(i)}-\mu)-\frac{m\,\phi(v)}{\sqrt{\tau}\,\Phi(v)}+\sum_{i=1}^{J}R_i\Bigl[\frac{\phi(u_i)}{\sqrt{\tau}\,(1-\Phi(u_i))}-\frac{\phi(v)}{\sqrt{\tau}\,\Phi(v)}\Bigr]+R^{*}\Bigl[\frac{\phi(u_m)}{\sqrt{\tau}\,(1-\Phi(u_m))}-\frac{\phi(v)}{\sqrt{\tau}\,\Phi(v)}\Bigr]=0,\tag{11}$$

$$\frac{\partial\ell}{\partial\tau}=-\frac{m}{2\tau}+\sum_{i=1}^{m}\frac{(x_{(i)}-\mu)^{2}}{2\tau^{2}}+\frac{m\,v\,\phi(v)}{2\tau\,\Phi(v)}+\sum_{i=1}^{J}R_i\Bigl[\frac{u_i\,\phi(u_i)}{2\tau\,(1-\Phi(u_i))}+\frac{v\,\phi(v)}{2\tau\,\Phi(v)}\Bigr]+R^{*}\Bigl[\frac{u_m\,\phi(u_m)}{2\tau\,(1-\Phi(u_m))}+\frac{v\,\phi(v)}{2\tau\,\Phi(v)}\Bigr]=0.\tag{12}$$

Theorem 1. When µ is given, the MLE of τ exists.

Proof. Please see Appendix A.
However, Equations (11) and (12) are nonlinear and have no closed form solution: the denominators contain complex nonlinear functions that cannot be simplified, so analytic solutions are unobtainable and numerical techniques must be introduced. Here, to obtain the final estimates μ̂ and τ̂, the Newton-Raphson method is chosen, which can be implemented in the R software.
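The numerical maximization can be sketched as follows. This is an illustrative Python version (the paper itself uses R): a damped Newton-Raphson iteration on the log-likelihood with finite-difference derivatives, demonstrated on a complete sample (J = m, no withdrawals) for simplicity; all names are ours.

```python
import math
import random

def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def tn_loglik(mu, tau, xs, n, R, J):
    """Log-likelihood of TN(mu, tau) under adaptive progressive type-II censoring.
    xs: the m ordered failures; R: withdrawal plan; J: failures observed before T.
    Complete data corresponds to n = m, R = 0, J = m."""
    if tau <= 0.0:
        return -math.inf
    s, m = math.sqrt(tau), len(xs)
    ll = -0.5 * m * math.log(2.0 * math.pi * tau) - m * math.log(Phi(mu / s))
    ll -= sum((x - mu) ** 2 for x in xs) / (2.0 * tau)
    logS = lambda x: math.log(max((1.0 - Phi((x - mu) / s)) / Phi(mu / s), 1e-300))
    for i in range(J):
        ll += R[i] * logS(xs[i])
    ll += (n - m - sum(R[:J])) * logS(xs[-1])
    return ll

def newton_mle(xs, n, R, J, mu0, tau0, steps=60, h=1e-5):
    """Damped Newton-Raphson with finite-difference gradient and Hessian."""
    f = lambda t: tn_loglik(t[0], t[1], xs, n, R, J)
    th = [mu0, tau0]
    for _ in range(steps):
        g = []
        for i in range(2):
            tp, tm = th[:], th[:]
            tp[i] += h; tm[i] -= h
            g.append((f(tp) - f(tm)) / (2.0 * h))
        H = [[0.0, 0.0], [0.0, 0.0]]
        for i in range(2):
            for j in range(2):
                tpp, tpm, tmp, tmm = th[:], th[:], th[:], th[:]
                tpp[i] += h; tpp[j] += h
                tpm[i] += h; tpm[j] -= h
                tmp[i] -= h; tmp[j] += h
                tmm[i] -= h; tmm[j] -= h
                H[i][j] = (f(tpp) - f(tpm) - f(tmp) + f(tmm)) / (4.0 * h * h)
        det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
        if abs(det) < 1e-12:
            break
        d0 = (H[1][1] * g[0] - H[0][1] * g[1]) / det  # Newton step, solved exactly (2x2)
        d1 = (H[0][0] * g[1] - H[1][0] * g[0]) / det
        step = 1.0
        while th[1] - step * d1 <= 0.0:               # keep the variance positive
            step *= 0.5
        new = [th[0] - step * d0, th[1] - step * d1]
        if f(new) < f(th):                            # damp once if the step overshoots
            step *= 0.5
            new = [th[0] - step * d0, th[1] - step * d1]
        if abs(new[0] - th[0]) + abs(new[1] - th[1]) < 1e-10:
            th = new
            break
        th = new
    return th

# demo: complete-sample MLE for TN(2, 1)
def _draw(rng):
    while True:
        x = rng.gauss(2.0, 1.0)
        if x > 0.0:
            return x

_rng = random.Random(42)
_data = sorted(_draw(_rng) for _ in range(150))
mu_hat, tau_hat = newton_mle(_data, n=150, R=[0] * 150, J=150,
                             mu0=sum(_data) / len(_data), tau0=1.0)
```

A library optimizer (R's `optim`, as used in the paper) would serve equally well; the sketch just makes the Newton-Raphson step explicit.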

Asymptotic Confidence Interval
To construct approximate intervals for the two parameters, the asymptotic variance-covariance matrix of μ̂ and τ̂ is obtained by inverting the Fisher information matrix.
The Fisher information matrix is calculated by taking the expectation of the negative second derivatives of the log-likelihood function. Thus it can be expressed as

$$I(\mu,\tau)=\begin{pmatrix}-E\bigl(\partial^{2}\ell/\partial\mu^{2}\bigr)&-E\bigl(\partial^{2}\ell/\partial\mu\,\partial\tau\bigr)\\-E\bigl(\partial^{2}\ell/\partial\tau\,\partial\mu\bigr)&-E\bigl(\partial^{2}\ell/\partial\tau^{2}\bigr)\end{pmatrix}.$$

Observed Fisher Information Matrix
The observed Fisher information matrix, computed from the sample, provides a convenient way to estimate intervals for μ̂ and τ̂. For the truncated normal distribution it can be given as

$$I(\hat\mu,\hat\tau)=\begin{pmatrix}-\partial^{2}\ell/\partial\mu^{2}&-\partial^{2}\ell/\partial\mu\,\partial\tau\\-\partial^{2}\ell/\partial\tau\,\partial\mu&-\partial^{2}\ell/\partial\tau^{2}\end{pmatrix}\Bigg|_{(\mu,\tau)=(\hat\mu,\hat\tau)},$$

where the second derivatives follow from differentiating (11) and (12) and φ′(·) is the first derivative of the pdf of the standard normal distribution. The associated variance-covariance matrix is then

$$\mathrm{Var}(\hat\mu,\hat\tau)=I(\hat\mu,\hat\tau)^{-1}=\begin{pmatrix}\sigma_{11}&\sigma_{12}\\\sigma_{21}&\sigma_{22}\end{pmatrix}.\tag{13}$$

Thus the 100(1 − α)% asymptotic confidence intervals are constructed as

$$\bigl(\hat\mu-z_{\alpha/2}\sqrt{\sigma_{11}},\;\hat\mu+z_{\alpha/2}\sqrt{\sigma_{11}}\bigr)\quad\text{and}\quad\bigl(\hat\tau-z_{\alpha/2}\sqrt{\sigma_{22}},\;\hat\tau+z_{\alpha/2}\sqrt{\sigma_{22}}\bigr),$$

where z_{α/2} is the upper α/2 percentile of the standard normal distribution.
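Given the observed information matrix (the negative Hessian of the log-likelihood at the MLE), the interval computation is a small linear-algebra step. A minimal sketch for the 2×2 case, with names of our choosing:

```python
import math
from statistics import NormalDist

def wald_ci(theta_hat, obs_info, alpha=0.05):
    """100(1 - alpha)% asymptotic (Wald) intervals from a 2x2 observed
    information matrix: invert it to get the variance-covariance matrix,
    then take theta_hat +/- z_{alpha/2} * sqrt(diagonal variance)."""
    (a, b), (c, d) = obs_info
    det = a * d - b * c
    variances = [d / det, a / det]          # diagonal of the inverse 2x2 matrix
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    return [(t - z * math.sqrt(v), t + z * math.sqrt(v))
            for t, v in zip(theta_hat, variances)]
```

For example, with information diag(100, 25) the half-widths are 1.96·0.1 and 1.96·0.2 around the two estimates.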

Expected Fisher Information Matrix
Before the experiment, n, m, R and T are predetermined. J is the number of failures observed before the time point T, so it depends on T and on the actual course of the test. Because of the randomness of the life test, the exact value of J is known only when the test is over, so J should be treated as a random variable. However, an unknown J makes it difficult to obtain expectation expressions involving J for the pdf of the experimental units. By assuming that J = j is known, J is regarded as a constant in this case; the specific expectation expressions can then be derived, and the expected Fisher information matrix for adaptive progressive type II censored data is demonstrated as follows.
The expected Fisher information matrix is determined by the distribution of the experimental units X_(i), i = 1, 2, · · · , m. The pdf of X_(i) based on TN(µ, τ), denoted $f^{TN}_{X_{(i)}}(x_{(i)})$, is derived in Appendix B as Equation (14). Thus, based on Equation (14), the expected Fisher information matrix is obtained by weighting the negative second derivatives of the log-likelihood by $f^{TN}_{X_{(i)}}(x_{(i)})$, i = 1, 2, · · · , n, and integrating from zero to infinity to obtain the expectations of these random variables. In fact, the expectation expressions for X_(i) depend only on µ, τ, i and J. Denote the corresponding asymptotic variance-covariance matrix, the inverse of the expected Fisher information matrix, by Var*(μ̂, τ̂). The 100(1 − α)% asymptotic confidence intervals computed from the expected Fisher information matrix are then

$$\bigl(\hat\mu\pm z_{\alpha/2}\sqrt{\sigma^{*}_{11}}\bigr)\quad\text{and}\quad\bigl(\hat\tau\pm z_{\alpha/2}\sqrt{\sigma^{*}_{22}}\bigr),$$

where σ*_{11} and σ*_{22} are the diagonal entries of Var*(μ̂, τ̂). Furthermore, on the basis of the pdf of X_(i), the distribution of J can also be inferred. The probability function (pf) of J, with j = 0, 1, · · · , m and r_{m+1} ≡ 0, is derived in Appendix C.

Bayesian Estimation
Bayesian estimation regards the unknown parameters as random variables. The selection of the prior distributions for the unknown parameters is discussed in [2], where the simulation results are satisfactory, so we adopt the same priors here. Suppose that µ, given τ, follows a truncated normal distribution TN(a, τ/b), and that τ follows an Inverse Gamma distribution IG(c, d/2), with hyperparameters a, b, c, d > 0. The joint prior density is the product π(µ, τ) = π(µ | τ) π(τ). Hence, given the data, the posterior density derived from the above becomes

$$\pi(\mu,\tau\mid x)=\frac{\pi(\mu\mid\tau)\,\pi(\tau)\,L(\mu,\tau\mid x)}{NC},\tag{19}$$

where the normalizing constant is $NC=\int_{0}^{\infty}\!\!\int_{0}^{\infty}\pi(\mu\mid\tau)\,\pi(\tau)\,L(\mu,\tau\mid x)\,d\mu\,d\tau$.

Three Loss Functions
Choosing the loss function is an important step in Bayesian estimation. Both a symmetric loss function and asymmetric loss functions are considered in this section.

Square Error Loss Function (SELF)
The square error loss function (SELF) is a symmetric loss function with extensive applications. The corresponding function is

$$L(\hat\theta,\theta)=(\hat\theta-\theta)^{2},$$

where θ̂ is the estimate of θ. Under SELF the Bayes estimate is the posterior mean, so for TN(µ, τ) the Bayesian estimates are

$$\hat\mu_{S}=E(\mu\mid x),\qquad \hat\tau_{S}=E(\tau\mid x).$$

Linex Loss Function (LLF)
In some situations a symmetric loss function is inappropriate, so an asymmetric loss function is used instead. Ref. [11] first proposed the Linex loss function, and it has since been used widely. Based on the progressive type-II censoring scheme, Ref. [12] discussed Bayesian estimation of the inverse Weibull (IW) distribution under the LLF. Under the progressive first-failure censoring scheme, Ref. [13] adopted the LLF to compute Bayes estimates for the log-logistic distribution. The LLF is a linear-exponential loss function: it penalizes heavily on one side of 0 and increases linearly on the other side, which makes it an asymmetric loss function, helpful for overestimation and underestimation problems. The LLF is given as

$$L(\hat\theta,\theta)=e^{p(\hat\theta-\theta)}-p(\hat\theta-\theta)-1,$$

where p ≠ 0 denotes the direction and intensity of the penalty. If p > 0, the LLF increases exponentially in the positive direction and linearly in the negative direction, while if p < 0 the negative side is penalized more heavily. The punishment intensity rises as the absolute value of p becomes larger. As for choosing p, one can set a series of values, resample using the Monte Carlo method, and compare the simulation results to choose the most effective value, or refer to [14] for more information.
Under the LLF, the Bayes estimates of µ and τ are

$$\hat\mu_{L}=-\frac{1}{p}\ln E\bigl(e^{-p\mu}\mid x\bigr),\qquad \hat\tau_{L}=-\frac{1}{p}\ln E\bigl(e^{-p\tau}\mid x\bigr).$$

General Entropy Loss Function (GELF)
The general entropy loss function is a suitable modification of the LLF; one can refer to [15] for detailed information. The GELF is expressed as

$$L(\hat\theta,\theta)\propto\Bigl(\frac{\hat\theta}{\theta}\Bigr)^{q}-q\ln\Bigl(\frac{\hat\theta}{\theta}\Bigr)-1,$$

and hence the Bayes estimate under GELF can be computed as

$$\hat\theta_{E}=\bigl[E\bigl(\theta^{-q}\mid x\bigr)\bigr]^{-1/q}.$$

However, the above Bayes estimation expressions have no closed form for the unknown parameters. In the next section, on the basis of the three loss functions, the estimates of the parameters are computed by introducing the Lindley approximation method.
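The three estimators have simple sample versions once posterior draws are available. A hypothetical helper (names ours) for a positive parameter, replacing each posterior expectation by its sample average:

```python
import math

def bayes_estimates(draws, p=1.0, q=1.0):
    """Bayes estimates from posterior draws of a positive parameter:
    SELF -> posterior mean; LLF -> -(1/p) ln E[exp(-p*theta)];
    GELF -> (E[theta^(-q)])^(-1/q)."""
    n = len(draws)
    est_self = sum(draws) / n
    est_llf = -math.log(sum(math.exp(-p * t) for t in draws) / n) / p
    est_gelf = (sum(t ** (-q) for t in draws) / n) ** (-1.0 / q)
    return est_self, est_llf, est_gelf
```

By Jensen's inequality, with p > 0 the LLF estimate lies below the posterior mean, and with q > 0 so does the GELF estimate, reflecting the asymmetric penalties discussed above.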

Lindley Approximation
The Lindley method was first put forward by [16] to determine point estimates of unknown parameters. By approximating the two integrals appearing in the Bayes analysis through Taylor expansion at the MLE point, numerical approximations of the Bayes estimates can be obtained. Let u denote any function of µ and τ, and let û denote the associated Bayes estimate. In the two-parameter case, Lindley's approximation can be written as

$$\hat u\approx\Bigl[u+\tfrac{1}{2}\sum_{i}\sum_{j}\bigl(u_{ij}+2u_{i}\rho_{j}\bigr)\sigma_{ij}+\tfrac{1}{2}\sum_{i}\sum_{j}\sum_{k}\sum_{l}l_{ijk}\,\sigma_{ij}\,\sigma_{kl}\,u_{l}\Bigr]_{(\hat\mu,\hat\tau)},$$

where subscripts denote partial derivatives with respect to the parameters, l represents the log-likelihood function in (10), ρ represents the logarithm of the joint prior density of the unknown parameters, and σ_{ij} is the (i, j)-th entry of Var(μ̂, τ̂) mentioned in (13). These expressions and their derivatives are computed in detail in Appendix D. u is the function we want to approximate, and it takes different forms under the different loss functions.
(1) Estimation under the square error loss function. Taking u = µ in the approximation gives the Bayes estimate μ̂_{LS}; similarly, taking u = τ gives τ̂_{LS}.
(2) Estimation under the Linex loss function. In this case u becomes an exponential function of the unknown parameter: taking u = e^{−pµ} and applying the approximation gives μ̂_{LL} = −(1/p) ln û, and τ̂_{LL} is obtained similarly with u = e^{−pτ}.
(3) Estimation under the general entropy loss function. In this situation u changes to a power function: taking u = µ^{−q} gives μ̂_{LE} = û^{−1/q}, and τ̂_{LE} is obtained similarly with u = τ^{−q}.
Using these expressions, the approximate Bayes estimates can be computed under the different loss functions. Although this method is helpful for point estimation, it cannot provide interval estimates of the unknown parameters. To overcome this difficulty, the importance sampling method is adopted to construct credible intervals in the following subsection.

Importance Sampling
As a useful technique within the Monte Carlo method, importance sampling can be applied to calculate both point estimates and interval estimates. Based on the form of the posterior in (19), τ is sampled from an Inverse Gamma distribution, while µ is sampled from a truncated normal distribution conditional on τ. The corresponding densities, denoted Π*(τ) (Equation (33)) and Π*(µ | τ) (Equation (34)), can be read off from (19).

Theorem 2. Π*(τ) is the density function of an Inverse Gamma distribution.

Proof. Let IG(α̂, β̂) denote the Inverse Gamma distribution. The expression for Π*(τ) has the form of an Inverse Gamma density, so to prove Theorem 2 it suffices to show that α̂ and β̂ are positive. Using the Cauchy-Schwarz inequality, β̂ can be divided into three parts, each of which is shown to be positive; hence β̂ > 0, and Π*(τ) is naturally an Inverse Gamma density.

Using Functions (33) and (34), the posterior (19) can be simplified into the product of Π*(τ), Π*(µ | τ) and a remaining factor h(µ, τ), which serves as the importance weight. Then, utilizing the expressions above, samples from π(µ, τ | x) are generated by the following steps.

1. Prefix the number of samples S_n.
2. Generate τ_i from Π*(τ).
3. Given τ_i, generate µ_i from Π*(µ | τ_i).
4. Repeat steps 2 and 3 to produce a series of samples (µ_1, τ_1), (µ_2, τ_2), · · · , (µ_{S_n}, τ_{S_n}).

Let θ(µ, τ) denote the function on which Bayesian estimation is to be conducted, and let h(µ, τ) denote the weight function obtained from simplifying (19). Its Bayes estimate θ*(µ, τ) is then obtained naturally as the weighted average

$$\theta^{*}(\mu,\tau)=\frac{\sum_{i=1}^{S_n}\theta(\mu_i,\tau_i)\,h(\mu_i,\tau_i)}{\sum_{i=1}^{S_n}h(\mu_i,\tau_i)}.$$

Subsequently, the HPD intervals of the unknown parameters can also be obtained from the generated samples. Assume that P[θ(µ, τ) ≤ θ_p] = p, 0 < p < 1. Since p is determined in advance, we aim to establish the corresponding HPD interval by estimating θ_p.

Suppose that the draws θ(µ_i, τ_i) are sorted in ascending order as θ_{(1)} ≤ θ_{(2)} ≤ · · · ≤ θ_{(S_n)}, and for brevity let w_{(i)} be the normalized weight corresponding to θ_{(i)}. Based on the samples, the Bayes estimate of θ_p is θ̂*_p = θ_{(z_p)}, where z_p is the integer that satisfies

$$\sum_{i=1}^{z_p}w_{(i)}\le p<\sum_{i=1}^{z_p+1}w_{(i)}.$$

A 100(1 − p)% HPD interval is then taken as the shortest among the candidate intervals built from consecutive sorted draws covering normalized weight 1 − p.

Bootstrap Confidence Interval
The bootstrap is a useful tool for constructing confidence intervals for the unknown parameters by resampling. Here the percentile bootstrap method (Boot-p method) is mainly considered, and the algorithm is demonstrated as follows.
1. Compute the MLEs μ̂ and τ̂ from the original adaptive progressive type II censored sample.
2. Generate a bootstrap sample under the same censoring configuration (n, m, R, T) from TN(μ̂, τ̂), and compute the bootstrap estimates μ̂* and τ̂*.
3. Repeat step 2 a total of B times to obtain μ̂*_1, · · · , μ̂*_B and τ̂*_1, · · · , τ̂*_B, and sort them in ascending order.
4. The 100(1 − α)% Boot-p confidence intervals are given by the empirical α/2 and 1 − α/2 quantiles of the sorted bootstrap estimates.
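The percentile idea can be sketched in a few lines. This is a simplified, nonparametric version for illustration (the method above resamples parametrically under the censoring scheme); `estimator` is a hypothetical callable and all names are ours.

```python
import random

def boot_p_ci(data, estimator, B=1000, alpha=0.05, seed=3):
    """Percentile-bootstrap CI: resample the observed sample with replacement,
    re-estimate B times, and take the empirical alpha/2 and 1 - alpha/2
    quantiles of the sorted bootstrap estimates."""
    rng = random.Random(seed)
    n = len(data)
    ests = sorted(estimator([data[rng.randrange(n)] for _ in range(n)])
                  for _ in range(B))
    return ests[int((alpha / 2.0) * B)], ests[int((1.0 - alpha / 2.0) * B) - 1]
```

For example, bootstrapping the sample mean of 0, 1, …, 99 yields an interval straddling 49.5.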

Simulation Results
In this section, the performances of the different point and interval estimation methods are evaluated by Monte Carlo simulations. By comparing their expected values and mean squared errors, the advantages and weaknesses of the various methods can be identified. R software is used for the computations and simulations. Taking into account the sample size n, the number of effective samples m, the ideal test time T and different censoring schemes, simulation settings are designed for comparison. Because of the randomness of simulation, sampling is repeated N = 5000 times and the final value is taken as the average of the computed results, which are tabulated below. For simplicity, the censoring schemes are abbreviated: for example, (1, 1, 0*7) represents (1, 1, 0, 0, 0, 0, 0, 0, 0) and ((1, 1, 0)*3) represents (1, 1, 0, 1, 1, 0, 1, 1, 0). Throughout, µ = 2 and τ = 1 are fixed, and truncated normal samples are generated under the adaptive progressive type II censoring schemes by applying the algorithm proposed in [7]. Using the optim command in R with the initial value set to the true value, the ML estimates of the unknown parameters are obtained; Table 1 presents the ML estimated values (EV) and mean squared errors (MSE). For Bayesian point estimation, on the one hand the Lindley approximation is applied on the basis of the known ML estimates, and on the other hand the importance sampling method is used as a numerical alternative. For comparison, the three loss functions SELF, LLF and GELF are considered and calculated respectively. The hyperparameters taken for the simulations are a = 0.6, b = 0.3, c = 37, d = 0.6. For the asymmetric loss functions, p = 1 and p = −1 are compared under the Linex loss function, while q = 1 and q = −1 are evaluated under the general entropy loss function.
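The two summary statistics reported in the tables can be computed with a trivial helper; a sketch (names ours):

```python
def ev_mse(estimates, true_value):
    """Monte Carlo summaries: the expected value (EV) is the average of the
    N replicated estimates, and the mean squared error (MSE) averages the
    squared deviations from the true parameter value."""
    N = len(estimates)
    ev = sum(estimates) / N
    mse = sum((e - true_value) ** 2 for e in estimates) / N
    return ev, mse
```

For instance, estimates 1.0 and 3.0 of a true value 2.0 give EV = 2.0 and MSE = 1.0.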
In Tables 2 and 3, the computed Bayesian estimates and their assessments are reported. Additionally, the detailed results for the asymptotic confidence intervals, Boot-p intervals and HPD intervals at the 95% confidence level are shown in Table 4 through the interval mean length (ML) and coverage rate (CR). The main observations are as follows.
(1) Among all methods, there is a common tendency: the estimated values approach the true values and the mean squared errors decrease as the sample size n and the number of observed failures m increase.
(2) Generally, the Bayes estimates are more accurate than the ML estimates in terms of both estimated values and MSE. This is because the MLEs are calculated from the data alone, while the Bayesian method also incorporates prior information about the unknown parameters of TN(µ, τ). With an inappropriate choice of prior distribution and hyperparameters, the Bayes estimates could have larger bias; the simulation results show that our prior selection is fairly suitable.
(3) In Bayesian estimation, the estimates under SELF perform slightly better than those under LLF and GELF, which suggests that the symmetric loss function is a suitable choice in these cases. Both LLF and GELF with p = 1, q = 1 yield smaller estimates than with p = −1, q = −1, but between LLF and GELF the results show little difference.
(4) As for the expected time, the ML estimates under T = 3 are lower than under T = 1, whereas the effect of T = 1 versus T = 3 is not obvious in the Bayesian estimation. Likewise, different censoring schemes show no significant tendency.
(5) In terms of computational technique, importance sampling provides estimates closer to the true values and smaller MSEs than the Lindley approximation.
In Table 4, the results of the interval estimation are presented. The properties of the different methods can be summarized as follows.
(1) The interval mean length becomes narrower and the coverage rate higher as the sample size n and the number of observed failures m increase.
(2) Although the results of the asymptotic confidence intervals are not very satisfactory, the Boot-p intervals and HPD credible intervals perform very well, with shorter mean lengths and coverage rates closer to the nominal level.
(3) For the different censoring schemes, the first type of scheme (m/2, m/2, 0*(n − m − 2)) yields shorter mean lengths for the asymptotic confidence intervals than the other schemes when T = 1, while in the T = 3 cases the pattern is reversed.

Real Data Analysis
To show the effectiveness of the estimation methods, we adopt the data set on the fatigue failure times of twenty-three ball bearings from [17], which has been used in a number of studies. Ref. [18] used this data set to discuss the fit of the exponentiated exponential, Gamma and Weibull distributions. Ref. [19] illustrated reliability estimation procedures in a multicomponent system and evaluated the results on this data set. Ref. [20] applied it to show the characteristics of the Inverted Exponentiated Weibull (IEW) distribution. In particular, Ref. [2] studied this data set and assessed the goodness of fit, under four criteria, of the half normal, folded normal and truncated normal distributions, concluding that the truncated normal distribution fits this data set best.
In this section, the data set mentioned above is analyzed on the basis of our model. Table 5 presents the complete data and the adaptive type II censored data under various schemes. Two types of plans are mainly considered: scheme (4, 4, 0*13), denoted R_I, and scheme ((1, 1, 0)*4, 1, 0), denoted R_II. In addition, two prescribed expected times, T = 0.4 and T = 0.9, are used for evaluation. The point and interval estimation results computed from the ball bearing fatigue failure data are tabulated in Tables 6 and 7: Table 6 presents the estimates of the unknown parameters, and Table 7 shows the interval bounds and mean lengths. Two observations follow. (1) Compared with the Bayesian estimates obtained by importance sampling, the ML estimates of µ are relatively smaller except in the case T = 0.4, R_II, while the ML estimates of τ are close to the Bayesian ones when T = 0.4 but become higher when T = 0.9. (2) According to the interval estimation results, the lower and upper bounds of the Boot-p intervals are slightly larger; in terms of interval length, the HPD credible intervals are generally better than the other two.

Optimal Censoring Scheme
In the simulations and real data analysis, different censoring schemes influence the effectiveness of the estimates to a certain degree. Thus the problem of finding an optimal censoring scheme, which allows the parameters to be estimated with higher efficiency and accuracy, arises, and it has attracted many scholars. To solve this problem, various criteria have been raised and analyzed. Ref. [21] discussed the asymptotic variance through the Fisher information matrix based on type-I censored data. Ref. [22] treated the choice of the optimal censoring scheme as a discrete optimization problem and compared different censoring schemes under six criteria. Ref. [23] adopted three criteria to find the optimal censoring scheme for the progressively censored lognormal distribution.
In the following, the criteria we adopt to evaluate different censoring schemes are tabulated.
The optimal censoring scheme is obtained through simulations. By repeating the sampling procedure N = 1000 times, the trace and determinant of Var(μ̂, τ̂) are averaged as the final results, and the total expected time of every censoring scheme is computed in the same way. Three kinds of censoring schemes are then evaluated. Given the total number of sample units n and the number of failures m, the schemes ((n/10)*4, 0*(n − 4 − 4n/10)), ((1, 1, 0)*(n/5)) and (0*(n − 4 − 4n/10), (n/10)*4) correspond to censoring at the beginning, censoring uniformly and censoring at the end, and are denoted R_I, R_II and R_III respectively. By varying n and m, more flexible rules for choosing the optimal censoring scheme are found. Under different prefixed values of T, all the calculation results, the optimal censoring schemes and the total expected times are shown in Table 8, from which some conclusions can be summarized as follows.
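Once the averaged variance-covariance matrices are available, ranking the candidate schemes is a one-liner per criterion. A sketch, taking the trace and determinant of Var(μ̂, τ̂) as the two criteria used here (interface and names ours):

```python
def pick_optimal(schemes):
    """schemes: dict mapping a scheme name to its averaged 2x2
    variance-covariance matrix Var(mu_hat, tau_hat).
    Returns the scheme minimizing the trace and the one minimizing
    the determinant (the two criteria compared in Table 8)."""
    trace = lambda M: M[0][0] + M[1][1]
    det = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]
    best_trace = min(schemes, key=lambda k: trace(schemes[k]))
    best_det = min(schemes, key=lambda k: det(schemes[k]))
    return best_trace, best_det
```

A scheme dominating on both criteria, as R_I mostly does in Table 8, is returned twice.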
(1) As the sample size increases, the determinant and trace of the variance-covariance matrix decrease. (2) Setting a larger value of T slightly reduces the trace and determinant of Var(μ̂, τ̂).
(3) The first kind of censoring scheme, R_I, is always optimal under both Criterion I and Criterion II, except when n = 30, m = 12, T = 3. However, according to the column labeled E(X_(m)), the total expected experiment time of R_I is always the largest among the three kinds of censoring schemes for any T or (n, m); thus R_I costs the most time to carry out the test.
Experimenters should therefore balance effectiveness against time when designing test schemes. To maximize the effectiveness of estimation, increasing the sample size, censoring at the beginning and prescribing a larger T are all good choices. However, if saving time is more important, censoring uniformly or at the end ought to be adopted.

Conclusions
In this paper, the parameter estimation problem is investigated and statistical inferences are derived for the truncated normal distribution under the adaptive progressive type II censoring scheme. The maximum likelihood method is applied to obtain the point estimates.
To overcome the difficulty of solving the nonlinear likelihood equations, the Newton-Raphson algorithm is utilized to obtain the estimates of the unknown parameters. Bayesian estimation under the three loss functions SELF, LLF and GELF is also considered. The estimates are calculated by importance sampling as well as the Lindley approximation. Based on the theoretical results, simulations and real data analysis are carried out to evaluate the effectiveness of the various methods. From the numerical simulations, on one hand, Bayes estimation performs better than maximum likelihood estimation. The same rule can be found in [2], which studies the same distribution under the progressive type II censoring model, and in [10], which studies the exponentiated Weibull distribution under the same model. On the other hand, importance sampling is more effective than the Lindley approximation, whereas the performances of the two methods are incomparable in [2].
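The importance sampling step can be illustrated with a minimal sketch on a one-dimensional toy posterior (the paper works with the joint posterior of (μ, τ) under the censored likelihood; the target and proposal densities here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unnormalized posterior, proportional to N(2, 1); stands in for the
# (intractable) joint posterior used in the paper.
def unnorm_posterior(theta):
    return np.exp(-0.5 * (theta - 2.0) ** 2)

# Proposal density N(0, 3^2): easy to sample and covers the target's support.
samples = rng.normal(0.0, 3.0, size=200_000)
proposal_pdf = np.exp(-0.5 * (samples / 3.0) ** 2) / (3.0 * np.sqrt(2.0 * np.pi))

# Self-normalized importance weights cancel the unknown normalizing constant.
weights = unnorm_posterior(samples) / proposal_pdf
weights /= weights.sum()

# Bayes estimate under squared error loss (SELF) is the posterior mean.
posterior_mean = np.sum(weights * samples)
print(posterior_mean)   # close to 2.0
```

The same weighted sample also supports the other loss functions: under LLF or GELF the Bayes estimate is a different weighted functional of the draws, computed from the identical weights.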
Furthermore, confidence and credible intervals for the two unknown parameters are also constructed. Asymptotic confidence intervals are established on the basis of the observed and expected Fisher information matrices. Subsequently, the HPD credible intervals are obtained by importance sampling procedures. Additionally, Boot-p intervals are computed for the purpose of comparison. Among the interval estimation methods, simulation results show that Boot-p intervals and HPD intervals perform relatively better than asymptotic confidence intervals for small samples. In [2], HPD intervals have the best performance in terms of interval length, whereas in [10] the choice of method has no significant effect. This hints that the approach used for interval estimation of the truncated normal distribution may be an important factor influencing the results. Finally, the optimal censoring schemes are investigated under two criteria.
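The Boot-p (percentile bootstrap) construction can be sketched as follows. This is a minimal illustration on a complete sample with the sample mean as the statistic; in the paper the resampled statistic is the MLE recomputed from each bootstrap censored sample:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative complete sample (assumed data, not from the paper).
data = rng.normal(loc=5.0, scale=2.0, size=80)

# Resample with replacement B times and re-estimate the parameter each time.
B = 2000
boot_means = np.empty(B)
for b in range(B):
    resample = rng.choice(data, size=data.size, replace=True)
    boot_means[b] = resample.mean()

# Boot-p 95% interval: the 2.5% and 97.5% percentiles of the bootstrap estimates.
lower, upper = np.percentile(boot_means, [2.5, 97.5])
print(lower, upper)
```

Unlike the asymptotic intervals, Boot-p makes no normality assumption on the estimator, which is one reason it can behave better for small samples.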
The truncated normal distribution with censored data can be more practical and helpful in real survival analysis, and is worth further study. One can extend the results of this article by considering more flexible and complex censoring schemes such as the generalized progressive hybrid censoring scheme. Besides, incorporating competing risks is another potential way to develop a more practical model.

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A. Proof of MLE Existence
Proof of Theorem 1. When µ is given, the MLE of τ exists.
For the truncated normal distribution, the pdf of X(i) is expressed in two cases. X(i) can be considered as the first order statistic of random samples from the distribution left truncated at x(j), so its cdf under TN(µ, τ) is obtained accordingly. The pmf of J is then deduced, where j = 0, 1, · · · , m and r_{m+1} ≡ 0.