Objective Bayesian Estimation for Tweedie Exponential Dispersion Process

An objective Bayesian method for the Tweedie Exponential Dispersion (TED) process model is proposed in this paper. The TED process is a generalized stochastic process that includes several well-known stochastic processes (e.g., the Wiener, Gamma, and Inverse Gaussian processes) as special cases. Because it unifies several types of process in a single, more generic model, it is of particular use for degradation data analysis. At present, the estimation methods for the TED model are the subjective Bayesian method and the frequentist method. However, some products have no historical information for reference and only small sample sizes, which creates a dilemma for both the frequentist and the subjective Bayesian methods. Therefore, we propose an objective Bayesian method to analyze the TED model. Furthermore, we prove that the corresponding posterior distributions have nice properties and propose Metropolis–Hastings algorithms for the Bayesian inference. To illustrate the applicability and advantages of the TED model and the objective Bayesian method, we compare the objective Bayesian estimates with the subjective Bayesian estimates and the maximum likelihood estimates through Monte Carlo simulations. Finally, a case study of GaAs laser data is used to illustrate the effectiveness of the proposed methods.


Introduction
With today's advanced technology, most products are highly reliable. For these highly reliable products, it is not easy to evaluate their lifetime distribution by using traditional life-testing procedures, which record only time-to-failure data [1]. Even using procedures incorporating censoring and accelerating techniques, the information about the lifetime distribution is still very limited [2]. In this case, an alternative approach is to collect degradation data in order to analyze a product's reliability. Compared to lifetime data, degradation data provide more valuable information on product failure behavior for making quick reliability assessments and other logistical decisions [3,4].
When analyzing degradation data, an important problem is how to establish an appropriate degradation model that can capture the true degradation process of a product in the field [5]. In practice, stochastic dynamics is the most common characteristic of degradation processes, due to uncertainties in a product's working environment, random measurement errors, and the individual variability of products in a population. Stochastic process models have great potential for capturing the stochastic dynamics within degradation processes. Thus, degradation analysis based on stochastic process modeling is favored by many researchers [6-8]. The Wiener, Gamma, and Inverse Gaussian (IG) processes are three commonly used stochastic process models [9,10]. The Wiener process is suitable for modeling non-monotonic degradation data [11]. The Gamma process [12] and the IG process [13] are suitable for modeling monotonic degradation data. Although these three well-known stochastic process models fit most degradation data well, they are not suitable in some engineering applications. For example, a discrete-type compound Poisson process may be more appropriate for modeling the leakage current of thin gate oxides in nanotechnology [14]. Hence, a more general class of degradation model is necessary for describing real degradation data more accurately.
To promote the adaptability of modeling methods for degradation data, the Tweedie Exponential Dispersion (TED) process was proposed to describe the degradation processes of products [15]. The TED process is a generalized stochastic process that includes the Wiener, Gamma, and IG processes as special cases. Hence, it is reasonable to use the TED process to model the degradation paths of some products.
Until now, the research methods for the TED model have been frequentist approaches [16,17] or the subjective Bayesian method [18,19]. However, the objective Bayesian method also has many advantages in statistical analysis [20,21]. The most appealing feature of the objective Bayesian approach is the use of noninformative priors [22]. Jeffreys' prior and the reference prior are the two most often used noninformative priors. Jeffreys' prior has an invariance property for the prior probability in estimation problems [20]. The reference prior can approximately describe the inferential content of the data without incorporating any other information [21].
The objective Bayesian method has been applied in the analysis of degradation models and sometimes performs better than frequentist approaches and the subjective Bayesian method, especially in cases of small sample size. For example, He et al. [23] employed the objective Bayesian method to study an Inverse Gaussian degradation model. Their numerical results show that the proposed objective Bayesian estimates perform better than the maximum likelihood estimation (MLE) and Bootstrap methods in terms of the mean squared error (MSE) and the frequentist coverage probability. Guan et al. [24] used the objective Bayesian method to estimate the parameters of the Wiener process. Their simulation results reveal that the objective Bayesian method performs better than the MLE and subjective Bayesian estimators in terms of the rate of convergence, time efficiency, and coverage probabilities, especially in the case of small sample size. For more details, see [25].
This paper aims to develop an objective Bayesian method for the TED process model. Compared with the existing work, the major contribution of this paper lies in the following three aspects: (1) noninformative priors, including Jeffreys' prior and the reference prior, are provided, which solves the problem of how to choose an appropriate prior for the TED model without historical data in small samples; (2) the proposed priors are proven to have proper posterior distributions and probability matching properties; and (3) the corresponding Bayesian inference is obtained by using the Metropolis-Hastings (MH) algorithm.
The remainder of this article is organized as follows. In Section 2, the TED model is introduced. In Section 3, the Jeffreys prior and the reference priors under different parameter orderings are derived. In Section 4, the posterior properties are discussed and the Metropolis-Hastings (MH) algorithm for estimating the model parameters under the different priors is proposed. In Section 5, the effectiveness of the proposed objective Bayesian method is verified by Monte Carlo simulation. In Section 6, the proposed method is applied to analyze real degradation data. Finally, our conclusions are given in Section 7.

TED Model
A stochastic process {Y(t), t ≥ 0} is defined as an exponential dispersion (ED) degradation process if it satisfies the following three properties [15]: (1) Y(0) = 0 with probability one; (2) {Y(t), t ≥ 0} has stationary and independent increments on non-overlapping intervals; (3) Y(t) ∼ ED(µt, λ), where the probability density function (PDF) of the ED distribution ED(µt, λ) is f(y | θ, λ) = c(y; λ) exp{λ[θy − κ(θ)]}, where µ is the mean drift rate and λ is the dispersion parameter; c(·) is a canonical function, guaranteeing that the cumulative distribution function (CDF) corresponding to Equation (2) is normalized and equal to one; and κ(·) is called the cumulant function, a twice-differentiable function satisfying τ(θ) = κ′(θ) = µ, in which κ′(θ) is the first derivative of κ(θ) with respect to θ.
The mean and the variance of Y(t) are, respectively, given by E[Y(t)] = µt and Var[Y(t)] = V(µ)t/λ, where V(µ) = κ″(θ) is the second derivative of κ(θ) with respect to θ and is called the unit variance function.
The TED model is an important class of the ED model with a power unit variance function, that is, V(µ) = µ^p, where p is a power classification parameter.
For the TED model, the function κ(θ) can be obtained by solving τ(θ) = κ′(θ) = µ together with V(µ) = µ^p, and the solution can be expressed as κ(θ) = ((1 − p)θ)^{(2−p)/(1−p)}/(2 − p), p ≠ 1, 2. Then, the canonical parameter θ(µ) can be expressed as θ(µ) = µ^{1−p}/(1 − p), p ≠ 1. Specific values of p correspond to specific models: p = 0 corresponds to the Wiener process; p = 2 corresponds to the Gamma process; p = 3 corresponds to the IG process; and 1 < p < 2 corresponds to the compound Poisson process. Moreover, the TED model does not exist for any value of p in the interval (0, 1) [15-17]. Table 1 gives the transform relationships between the TED model and these common processes. For the TED model, the PDF has no closed form except for some special values of p [17]. According to previous research [16-18], the saddle-point approximation method (SAM) provides a highly accurate approximation for the TED model. Therefore, we adopt the SAM to obtain the approximated PDF of the TED model, which is expressed as f(y; µ, λ) ≈ [λ/(2πy^p)]^{1/2} exp{−(λ/2)d(y, µ)}, where d(y, µ) = 2[y^{2−p}/((1 − p)(2 − p)) − yµ^{1−p}/(1 − p) + µ^{2−p}/(2 − p)] is the unit deviance.
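As a quick numerical check of the approximation above, the unit deviance and the SAM density can be coded directly. This is an illustrative sketch (not the paper's code), using the standard Tweedie unit deviance for p ≠ 1, 2; for p = 0 and p = 3, the SAM density reduces exactly to the Normal and IG densities, respectively.

```python
import numpy as np

def unit_deviance(y, mu, p):
    # Standard Tweedie unit deviance d(y, mu) for p != 1, 2.
    return 2.0 * (y ** (2 - p) / ((1 - p) * (2 - p))
                  - y * mu ** (1 - p) / (1 - p)
                  + mu ** (2 - p) / (2 - p))

def ted_sam_pdf(y, mu, lam, p):
    # Saddle-point approximation to the TED density:
    #   f(y; mu, lam) ~ sqrt(lam / (2*pi*y^p)) * exp(-(lam/2) * d(y, mu))
    return np.sqrt(lam / (2.0 * np.pi * y ** p)) * np.exp(-0.5 * lam * unit_deviance(y, mu, p))
```

For instance, `ted_sam_pdf(y, mu, lam, 3)` coincides with the exact Inverse Gaussian density, which is one way to sanity-check an implementation.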

Noninformative Priors
In Bayesian inference, the prior distribution plays an important role. A reasonable prior distribution can improve the accuracy of Bayesian estimation [26]. However, sometimes we do not have any prior information, or it is difficult to obtain prior information about the parameters, which leads to a dilemma in choosing a reasonable prior distribution [27]. To overcome this problem, noninformative priors have been proposed. Jeffreys' prior and the reference prior are two widely used noninformative priors. The procedure for obtaining the objective Bayesian estimates based on Jeffreys' prior and the reference prior is as follows: Step 1: Derive the Fisher information matrix of the TED model, because knowledge of the Fisher information matrix is necessary to determine Jeffreys' prior and the reference prior (see Section 3.1).
Step 2: Derive the objective priors: Jeffreys' prior and the reference prior based on the derived Fisher information matrix (see Section 3.2).
Step 3: Analyze whether the derived Jeffreys' prior and reference prior are the probability matching prior (see Section 3.3).
Step 4: Analyze whether the posterior distributions derived from the objective priors are proper.That is, verify whether the integrals of the posterior distributions are finite (see Section 4.1).
Step 5: Generate Markov Chain Monte Carlo (MCMC) samples according to the Metropolis-Hastings (MH) algorithm, because it is difficult to obtain an explicit expression for the marginal posterior distribution of the parameter µ. The objective Bayesian estimates of the parameters can then be obtained from the generated MCMC samples (see Section 4.2).

Fisher's Information Matrix
Suppose that there are n units tested in the degradation test. Let m_i be the number of measurements for the ith unit, and let Y(t_ij) be the degradation value of the ith unit at the measurement time t_ij, i = 1, 2, ..., n, j = 1, 2, ..., m_i. The degradation increment between t_{i,j−1} and t_ij is denoted by Δy_ij = Y(t_ij) − Y(t_{i,j−1}), i = 1, 2, ..., n, j = 1, 2, ..., m_i. The likelihood function is the product of the SAM densities of the independent increments, and taking logarithms gives the log-likelihood function (11). Taking the first partial derivatives of the log-likelihood function (11) with respect to the parameters λ and µ and solving the resulting equations, we obtain the MLEs μ̂_M and λ̂_M for a fixed p. Substituting the MLEs μ̂_M and λ̂_M into Formula (11) and maximizing this profile log-likelihood function (or, equivalently, minimizing the negative profile log-likelihood function) through a one-dimensional search, the MLE p̂_M can then be obtained. In this paper, the MATLAB function FMINSEARCH is used to find the minimum value of the negative profile log-likelihood function.
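The profile-likelihood procedure above can be sketched as follows. This is a hedged illustration in Python rather than the paper's MATLAB code: it assumes unit measurement intervals (∆t = 1), the saddle-point approximated likelihood, and the standard Tweedie unit deviance. Under these assumptions the inner MLEs have closed forms (µ̂ is the mean increment and λ̂ = N/Σd), so only p requires a one-dimensional search; scipy's bounded minimizer plays the role of FMINSEARCH here.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def unit_deviance(y, mu, p):
    # Standard Tweedie unit deviance for p != 1, 2 (repeated for self-containment).
    return 2.0 * (y ** (2 - p) / ((1 - p) * (2 - p))
                  - y * mu ** (1 - p) / (1 - p)
                  + mu ** (2 - p) / (2 - p))

def profile_mle(increments, p_bounds=(1.5, 4.5)):
    # Profile-likelihood MLE of (mu, lambda, p) under the SAM likelihood,
    # assuming unit time intervals between measurements.
    dy = np.asarray(increments, dtype=float).ravel()
    n = dy.size

    def neg_profile_loglik(p):
        mu = dy.mean()                    # inner MLE of mu when dt = 1
        d = unit_deviance(dy, mu, p)
        lam = n / d.sum()                 # inner MLE of lambda given p
        return -(0.5 * n * np.log(lam)
                 - 0.5 * np.sum(np.log(2.0 * np.pi * dy ** p))
                 - 0.5 * lam * d.sum())

    res = minimize_scalar(neg_profile_loglik, bounds=p_bounds, method="bounded")
    p_hat = res.x
    mu_hat = dy.mean()
    lam_hat = n / unit_deviance(dy, mu_hat, p_hat).sum()
    return mu_hat, lam_hat, p_hat
```

Feeding in IG-distributed increments (a TED process with p = 3) should recover p̂ near 3, which is a convenient self-check for the search.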
Furthermore, we calculate the Fisher information matrix based on the log-likelihood function. However, according to the research in [17], it is difficult to derive the elements with respect to p in the Fisher information matrix, because the second-order partial derivative of the log-likelihood function with respect to the parameter p is very complicated. As a result, the corresponding expectations cannot be obtained, which would prevent the subsequent noninformative priors from being derived. Therefore, we only derive the objective priors for the parameters λ and µ.
Suppose the parameter vector Θ = (λ, µ). The elements of the Fisher information matrix are obtained by taking the expectations of the negative second-order partial derivatives of the log-likelihood function ln L with respect to λ and µ, where E(·) denotes the expectation. Therefore, the Fisher information matrix of Θ is given in (15). Based on the Fisher information matrix, we can obtain the objective priors of the parameters. In addition, according to asymptotic normality, the Fisher information matrix can also be used to construct asymptotic confidence intervals (ACIs) for the parameters Θ.
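Since the exact information elements are model-specific, a generic numerical sketch may clarify how the ACIs are built: estimate the observed information by a finite-difference Hessian of the negative log-likelihood at the MLE, invert it, and apply asymptotic normality. The function below is an illustrative sketch under these assumptions, not the paper's closed-form expressions.

```python
import numpy as np
from scipy.stats import norm

def asymptotic_cis(neg_loglik, theta_hat, level=0.95, h=1e-4):
    # Observed information via a central finite-difference Hessian of the
    # negative log-likelihood, then Wald-type ACIs from asymptotic normality.
    k = len(theta_hat)
    hess = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            ei = np.zeros(k); ei[i] = h
            ej = np.zeros(k); ej[j] = h
            hess[i, j] = (neg_loglik(theta_hat + ei + ej)
                          - neg_loglik(theta_hat + ei - ej)
                          - neg_loglik(theta_hat - ei + ej)
                          + neg_loglik(theta_hat - ei - ej)) / (4.0 * h * h)
    cov = np.linalg.inv(hess)             # inverse observed information
    z = norm.ppf(0.5 + level / 2.0)
    se = np.sqrt(np.diag(cov))
    return np.column_stack([theta_hat - z * se, theta_hat + z * se])
```

On a Gaussian log-likelihood this reproduces the textbook mean ± 1.96/√n interval, which is a quick way to validate the Hessian step.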

Jeffreys' and Reference Priors
Jeffreys' prior is proportional to the square root of the determinant of the Fisher information matrix. Additionally, it is invariant under one-to-one transformations of the parameters. In particular, Jeffreys' prior has many optimality properties in the absence of nuisance parameters for regular models where asymptotic normality holds [27,28]. The following theorem gives the Jeffreys' prior of the parameters (λ, µ).

Proof. According to the Fisher information matrix (15) and the definition of Jeffreys' prior, π_J(λ, µ) ∝ |I(λ, µ)|^{1/2}. Computing the determinant of (15) and taking the square root, (16) holds.
Besides Jeffreys' prior, the reference prior also plays an important role in objective Bayesian analysis. It was proposed by Bernardo [29], and its idea is to maximize the expected Kullback-Leibler divergence between the prior and posterior distributions. The posterior distribution based on the reference prior has many nice properties, such as invariance, consistency under marginalization, and consistent sampling properties [27,30]. If there is only one parameter in the model, the reference prior is equal to Jeffreys' prior; if there are multiple parameters, the reference prior is typically not equal to Jeffreys' prior. Before deriving the reference prior, the parameters need to be sorted by descending interest; the ordering reflects the relative importance of the different parameters. If the reference priors are the same for different orderings, then the reference prior is robust to the order of the parameters [21,30]. Now, we derive the reference priors based on different parameter orderings.
Proof.The proof is given in Appendix A.
According to Theorem 2, the reference prior does not depend on the order of the parameters for the TED model. That is, in this case, there is a unique reference prior, which is robust to the ordering of the parameters.

Probability Matching Prior
The probability matching prior was proposed by [31]. Bayesian credible sets based on this prior have either exactly or approximately valid frequentist coverage probabilities [31,32]. In this subsection, we verify whether the derived Jeffreys' prior and reference prior are probability matching priors.
Suppose that the parameter vector is θ = (θ_1, θ_2), and θ_1 is the parameter of interest. Based on the probability matching prior, the credible interval for θ_1 has a coverage error of order O(n^{-1}) in the frequentist sense, as in (18). The priors that satisfy (18) are defined as probability matching priors.
To obtain the probability matching prior, we list two conclusions from [23]: (1) For the parameter vector θ = (θ_1, θ_2), θ_1 is the parameter of interest and θ_2 is the nuisance parameter, with Fisher information matrix I(θ). Then, based on the interest parameter θ_1, the probability matching prior π(θ_1, θ_2) is the solution of Equation (20). (2) Furthermore, if θ_1 and θ_2 are orthogonal, i.e., I_12 = I_21 = 0, then Equation (20) is simplified accordingly. According to these two conclusions, we have the following theorem.
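For readers' convenience, the orthogonal-case condition in conclusion (2) can be written in the standard Tibshirani form (a reconstruction from the general matching-prior theory under the assumption I_12 = I_21 = 0; it is not copied from the paper's own display):

```latex
\frac{\partial}{\partial \theta_1}\Big\{ \pi(\theta_1,\theta_2)\, I_{11}^{-1/2}(\theta_1,\theta_2) \Big\} = 0,
\qquad\text{so that}\qquad
\pi(\theta_1,\theta_2) \;\propto\; I_{11}^{1/2}(\theta_1,\theta_2)\, g(\theta_2),
```

where g(θ_2) is an arbitrary positive function of the nuisance parameter.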

Theorem 3.
(1) The Jeffreys' prior π_J(λ, µ) is not a probability matching prior for λ, but is the probability matching prior for µ.
(2) The reference prior π_R(λ, µ) is the probability matching prior for both λ and µ.

Posterior Analysis

Posterior Distribution
In order to make sense of posterior inferences, we need to validate whether the corresponding posterior distributions are proper or not.
Proof.The proof is given in Appendix B.
From (23), under the Jeffreys' prior π_J(λ, µ), the posterior distributions of λ and µ are given by (25) and (26), respectively, where Ga(a, b) represents a gamma distribution with shape parameter a and scale parameter 1/b.
Proof.The proof is given in Appendix C.

Sampling Algorithm
The objective Bayesian estimates of the parameters λ and µ are the expectations of the corresponding posterior distributions. However, from the posterior distributions (25) and (26), it is observed that the posterior distribution of µ does not belong to any known parametric family. Additionally, it is difficult to obtain an explicit expression for the marginal posterior density of µ. Therefore, we apply the MH algorithm to generate MCMC samples, because it can flexibly generate samples using a proposal distribution.
Under Jeffreys' prior, the detailed procedures of the objective Bayesian estimates are as follows.For the reference prior, the procedures are similar.
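The MH procedure above can be condensed into a minimal random-walk sampler; this sketch is generic (any log posterior known up to a normalizing constant can be plugged in), and the step size, chain length, and burn-in are illustrative choices, not the paper's settings.

```python
import numpy as np

def metropolis_hastings(log_post, init, n_iter=15000, burn_in=5000,
                        step=0.5, seed=0):
    # Random-walk Metropolis-Hastings: propose x' = x + step*eps with a
    # symmetric Gaussian proposal; accept with probability min(1, pi(x')/pi(x)).
    rng = np.random.default_rng(seed)
    x = float(init)
    lp = log_post(x)
    chain = np.empty(n_iter)
    for t in range(n_iter):
        prop = x + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # MH acceptance step
            x, lp = prop, lp_prop
        chain[t] = x
    return chain[burn_in:]                         # discard burn-in draws
```

The posterior mean of the retained draws gives the Bayesian point estimate, and empirical quantiles of the draws give credible intervals.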

Simulation Study
In this section, we adopt Monte Carlo simulation to analyze the performance of the objective Bayesian estimates. It is assumed that the initial value of the degradation process is zero, that is, Y(t = 0) = 0; each unit is measured m = 15 times; and ∆t = 1. The threshold is ω = 30. Two cases of values for the parameters (λ, µ) in the TED model are considered: Case I: λ = 0.2, µ = 1; Case II: λ = 0.5, µ = 1.5.

Comparison of Point Estimators
The objective Bayesian estimators under the Jeffreys' and reference priors are compared with the MLE of Section 3.1 and the subjective Bayesian estimates of [19] in terms of the average biases (ABs) and mean squared errors (MSEs) to find the most efficient estimation method. In [19], the subjective prior distributions of the parameters are assumed to be λ ∼ N(a, b) and µ ∼ N(c, d), where the hyperparameters (a, b, c, d) are determined subjectively. We adopt two subjective priors. One is π_s1(λ, µ): λ ∼ N(0.2, 0.1), µ ∼ N(1, 0.2), for which the means equal the true parameter values in Case I, i.e., E(λ) = 0.2, E(µ) = 1; the other is π_s2(λ, µ): λ ∼ N(0.5, 0.3), µ ∼ N(2, 0.2), for which the means equal the true parameter values in Case II, i.e., E(λ) = 0.5, E(µ) = 2.
According to the values of the parameters, samples can be generated. For each simulated sample, we calculate the MLE, the objective Bayesian estimate, and the subjective Bayesian estimate. At the same time, both the absolute bias and the squared error between the estimated value and the true value are calculated. We repeat these procedures Q (=6000) times; then, the corresponding averages of the estimated parameters can be obtained. Taking the AB and MSE of the parameter µ as an example, the expressions are AB(µ̂) = (1/Q)Σ_{q=1}^{Q} |µ̂_q − µ| and MSE(µ̂) = (1/Q)Σ_{q=1}^{Q} (µ̂_q − µ)², where µ̂_q is the estimate from the qth replication. Tables 2 and 3 present the estimated results for Cases I and II, respectively. From Tables 2 and 3, it is observed that (1) as anticipated, the ABs (MSEs) of all estimates of the parameters become smaller as the sample size increases; that is to say, the performance continues to improve with increasing sample size. Additionally, the ABs (MSEs) approach 0 as the sample size increases; therefore, the estimates are asymptotically unbiased.
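The AB and MSE summaries above amount to simple averages over the Q replications; a small helper (illustrative, matching the expressions for µ) is:

```python
import numpy as np

def ab_mse(estimates, true_value):
    # Average absolute bias and mean squared error over Q replications.
    est = np.asarray(estimates, dtype=float)
    ab = np.mean(np.abs(est - true_value))
    mse = np.mean((est - true_value) ** 2)
    return ab, mse
```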
(2) Compared with the MLE and the subjective Bayesian estimates, the objective Bayesian estimates under π_J(λ, µ) and π_R(λ, µ) have smaller ABs and MSEs in most situations. In particular, the objective Bayesian estimators under the reference prior π_R(λ, µ) have the smallest ABs and MSEs. At the same time, the reference prior has many nice properties, such as invariance, consistency under marginalization, and consistent sampling properties [27,33], and π_R(λ, µ) is the probability matching prior for λ and µ. Therefore, it is recommended to adopt the reference prior π_R(λ, µ) to make inferences for the TED model.
(3) In Case I, the ABs and MSEs under the subjective prior π_s1(λ, µ) are smaller than those under the subjective prior π_s2(λ, µ), because the mean of the subjective prior π_s1(λ, µ) is equal to the true value. The same situation holds in Case II. Therefore, the values of the hyperparameters of the subjective prior are important to the subjective Bayesian estimators and should be reasonably determined.

Comparison of Confidence Intervals
The performances of the confidence intervals for the objective Bayesian method are compared with those of the ACI and the subjective Bayesian method in terms of the width of the confidence interval (WCI) and the coverage probability (CP). For each simulated sample, the confidence interval is obtained at the 95% confidence level. Furthermore, we calculate the length of the interval and check whether the interval covers the true value of the parameter. These procedures are repeated Q (=6000) times. Then, the WCI is obtained as the average length of all confidence intervals, and the CP is the number of confidence intervals that cover the true values divided by Q. The WCIs and CPs for Case I and Case II are presented in Tables 4 and 5, respectively. Due to space limitations, we only present the results at the 95% confidence level here.
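The WCI and CP summaries can be computed from the Q simulated intervals as follows (an illustrative helper; each interval is a (lower, upper) pair):

```python
import numpy as np

def wci_cp(intervals, true_value):
    # Average interval width (WCI) and empirical coverage probability (CP).
    iv = np.asarray(intervals, dtype=float)
    wci = np.mean(iv[:, 1] - iv[:, 0])
    cp = np.mean((iv[:, 0] <= true_value) & (true_value <= iv[:, 1]))
    return wci, cp
```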
From Tables 4 and 5, (1) it is observed that the WCIs become smaller and the CPs approach 0.95 as the sample size n increases. That is, the performance of the interval estimates improves as the sample size increases.
(2) Among the different interval estimation methods, the objective Bayesian estimates perform better than the others in terms of WCIs and CPs in most situations. In particular, the objective Bayesian estimators under the reference prior π_R(λ, µ) have the shortest WCIs and the CPs closest to 0.95. Therefore, the reference prior π_R(λ, µ) is recommended for the TED model.
(3) To study the effect of the hyperparameters of the subjective prior on the efficiency of the Bayesian credible interval, we try two subjective prior distributions with different values for λ and µ. In Case I, the mean of the subjective prior distribution π_s1(λ, µ) is equal to the true value, and the performance based on π_s1(λ, µ) is better than that based on π_s2(λ, µ) in terms of WCIs and CPs. In Case II, the mean of the subjective prior π_s2(λ, µ) is equal to the true value, and the performance based on π_s2(λ, µ) is better than that based on π_s1(λ, µ). That is to say, a subjective prior distribution with reasonable information about the parameters improves the performance of the Bayesian credible interval.

An Illustrative Example
In this section, a real example of GaAs lasers is used to illustrate the performance of the proposed objective Bayesian estimates. The degradation data of the GaAs lasers are taken from Table C17 of Meeker and Escobar [34]. For a GaAs laser device, the percent increase in operating current grows over time. When the percent increase in operating current of the GaAs laser reaches a predefined threshold ω, the device is considered to have failed. In this real example, 15 GaAs lasers were tested at 80 °C. The initial operating current of all the GaAs lasers is 0, i.e., y(t_i0) = 0, i = 1, 2, ..., 15. During the experiment, the percent increase in operating current was recorded every 250 h until 4000 h. The GaAs laser is considered to have failed when the operating current increases by 10%, i.e., ω = 10%. Figure 1 shows the degradation paths of the operating current of the 15 GaAs lasers. This degradation dataset has been analyzed in several references [17,35]. Peng [35] compared seven degradation models and found that the IG process is the best one according to Akaike's information criterion (AIC). Furthermore, Xu [17] adopted the TED process to model these degradation data and used the MLE to obtain point estimates of the parameters. In addition, compared with the Wiener, Gamma, and IG processes, they found that the TED model provided a better fit than these three commonly used degradation models. Following Xu [17], we use the TED model to fit these degradation data. Then, under the Jeffreys' prior and the reference prior, we obtain the objective Bayesian estimates of the parameters by using the MH algorithm. The initial values of the parameters are chosen as λ = 3, µ = 0.5. The number of iterations is D = 15,000, and the first D_0 = 5000 simulated samples are discarded. The remaining samples are then used for the objective Bayesian estimates. Figures 2 and 3 show the sampling processes of the parameters based on the Jeffreys' prior and the reference prior,
respectively. From Figures 2 and 3, the trace plots show random scatter around the mean values depicted by the solid lines for the simulated parameter values. That is, these plots signify the convergence of the MH algorithm. Table 6 shows the estimates of the parameters obtained by the MLE and the objective Bayesian method. From Table 6, the estimators based on the different methods are relatively close when all 15 lasers are used. Furthermore, we compare the estimates of the parameters for the different methods in the case of small sample sizes. We randomly select 5 and 10 GaAs lasers from the 15 tested devices, respectively; the corresponding estimators based on the selected GaAs lasers are then obtained by the different methods. The estimated parameters are also shown in Table 6. It is observed that the estimators become closer, and should be more accurate, as the sample size increases. The objective Bayesian estimators perform better than the MLE. Especially in the case of a small sample size (e.g., n = 5), the MLE is greatly affected by the sample size and its volatility is relatively large; in contrast, the objective Bayesian estimate is robust. These results reveal the advantage of the objective Bayesian estimators. Thus, we recommend using the objective Bayesian estimate for the TED model, especially for small sample sizes. Furthermore, the TED model is compared with the three commonly used stochastic process models in terms of the log-likelihood function value and the Akaike information criterion (AIC). The model with the smallest AIC and largest log-likelihood function value is considered the best model. In theory, since these three stochastic processes are special cases of the TED process, the performance of the TED model should be at least as good. The results are shown in Table 7.
From Table 7, as expected, the TED model with p = 2.8511 has the maximal log-likelihood function value and the smallest AIC. Therefore, the TED model can be seen as the best degradation model for the GaAs laser data. In addition, it is observed that the AIC value of the IG process is close to that of the TED model. This is because the estimated parameter is p = 2.8511, which means that the TED model is close to the IG model (p = 3). At the same time, the IG model is simpler than the TED model; therefore, the IG process is also suitable for these laser data.
Furthermore, the parameter p (p ∈ (−∞, 0] ∪ [1, ∞)) is a power classification parameter; that is, different values of p represent different stochastic process models. Besides the three commonly used stochastic process models, the TED model also includes some other stochastic process models when p ≠ 0, 2, 3. In order to demonstrate the influence of p, we plot the AIC with respect to the parameter p in Figure 4. From Figure 4, the AIC curve varies significantly with p; that is, the parameter p has a great impact on the AIC. Thus, p is an important parameter of the TED model and should be determined carefully. Additionally, for the three commonly used stochastic process models, the applicability of each model should be analyzed before conducting degradation data analysis. Therefore, the GaAs laser degradation data are better described by the TED model with the proposed objective Bayesian methodology than by the commonly used degradation models.
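The model comparison above uses AIC = 2k − 2 ln L, where k is the number of free parameters. The sketch below illustrates the selection rule with placeholder log-likelihood values (they are hypothetical, not the results in Table 7):

```python
import numpy as np

def aic(log_lik, n_params):
    # Akaike information criterion: AIC = 2k - 2 ln L (smaller is better).
    return 2 * n_params - 2 * log_lik

# Hypothetical (log-likelihood, #parameters) pairs for the four candidates.
models = {"Wiener": (-120.5, 2), "Gamma": (-118.2, 2),
          "IG": (-110.7, 2), "TED": (-109.0, 3)}
best = min(models, key=lambda name: aic(*models[name]))
```

Note that the TED model pays a one-parameter penalty for p, so it is selected only when its log-likelihood gain exceeds that penalty.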

Conclusions
The TED model is a generalized degradation model that includes the Wiener, Gamma, and IG processes, among others. For the statistical inference of the TED model, the existing research mainly focuses on subjective Bayesian or frequentist methods. However, in applications, it is sometimes difficult to obtain prior information for the unknown parameters, or there is no prior information at all. To overcome these problems, an objective Bayesian estimate can be used. In this study, two noninformative priors, Jeffreys' prior and the reference prior, are adopted to obtain the objective Bayesian estimates for the TED model, and the corresponding posterior properties are also discussed.
In theory, we prove that the posterior distributions based on Jeffreys' prior and the reference prior are proper. Furthermore, we prove that the Jeffreys' prior π_J(λ, µ) is not the probability matching prior for λ but is the probability matching prior for µ, whereas the reference prior π_R(λ, µ) is the probability matching prior for both λ and µ. In addition, the reference prior has some nice properties, such as invariance under one-to-one transformations and consistency under marginalization. Thus, the reference prior π_R(λ, µ) is recommended for making inferences for the TED model.
To further illustrate the proposed method and its effects, we compare the performance of the objective Bayesian method with that of the MLE and subjective Bayesian methods through Monte Carlo simulation. The results show that the performance under the reference prior is better than that of the subjective Bayesian estimates and the MLE in terms of average bias, MSE, width of confidence interval, and coverage probability, especially for small sample sizes. These results reveal the advantages of the reference prior. In addition, the performance of the subjective Bayesian estimators with mis-specified hyperparameters is the worst. These results indicate that prior distributions and their hyperparameters should be reasonably determined. Finally, the proposed objective Bayesian methodology is fully illustrated using a real degradation dataset, demonstrating that the TED model is effective in describing the degradation process and has a wide range of applications.
In summary, our study contributes three aspects. First, we derive the Jeffreys' prior and the reference prior of the TED model. This solves the problem of how to choose an appropriate prior for the TED model without historical data, especially in cases of small sample size. Second, the proposed priors are proven to have proper posterior distributions and probability matching properties. Third, the corresponding Bayesian inference is obtained by using the MH algorithm. At the same time, the simulation results reveal that the proposed method provides more accurate estimates than the subjective Bayesian estimate and the MLE.
In future studies, the objective Bayesian estimates can be considered for the TED model with random effects, the TED model under accelerated degradation test and the entropy of the TED model, etc.
(2) Next, we prove the reference prior for the ordering (µ, λ). According to the Fisher information matrix I(λ, µ), the same reference prior is obtained as for the ordering (λ, µ). This completes the proof of Theorem 2.

This implies that the posterior distribution π_J(λ, µ|X) is proper.

Figure 1 .
Figure 1. Degradation paths of the GaAs lasers.

Figure 3 .
Figure 3. The sampling process of the posterior distribution under the reference prior π_R(λ, µ).

Figure 4 .
Figure 4. The AICs based on different values of the parameter p.

Table 1 .
Five well-known special cases of the TED model.

Table 2 .
ABs and MSEs (within brackets) of the parameters based on the MLE, subjective Bayesian, and objective Bayesian estimations in Case I.

Table 3 .
ABs and MSEs (within brackets) of the parameters based on the MLE, subjective Bayesian, and objective Bayesian estimations in Case II.

Table 4 .
The WCIs and CPs (within brackets) of the parameters in Case I.

Table 5 .
The WCIs and CPs (within brackets) of the parameters in Case II.

Table 7 .
AIC and log-likelihood of the different models for the GaAs laser data.