Analysis for Xgamma Parameters of Life under Type-II Adaptive Progressively Hybrid Censoring with Applications in Engineering and Chemistry

Censoring mechanisms are widely used in various life tests, in fields such as medicine, engineering, and biology, as they save overall test time and cost. In this context, we consider the problem of estimating the unknown xgamma parameter and some survival characteristics, such as the reliability and failure rate functions, in the presence of adaptive type-II progressive hybrid censored data. For this purpose, the maximum likelihood and Bayesian inferential approaches are used. Using the observed Fisher information under the s-normal approximation, different asymptotic confidence intervals for any function of the unknown parameter were constructed. Using the flexible gamma prior, Bayes estimators against the squared-error loss were developed. Two Bayesian approximation procedures, Lindley's approximation and the Metropolis-Hastings algorithm, were used to carry out the Bayes estimates and to construct the associated credible intervals. An extensive simulation study was implemented to compare the performance of the different methods. To validate the proposed inferential methodologies, two practical studies using datasets from the engineering and chemical fields are discussed.


Introduction
The gamma distribution is one of the most popular models for analyzing constant and non-constant failure rate data. It includes the exponential, Erlang, and chi-square distributions as special cases. In recent years, using the exponential and/or gamma as the parent distribution (among others), several new models have been developed that are flexible enough to fit complex datasets accurately. New statistical models created from finite mixtures of probability distributions have played a vital role in modeling real-life phenomena. Further, finite mixture densities have been widely used to model various data; for example, see [1,2]. However, the gamma distribution does not exhibit a bathtub or upside-down bathtub shaped hazard rate function and, thus, cannot be used to model the complex lifetime of a system.
For this reason, Reference [3] proposed a new one-parameter xgamma distribution, denoted by XGD(δ), as a special finite mixture of the exponential and gamma distributions. They derived various mathematical, structural, and survival properties of, and inferential results for, the XGD. Moreover, they showed that the XGD is useful for modeling datasets with a monotone failure rate and that it has applicability in analyzing lifetime data.
Suppose that the lifetime random variable X of an experimental unit follows XGD(δ); then the probability density function (PDF) f(·) and cumulative distribution function (CDF) F(·) of X are given, respectively, by

f(x; δ) = (δ²/(1 + δ)) (1 + δx²/2) e^{−δx}, x > 0, δ > 0,   (1)

and

F(x; δ) = 1 − ((1 + δ + δx + δ²x²/2)/(1 + δ)) e^{−δx}, x > 0, δ > 0,   (2)

where δ is the scale parameter. The reliability characteristics of any lifetime model are the main features for evaluating the capacity of any electronic system that a reliability practitioner frequently uses. Therefore, some survival parameters of the XGD are also investigated as unknown parameters; the reliability function (RF) R(·) and failure rate function (FRF) h(·) at mission time t are given, respectively, by

R(t; δ) = ((1 + δ + δt + δ²t²/2)/(1 + δ)) e^{−δt}, t > 0, δ > 0,   (3)

and

h(t; δ) = δ²(1 + δt²/2) / (1 + δ + δt + δ²t²/2), t > 0, δ > 0.   (4)

Moreover, Reference [3] indicates that the XGD belongs to the exponential family of distributions and has more flexibility than the exponential distribution. Reference [4] studied the estimation procedures of the XGD parameter and some related important survival characteristics under type-II progressive censoring (PCS-T2). Furthermore, they showed that xgamma random variates are stochastically larger than those of the exponential and Lindley distributions. Reference [5] discussed the problem of estimating the parameter and reliability characteristics of the XGD under hybrid type-II censored data. Recently, Reference [6] derived both classical and Bayes estimates of some parameters of life for the XGD under complete sampling.
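The PDF, CDF, RF, and FRF of the XGD can be coded directly for quick numerical checks; a minimal Python sketch (function and variable names are ours, not the paper's, with `d` standing for δ):

```python
import math

def xg_pdf(x, d):
    """xgamma PDF: f(x; d) = (d^2/(1+d)) * (1 + d*x^2/2) * exp(-d*x)."""
    return (d**2 / (1 + d)) * (1 + d * x**2 / 2) * math.exp(-d * x)

def xg_cdf(x, d):
    """xgamma CDF: F(x; d) = 1 - (1 + d + d*x + d^2*x^2/2)/(1+d) * exp(-d*x)."""
    return 1 - (1 + d + d * x + d**2 * x**2 / 2) / (1 + d) * math.exp(-d * x)

def xg_rf(t, d):
    """Reliability function R(t; d) = 1 - F(t; d)."""
    return (1 + d + d * t + d**2 * t**2 / 2) / (1 + d) * math.exp(-d * t)

def xg_frf(t, d):
    """Failure rate h(t; d) = f(t; d) / R(t; d)."""
    return d**2 * (1 + d * t**2 / 2) / (1 + d + d * t + d**2 * t**2 / 2)
```

As a sanity check, these functions reproduce the paper's reference values R(0.1) ≈ 0.9523 and h(0.1) ≈ 0.4774 for δ = 1.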
The adaptive type-II progressive hybrid censoring scheme (APHCS-T2), introduced by [7], saves the total test time and increases the efficiency of statistical inference. It can be described in the context of a life-testing experiment as follows: suppose that n identical units are placed on test at time zero; m (1 ≤ m < n) is the pre-fixed number of failures, and the experiment is allowed to run over a time T, which is an ideal total test time. The progressive censoring scheme R_i, i = 1, 2, . . . , m, is provided in advance, but the values of some of the R_i may change accordingly during the experiment. If X_(m) < T, the experiment proceeds with the R_i's and stops at X_(m); in this situation, APHCS-T2 degenerates into the usual PCS-T2 and the experiment stops at the time of the m-th observed failure. Otherwise, if X_(m) does not occur before time T, i.e., X_(d) < T < X_(d+1), where (d + 1) < m and X_(d) is the d-th failure occurring before time T, then no items are withdrawn from the experiment, by setting R_i = 0 for i = d + 1, . . . , m − 1; the experiment stops at the time of the m-th failure, and all remaining surviving items are removed, i.e., R_m = n − m − ∑_{i=1}^{d} R_i. The main advantage of APHCS-T2 is that it enables us to obtain the effective number of failures m and assures that the total test time will not be too far away from the pre-specified time T.
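The removal-pattern bookkeeping described above can be sketched in a few lines (illustrative code under our own naming; it assumes the planned scheme R sums to n − m):

```python
def effective_removals(x, R, T):
    """Given planned PCS-T2 removals R (length m), ordered failure times x
    (length m), and threshold T, return the removal pattern actually applied
    under APHCS-T2 (illustrative sketch, not the paper's code)."""
    n = len(x) + sum(R)                # total units on test
    m = len(x)
    if x[m - 1] < T:                   # X_(m) < T: reduces to ordinary PCS-T2
        return list(R)
    d = sum(1 for xi in x if xi < T)   # failures observed before T
    Radj = list(R[:d]) + [0] * (m - 1 - d)
    Radj.append(n - m - sum(R[:d]))    # all survivors removed at X_(m)
    return Radj
```

For example, with n = 8, m = 4, planned R = (1, 1, 1, 1) and failures at 0.2, 0.5, 0.9, 1.4, setting T = 0.6 yields the effective pattern (1, 1, 0, 2).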
To the best of our knowledge, no work has addressed the estimation of the distribution parameter and/or reliability characteristics of the XGD under adaptive progressive censoring. Thus, by demonstrating that the XGD may be used as a survival model utilizing APHCS-T2, the purpose of this study is to close this gap. Our objectives are: first, to derive both point and interval estimators of the unknown parameter and survival characteristics of the XGD, using both the maximum likelihood and Bayesian approaches, when the data are collected under the APHCS-T2 life test. Using a gamma prior density under the squared-error loss (SEL) function, the Bayes estimators (BEs) of the unknown parameter and the survival characteristics are obtained. When the informative prior density is taken into account, an approach to determine the hyperparameter values is proposed. Second, since the proposed estimators cannot be obtained in explicit expressions, we apply some numerical procedures to evaluate them, such as the Newton-Raphson (N-R) iterative method, Lindley's approximation, and the Metropolis-Hastings (M-H) algorithm. Presently, Bayesian statistics developed based on Markov chain Monte Carlo (MCMC) techniques are widely used in many fields. Using the normal approximation of the MLE (NA) and of the log-transformed MLE (NL), approximate confidence intervals (ACIs) for the unknown parameter, and for any function of it, are constructed. Moreover, using the MCMC simulated samples, two-sided BCI/HPD credible intervals are also constructed. Extensive numerical comparisons have been made to compare the performance of the classical and Bayesian estimates. The point estimates have been compared in terms of their root mean squared errors (RMSEs) and relative absolute biases (RABs); the interval estimates have been compared in terms of their average confidence widths (ACWs).
Lastly, two datasets from industrial and chemical fields are analyzed to illustrate our proposed estimators.
The rest of the paper is organized as follows: maximum likelihood and Bayesian inferential procedures of the unknown parameter and the reliability characteristics are provided in Sections 2 and 3, respectively. In Section 4, asymptotic and credible intervals are constructed. Monte Carlo simulation results are presented in Section 5. Two practical examples using real datasets are analyzed and investigated in Section 6. Finally, some concluding remarks are provided in Section 7.

Likelihood Inference
Suppose x = (X_{1:m:n}, . . . , X_{m:m:n}) is an APHCS-T2 sample drawn from an xgamma population with PDF (1) and CDF (2), with the respective removal pattern R_1, . . . , R_d, 0, . . . , 0, R_m*, where R_m* = n − m − ∑_{i=1}^{d} R_i. Henceforward, we will use X_i instead of X_{i:m:n}. The joint likelihood function of the observed data is

L(δ | x) ∝ ∏_{i=1}^{m} f(x_i; δ) ∏_{i=1}^{d} [1 − F(x_i; δ)]^{R_i} [1 − F(x_m; δ)]^{R_m*}.   (5)

Substituting (1) and (2) into (5), the likelihood function can be written, up to proportionality, as

L(δ | x) ∝ δ^{2m} (1 + δ)^{−m} e^{−δ ∑_{i=1}^{m} x_i} ∏_{i=1}^{m} (1 + δx_i²/2) ∏_{i=1}^{d} [ξ(x_i; δ)]^{R_i} [ξ(x_m; δ)]^{R_m*},   (6)

where ξ(x; δ) = ((1 + δ + δx + δ²x²/2)/(1 + δ)) e^{−δx}. The corresponding log-likelihood function, ℓ(·) ∝ log L(·), of (6) becomes

ℓ(δ | x) ∝ 2m log δ − m log(1 + δ) − δ ∑_{i=1}^{m} x_i + ∑_{i=1}^{m} log(1 + δx_i²/2) + ∑_{i=1}^{d} R_i log ξ(x_i; δ) + R_m* log ξ(x_m; δ).   (7)

From (7), the MLE δ̂ of δ can be obtained by solving the following likelihood equation:

2m/δ − m/(1 + δ) − ∑_{i=1}^{m} x_i + ∑_{i=1}^{m} (x_i²/2)/(1 + δx_i²/2) + ∑_{i=1}^{d} R_i ξ′(x_i; δ)/ξ(x_i; δ) + R_m* ξ′(x_m; δ)/ξ(x_m; δ) = 0,   (8)

where ξ′(·) denotes the first derivative of ξ(·) with respect to δ. It is clear from (8) that a closed-form solution for δ̂ does not exist and cannot be obtained analytically. Thus, one may use any suitable numerical procedure, such as the N-R iterative method, to obtain the desired MLE δ̂ for any given values of n, m, T and (x_i, R_i), i = 1, 2, . . . , m. Once the estimate δ̂ is obtained, the associated MLEs of the parameters of life, R(·) and h(·) (for t > 0), can be easily derived by replacing δ with δ̂ in (3) and (4), yielding R̂(t) = R(t; δ̂) and ĥ(t) = h(t; δ̂), respectively.
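Since the likelihood equation has no closed-form root, the MLE must be found numerically. The sketch below codes the censored log-likelihood and an N-R iteration with numerical derivatives; it is our own illustration, not the authors' implementation:

```python
import math

def xg_loglik(d, x, R):
    """Log-likelihood of delta = d for an APHCS-T2 xgamma sample: observed
    failure times x with effective removal pattern R (zeros where no units
    were withdrawn)."""
    ll = 0.0
    for xi, ri in zip(x, R):
        ll += (2 * math.log(d) - math.log(1 + d)
               + math.log(1 + d * xi**2 / 2) - d * xi)        # log f(x_i; d)
        if ri > 0:                                            # r_i survivors censored at x_i
            ll += ri * math.log((1 + d + d * xi + d**2 * xi**2 / 2)
                                / (1 + d) * math.exp(-d * xi))
    return ll

def xg_mle(x, R, d0=1.0, tol=1e-8):
    """Newton-Raphson on the score, using central-difference derivatives."""
    d, h = d0, 1e-4
    for _ in range(200):
        g1 = (xg_loglik(d + h, x, R) - xg_loglik(d - h, x, R)) / (2 * h)
        g2 = (xg_loglik(d + h, x, R) - 2 * xg_loglik(d, x, R)
              + xg_loglik(d - h, x, R)) / h**2
        d_new = d - g1 / g2
        if d_new <= 0:
            d_new = d / 2                                     # keep the iterate positive
        if abs(d_new - d) < tol:
            return d_new
        d = d_new
    return d
```

A quick check on a small complete sample (all removals zero) confirms the returned δ̂ is a local maximizer of the log-likelihood.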

Bayes Procedure
In this section, the Bayes estimators of the unknown parameters δ, R(·) and h(·) are developed under the SEL function, l_S(·), which is defined as

l_S(η(δ), η̃(δ)) = (η̃(δ) − η(δ))².   (9)

Using (9), the Bayes estimator η̃(δ) (say) of the unknown parameter, or of any function η(δ) (say) of δ, is given by the posterior mean of η(δ). However, any other loss function can be easily incorporated. It is known that the family of gamma distributions is flexible enough to cover a large variety of prior beliefs of the experimenter; see [23]. Therefore, we assume that the unknown parameter δ has the conjugate gamma prior

π(δ) ∝ δ^{a−1} e^{−bδ}, δ > 0, a, b > 0.   (10)

Combining (10) with (6) and substituting into the continuous Bayes' theorem, the posterior PDF of δ becomes

π(δ | x) = C^{−1} δ^{2m+a−1} (1 + δ)^{−m} e^{−δ(b + ∑_{i=1}^{m} x_i)} ∏_{i=1}^{m} (1 + δx_i²/2) ∏_{i=1}^{d} [ξ(x_i; δ)]^{R_i} [ξ(x_m; δ)]^{R_m*},   (11)

where C is the normalizing constant of (11), given by C = ∫₀^∞ L(δ | x) π(δ) dδ. Hence, using (11), the Bayes estimate of any parameter of life η(δ) = δ, R(t) or h(t) (say) against the SEL function is given by

η̃(δ) = ∫₀^∞ η(δ) π(δ | x) dδ.   (12)

It is clear that the posterior PDF (11) is not easily tractable due to its implicit mathematical expression, so the BEs in (12) cannot be developed in closed form. To overcome this problem, two approximation techniques, namely the Lindley and MCMC procedures, are used.
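As a cross-check on the SEL Bayes estimate, the posterior mean can be computed by brute-force quadrature when the sample is small. The sketch below uses, for simplicity, a complete (uncensored) sample and is our own illustration:

```python
import math

def log_post_kernel(d, x, a, b):
    """Unnormalized log-posterior for a complete xgamma sample x under a
    gamma(a, b) prior (illustrative, complete-sample likelihood)."""
    lp = (a - 1) * math.log(d) - b * d                        # gamma prior kernel
    for xi in x:
        lp += (2 * math.log(d) - math.log(1 + d)
               + math.log(1 + d * xi**2 / 2) - d * xi)
    return lp

def bayes_sel(x, a, b, eta=lambda d: d, grid=4000, hi=20.0):
    """Posterior mean of eta(delta) under SEL via midpoint quadrature."""
    ds = [hi * (i + 0.5) / grid for i in range(grid)]
    lps = [log_post_kernel(d, x, a, b) for d in ds]
    mx = max(lps)                                             # stabilize exp()
    w = [math.exp(lp - mx) for lp in lps]
    return sum(eta(d) * wi for d, wi in zip(ds, w)) / sum(w)
```

With no data the posterior reduces to the gamma(a, b) prior, so `bayes_sel([], 4.0, 2.0)` should return roughly the prior mean a/b = 2.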

Lindley's Approximation
Reference [24] suggested a procedure to approximate the desired Bayes estimator by reducing the ratio of posterior integrals to a finite number of terms. According to this procedure, the approximated value η̃(δ) of η(δ) in (12), for any function of δ and a sufficiently large sample size n, is given by

η̃(δ) ≈ η(δ̂) + (1/2)[η″(δ̂) + 2η′(δ̂)ρ(δ̂)]σ² + (1/2)η′(δ̂) L₃ σ⁴,   (13)

where σ² = −1/L₂ and ρ = ∂log π(·)/∂δ = (a − 1)/δ − b. All terms of (13) are evaluated at δ = δ̂. Now, to approximate the Bayes estimators using (13), the following quantities must be obtained:

L₂ = ∂²ℓ(δ | x)/∂δ² and L₃ = ∂³ℓ(δ | x)/∂δ³,   (14)

which are expressed in terms of ξ″(·) and ξ‴(·), the second and third derivatives of ξ(·) with respect to δ. The approximate BEs η̃(δ) of η(δ) using Lindley's approximation method can then be easily obtained. Unfortunately, the literature does not yet offer a method for constructing the associated credible intervals of η(δ) using Lindley's procedure. In view of this, we propose to use the M-H algorithm to generate MCMC samples from the posterior distribution, and then to compute the Bayes estimators as well as to construct the associated credible intervals.
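A generic numerical version of the one-parameter Lindley expansion (13) can be sketched as follows (our own code; the paper evaluates the required derivatives analytically):

```python
import math

def lindley(eta, loglik, logprior, d_mle, h=1e-3):
    """One-parameter Lindley approximation of the posterior mean of eta(delta),
    evaluated at the MLE d_mle, with all derivatives taken numerically."""
    # second and third derivatives of the log-likelihood at the MLE
    l2 = (loglik(d_mle + h) - 2 * loglik(d_mle) + loglik(d_mle - h)) / h**2
    l3 = (loglik(d_mle + 2 * h) - 2 * loglik(d_mle + h)
          + 2 * loglik(d_mle - h) - loglik(d_mle - 2 * h)) / (2 * h**3)
    sig2 = -1.0 / l2                                    # approximate posterior variance
    rho = (logprior(d_mle + h) - logprior(d_mle - h)) / (2 * h)
    e1 = (eta(d_mle + h) - eta(d_mle - h)) / (2 * h)
    e2 = (eta(d_mle + h) - 2 * eta(d_mle) + eta(d_mle - h)) / h**2
    return (eta(d_mle) + 0.5 * (e2 + 2 * e1 * rho) * sig2
            + 0.5 * l3 * e1 * sig2**2)
```

On a gamma-shaped toy likelihood ℓ(δ) = 10 log δ − 5δ with prior kernel 2 log δ − 2δ (exact posterior gamma(13, 7), mean ≈ 1.857), the Lindley value evaluates to 1.8, illustrating the quality of the approximation.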

M-H Algorithm
The M-H algorithm, one of the most useful MCMC techniques, is used to generate random samples from the posterior distribution through a proposal distribution. From a practical point of view, this method provides the Bayesian estimate in a chain form that is easy to apply in practice. One may refer to [25,26] for more applications related to this algorithm. To implement the M-H algorithm, conduct the following steps for the sample generation process:
Step 1: Start with an initial guess δ^(0) = δ̂.
Step 2: Set j = 1.
Step 3: Generate a candidate value δ* from a normal proposal distribution centered at δ^(j−1).
Step 4: Accept δ* with probability min{1, π(δ* | x)/π(δ^(j−1) | x)} and set δ^(j) = δ*; otherwise set δ^(j) = δ^(j−1).
Step 5: Set j = j + 1 and repeat Steps 3 and 4 N times to collect δ^(1), δ^(2), . . . , δ^(N).
Step 6: Using the outputs of Step 5, compute the parameters of life of XGD(δ), namely R(t) and h(t) for a distinct time t > 0, as R^(j)(t) = R(t; δ^(j)) and h^(j)(t) = h(t; δ^(j)), respectively.
To remove the effect of the initial guess and to guarantee the convergence of the sampler, the first N₀ (say) simulated variates are usually discarded at the beginning of the implementation (the burn-in period). Hence, the selected MCMC samples η^(j)(δ) = (δ^(j), R^(j)(t), h^(j)(t)), for j = N₀ + 1, . . . , N, can be used to develop the Bayesian inferences. The Bayes MCMC estimate of a parametric function η(δ) under the SEL function is then given by

η̃(δ) = (1/(N − N₀)) ∑_{j=N₀+1}^{N} η^(j)(δ).
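The sampler described above can be sketched as a random-walk M-H chain; for brevity, this illustration uses a complete-sample likelihood, and all names are ours:

```python
import math, random

def xg_logpost(d, x, a, b):
    """Unnormalized log-posterior: gamma(a, b) prior times a complete-sample
    xgamma likelihood (illustrative)."""
    if d <= 0:
        return -math.inf
    lp = (a - 1) * math.log(d) - b * d
    for xi in x:
        lp += (2 * math.log(d) - math.log(1 + d)
               + math.log(1 + d * xi**2 / 2) - d * xi)
    return lp

def mh_sample(x, a, b, d0, step=0.5, n_iter=12000, burn=2000, seed=1):
    """Random-walk M-H chain for delta; returns the post-burn-in draws."""
    rng = random.Random(seed)
    d = d0
    lp = xg_logpost(d, x, a, b)
    chain = []
    for _ in range(n_iter):
        cand = d + rng.gauss(0.0, step)                       # Step 3: normal proposal
        lp_cand = xg_logpost(cand, x, a, b)
        if rng.random() < math.exp(min(0.0, lp_cand - lp)):   # Step 4: accept/reject
            d, lp = cand, lp_cand
        chain.append(d)
    return chain[burn:]                                       # discard burn-in
```

With no data the chain targets the gamma(a, b) prior, so its post-burn-in mean should sit near a/b; negative candidates are always rejected, keeping every draw positive.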

Hyperparameter Value Selection
The elicitation procedure used to determine the hyperparameter values, when an informative prior for the density parameter is taken into account, is a main issue in Bayesian analysis. In the literature, this problem has been discussed by [27,28]. Moreover, the hyperparameter values for the unknown parameters of interest are obtained by using two independent pieces of information, namely the prior mean and the prior variance of the unknown parameter of the model under consideration. In this regard, we propose the following steps to determine the values of the hyperparameters a and b based on past samples:
Step 1: Set the parameter value of δ.
Step 2: Set the complete sample size n.
Step 3: Draw a random sample of size n from XGD(δ).
Step 4: Compute the MLE δ̂_J of δ from the generated sample.
Step 5: Repeat Steps 3 and 4 B times to obtain δ̂_1, δ̂_2, . . . , δ̂_B.
Step 6: Set the sample mean and sample variance of the δ̂_J equal to the mean and variance of the gamma density prior, respectively, as

(1/B) ∑_{J=1}^{B} δ̂_J = a/b,   (15)

and

(1/(B − 1)) ∑_{J=1}^{B} (δ̂_J − δ̄)² = a/b²,   (16)

where δ̄ = (1/B) ∑_{J=1}^{B} δ̂_J and B is the number of samples generated from the distribution under consideration.
Step 7: Solving (15) and (16) simultaneously, the estimated hyperparameters ă and b̆ of a and b turn out directly to be ă = δ̄²/S² and b̆ = δ̄/S², where δ̄ and S² denote the sample mean and sample variance of δ̂_1, . . . , δ̂_B given in (15) and (16).
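The moment matching in Steps 6 and 7 amounts to a few lines of code once the B past estimates of δ are available (illustrative sketch with our own names):

```python
def elicit_gamma_hyper(dhats):
    """Match the sample mean/variance of past estimates of delta to the
    gamma(a, b) prior mean a/b and variance a/b**2 (Steps 6 and 7)."""
    B = len(dhats)
    mean = sum(dhats) / B
    var = sum((d - mean) ** 2 for d in dhats) / (B - 1)
    a = mean**2 / var                   # a-breve
    b = mean / var                      # b-breve
    return a, b
```

For past estimates (1, 2, 3), with sample mean 2 and variance 1, this returns (a, b) = (4, 2), whose gamma prior indeed has mean 2 and variance 1.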

Asymptotic and Credible Intervals
In this section, using the NA and NL approximations of the MLE, the associated two-sided ACIs of the unknown parameter δ, or of any characteristic function such as R(·) and h(·), are constructed. In addition, the simulated MCMC variates of these unknown parameters are used to construct the associated BCI and HPD credible intervals.

Asymptotic Confidence Intervals
The asymptotic normal distribution of the MLE is the most common method for obtaining confidence bounds. In our problem, the exact distribution of δ̂ cannot be obtained easily because the likelihood Equation (8) is nonlinear in δ. Therefore, the observed Fisher information I(·) is used to develop the 100(1 − ε)% ACI of δ, by taking the negative of the second derivative of (7), as I(δ) = −L₂, where L₂ is given in (14). Consequently, the approximate variance of the MLE δ̂ is given by

V̂ar(δ̂) = I^{−1}(δ̂).   (18)

However, as mentioned in Section 1, XGD(δ) belongs to the one-parameter exponential family of distributions. Therefore, the sampling distribution of (δ − δ̂)/√V̂ar(δ̂) can be easily approximated by a standard normal distribution, N(0, 1); see [29]. Then, using large-sample theory based on the NA of the MLE δ̂, the 100(1 − ε)% two-sided ACI of δ is given by

δ̂ ∓ z_{ε/2} √V̂ar(δ̂),

where z_{ε/2} is the percentile of the standard normal distribution with right-tail probability ε/2. Since the characteristic parameters R(·) and h(·), denoted by φ(δ) (say), are continuous functions of δ, the variance estimate of any such function can be approximated, using the approximate variance of δ, by

V̂ar(φ̂(δ)) ≈ [∂φ(δ)/∂δ]² I^{−1}(δ) |_{δ=δ̂},

where I^{−1}(·) is given by (18); see [30]. Similarly, based on the NA of the MLE φ̂(δ), the 100(1 − ε)% two-sided ACIs for φ(δ) can be obtained as

φ̂(δ) ∓ z_{ε/2} √V̂ar(φ̂(δ)).
In some situations, the lower bound of the standard ACI based on the normal approximation may take a negative value for a life distribution parameter. To overcome this drawback, one may either replace the negative value with zero or use the NL in order to construct ACIs for parameters that take positive values; see [31]. Recently, the NL of the MLE has been used by several authors; for example, see [32-34]. Hence, using large-sample theory, the 100(1 − ε)% two-sided ACI based on the NL of the MLE of any function η(δ) is given by

η̂(δ) exp(∓ z_{ε/2} √V̂ar(η̂(δ)) / η̂(δ)),

where η̂(δ) = (δ̂, R̂(t), ĥ(t)) and z_{ε/2} is the percentile of the standard normal distribution with upper-tail probability ε/2.
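Both the NA and NL intervals are simple to compute from a point estimate and its approximate variance; a small sketch with our own helper names:

```python
import math
from statistics import NormalDist

def aci_na(est, var, eps=0.05):
    """NA interval: est -/+ z_{eps/2} * sqrt(Var)."""
    z = NormalDist().inv_cdf(1 - eps / 2)
    half = z * math.sqrt(var)
    return est - half, est + half

def aci_nl(est, var, eps=0.05):
    """NL interval: est * exp(-/+ z_{eps/2} * sqrt(Var)/est);
    the lower bound stays positive whenever est > 0."""
    z = NormalDist().inv_cdf(1 - eps / 2)
    w = math.exp(z * math.sqrt(var) / est)
    return est / w, est * w
```

For example, with estimate 0.5 and variance 0.25, the NA lower bound is negative while the NL lower bound remains positive, which is exactly the motivation for the log transformation.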

Credible Intervals
An interval based on the posterior distribution is known as a Bayesian confidence interval, or credible interval. To construct the two-sided BCIs of the unknown model parameter δ and the survival characteristics R(t) and h(t), first order the simulated MCMC variates after burn-in as η_(1) ≤ η_(2) ≤ . . . ≤ η_(N−N₀). Hence, the 100(1 − ε)% two-sided BCI for any function η(δ) is given by

(η_((N−N₀)ε/2), η_((N−N₀)(1−ε/2))),

where η_(x) denotes the variate located at ordered position x in the MCMC simulated sample after burn-in.
According to the method proposed by [35], using the ordered MCMC variates of η^(j)(δ) for j = N₀ + 1, . . . , N, the 100(1 − ε)% HPD credible interval for any function η(δ) can be constructed as

(η_(j*), η_(j* + [(1−ε)(N−N₀)])),

where j* is chosen such that

η_(j* + [(1−ε)(N−N₀)]) − η_(j*) = min_{1 ≤ j ≤ ε(N−N₀)} (η_(j + [(1−ε)(N−N₀)]) − η_(j)).

Here, [x] denotes the largest integer less than or equal to x.
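Both the equal-tailed BCI and the HPD interval can be computed directly from the post-burn-in draws; an illustrative sketch:

```python
def bci(draws, eps=0.05):
    """Equal-tailed Bayes credible interval from ordered MCMC draws."""
    s = sorted(draws)
    n = len(s)
    return s[int(n * eps / 2)], s[int(n * (1 - eps / 2)) - 1]

def hpd(draws, eps=0.05):
    """Shortest (HPD) interval: scan every window covering a (1 - eps)
    fraction of the ordered draws and keep the narrowest one."""
    s = sorted(draws)
    n = len(s)
    k = int((1 - eps) * n)                            # window length
    j = min(range(n - k + 1), key=lambda i: s[i + k - 1] - s[i])
    return s[j], s[j + k - 1]
```

For a symmetric posterior the two intervals essentially coincide, while for a skewed one the HPD interval is never wider than the equal-tailed BCI.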

Monte Carlo Simulation
To examine the behavior of the considered estimators of the model parameter, as well as the associated reliability and hazard rate functions of the XGD based on APHCS-T2, various Monte Carlo simulations are conducted by considering two different true values of the xgamma parameter, δ = 1 and δ = 3. Using the algorithm described in [7], a large number (1000) of APHCS-T2 samples, based on different combinations of n (the number of total test units) and m (the effective sample size), are generated from XGD(δ).
For each setting, the MLEs and Bayes MCMC estimates of the unknown parameter δ and the reliability characteristics R(t) and h(t) are computed. Approximate Bayes computations are developed using both the Lindley and M-H algorithm methods. Now, to generate APHCS-T2 samples from the XGD model, conduct the following steps:
Step 1: Generate an ordinary PCS-T2 sample (X_i, R_i), i = 1, 2, . . . , m, from XGD(δ) by using the algorithm described by [36].
Step 2: Determine d, the number of failures observed before time T, and discard the sample values X_{d+2}, . . . , X_m.
Step 3: Generate the first m − d − 1 order statistics from the distribution of the remaining surviving units, truncated at X_{d+1}, as X_{d+2}, . . . , X_m.
This numerical comparison is performed based on different combinations of (n, m, T), namely n = 30, 80 for each pre-determined time T = 1, 3. The test is terminated when the number of failed subjects reaches or exceeds a certain value m, where the percentage of failure information m/n is taken as 50% and 80%.
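Because the XGD is a mixture of an exponential and a gamma component (Exp(δ) with probability δ/(1+δ), Gamma(3, δ) otherwise), variates are easy to draw. The sketch below simulates the censored experiment unit-by-unit, which is distributionally equivalent to the order-statistics construction; it is our own illustrative code, not the algorithm of [36]:

```python
import math, random

def rxgamma(d, rng):
    """One xgamma variate via the mixture form:
    Exp(d) w.p. d/(1+d), Gamma(3, d) w.p. 1/(1+d)."""
    if rng.random() < d / (1 + d):
        return rng.expovariate(d)
    return sum(rng.expovariate(d) for _ in range(3))   # Gamma(3, d) as sum of 3 exponentials

def gen_aphcs_t2(n, m, R, T, d, seed=0):
    """Physically simulate an APHCS-T2 experiment: n units, planned removals
    R, threshold T; returns (failure times, effective removals)."""
    rng = random.Random(seed)
    alive = sorted(rxgamma(d, rng) for _ in range(n))
    times, rem = [], []
    for i in range(m):
        t = alive.pop(0)                               # next observed failure
        times.append(t)
        if i == m - 1:
            rem.append(len(alive))                     # remove all survivors at X_(m)
            alive = []
        elif t < T:
            k = min(R[i], len(alive) - (m - 1 - i))    # planned removal, kept feasible
            for _ in range(k):
                alive.pop(rng.randrange(len(alive)))   # withdraw k random survivors
            rem.append(k)
        else:
            rem.append(0)                              # past T: no more withdrawals
    return times, rem
```

When T is very large the planned scheme is applied unchanged; when T is smaller than every failure time, all removals collapse onto the final failure, as the scheme prescribes.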
To assign values to the hyperparameters of the conjugate gamma prior (10), we propose using the past-sample procedure described in Section 3.3. In view of this, we generated 1000 complete samples of a large size, 50 (say), from the xgamma lifetime model as past samples, with the plausible values of the unknown parameter δ taken as 1 and 3. Consequently, the values of the hyperparameters a and b, which are plugged into the computation of the desired Bayes estimates, are taken as (a, b) = (98.295, 97.046) and (a, b) = (77.136, 25.384) for XGD(1) and XGD(3), respectively. If one does not have any prior information on the unknown parameter of interest, the posterior PDF (11) becomes proportional to the likelihood function (6). In this case, it is better to use the MLEs instead of the BEs, as the latter are computationally more expensive.
Regarding the reliability characteristic functions of the XGD, we also obtained the MLEs and BEs of R(t) and h(t) at a distinct time t = 0.1. Hence, for each true parameter value of δ, the corresponding actual values are taken as (R(0.1), h(0.1)) = (0.9523, 0.4774) and (R(0.1), h(0.1)) = (0.8047, 2.1024) for δ = 1 and 3, respectively. To develop the Bayesian computations, we generated 12,000 MCMC samples and then discarded the first 2000 iterations of the generated sequence as burn-in, to remove the effect of the initial value selection. Hence, the average BEs of the unknown parameters δ, R(t) and h(t) are computed based on the SEL function using 10,000 MCMC samples. The initial value of δ, for running the MCMC sampler algorithm, was taken to be its MLE. Moreover, for each n and m, different censoring schemes (CSs) R are used to remove surviving units during the lifetime experiment. For each test, the RMSEs and RABs of the point estimates and the ACWs of the interval estimates are computed, respectively, as

RMSE(η̂) = √((1/G) ∑_{i=1}^{G} (η̂_i − η)²),

RAB(η̂) = (1/G) ∑_{i=1}^{G} |η̂_i − η| / η,

and

ACW = (1/G) ∑_{i=1}^{G} (U_i − L_i),

where G is the number of replicates, η̂_i is the MLE or BE of the parametric function η obtained in the i-th replicate, and (L_i, U_i) are the bounds of the corresponding interval estimate. The average MLEs and BEs (Lindley's and MCMC methods) of δ, R(t) and h(t), with their RMSEs and RABs, are calculated and reported in Tables 1-3. In each table, the associated average estimate, RMSE, and RAB of any unknown parameter, based on each test, are tabulated in the first, second, and third rows, respectively. Further, the ACWs of the 95% asymptotic and credible intervals of δ, R(t) and h(t) are computed and listed in Tables 4 and 5.
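The three comparison criteria are straightforward to compute (plain sketch; the η̂ values and interval bounds come from the G simulation replicates):

```python
import math

def rmse(est, true):
    """Root mean squared error of the G replicate estimates `est`."""
    return math.sqrt(sum((e - true) ** 2 for e in est) / len(est))

def rab(est, true):
    """Relative absolute bias of the G replicate estimates `est`."""
    return sum(abs(e - true) for e in est) / (len(est) * abs(true))

def acw(intervals):
    """Average confidence width of a list of (lower, upper) intervals."""
    return sum(hi - lo for lo, hi in intervals) / len(intervals)
```

For example, estimates (1, 3) of a true value 2 give RMSE 1 and RAB 0.5, while intervals (0, 1) and (1, 3) give an ACW of 1.5.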
From Tables 1-3, it can be seen that the proposed estimators of the unknown parameter and the reliability characteristics of the XGD perform well in terms of their RMSEs, RABs, and ACWs. As n (or m) increases, the RMSEs, RABs, and ACWs of both the point and interval estimates decrease, as expected. Thus, to obtain better estimation results, one may increase the effective sample size. It is also observed that, as the failure percentage m/n increases, the point estimates become even better. Since the Bayes estimates of any parametric function incorporate gamma prior information, unlike the classical estimates, they perform better than the competing estimates in terms of their RMSEs and RABs. Moreover, the MCMC method using the M-H algorithm performs better than Lindley's approximation method with respect to RMSEs and RABs.
Regarding the xgamma parameter δ, the RF R(t), and the FRF h(t), when δ increases with fixed T, the RMSEs and RABs associated with the MLEs and BEs increase. When T increases with fixed δ, the RMSEs and RABs associated with the MLEs and Lindley estimates decrease, while those associated with the MCMC estimates increase. From Tables 4-6, with respect to the interval estimates, the ACWs of the asymptotic and credible intervals narrow as n and m increase, as expected. In addition, as m/n increases, the 95% ACWs of all of the proposed estimates narrow. As δ (or T) increases, the ACWs of both the asymptotic and credible intervals for δ, R(t) and h(t) tend to increase. It can also be seen that the ACWs of the asymptotic (NA/NL) intervals, as well as of the credible (BCI/HPD) intervals, are quite close to each other. Further, due to the gamma prior information, the credible intervals perform better than the asymptotic intervals, with the shortest ACWs, as expected.
Moreover, comparing schemes 1 and 3, it is clear that the RMSEs and RABs of all estimates of the unknown parameter δ, R(t) and h(t) are greater under scheme 3 than under scheme 1.

Real-Life Applications
To show the adaptability and flexibility of the proposed methodologies to real phenomena, in this section, we provide two real datasets derived from the engineering and chemical fields.
The maximum likelihood estimation method is used to estimate the unknown parameters of each considered distribution and to evaluate all terms of the model selection criteria. In general, the distribution that provides the best fit corresponds to the lowest values of the NLC, AIC, CAIC, BIC, HQIC, and K-S statistics and the highest p-value. The MLEs of the model parameters and the corresponding goodness-of-fit measures are computed and reported in Table 7. It shows that the XGD fits the life cycle of the yarn dataset quite satisfactorily. Moreover, the XGD is the best choice among the competing models because it has the smallest K-S value with the highest p-value. According to the BIC, HQIC, and K-S (p-value), the XGD is the better choice, followed by the gamma, generalized-exponential, exponential, Weibull, and Nadarajah-Haghighi distributions, respectively. Similarly, according to the NLC, AIC, and CAIC, the gamma distribution is slightly better than the other distributions. In addition, another advantage of using the XGD, instead of the two-parameter competing distributions, for modeling lifetime data is that the XGD has only one parameter; consequently, from a computational point of view, inferential procedures become more convenient. For further illustration, we provide two plots computed at the estimated model parameter(s) of each considered distribution: Plot (a) represents the empirical CDF and the fitted CDFs; Plot (b) represents the histogram of the yarn dataset and the fitted PDFs; see Figure 1. Now, we discuss the proposed estimates in the presence of the complete yarn dataset. Using different choices of m, T, and R, three APHCS-T2 samples are generated and presented in Table 8. In short, the censoring scheme R = (3, 0, 0, 3) is referred to as R = (3, 0*2, 3).
Because we lack prior information about the xgamma parameter, the approximate BEs are developed using the Lindley and MCMC approximation methods under the improper gamma prior, i.e., a = b = 0. Using the M-H algorithm described in Section 3.2, 20,000 MCMC samples were generated and the first 5000 iterations of the simulated variates were discarded as burn-in. Using Table 8, the MLEs and BEs with their standard errors (SEs) of the unknown parameters δ, R(t) and h(t) (at the given mission time t = 50) are calculated and reported in Table 9. To run the MCMC sampler algorithm, the initial values of the unknown parameters were taken to be their MLEs. Moreover, the two-sided 95% asymptotic and credible intervals with their lengths are computed and listed in Table 9. Recall that, in the literature, we have not come across a way to obtain the variance (or to construct the associated interval) for the Lindley estimate. Table 9 shows that the point estimates of the unknown parameters obtained by the MLEs and BEs are quite close to each other. Further, the interval estimates of δ, R(t) and h(t) obtained by the 95% asymptotic/credible intervals are also similar. A trace plot, or time-series diagram, is a simple way to judge how quickly the MCMC sampler converges, plotting the iteration (x-axis) against the sampled values (y-axis). Hence, the trace plots of the 15,000 retained samples of δ, R(t) and h(t), based on the complete yarn dataset, are given in Figure 2. In each trace plot, the sample mean is displayed with a solid horizontal line (-); further, the lower and upper bounds of the 95% BCIs and HPD credible intervals are displayed with dotted (···) and dashed (---) horizontal lines, respectively. The plots indicate that the MCMC procedure converges well; they also show that discarding the first 5000 samples as burn-in is sufficient to erase the effect of the initial values. Moreover, the bounds of the 95% BCI/HPD credible intervals are very close to each other.
In addition, the histograms of the 15,000 MCMC outputs of δ, R(t) and h(t), using the Gaussian kernel, are plotted in Figure 3. Similarly, in each histogram plot, the sample mean is displayed with a vertical solid line (|). It is evident from the estimates that the generated sequences of δ are fairly symmetric, while the generated sequences of R(t) and h(t) are negatively and positively skewed, respectively. Furthermore, some properties of the MCMC samples, such as the mean, median, mode, SE, standard deviation (SD), and skewness (Sk.) of each unknown parameter, are computed and displayed in Table 10. It indicates that the central tendency measures are quite close to each other. Thus, the proposed estimates support our findings and show that the XGD has superior performance.

Chemical Data Analysis
This application provides an analysis of vinyl chloride, a known human carcinogen; exposure to this compound should be avoided as much as possible, and its levels should be kept as low as technically feasible. According to [45], this dataset represents 34 data points (in mg/L) for vinyl chloride obtained from clean up-gradient monitoring wells, as: 0.1, 0.1, 0.2, 0.2, 0.4, 0.4, 0.4, 0.5, 0.5, 0.5, 0.6, 0.6, 0.8, 0.9, 0.9, 1.0, ... . First, it is checked whether the XGD is a suitable model to fit the vinyl chloride data or not. Thus, the K-S statistic, along with the associated p-value, is obtained. Using the complete vinyl chloride data, the MLE (along with its SE) of δ is 1.0313 (0.1235); the K-S statistic (along with its p-value) is 0.138 (0.533). This result indicates that the p-value is quite high compared to the 5% significance level, so we cannot reject the null hypothesis that the given data come from the XGD lifetime model. To obtain the proposed estimates, using m = 17 with different choices of T and R, three artificial APHCS-T2 samples are generated from the vinyl chloride dataset and presented in Table 11. Using each generated sample, the MLEs and BEs with their SEs of δ, R(t) and h(t) (at distinct time t = 0.5) are calculated and reported in Table 12. Moreover, the 95% two-sided asymptotic/credible interval estimates with their lengths are also computed and listed in Table 12. Here, we assume that prior information about the xgamma parameter δ is not available; the Bayes MCMC estimates with the associated credible intervals are developed by running the chain of the MCMC sampler for 30,000 iterations, discarding the first 5000 values as a burn-in period. From Table 12, it is seen that the Bayes estimates computed using the Lindley and MCMC methods are very close to the corresponding MLEs. A similar pattern is observed with the interval (asymptotic/credible) estimates. Table 11. Three different APHCS-T2 samples generated from the vinyl chloride dataset. The trace plots and histograms of the MCMC outputs of δ, R(t) and h(t) are displayed in Figures 4 and 5, respectively.
Figure 4 shows the convergence of MCMC outputs for each unknown parameter. It is also clear that, from Figure 5, the generated posteriors of all unknown parameters are fairly symmetric. Furthermore, some vital statistics of MCMC outputs are calculated and reported in Table 13. Finally, we conclude that the proposed methodologies, using both yarn and vinyl chloride datasets, provide a good demonstration of the unknown xgamma parameter and related reliability characteristics.

Concluding Remarks
In this paper, we have shown that the one-parameter xgamma distribution is a useful survival model for modeling reliability data. Moreover, we discussed the problem of estimating the unknown parameter and some reliability characteristics of the proposed xgamma model based on the adaptive type-II progressive hybrid censored samples. The maximum likelihood estimates with associated asymptotic confidence intervals of the unknown quantities were obtained numerically by the 'maxLik' package. The Lindley and Metropolis-Hastings approximation methods were used to approximate the Bayes estimates and to construct the associated credible intervals. In Bayesian computations, an algorithm used to determine the values of hyperparameters based on past samples was proposed.
To compare the behavior of the different estimates, a simulation study was conducted under various scenarios. The simulation results indicate that the Bayesian estimates perform better than the classical estimates in terms of the minimum values of the root mean squared error, relative absolute bias, and average confidence width. To demonstrate the applicability of the proposed methodologies in a real situation, two datasets related to engineering and chemical experiments were analyzed. Although we mainly considered adaptive type-II progressively hybrid censoring from the xgamma model, in future work, the same methods could be extended to other distributions and/or censoring plans. Finally, we hope that the results and methodology discussed in this paper will be beneficial to reliability practitioners.

Data Availability Statement: The authors confirm that the data supporting the findings of this study are available within the article.