Abstract
This article presents Bayesian estimation methods applied to the gamma zero-truncated Poisson (GZTP) and the complementary gamma zero-truncated Poisson (CGZTP) distributions, encompassing both one-parameter and two-parameter models. These distributions are notably flexible and useful for modeling lifetime data. In the one-parameter model case, the Jeffreys prior is mathematically derived. The use of informative and noninformative priors, combined with the random walk Metropolis algorithm within a Bayesian framework, generates samples from the posterior distributions. Bayesian estimators’ effectiveness is examined through extensive simulation studies, in comparison with the maximum likelihood method. Results indicate that Bayesian estimators provide more precise parameter estimates, even with smaller sample sizes. Furthermore, the study and comparison of the coverage probabilities (CPs) and average lengths (ALs) of the credible intervals with those from Wald intervals suggest that Bayesian credible intervals typically yield shorter ALs and higher CPs, thereby demonstrating the effectiveness of Bayesian inference in the context of GZTP and CGZTP distributions. Lastly, Bayesian inference is applied to real data.
1. Introduction
The gamma zero-truncated Poisson (GZTP) and complementary gamma zero-truncated Poisson (CGZTP) distributions hold significant importance in statistical modeling, particularly for lifetime data exhibiting nonmonotonic hazard functions. The GZTP distribution provides a flexible model for phenomena where an event is guaranteed to occur, effectively handling datasets where zero counts are inapplicable. This makes it ideal for reliability analyses where the time to first failure is of interest. The CGZTP further extends this utility by modeling the bathtub-shaped hazard function, which is characterized by an initial decrease, followed by a constant rate, and then an increase. Such a capability to fit a bathtub hazard function makes the CGZTP a robust tool for complex survival data, overcoming the limitations of traditional models such as the gamma distribution.
The GZTP distribution, introduced by Niyomdecha et al. [1], is derived from compounding the gamma and zero-truncated Poisson distributions using the minimum function. Consider a collection of N independent and identically distributed random variables X_1, X_2, …, X_N, each following a gamma distribution with probability density function g(x) = β^α x^(α−1) e^(−βx)/Γ(α), x > 0, where α > 0 is a shape parameter and β > 0 is a rate parameter. The variable N follows a zero-truncated Poisson distribution with probability mass function P(N = n) = λ^n/[n!(e^λ − 1)], n = 1, 2, …, and λ > 0. Assuming X and N are independent, Z is defined as the minimum of X_1, X_2, …, X_N. The probability density function (pdf) of the GZTP distribution is then
f(z) = λ β^α z^(α−1) e^(−βz) exp{λ Γ(α, βz)/Γ(α)} / [Γ(α)(e^λ − 1)], z > 0,
where Γ(α, βz) = ∫_{βz}^{∞} t^(α−1) e^(−t) dt is the upper incomplete gamma function. The CGZTP distribution utilizes the same compounding principle as the GZTP but employs the maximum function instead [2]. Consider Y to be the maximum of X_1, X_2, …, X_N. The pdf of the CGZTP distribution is then given as follows:
f(y) = λ β^α y^(α−1) e^(−βy) exp{λ γ(α, βy)/Γ(α)} / [Γ(α)(e^λ − 1)], y > 0,
where γ(α, βy) = Γ(α) − Γ(α, βy) denotes the lower incomplete gamma function.
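To make the two densities concrete, the following minimal sketch evaluates them numerically under the compound forms stated above; the function names gztp_pdf and cgztp_pdf and the illustrative parameter values are ours, not taken from [1,2].

```python
import numpy as np
from scipy.special import gammainc, gammaincc
from scipy.stats import gamma as gamma_dist

def gztp_pdf(z, alpha, beta, lam):
    """GZTP density: gamma(alpha, rate beta) compounded with ZTP(lam) via the minimum."""
    base = gamma_dist.pdf(z, a=alpha, scale=1.0 / beta)    # gamma pdf (scale = 1/rate)
    surv = gammaincc(alpha, beta * np.asarray(z))          # Gamma(alpha, beta*z)/Gamma(alpha)
    return lam * base * np.exp(lam * surv) / np.expm1(lam)

def cgztp_pdf(y, alpha, beta, lam):
    """CGZTP density: same construction with the maximum, so the gamma CDF replaces the survival."""
    base = gamma_dist.pdf(y, a=alpha, scale=1.0 / beta)
    cdf = gammainc(alpha, beta * np.asarray(y))            # regularized lower incomplete gamma
    return lam * base * np.exp(lam * cdf) / np.expm1(lam)

# Example: evaluate both densities on a grid, as in Figure 1 (parameter values are illustrative)
grid = np.linspace(0.01, 5, 200)
dens_gztp = gztp_pdf(grid, alpha=2.0, beta=1.0, lam=0.5)
dens_cgztp = cgztp_pdf(grid, alpha=2.0, beta=1.0, lam=0.5)
```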
These distributions exhibit flexible shapes of density functions, as shown in Figure 1. Previous studies have covered inferential procedures for the parameters of the GZTP and CGZTP distributions. Niyomdecha et al. [1] employed maximum likelihood estimation (MLE) to estimate the GZTP parameters and then examined their asymptotic properties, while the MLEs and asymptotic confidence intervals for the CGZTP parameters were discussed by Niyomdecha and Srisuradetchai [2]. The MLEs provided accurate estimates, and the confidence intervals achieved the nominal coverage probability for large sample sizes. Several studies have applied Bayesian methods to compound distributions. Xu et al. [3] investigated Bayesian estimators of the exponential-Poisson (EP) parameters by employing general noninformative prior distributions under symmetric and asymmetric loss functions. Yan et al. [4] derived Bayesian estimators of the EP parameters under the general entropy, LINEX, and scaled squared error loss functions based on type-II censoring. In a study by Pathak et al. [5], Bayesian estimators of the Weibull–Poisson (WP) parameters were obtained by assuming that these parameters follow independent prior distributions.
Figure 1.
Probability density functions of the GZTP (left) and CGZTP (right) distributions for various parameter sets.
The loss function is instrumental in measuring how much an estimated parameter value deviates from its true value. The squared error loss function, a symmetric loss function, is commonly used in practice, especially when overestimation and underestimation are equally problematic [6,7]. Elbatal et al. [8] addressed parameter estimation for the Nadarajah–Haghighi distribution under progressive Type-1 censoring, employing the squared error loss function to produce Bayes estimates and maximum posterior density credible intervals. Eliwa et al. [9] utilized balanced linear exponential and general entropy loss functions to estimate the parameters of the new Weibull–Pareto distribution. Similarly, Abdel-Aty et al. [10] applied the squared error, LINEX, and general entropy loss functions to predict future failure times in a joint type-II censored sample from multiple exponential populations.
In many studies, the posterior distributions become complicated and cannot be simplified into any closed form. Samples are then obtained from the posteriors using a Markov chain Monte Carlo (MCMC) method, such as the Gibbs sampler (see [11]) or the Metropolis–Hastings algorithm (see [12]), and posterior summaries are computed from the sampled draws. MCMC based on Metropolis–Hastings algorithms was used in [13] to estimate the unknown parameters of the alpha-power Weibull distribution under Type II hybrid censoring. In [14], the parameters of the unit log-logistic distribution were estimated using a Bayesian approach; noninformative priors were used, and samples from the joint posterior distribution were obtained using the random walk Metropolis algorithm. El-Sagheer et al. [15] employed Gibbs sampling to estimate the parameters of an asymmetric distribution and various lifetime indices, including the reliability and hazard rate functions.
It is well known that the conjugate priors for the Poisson parameter and the gamma rate parameter are gamma distributions; however, there is no proper conjugate prior for the gamma shape parameter [16,17]. Several papers explore Bayesian inference for estimating the parameters of the gamma distribution. Naji and Rasheed [18] derived Bayes estimators for the shape and scale parameters of the gamma distribution using the precautionary loss function; they assumed gamma and exponential priors for the shape and scale parameters, respectively, to represent prior information. Moala et al. [19] studied various noninformative priors, including the Jeffreys prior, reference prior, maximal data information prior, Tibshirani prior, and a novel prior based on the copula method. Additionally, Pradhan and Kundu [20] assumed that the scale parameter follows a gamma prior, while the shape parameter follows a log-concave prior.
The existing literature has not addressed Bayesian inference for the parameters of the GZTP and CGZTP distributions. While Niyomdecha et al. [1] and Niyomdecha and Srisuradetchai [2] have conducted MLE and Wald interval analyses, their findings show that the mean squared errors of the MLEs remain high and that the coverage probabilities of Wald's intervals fall below the nominal level for small sample sizes. This study, therefore, seeks to explore Bayesian inference for the GZTP and CGZTP distributions.
This paper is structured as follows: Section 2 delves into maximum likelihood estimation along with the corresponding interval estimation, which will be compared with the Bayesian credible interval. Section 3 elaborates on the prior and posterior distributions, estimation procedures based on the squared error loss function, and the application of the random walk Metropolis algorithm for simulating posterior samples. Simulation studies, which are conducted for scenarios involving one and two unknown parameters within both GZTP and CGZTP distributions, are presented in Section 4. Section 5 demonstrates two example applications using real data. Finally, the paper concludes with a discussion in Section 6.
2. Maximum Likelihood Estimation
Let z_1, z_2, …, z_n be a random sample from a GZTP distribution and y_1, y_2, …, y_n be a random sample from a CGZTP distribution. The likelihood functions based on an observed random sample of size n are the products of the corresponding density functions, L(α, β, λ | z) = ∏ f(z_i) and L(α, β, λ | y) = ∏ f(y_i).
The corresponding log-likelihood function of the GZTP distribution is
ℓ_GZTP(α, β, λ) = n log λ − n log(e^λ − 1) + nα log β − n log Γ(α) + (α − 1) Σ log z_i − β Σ z_i + (λ/Γ(α)) Σ Γ(α, βz_i),
and the log-likelihood function of the CGZTP distribution is
ℓ_CGZTP(α, β, λ) = n log λ − n log(e^λ − 1) + nα log β − n log Γ(α) + (α − 1) Σ log y_i − β Σ y_i + (λ/Γ(α)) Σ γ(α, βy_i),
where the sums run over i = 1, …, n and γ(·,·) denotes the lower incomplete gamma function.
The maximum likelihood estimators of α, β, and λ for the GZTP and CGZTP distributions are obtained by maximizing (5) and (6). This is accomplished by setting the first derivatives of the log-likelihood function with respect to each parameter equal to zero. These score equations cannot be solved in closed form, so the MLEs of α, β, and λ cannot be found analytically. Consequently, numerical techniques such as the simulated annealing method are employed to find the values of α, β, and λ that maximize the likelihood function.
The MLEs are asymptotically normally distributed: the vector of MLEs approximately follows a multivariate normal (MVN) distribution centered at the true parameter vector with covariance matrix equal to the inverse of the Fisher information matrix,
where the Fisher information matrix has elements given by the negative expected second-order partial derivatives of the log-likelihood [21]. The asymptotic variances of the MLEs can therefore be obtained from the diagonal of the inverse Fisher information matrix. The corresponding Wald confidence interval for each parameter is the MLE plus or minus the appropriate standard normal quantile (e.g., 1.96 for a 95% interval) multiplied by the square root of the corresponding diagonal element of the inverse Fisher information matrix evaluated at the MLEs [22].
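As an illustration of this procedure, the sketch below maximizes the GZTP log-likelihood numerically (using Nelder–Mead rather than the simulated annealing mentioned above) and forms Wald intervals from a finite-difference Hessian. The log-likelihood follows our reconstruction above, and the function names gztp_loglik and gztp_mle_wald are ours.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, gammaincc
from scipy.stats import norm

def gztp_loglik(theta, z):
    """GZTP log-likelihood under the compound construction of Section 1 (our sketch)."""
    alpha, beta, lam = theta
    if alpha <= 0 or beta <= 0 or lam <= 0:
        return -np.inf
    n = z.size
    surv = gammaincc(alpha, beta * z)  # Gamma(alpha, beta*z)/Gamma(alpha)
    return (n * np.log(lam) - n * np.log(np.expm1(lam))
            + n * alpha * np.log(beta) - n * gammaln(alpha)
            + (alpha - 1) * np.log(z).sum() - beta * z.sum()
            + lam * surv.sum())

def gztp_mle_wald(z, start=(1.0, 1.0, 1.0), level=0.95):
    """MLE by direct numerical maximization and Wald intervals from a numerical Hessian."""
    neg = lambda th: -gztp_loglik(th, z)
    res = minimize(neg, x0=np.asarray(start, dtype=float), method="Nelder-Mead")
    mle = res.x
    # central-difference Hessian of the negative log-likelihood at the MLE
    h = 1e-4 * np.maximum(np.abs(mle), 1.0)
    H = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            ei = np.eye(3)[i] * h[i]
            ej = np.eye(3)[j] * h[j]
            H[i, j] = (neg(mle + ei + ej) - neg(mle + ei - ej)
                       - neg(mle - ei + ej) + neg(mle - ei - ej)) / (4 * h[i] * h[j])
    cov = np.linalg.inv(H)               # estimated asymptotic covariance of the MLEs
    se = np.sqrt(np.diag(cov))
    zq = norm.ppf(1 - (1 - level) / 2)   # standard normal quantile
    return mle, np.column_stack([mle - zq * se, mle + zq * se])
```

The CGZTP case is handled analogously by replacing the upper incomplete gamma term with its lower counterpart.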
3. Bayesian Estimation
This section presents the formulation of prior distributions for each parameter, which are assumed to be independent, and the subsequent derivation of the joint posterior distributions.
3.1. Prior and Posterior Distributions
3.1.1. Case 1: α and β Are Unknown
To estimate the parameters of the GZTP or CGZTP distributions when α and β are unknown but λ is known, we assume that α and β are independently distributed a priori, each following a gamma prior distribution. The joint prior for α and β is therefore the product of the two gamma densities,
where all hyperparameters (the prior shape and rate parameters) are positive.
Let z = (z_1, z_2, …, z_n) be a random sample from a GZTP distribution; the joint posterior distribution of α and β given the data z is proportional to the likelihood multiplied by the joint prior, π(α, β | z) ∝ L(α, β | z) π(α) π(β).
Similarly, for a random sample y = (y_1, y_2, …, y_n) from the CGZTP distribution, the corresponding joint posterior distribution given the data y is π(α, β | y) ∝ L(α, β | y) π(α) π(β).
The marginal posterior distributions of α and β have no closed form; as a consequence, the MCMC method is employed to provide Bayesian estimation.
3.1.2. Case 2: λ Is Unknown
Gamma Priors
To estimate the parameter λ for the GZTP or CGZTP distribution when α and β are known, we assume that λ has a gamma prior distribution π(λ).
Let z = (z_1, z_2, …, z_n) be a random sample from a GZTP distribution. The corresponding posterior distribution, given the data z, is π(λ | z) ∝ L(λ | z) π(λ).
Furthermore, let y = (y_1, y_2, …, y_n) be a random sample from a CGZTP distribution. The corresponding posterior distribution given the data y is π(λ | y) ∝ L(λ | y) π(λ).
Because the posterior distributions above are also complicated to derive analytically, MCMC is used to simulate samples from them.
Jeffreys Prior
Jeffreys prior is a widely used choice for representing a situation in which there is little prior information regarding the parameters. The Jeffreys prior for a single parameter is proportional to the square root of the expected Fisher information [23]. From the likelihood functions in the case where α and β are known, the associated gradients with respect to λ are obtained.
By differentiating (7) and (8), the observed Fisher information values of λ for the GZTP and CGZTP distributions are found to be the same,
and the expected Fisher information follows directly.
Thus, from (9), the Jeffreys prior for the parameter λ is proportional to the square root of this Fisher information.
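The displayed equations are not reproduced in this version. Writing S(z_i) = Γ(α, βz_i)/Γ(α) for the GZTP model and F(y_i) = γ(α, βy_i)/Γ(α) for the CGZTP model, a hedged reconstruction of the quantities referred to as (7)–(9), derived from the compound densities stated in Section 1 rather than quoted verbatim, is:

```latex
\begin{aligned}
\frac{\partial \ell}{\partial \lambda}
  &= \frac{n}{\lambda} - \frac{n e^{\lambda}}{e^{\lambda}-1} + \sum_{i=1}^{n} S(z_i)
  \quad \text{(GZTP; replace } S(z_i) \text{ by } F(y_i) \text{ for CGZTP)},\\
I(\lambda) &= -\,E\!\left[\frac{\partial^{2} \ell}{\partial \lambda^{2}}\right]
  = \frac{n}{\lambda^{2}} - \frac{n e^{\lambda}}{\left(e^{\lambda}-1\right)^{2}},\\
\pi_{J}(\lambda) &\propto \sqrt{I(\lambda)}
  \;\propto\; \sqrt{\frac{1}{\lambda^{2}} - \frac{e^{\lambda}}{\left(e^{\lambda}-1\right)^{2}}}.
\end{aligned}
```

Under this reconstruction the second derivative does not involve the data, so the observed and expected Fisher information coincide, which is consistent with the statement above.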
3.2. Point and Interval Estimations
This section explores the process of obtaining Bayesian estimates and constructing credible intervals for the unknown parameters of the GZTP and CGZTP distributions. The squared error loss function is a symmetric loss function, defined as the squared difference between an estimate and the true parameter value [24,25]. Under the squared error loss function, the Bayesian estimator of a parameter, given the data, is its posterior mean.
Bayesian interval estimates for a parameter θ are also calculated based on its posterior distribution and are referred to as credible intervals to differentiate them from confidence intervals. For a given credible level 1 − γ, a credible interval (L, U) is determined by values L and U that satisfy P(L ≤ θ ≤ U | data) = 1 − γ,
where 1 − γ is called the credible level of the credible interval (L, U).
For the GZTP distribution, given the data z, the Bayes estimates of the unknown parameter(s) under the squared error loss function are the corresponding posterior means:
- Case 1: α and β are unknown: the Bayes estimates are the posterior means of α and β under the joint posterior π(α, β | z).
- Case 2: λ is unknown: the Bayes estimate is the posterior mean of λ under π(λ | z).
For the CGZTP distribution, given the data y, the Bayes estimates of the unknown parameter(s) are defined analogously:
- Case 1: α and β are unknown: the posterior means of α and β under π(α, β | y).
- Case 2: λ is unknown: the posterior mean of λ under π(λ | y).
Due to the complexity involved in constructing explicit forms of the Bayes estimates in both cases, the random walk Metropolis algorithm, a special case of the Metropolis–Hastings MCMC method, will be utilized to derive the Bayes estimates of the unknown parameters.
3.3. Random Walk Metropolis Algorithm
In this paper, the random walk Metropolis (RWM) algorithm, a subset of the Metropolis–Hastings algorithms, is implemented. It is favored in Bayesian computation for its conceptual simplicity and operational ease, and it is particularly advantageous when the posterior distribution is complex or known only up to a normalizing constant, offering a straightforward mechanism to generate sample values for parameter estimation. The strength of RWM lies in its local exploration capability, allowing it to probe the parameter space using a symmetric proposal density, which simplifies the acceptance criterion. This simplicity facilitates easy tuning and implementation, often requiring only the adjustment of the proposal distribution's scale to balance the acceptance rate and the chain's mixing [26,27]. Credible intervals for the unknown parameters are generated during the same procedure. Since the density of the posterior distribution is proportional to the product of the likelihood and the prior density, this unnormalized posterior of the GZTP or CGZTP model is used as the target density for generating random samples from the joint posterior distribution of α and β. The RWM algorithm for generating random samples from the joint posterior distribution of α and β is as follows (a code sketch follows the steps below):
- Choose starting values for α and β and set the iteration index t = 0.
- At step t + 1, draw candidate values α* and β* from symmetric proposal densities (e.g., normal random walks) centered at the current values.
- Accept the candidate pair (α*, β*) with probability given by the Metropolis ratio, min{1, ratio of the joint posterior density at the candidate values to that at the current values}; otherwise retain the current values.
- Repeat steps 2–3 M times to obtain samples and discard the first N values of the chain as burn-in.
- The Bayesian estimates of α and β are computed as the means of the retained draws.
- To compute the credible intervals of α and β, sort the retained draws of each parameter; the equal-tailed credible intervals are then given by the lower and upper order statistics corresponding to the tail probabilities of the chosen credible level, with the required ranks obtained via the floor function.
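The following minimal Python sketch implements these steps for the two-parameter GZTP case. The functions rwm and make_log_post, the Gaussian proposal scales, and the gamma prior hyperparameters a1, b1, a2, b2 are illustrative placeholders rather than the settings used in the paper.

```python
import numpy as np
from scipy.special import gammaln, gammaincc
from scipy.stats import gamma as gamma_dist

def gztp_loglik(alpha, beta, lam, z):
    """GZTP log-likelihood (same sketch as in Section 2, with parameters passed individually)."""
    if alpha <= 0 or beta <= 0 or lam <= 0:
        return -np.inf
    n = z.size
    surv = gammaincc(alpha, beta * z)
    return (n * np.log(lam) - n * np.log(np.expm1(lam))
            + n * alpha * np.log(beta) - n * gammaln(alpha)
            + (alpha - 1) * np.log(z).sum() - beta * z.sum()
            + lam * surv.sum())

def rwm(log_post, start, prop_sd, n_iter=10_000, burn_in=1_000, seed=1):
    """Random walk Metropolis with independent Gaussian proposals, worked on the log scale."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(start, dtype=float)
    lp = log_post(theta)
    draws = np.empty((n_iter, theta.size))
    for t in range(n_iter):
        cand = theta + rng.normal(scale=prop_sd, size=theta.size)  # symmetric proposal
        lp_cand = log_post(cand)
        # Metropolis acceptance: symmetric proposal, so only the target densities appear
        if np.log(rng.uniform()) < lp_cand - lp:
            theta, lp = cand, lp_cand
        draws[t] = theta
    return draws[burn_in:]

def make_log_post(z, lam, a1=1.0, b1=0.5, a2=1.0, b2=0.5):
    """Unnormalized log-posterior of (alpha, beta) for GZTP with known lam and gamma priors."""
    def log_post(theta):
        alpha, beta = theta
        if alpha <= 0 or beta <= 0:
            return -np.inf
        lprior = (gamma_dist.logpdf(alpha, a=a1, scale=1.0 / b1)
                  + gamma_dist.logpdf(beta, a=a2, scale=1.0 / b2))
        return gztp_loglik(alpha, beta, lam, z) + lprior
    return log_post

# Usage sketch: posterior means (Bayes estimates under squared error loss)
# and 95% equal-tailed credible intervals from the retained draws.
# draws = rwm(make_log_post(z, lam=0.5), start=[1.0, 1.0], prop_sd=[0.2, 0.2])
# bayes_est = draws.mean(axis=0)
# cred_int = np.quantile(draws, [0.025, 0.975], axis=0)
```

The same rwm routine can be reused for the one-parameter case described next by supplying a log-posterior in λ alone.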
For the unknown λ, the unnormalized posterior (likelihood multiplied by the prior) of the GZTP or CGZTP model is taken as the target density for generating random samples from the posterior distribution of λ. The RWM algorithm for generating random samples from the posterior distribution of λ is given below:
- Choose a starting value for λ and set the iteration index t = 0.
- At step t + 1, draw a candidate value λ* from a symmetric proposal density centered at the current value.
- Accept λ* with probability given by the Metropolis ratio, min{1, ratio of the posterior density at λ* to that at the current value}; otherwise retain the current value.
- Repeat steps 2–3 M times to obtain samples and remove the first N values of the chain as burn-in.
- The Bayesian estimate of λ is computed as the mean of the retained draws.
- To compute the credible interval of λ, sort the retained draws; the equal-tailed credible interval is then given by the lower and upper order statistics corresponding to the tail probabilities of the chosen credible level.
4. Simulation Study
The simulation study encompasses various sample sizes and hyperparameter values. Specifically, sample sizes of 15, 25, 50, and 100 are examined. Table 1 presents the hyperparameter values for the informative prior distributions. The means of the prior distributions, which have small and large variances, are set equal to the true values of the unknown parameters α, β, and λ. For example, for a parameter with a true value of 2, the hyperparameters of Prior 1 give a prior variance of 4, and the hyperparameters of Prior 2 give a prior variance of 2. Thus, the variance of Prior 1 is considered “High” compared to that of Prior 2, while both prior distributions have the same mean, 2, which equals the true value.
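For a gamma prior with shape a and rate b, hyperparameters that match a desired prior mean and variance follow from moment matching (mean = a/b, variance = a/b²). The helper below illustrates the idea; the resulting pairs are not necessarily the exact entries of Table 1, which are not reproduced here.

```python
def gamma_prior_hyperparams(mean, var):
    """Moment matching for a Gamma(shape, rate) prior: mean = shape/rate, var = shape/rate**2."""
    rate = mean / var
    shape = mean * rate  # equivalently mean**2 / var
    return shape, rate

# A prior centered at a true value of 2: variance 4 ("high") versus variance 2 ("low").
high_var_prior = gamma_prior_hyperparams(2.0, 4.0)  # (1.0, 0.5)
low_var_prior = gamma_prior_hyperparams(2.0, 2.0)   # (2.0, 1.0)
```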
Table 1.
The prior distributions of the parameters α and β for the GZTP and CGZTP distributions.
Values of α, β, and λ are selected to create a variety of distribution shapes, as shown in Figure 1. Additionally, Table 2 details the prior distributions of the parameter λ. The shapes of the gamma prior distributions with the different hyperparameters presented in Table 1 and Table 2 are illustrated in Figure 2.
Table 2.
The prior distributions of the parameter λ for the GZTP and CGZTP distributions.
Figure 2.
Gamma prior distributions with different hyperparameters.
The RWM algorithm, as described in Section 3.3, is executed for 10,000 iterations with a burn-in period of 1000. In both panels of Figure 3, the examples of the trace plots for α and β suggest that the Markov chains have reached a stationary distribution, evidenced by the dense and fuzzy appearance of the plots, which indicates good mixing of the chains. The variability observed within each plot is consistent with the stochastic nature expected from RWM sampling, and there are no discernible trends or drifts to suggest nonconvergence.
Figure 3.
The trace plots of α and β from the joint posterior distributions corresponding to (a) Prior 3 for the GZTP distribution and (b) Prior 2 for the CGZTP distribution.
Further examination of the pair plots shown in Figure 4 reveals that, despite the initial starting points being far from the true values, the pairs of samples drawn by the RWM algorithm move toward a densely clustered area. This dense cluster signifies the region of high probability density within the posterior distribution, illustrating the algorithm's ability to converge to the region of interest. For the unknown λ, examples of the trace plots are shown in Figure 5.
Figure 4.
The pair plots of α and β from the joint posterior distributions corresponding to (a) Prior 3 for the GZTP distribution and (b) Prior 2 for the CGZTP distribution.
Figure 5.
The trace plots of λ from the posterior distributions corresponding to (a) Prior 2 for the GZTP distribution and (b) Prior 1 for the CGZTP distribution.
Monte Carlo simulations are performed to compare the performances of the Bayes estimators with those of the classical estimators. Point estimates for parameters are averaged over 1000 iterations. Subsequently, the mean squared errors (MSEs) of the parameter estimates are calculated. The coverage probabilities (CPs) of 95% Wald confidence intervals and credible intervals and their average lengths (ALs) are determined.
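A small sketch of how these summaries can be computed from the replications (function and variable names are ours):

```python
import numpy as np

def simulation_summary(estimates, lowers, uppers, true_value):
    """Average estimate, MSE, coverage probability (CP), and average length (AL) over replications."""
    est = np.asarray(estimates, dtype=float)
    lo = np.asarray(lowers, dtype=float)
    hi = np.asarray(uppers, dtype=float)
    avg_est = est.mean()
    mse = np.mean((est - true_value) ** 2)
    cp = np.mean((lo <= true_value) & (true_value <= hi))
    al = np.mean(hi - lo)
    return avg_est, mse, cp, al
```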
Table 3 presents the detailed MLEs and MSEs obtained from data sets simulated from the GZTP distribution where the values of α and β are unknown. As sample sizes increase, the estimates become more accurate and the MSE values decrease. For example, for the parameter set (0.5, 2, 0.5), the MLE of α decreases from 2.3637 with an MSE of 0.8896 at sample size 15 to 2.0644 with an MSE of 0.0760 at sample size 100. It is also noticed that the MSEs of the MLE and Bayes estimates of α increase as α increases, given that β is fixed. For example, in a case where λ = 1, β = 1, and the sample size is 15, when α is 1, the MSE for Prior 2 is 0.0566, whereas when α increases to 2, the MSE for the same prior and sample size is 0.2314. Similarly, when α is fixed, the MSEs of the MLE and Bayes estimates of β increase as β increases. When the sample size is small, the MLEs tend to have higher MSEs than the Bayesian estimates. Between the two informative priors, Prior 2 has the lower MSE. For instance, for the parameter set (0.5, 2, 0.5) and a sample size of 25, Prior 2 produces an estimate for α with an MSE of 0.1854, which is lower than the MSE of 0.2777 for Prior 1, indicating that the more informative Prior 2 generally results in a lower MSE.
Table 3.
MLE and Bayesian estimates and mean squared errors of α and β for the GZTP distributions.
From Table 4, the 95% credible intervals and Wald confidence intervals for α and β are compared. As the sample size increases, the CPs tend to approach the nominal coverage probability of 0.95, while the ALs decrease. The CPs are generally above 0.95, even with a sample size as small as 15. Moreover, Prior 2 tends to yield the shortest ALs for a given sample size.
Table 4.
Coverage probabilities and average lengths of intervals for α and β of the GZTP distributions.
Figure 6 graphically summarizes the average estimates, MSEs, CPs, and ALs for a selected GZTP case. The first row, which shows the estimates of α and β, contains two line graphs, one for each parameter. These display the average estimates obtained through MLE and the two Bayesian methods with different priors (Prior 1 and Prior 2). Prior 2 yields the most accurate estimates, followed by Prior 1 and the MLE. However, as the sample size increases, the estimates from all methods converge, suggesting that larger sample sizes lead to more accurate estimation. In the second row, the bar charts show that the precision of the estimation methods improves with larger samples: Bayesian estimates, particularly those with Prior 2, tend to have lower MSEs than the MLEs. In the third row, the CPs from Prior 2 tend to be higher than the others, especially for small sample sizes; as the sample size increases, all methods produce approximately the same CP. In the last row, the ALs generally decrease as the sample size increases, showing that the intervals become narrower and thus more precise with larger samples. The Bayesian estimates with Prior 2 consistently show the shortest ALs across all sample sizes for both parameters.
Figure 6.
The average estimates, MSEs, CPs, and ALs for α and β of the two-parameter GZTP distribution. Column (a) presents results for the parameter α, and column (b) for the parameter β.
From CGZTP datasets simulated with the values of α and β unknown, the conclusions are consistent with those for the GZTP distribution. The detailed results are summarized in Appendix A, Table A1 and Table A2, which provide the average Bayesian estimates, MSEs, CPs, and ALs of the parameters. As sample sizes increase, the estimates become more accurate and the MSE values decrease. It is observed that, while holding β constant, the MSEs of the estimates of α increase as α increases. Likewise, the MSEs of the estimates of β increase as β increases when α remains constant. Applying Prior 2 results in the lowest MSE values for α and β. Figure 7 graphically summarizes the average estimates, MSEs, CPs, and ALs for α and β of the two-parameter CGZTP distribution.
Figure 7.
The average estimates, MSEs, CPs, and ALs for α and β of the two-parameter CGZTP distribution. Column (a) presents results for the parameter α, and column (b) for the parameter β.
For parameter λ of the GZTP distribution, the MSE values decrease as sample sizes increase, as shown in Figure 8. Both the MLE and the Jeffreys prior estimates result in poor estimations with large MSEs. As expected, the performance of these methods improves as the sample size increases. However, Prior 2 yields the lowest MSE, even with a sample size as small as 15. In all situations, the CPs either approach the desired coverage probability or exceed 0.95, and the ALs decrease as sample sizes increase. For additional scenarios, Table A3 and Table A4 in Appendix A provide a summary of all estimates, MSEs, CPs, and ALs. Similarly, the findings for λ of the CGZTP distribution are in line with those for the GZTP distribution. The MSEs of Bayesian estimates under informative priors are lower than those for the MLE and the Jeffreys prior. The CPs of credible intervals from informative priors tend to be higher than those from Wald intervals and the credible interval from the Jeffreys prior. Figure 9 presents estimates, MSEs, CPs, and ALs for three methods in a specific case, while results for other scenarios are compiled in Table A5 and Table A6.
Figure 8.
The average estimates, MSEs, CPs, and ALs for λ of the one-parameter GZTP distribution.
Figure 9.
The average estimates, MSEs, CPs, and ALs for λ of the one-parameter CGZTP distribution.
5. Application on Real Data
5.1. March Precipitation
The data in Table 5 represent the amount of precipitation (in inches) that fell in March in Minneapolis/St. Paul, comprising 30 consecutive measurements. These data were first discussed by Hinkley [28]. The MLEs and the Bayesian summaries for α and β are presented in Table 6. The MLEs of α and β have corresponding standard errors of 0.7207 and 0.4656, respectively. The 95% confidence intervals for α and β are (1.6347, 4.4847) and (0.8086, 2.6341), respectively. Using the RWM with gamma priors and assuming λ = 0.38, the Bayesian estimates for α and β are obtained, and the corresponding 95% credible intervals for α and β are (1.8884, 4.3697) and (0.9805, 2.6144), respectively. A histogram with the fitted GZTP curve (the orange line) based on the Bayesian estimates is illustrated in Figure 10. The pair plot of the RWM draws depicted in Figure 11 illustrates that the chain converges to a stationary state and that the draws are distributed around the posterior means; the elliptical shape of the cluster of points suggests a correlation between the two parameters. This pattern implies that Wald's confidence intervals may not be optimal.
Table 5.
March precipitation data.
Table 6.
Bayesian and Maximum Likelihood Estimates with standard errors, 95% Wald CIs and Bayesian credible intervals, and goodness-of-fit testing results for two datasets.
Figure 10.
Histogram with fitted GZTP distribution for March precipitation data.
Figure 11.
The pair plot of α and β from the RWM algorithm for the March precipitation data.
Notably, the Bayesian estimates are very close to the MLE values; however, the Bayesian credible intervals are consistently shorter than Wald's confidence intervals. The Kolmogorov–Smirnov (K-S) test and the corresponding p-values indicate that both methods provide an adequate fit, with the Bayesian estimates exhibiting a slight edge over the MLEs. Note that a higher p-value indicates a better-fitting model.
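As a sketch of this goodness-of-fit check, the K-S statistic can be computed against the fitted GZTP distribution function implied by the compound construction; gztp_cdf is our reconstruction of the CDF, and precip, alpha_hat, and beta_hat are placeholders for the data of Table 5 and the fitted values of Table 6.

```python
import numpy as np
from scipy.special import gammaincc
from scipy.stats import kstest

def gztp_cdf(z, alpha, beta, lam):
    """GZTP distribution function implied by the minimum-compound construction (our reconstruction)."""
    surv = gammaincc(alpha, beta * np.asarray(z, dtype=float))  # survival of the base gamma
    return 1.0 - np.expm1(lam * surv) / np.expm1(lam)

# `precip` would hold the 30 March precipitation values of Table 5 (not reproduced here),
# and alpha_hat, beta_hat the fitted values reported in Table 6.
# result = kstest(precip, lambda z: gztp_cdf(z, alpha=alpha_hat, beta=beta_hat, lam=0.38))
# print(result.statistic, result.pvalue)
```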
5.2. Remission Time of Bladder Cancer Patients
The dataset consists of the remission times in months of 128 patients with bladder cancer, as reported by Lee and Wang [29]. From Table 6, the 95% confidence intervals for the MLEs of α and β are (0.9147, 1.4290) and (0.0919, 0.1599), respectively. Assuming that λ = 0.0238 with α and β unknown, the Bayesian estimates for α and β are obtained, and the corresponding 95% credible intervals for α and β are (0.9283, 1.4277) and (0.0921, 0.1596), respectively. Moreover, the K-S tests suggest that both methods can be used to model the data at a significance level of 0.05. The histogram with the fitted CGZTP curve (the orange line) is illustrated in Figure 12, and the pair plot of the RWM draws is depicted in Figure 13. Initially, there is a wide spread of values, indicating that the Markov chain is exploring the parameter space. As the iterations progress, the points converge toward a narrower region of the plot, which suggests that the parameters are settling into a region representing the mode of the posterior distribution.
Figure 12.
Histogram with fitted CGZTP distribution for remission time data.
Figure 13.
The pair plot of α and β from the RWM algorithm for the remission times of bladder cancer patients.
6. Conclusions and Discussion
Both point and interval estimation have been studied within Bayesian frameworks. For point estimation, informative gamma priors with both low and high variance, as well as Jeffreys prior, are employed, and the results are compared with those obtained from MLE in terms of MSEs. Since the posterior distributions for GZTP and CGZTP do not have closed forms, the RWM algorithm is utilized to generate posterior samples. Furthermore, the Bayesian credible intervals are compared to Wald’s intervals in terms of coverage probability and average length.
Bayesian estimates using informative priors are clearly superior to the MLEs and to Bayesian estimates with the Jeffreys prior in terms of MSEs. Among the informative priors having means equal to the true parameter values, the one with low variance yields a slightly lower MSE than the one with high variance. In detail, when β is fixed, the MSEs of the MLE and Bayes estimates of α increase as α increases. Similarly, the MSEs of the MLE and Bayes estimates of β increase as β increases, given that α is fixed. In the case of an unknown λ, where the Jeffreys prior can be mathematically derived, the corresponding Bayesian estimates are slightly better than the MLE. However, as the sample size increases, the discrepancy among the MSEs obtained from all methods tends to decrease.
For interval estimation, the Bayesian credible intervals tend to be more conservative, with coverage probabilities exceeding the nominal level of 0.95, particularly for small sample sizes. The ALs of the credible intervals are notably shorter than those of the Wald confidence intervals. It is worth noting that credible intervals can achieve greater coverage with shorter interval lengths because Priors 1 and 2 were deliberately chosen so that the prior mean equals the true parameter value. These informative priors have a substantial impact, often outweighing the data in their influence on the posterior. As the sample size increases, the influence of the data begins to outweigh that of the prior. In such cases, the lengths of the credible intervals tend to converge toward those of frequentist confidence intervals, such as Wald's intervals, and the high coverage probabilities move closer to the nominal confidence level.
Author Contributions
Conceptualization, P.S.; methodology, P.S. and A.N.; software, A.N.; validation, P.S. and A.N.; formal analysis, P.S.; investigation, P.S.; resources, A.N.; data curation, A.N.; writing—original draft preparation, P.S. and A.N.; writing—review and editing, P.S.; visualization, A.N.; supervision, P.S.; project administration, P.S.; funding acquisition, P.S. All authors have read and agreed to the published version of the manuscript.
Funding
This study was supported by Thammasat University Research Fund, Contract No TUFT 71/2566.
Data Availability Statement
The data presented in this study are openly available in reference number [28,29].
Acknowledgments
We express our gratitude to Thammasat University for their financial assistance. Additionally, we extend our appreciation to the editor and reviewers for their insightful remarks and suggestions, which have greatly enhanced the quality of the paper.
Conflicts of Interest
The authors declare no conflicts of interest.
Appendix A
Table A1.
MLE and Bayesian estimates and mean squared errors of and for the CGZTP distributions.
| Parameter (λ, α, β) | n | MLE Est | MLE MSE | Prior 1 Est | Prior 1 MSE | Prior 2 Est | Prior 2 MSE | |
|---|---|---|---|---|---|---|---|---|
| (0.5, 2, 0.5) | 15 | 2.4064 | 1.0745 | 2.2300 | 0.5426 | 2.1470 | 0.2972 | |
| 0.6155 | 0.0776 | 0.5664 | 0.0399 | 0.5388 | 0.0200 | |||
| 25 | 2.2791 | 0.5593 | 2.1328 | 0.2987 | 2.0922 | 0.1994 | ||
| 0.5770 | 0.0392 | 0.5417 | 0.0234 | 0.5305 | 0.0151 | |||
| 50 | 2.1226 | 0.1949 | 2.0704 | 0.1592 | 2.0581 | 0.1306 | ||
| 0.5341 | 0.0130 | 0.5232 | 0.0112 | 0.5200 | 0.0091 | |||
| 100 | 2.0505 | 0.0838 | 2.0598 | 0.0833 | 2.0551 | 0.0756 | ||
| 0.5141 | 0.0058 | 0.5173 | 0.0058 | 0.5160 | 0.0052 | |||
| (0.5, 2, 1) | 15 | 2.4632 | 1.3465 | 2.2194 | 0.5722 | 2.1180 | 0.2863 | |
| 1.2479 | 0.3794 | 1.1209 | 0.1603 | 1.0708 | 0.0780 | |||
| 25 | 2.2731 | 0.5534 | 2.1589 | 0.3628 | 2.1103 | 0.2345 | ||
| 1.1462 | 0.1469 | 1.1014 | 0.1104 | 1.0743 | 0.0710 | |||
| 50 | 2.1415 | 0.2206 | 2.0898 | 0.1648 | 2.0767 | 0.1345 | ||
| 1.0778 | 0.0612 | 1.0490 | 0.0464 | 1.0418 | 0.0374 | |||
| 100 | 2.0438 | 0.0849 | 2.0476 | 0.0887 | 2.0436 | 0.0805 | ||
| 1.0217 | 0.0231 | 1.0264 | 0.0246 | 1.0243 | 0.0222 | |||
| (1, 1, 1) | 15 | 1.2364 | 0.3033 | 1.1329 | 0.1744 | 1.0545 | 0.0755 | |
| 1.2613 | 0.3650 | 1.1601 | 0.2148 | 1.0759 | 0.0872 | |||
| 25 | 1.1446 | 0.1446 | 1.0926 | 0.1016 | 1.0535 | 0.0617 | ||
| 1.1639 | 0.1662 | 1.1075 | 0.1165 | 1.0754 | 0.0749 | |||
| 50 | 1.0656 | 0.0566 | 1.0491 | 0.0427 | 1.0481 | 0.0365 | ||
| 1.0645 | 0.058 | 1.0549 | 0.0515 | 1.0521 | 0.044 | |||
| 100 | 1.0214 | 0.0213 | 1.0189 | 0.0189 | 1.0174 | 0.018 | ||
| 1.0294 | 0.0252 | 1.0224 | 0.0227 | 1.0222 | 0.0208 | |||
| (1, 2, 1) | 15 | 2.5058 | 1.5343 | 2.2172 | 0.6129 | 2.1247 | 0.2988 | |
| 1.2501 | 0.3653 | 1.1132 | 0.1546 | 1.0787 | 0.0791 | |||
| 25 | 2.3069 | 0.6666 | 2.1333 | 0.3636 | 2.0895 | 0.2332 | ||
| 1.1546 | 0.1637 | 1.0704 | 0.0914 | 1.0490 | 0.0582 | |||
| 50 | 2.1053 | 0.2120 | 2.0823 | 0.1880 | 2.0683 | 0.1528 | ||
| 1.0520 | 0.0511 | 1.044 | 0.0446 | 1.0373 | 0.036 | |||
| 100 | 2.0638 | 0.0911 | 2.0313 | 0.0861 | 2.0281 | 0.0779 | ||
| 1.0322 | 0.0220 | 1.0128 | 0.0213 | 1.0114 | 0.0192 | |||
Table A2.
Coverage probabilities and average lengths of intervals for and of the CGZTP distributions.
| Parameter (λ, α, β) | n | MLE CP | MLE AL | Prior 1 CP | Prior 1 AL | Prior 2 CP | Prior 2 AL | |
|---|---|---|---|---|---|---|---|---|
| (0.5, 2, 0.5) | 15 | 0.9700 | 3.3858 | 0.9677 | 2.8886 | 0.9854 | 2.4848 | |
| 0.9630 | 0.9029 | 0.9723 | 0.7734 | 0.9869 | 0.6534 | |||
| 25 | 0.9620 | 2.4768 | 0.9720 | 2.1980 | 0.9840 | 2.0040 | ||
| 0.9560 | 0.6564 | 0.9740 | 0.5895 | 0.9820 | 0.5340 | |||
| 50 | 0.9560 | 1.6249 | 0.9530 | 1.5412 | 0.9660 | 1.4722 | ||
| 0.9670 | 0.4304 | 0.9600 | 0.4122 | 0.9710 | 0.3924 | |||
| 100 | 0.9530 | 1.1081 | 0.9510 | 1.0954 | 0.9540 | 1.0699 | ||
| 0.9630 | 0.2933 | 0.9470 | 0.2906 | 0.9470 | 0.2832 | |||
| (0.5, 2, 1) | 15 | 0.9640 | 3.4834 | 0.9669 | 2.8776 | 0.9831 | 2.4506 | |
| 0.9600 | 1.8363 | 0.9623 | 1.5292 | 0.9838 | 1.2982 | |||
| 25 | 0.9610 | 2.4693 | 0.9690 | 2.2273 | 0.9790 | 2.0198 | ||
| 0.9650 | 1.3041 | 0.9610 | 1.1997 | 0.9750 | 1.0820 | |||
| 50 | 0.9580 | 1.6402 | 0.9690 | 1.5563 | 0.9730 | 1.4867 | ||
| 0.9570 | 0.8685 | 0.9620 | 0.8241 | 0.9720 | 0.7857 | |||
| 100 | 0.9560 | 1.1042 | 0.9430 | 1.0910 | 0.9480 | 1.0680 | ||
| 0.9640 | 0.5831 | 0.9330 | 0.5780 | 0.9410 | 0.5644 | |||
| (1, 1, 1) | 15 | 0.9630 | 1.7800 | 0.9646 | 1.5340 | 0.9862 | 1.2472 | |
| 0.9580 | 1.8864 | 0.9638 | 1.6510 | 0.9877 | 1.3453 | |||
| 25 | 0.9670 | 1.2679 | 0.9490 | 1.1538 | 0.9710 | 1.0215 | ||
| 0.9710 | 1.3529 | 0.9600 | 1.2331 | 0.9670 | 1.1009 | |||
| 50 | 0.9480 | 0.8310 | 0.9580 | 0.7954 | 0.9660 | 0.7598 | ||
| 0.9640 | 0.8791 | 0.9460 | 0.8469 | 0.9530 | 0.8078 | |||
| 100 | 0.9620 | 0.5613 | 0.9460 | 0.5481 | 0.9510 | 0.5337 | ||
| 0.9640 | 0.6028 | 0.9480 | 0.5874 | 0.9560 | 0.5721 | |||
| (1, 2, 1) | 15 | 0.9620 | 3.7003 | 0.9685 | 2.9872 | 0.9831 | 2.5573 | |
| 0.9670 | 1.8137 | 0.9700 | 1.4938 | 0.9792 | 1.2863 | |||
| 25 | 0.9580 | 2.6326 | 0.9440 | 2.2804 | 0.9660 | 2.0762 | ||
| 0.9580 | 1.2987 | 0.9470 | 1.1363 | 0.9630 | 1.0340 | |||
| 50 | 0.9510 | 1.6938 | 0.9460 | 1.6069 | 0.9540 | 1.5427 | ||
| 0.9500 | 0.8379 | 0.9520 | 0.8008 | 0.9610 | 0.7682 | |||
| 100 | 0.9590 | 1.1722 | 0.9460 | 1.1287 | 0.9460 | 1.0986 | ||
| 0.9580 | 0.5811 | 0.9470 | 0.5572 | 0.9450 | 0.5438 | |||
Table A3.
MLE and Bayesian estimates and mean squared errors for of the GZTP distributions.
| Parameter (λ, α, β) | n | MLE Est | MLE MSE | Prior 1 Est | Prior 1 MSE | Prior 2 Est | Prior 2 MSE | Jeffreys Est | Jeffreys MSE |
|---|---|---|---|---|---|---|---|---|---|
| (0.5, 2, 0.5) | 15 | 0.9783 | 0.6802 | 0.4326 | 0.0520 | 0.4785 | 0.0177 | 1.1344 | 0.7413 |
| 25 | 0.8088 | 0.4188 | 0.4396 | 0.0518 | 0.4898 | 0.0135 | 0.9237 | 0.3766 | |
| 50 | 0.6605 | 0.1944 | 0.4693 | 0.0492 | 0.5034 | 0.0104 | 0.7184 | 0.1554 | |
| 100 | 0.5637 | 0.0961 | 0.4767 | 0.0482 | 0.5019 | 0.0073 | 0.6155 | 0.0809 | |
| (0.5, 2, 1) | 15 | 0.9706 | 0.6877 | 0.4418 | 0.0430 | 0.4806 | 0.0171 | 1.1038 | 0.6507 |
| 25 | 0.7741 | 0.3625 | 0.4571 | 0.0460 | 0.4862 | 0.0142 | 0.9235 | 0.3736 | |
| 50 | 0.6646 | 0.1974 | 0.4748 | 0.0477 | 0.4967 | 0.0099 | 0.7318 | 0.1698 | |
| 100 | 0.558 | 0.0997 | 0.4770 | 0.0499 | 0.5016 | 0.0073 | 0.6397 | 0.0964 | |
| (1, 1, 1) | 15 | 1.2712 | 0.6759 | 0.9282 | 0.2182 | 0.9756 | 0.0704 | 1.4279 | 0.6366 |
| 25 | 1.1256 | 0.4531 | 0.8853 | 0.1970 | 0.9811 | 0.0736 | 1.2635 | 0.4066 | |
| 50 | 1.0375 | 0.2303 | 0.8710 | 0.1611 | 0.9910 | 0.0844 | 1.1083 | 0.1933 | |
| 100 | 1.0007 | 0.1301 | 0.9120 | 0.1111 | 0.9997 | 0.0629 | 1.0411 | 0.1204 | |
| (1, 2, 1) | 15 | 1.2710 | 0.6781 | 0.9179 | 0.2193 | 0.9857 | 0.0632 | 1.4052 | 0.6096 |
| 25 | 1.1546 | 0.4163 | 0.8926 | 0.1999 | 0.9670 | 0.0782 | 1.2536 | 0.3643 | |
| 50 | 1.0215 | 0.2225 | 0.8936 | 0.1605 | 0.9568 | 0.0726 | 1.1249 | 0.2155 | |
| 100 | 1.0090 | 0.1255 | 0.9093 | 0.1002 | 0.9512 | 0.0639 | 1.0358 | 0.1125 | |
Table A4.
Coverage probabilities and average lengths of intervals for of the GZTP distributions.
| Parameter (λ, α, β) | n | MLE CP | MLE AL | Prior 1 CP | Prior 1 AL | Prior 2 CP | Prior 2 AL | Jeffreys CP | Jeffreys AL |
|---|---|---|---|---|---|---|---|---|---|
| (0.5, 2, 0.5) | 15 | 0.9730 | 2.7571 | 1.0000 | 1.4701 | 1.0000 | 1.0381 | 0.9750 | 2.5537 |
| 25 | 0.9670 | 2.1522 | 0.9980 | 1.3225 | 1.0000 | 1.0042 | 0.9610 | 2.0079 | |
| 50 | 0.9710 | 1.5770 | 0.9930 | 1.1184 | 1.0000 | 0.9139 | 0.9730 | 1.4764 | |
| 100 | 0.9670 | 1.1773 | 0.9850 | 0.9505 | 0.9990 | 0.8064 | 0.9710 | 1.1332 | |
| (0.5, 2, 1) | 15 | 0.9750 | 2.7480 | 1.0000 | 1.4452 | 1.0000 | 1.0376 | 0.9660 | 2.5227 |
| 25 | 0.9750 | 2.1212 | 1.0000 | 1.3397 | 1.0000 | 0.9945 | 0.9610 | 2.0065 | |
| 50 | 0.9690 | 1.5807 | 0.9980 | 1.1470 | 1.0000 | 0.9077 | 0.9650 | 1.4882 | |
| 100 | 0.9630 | 1.1716 | 0.9790 | 0.9643 | 0.9990 | 0.8090 | 0.9590 | 1.1481 | |
| (1, 1, 1) | 15 | 0.9860 | 3.0348 | 0.9960 | 2.2782 | 0.9990 | 1.7953 | 0.9700 | 2.9185 |
| 25 | 0.9760 | 2.4057 | 0.9820 | 1.9691 | 0.9990 | 1.6606 | 0.9620 | 2.3558 | |
| 50 | 0.9790 | 1.8204 | 0.9540 | 1.6183 | 0.9890 | 1.4316 | 0.9750 | 1.7843 | |
| 100 | 0.9570 | 1.3605 | 0.9570 | 1.3058 | 0.9780 | 1.1528 | 0.9540 | 1.3459 | |
| (1, 2, 1) | 15 | 0.9820 | 3.0353 | 0.9970 | 2.2602 | 1.0000 | 1.8119 | 0.9700 | 2.8891 |
| 25 | 0.9780 | 2.4414 | 0.9740 | 1.9752 | 0.9950 | 1.6444 | 0.9730 | 2.3516 | |
| 50 | 0.9780 | 1.8143 | 0.9610 | 1.6380 | 0.9850 | 1.4057 | 0.9630 | 1.7875 | |
| 100 | 0.9580 | 1.3642 | 0.9480 | 1.3070 | 0.9820 | 1.1519 | 0.9450 | 1.3473 | |
Table A5.
MLE and Bayesian estimates and mean squared errors of for the CGZTP distributions.
| Parameter (λ, α, β) | n | MLE Est | MLE MSE | Prior 1 Est | Prior 1 MSE | Prior 2 Est | Prior 2 MSE | Jeffreys Est | Jeffreys MSE |
|---|---|---|---|---|---|---|---|---|---|
| (0.5, 2, 0.5) | 15 | 0.9492 | 0.6656 | 0.4331 | 0.0422 | 0.4868 | 0.0187 | 1.1209 | 0.6926 |
| 25 | 0.7598 | 0.3479 | 0.4518 | 0.0505 | 0.4908 | 0.0164 | 0.9110 | 0.3663 | |
| 50 | 0.6418 | 0.1843 | 0.4791 | 0.0507 | 0.4969 | 0.0098 | 0.6994 | 0.1555 | |
| 100 | 0.5452 | 0.0914 | 0.4875 | 0.0413 | 0.4980 | 0.0066 | 0.5954 | 0.0810 | |
| (0.5, 2, 1) | 15 | 0.9335 | 0.6615 | 0.4933 | 0.0456 | 0.4932 | 0.0186 | 1.1018 | 0.6776 |
| 25 | 0.7784 | 0.3877 | 0.4659 | 0.0459 | 0.4927 | 0.0156 | 0.9068 | 0.3476 | |
| 50 | 0.6426 | 0.1863 | 0.4535 | 0.0461 | 0.4944 | 0.0107 | 0.7164 | 0.1577 | |
| 100 | 0.5518 | 0.0950 | 0.4416 | 0.0449 | 0.5025 | 0.0071 | 0.6143 | 0.0864 | |
| (1, 1, 1) | 15 | 1.2711 | 0.7890 | 0.8892 | 0.2052 | 0.9525 | 0.0727 | 1.4025 | 0.6383 |
| 25 | 1.1178 | 0.4082 | 0.8908 | 0.1929 | 0.9737 | 0.0738 | 1.1995 | 0.3194 | |
| 50 | 1.0621 | 0.2468 | 0.8759 | 0.1559 | 0.9700 | 0.0822 | 1.1013 | 0.2088 | |
| 100 | 0.9895 | 0.1185 | 0.9003 | 0.1070 | 0.9987 | 0.0616 | 1.0307 | 0.1115 | |
| (1, 2, 1) | 15 | 1.2915 | 0.7555 | 0.9263 | 0.2130 | 0.9532 | 0.0810 | 1.4141 | 0.6101 |
| 25 | 1.1182 | 0.4259 | 0.8949 | 0.1924 | 0.9669 | 0.0789 | 1.2223 | 0.3408 | |
| 50 | 1.0282 | 0.2143 | 0.8925 | 0.1564 | 0.9814 | 0.0714 | 1.0949 | 0.1891 | |
| 100 | 1.0117 | 0.1288 | 0.9169 | 0.1120 | 0.9849 | 0.0629 | 1.0425 | 0.1101 | |
Table A6.
Coverage probabilities and average lengths of intervals of for the CGZTP distributions.
| Parameter (λ, α, β) | n | MLE CP | MLE AL | Prior 1 CP | Prior 1 AL | Prior 2 CP | Prior 2 AL | Jeffreys CP | Jeffreys AL |
|---|---|---|---|---|---|---|---|---|---|
| (0.5, 2, 0.5) | 15 | 0.9700 | 2.7226 | 1.0000 | 1.4714 | 1.0000 | 1.0311 | 0.9630 | 2.5391 |
| 25 | 0.9780 | 2.1114 | 1.0000 | 1.3436 | 1.0000 | 0.9944 | 0.9680 | 1.9886 | |
| 50 | 0.9710 | 1.5639 | 0.9970 | 1.1372 | 1.0000 | 0.9137 | 0.9680 | 1.4510 | |
| 100 | 0.9810 | 1.1668 | 0.9850 | 0.9591 | 0.9990 | 0.8150 | 0.9690 | 1.1202 | |
| (0.5, 2, 1) | 15 | 0.9680 | 2.7063 | 1.0000 | 1.4829 | 1.0000 | 1.0401 | 0.9590 | 2.5048 |
| 25 | 0.9730 | 2.1223 | 1.0000 | 1.3163 | 1.0000 | 0.9891 | 0.9620 | 1.9820 | |
| 50 | 0.9690 | 1.5636 | 0.9990 | 1.1441 | 1.0000 | 0.9164 | 0.9670 | 1.4768 | |
| 100 | 0.9800 | 1.1688 | 0.9770 | 0.9645 | 0.9990 | 0.8224 | 0.9610 | 1.1323 | |
| (1, 1, 1) | 15 | 0.9720 | 3.0214 | 0.9930 | 2.2146 | 1.0000 | 1.8279 | 0.9750 | 2.8808 |
| 25 | 0.9860 | 2.4110 | 0.9820 | 1.9761 | 0.9960 | 1.6479 | 0.9730 | 2.2986 | |
| 50 | 0.9660 | 1.8304 | 0.9590 | 1.6258 | 0.9970 | 1.4200 | 0.9660 | 1.7716 | |
| 100 | 0.9580 | 1.3607 | 0.9470 | 1.3019 | 0.9700 | 1.1551 | 0.9610 | 1.3438 | |
| (1, 2, 1) | 15 | 0.9750 | 3.0453 | 0.9960 | 2.2751 | 1.0000 | 1.8100 | 0.9730 | 2.8994 |
| 25 | 0.9750 | 2.4082 | 0.9890 | 1.9854 | 1.0000 | 1.6575 | 0.9670 | 2.3193 | |
| 50 | 0.9810 | 1.8234 | 0.9660 | 1.6416 | 0.9940 | 1.4134 | 0.9660 | 1.7773 | |
| 100 | 0.9480 | 1.3634 | 0.9470 | 1.3040 | 0.9850 | 1.1542 | 0.9530 | 1.3458 | |
References
- Niyomdecha, A.; Srisuradetchai, P.; Tulyanitikul, B. Gamma Zero-Truncated Poisson Distribution with the Minimum Compounded Function. Thail. Stat. 2023, 21, 863–886. [Google Scholar]
- Niyomdecha, A.; Srisuradetchai, P. Complementary Gamma Zero-Truncated Poisson Distribution and Its Application. Mathematics 2023, 11, 2584. [Google Scholar] [CrossRef]
- Xu, B.; Guo, Y.; Zhu, N. The Parameter Bayesian Estimation of Two-Parameter Exponential-Poisson Distribution and Its Optimal Property. J. Interdiscip. Math. 2016, 19, 697–707. [Google Scholar] [CrossRef]
- Yan, W.-A.; Shi, Y.-M.; Liu, Y. Bayesian Estimation of Exponential Poisson Distribution under Different Loss Function. Fire Control Command Control 2012, 2, 124–126,131. [Google Scholar]
- Pathak, A.; Kumar, M.; Singh, S.K.; Singh, U. Bayesian Inference: Weibull Poisson Model for Censored Data Using the Expectation–Maximization Algorithm and Its Application to Bladder Cancer Data. J. Appl. Stat. 2022, 49, 926–948. [Google Scholar] [CrossRef] [PubMed]
- Davidson-Pilon, C. Bayesian Methods for Hackers: Probabilistic Programming and Bayesian Inference; Addison-Wesley Educational: Boston, MA, USA, 2015; pp. 127–130. [Google Scholar]
- Cai, Y.; Gui, W. Classical and Bayesian Inference for a Progressive First-Failure Censored Left-Truncated Normal Distribution. Symmetry 2021, 13, 490. [Google Scholar] [CrossRef]
- Elbatal, I.; Alotaibi, N.; Alyami, S.A.; Elgarhy, M.; El-Saeed, A.R. Bayesian and Non-Bayesian Estimation of the Nadarajah–Haghighi Distribution: Using Progressive Type-1 Censoring Scheme. Mathematics 2022, 10, 760. [Google Scholar] [CrossRef]
- Eliwa, M.S.; EL-Sagheer, R.M.; El-Essawy, S.H.; Almohaimeed, B.; Alshammari, F.S.; El-Morshedy, M. General Entropy with Bayes Techniques under Lindley and MCMC for Estimating the New Weibull–Pareto Parameters: Theory and Application. Symmetry 2022, 14, 2395. [Google Scholar] [CrossRef]
- Abdel-Aty, Y.; Kayid, M.; Alomani, G. Generalized Bayes Prediction Study Based on Joint Type-II Censoring. Axioms 2023, 12, 716. [Google Scholar] [CrossRef]
- Smith, A.F.M.; Roberts, G.O. Bayesian Computation via the Gibbs Sampler and Related Markov Chain Monte Carlo Methods. J. R. Stat. Soc. 1993, 55, 3–23. [Google Scholar] [CrossRef]
- Brooks, S. Markov Chain Monte Carlo Method and Its Application. J. Royal. Statistical. Soc. D 1998, 47, 69–100. [Google Scholar] [CrossRef]
- Almetwally, E.M.; Alotaibi, R.; Rezk, H. Estimation and Prediction for Alpha-Power Weibull Distribution Based on Hybrid Censoring. Symmetry 2023, 15, 1687. [Google Scholar] [CrossRef]
- Shakhatreh, M.K.; Aljarrah, M.A. Bayesian Analysis of Unit Log-Logistic Distribution Using Non-Informative Priors. Mathematics 2023, 11, 4947. [Google Scholar] [CrossRef]
- EL-Sagheer, R.M.; Almuqrin, M.A.; El-Morshedy, M.; Eliwa, M.S.; Eissa, F.H.; Abdo, D.A. Bayesian Inferential Approaches and Bootstrap for the Reliability and Hazard Rate Functions under Progressive First-Failure Censoring for Coronavirus Data from Asymmetric Model. Symmetry 2022, 14, 956. [Google Scholar] [CrossRef]
- Gelman, A.; Carlin, J.B.; Stern, H.S.; Dunson, D.B.; Vehtari, A.; Rubin, D.B. Bayesian Data Analysis, 3rd ed.; Chapman & Hall/CRC: Philadelphia, PA, USA, 2013; pp. 43–44. [Google Scholar]
- Fink, D. A Compendium of Conjugate Priors. Available online: https://courses.physics.ucsd.edu/2018/Fall/physics210b/REFERENCES/conjugate_priors.pdf (accessed on 25 December 2023).
- Naji, L.F.; Rasheed, H.A. Bayesian Estimation for Two Parameters of Gamma Distribution under Precautionary Loss Function. Ibn AL-Haitham J. Pure Appl. Sci. 2019, 32, 187–196. [Google Scholar] [CrossRef]
- Moala, F.A.; Ramos, P.L.; Achcar, J.A. Bayesian Inference for Two-Parameter Gamma Distribution Assuming Different Non-Informative Priors. Rev. Colomb. Estad. 2013, 36, 319–336. [Google Scholar]
- Pradhan, B.; Kundu, D. Bayes Estimation and Prediction of the Two-Parameter Gamma Distribution. J. Stat. Comput. Simul. 2011, 81, 1187–1198. [Google Scholar] [CrossRef]
- Anastasiou, A.; Gaunt, R.E. Wasserstein Distance Error Bounds for the Multivariate Normal Approximation of the Maximum Likelihood Estimator. Electron. J. Stat. 2021, 15, 5758–5810. [Google Scholar] [CrossRef]
- Srisuradetchai, P.; Niyomdecha, A.; Phaphan, W. Wald Intervals via Profile Likelihood for the Mean of the Inverse Gaussian Distribution. Symmetry 2024, 16, 93. [Google Scholar] [CrossRef]
- Jeffreys, H. Theory of Probability; Oxford University Press: London, UK, 1983; ISBN 9780198531937. [Google Scholar]
- Panahi, H.; Moradi, N. Estimation of the Inverted Exponentiated Rayleigh Distribution Based on Adaptive Type II Progressive Hybrid Censored Sample. J. Comput. Appl. Math. 2020, 364, 112345. [Google Scholar] [CrossRef]
- Dutta, S.; Kayal, S. Inference of a Competing Risks Model with Partially Observed Failure Causes under Improved Adaptive Type-II Progressive Censoring. Proc. Inst. Mech. Eng. O J. Risk Reliab. 2023, 237, 765–780. [Google Scholar] [CrossRef]
- Junnumtuam, S.; Niwitpong, S.-A.; Niwitpong, S. Bayesian Computation for the Parameters of a Zero-Inflated Cosine Geometric Distribution with Application to COVID-19 Pandemic Data. Comput. Model. Eng. Sci. 2023, 135, 1229–1254. [Google Scholar] [CrossRef]
- Bédard, M. On the Robustness of Optimal Scaling for Random Walk Metropolis Algorithms. Ph.D. Thesis, University of Toronto Libraries, Toronto, ON, Canada, 2006. [Google Scholar]
- Hinkley, D. On Quick Choice of Power Transformation. J. R. Stat. Soc. Ser. C Appl. Stat. 1977, 26, 67–69. [Google Scholar] [CrossRef]
- Lee, E.T.; Wang, J.W. Statistical Methods for Survival Data Analysis, 4th ed.; John Wiley & Sons: Nashville, TN, USA, 2013; ISBN 9781118095027. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).