The aim of this section is to investigate the precision of the theoretical Bayesian estimation and prediction results using both simulated and real datasets.
4.1. Simulation Study
In this subsection, a simulation study is conducted to demonstrate the performance of the presented Bayes estimators and Bayes predictors based on data generated from the DTZ-W distribution. Bayes averages, relative absolute biases (RABs), relative errors (REs) and 95% CIs for the parameters, rf and hrf are calculated. Also, two-sample Bayes predictors (point and interval) for a future observation from the DTZ-W distribution are computed based on complete sampling.
The Adaptive Metropolis (AM) algorithm, a specialized MCMC method, extends the traditional Metropolis–Hastings algorithm. Developed in [36], it enhances sampling efficiency by dynamically adjusting its proposal distribution based on the chain's past performance. All simulation studies, which are crucial for visualizing and clarifying our findings, were conducted in R (version 4.5.0).
The steps of the AM algorithm are outlined below:
Step 1. Choose an initial vector of parameter values $\boldsymbol{\theta}^{(0)}$.
Step 2. At each iteration $t$, where $t = 1, 2, \ldots, N$, a proposed value $\boldsymbol{\theta}^{*}$ is sampled from the candidate distribution $q_t(\cdot \mid \boldsymbol{\theta}^{(0)}, \ldots, \boldsymbol{\theta}^{(t-1)})$. The algorithm utilizes a Gaussian candidate distribution with the current point $\boldsymbol{\theta}^{(t-1)}$ as its mean and a covariance matrix $C_t$ that depends on the sequence of previous points $\boldsymbol{\theta}^{(0)}, \ldots, \boldsymbol{\theta}^{(t-1)}$.
Step 3. Evaluate the acceptance rate
$$\alpha\left(\boldsymbol{\theta}^{(t-1)}, \boldsymbol{\theta}^{*}\right) = \min\left\{1, \frac{\pi\left(\boldsymbol{\theta}^{*} \mid \underline{x}\right)}{\pi\left(\boldsymbol{\theta}^{(t-1)} \mid \underline{x}\right)}\right\},$$
where the posterior distribution $\pi(\cdot \mid \underline{x})$ is considered without including the normalization constant.
Step 4. Retain $\boldsymbol{\theta}^{*}$ as $\boldsymbol{\theta}^{(t)}$ with probability $\alpha$. If $\boldsymbol{\theta}^{*}$ is rejected, then set $\boldsymbol{\theta}^{(t)} = \boldsymbol{\theta}^{(t-1)}$. This acceptance decision can be implemented by simulating a random variable $u$ from a uniform $U(0,1)$ distribution: if $u \le \alpha$, then $\boldsymbol{\theta}^{(t)}$ is updated to $\boldsymbol{\theta}^{*}$; otherwise, it retains the value from the previous iteration, $\boldsymbol{\theta}^{(t-1)}$.
Step 5. Execute Steps 2–4 repeatedly for $N$ iterations, where $N$ should be large enough to ensure the stability of the results.
Step 6. A warm-up phase is applied to mitigate the influence of the initial values, during which the first $M$ simulated values are discarded. Using the AM algorithm, the Bayes estimates of the parameters, rf and hrf can be derived under both the SE and LINEX loss functions as
$$\hat{\theta}_{SE} = \frac{1}{N - M} \sum_{i = M + 1}^{N} \theta^{(i)} \quad \text{and} \quad \hat{\theta}_{LINEX} = -\frac{1}{\nu} \ln\left[\frac{1}{N - M} \sum_{i = M + 1}^{N} e^{-\nu\, \theta^{(i)}}\right],$$
where $\theta^{(i)}$, $i = M + 1, \ldots, N$, are drawn from the posterior distribution, $M$ denotes the length of the warm-up phase and $\nu$ is the LINEX shape parameter.
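For illustration, a minimal R sketch of the AM algorithm and of the resulting SE and LINEX Bayes estimates is given below. The log-posterior `log_post`, the initial proposal covariance, the adaptation constants (`t0`, `eps`) and the LINEX parameter `nu` are illustrative assumptions, not the exact settings used in this study.

```r
# Minimal sketch of the Adaptive Metropolis (AM) algorithm with an adaptive Gaussian
# proposal; log_post(), the tuning constants and nu are illustrative assumptions.
library(MASS)  # for mvrnorm()

am_sampler <- function(log_post, theta0, N = 20000, M = 5000,
                       t0 = 1000, eps = 1e-6, nu = 1) {
  d   <- length(theta0)
  s_d <- 2.4^2 / d                       # standard AM scaling factor
  chain <- matrix(NA_real_, nrow = N, ncol = d)
  chain[1, ] <- theta0
  C <- diag(d) * 0.1                     # initial proposal covariance (assumption)
  for (t in 2:N) {
    # Step 2: Gaussian candidate centred at the current point; after t0 iterations
    # the covariance is adapted from the history of the chain
    if (t > t0) C <- s_d * (cov(chain[1:(t - 1), , drop = FALSE]) + eps * diag(d))
    prop <- MASS::mvrnorm(1, mu = chain[t - 1, ], Sigma = C)
    # Step 3: acceptance rate based on the unnormalised posterior
    alpha <- min(1, exp(log_post(prop) - log_post(chain[t - 1, ])))
    # Step 4: accept or reject via a uniform draw
    chain[t, ] <- if (runif(1) <= alpha) prop else chain[t - 1, ]
  }
  post <- chain[(M + 1):N, , drop = FALSE]   # Step 6: discard the warm-up draws
  list(SE    = colMeans(post),                         # Bayes estimates under SE loss
       LINEX = -log(colMeans(exp(-nu * post))) / nu)   # Bayes estimates under LINEX loss
}
```

The sampler returns both sets of estimates from a single chain, so the SE and LINEX results in the tables below can be obtained from the same posterior draws.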
The steps of the simulation procedure are as follows (a schematic R sketch of these steps follows the list):
1. Generate random samples of size $n$ from the DTZ-W distribution by applying the inverse of its cumulative distribution function to $u_i$, $i = 1, \ldots, n$, where $u_i$ are random samples from the uniform $U(0, 1)$ distribution.
2. Two datasets are generated from the DTZ-W distribution using two different combinations of population parameter values, Combination I and Combination II, with sample sizes $n = 30$, 60 and 100 and number of replications (NR) = 10,000 for each sample size.
3. Compute the Bayes averages, RABs and REs of the Bayes estimates of the parameters, rf and hrf, where, for a quantity $\theta$ with Bayes estimates $\hat{\theta}_j$, $j = 1, \ldots, NR$,
$$\text{Average} = \frac{1}{NR}\sum_{j=1}^{NR}\hat{\theta}_j, \quad \text{RAB} = \frac{\left|\text{Average} - \theta\right|}{\theta} \quad \text{and} \quad \text{RE} = \frac{1}{\theta}\sqrt{\frac{1}{NR}\sum_{j=1}^{NR}\left(\hat{\theta}_j - \theta\right)^2}.$$
4. Compute the two-sample Bayes predictors (point and interval) for a future observation from the DTZ-W distribution.
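The following R sketch outlines how the simulation metrics above can be accumulated over the NR replications; `rdtzw()` (a random-sample generator for the DTZ-W distribution) and `bayes_estimate()` (a wrapper returning the Bayes estimates, e.g., via `am_sampler()` above) are hypothetical placeholders, as are the default settings.

```r
# Schematic Monte Carlo loop for the simulation study; rdtzw() and bayes_estimate()
# are hypothetical placeholders for the DTZ-W generator and the Bayes estimation step.
simulate_metrics <- function(n, true_theta, NR = 10000) {
  p   <- length(true_theta)
  est <- matrix(NA_real_, nrow = NR, ncol = p)
  for (j in 1:NR) {
    x        <- rdtzw(n, true_theta)   # generate a DTZ-W sample of size n
    est[j, ] <- bayes_estimate(x)      # Bayes estimates of the parameters
  }
  avg <- colMeans(est)                                    # Bayes averages
  rab <- abs(avg - true_theta) / true_theta               # relative absolute biases
  mse <- colMeans(sweep(est, 2, true_theta)^2)            # mean squared errors
  re  <- sqrt(mse) / true_theta                           # relative errors
  ci  <- apply(est, 2, quantile, probs = c(0.025, 0.975)) # 95% interval limits
  list(average = avg, RAB = rab, RE = re, CI = ci)
}
```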
Table 1 and Table 2 show the Bayes averages, RABs, REs and 95% CIs of the unknown parameters under the SE and LINEX loss functions for the two different combinations of parameter values.
Table 3 and Table 4 show the same quantities for the rf and hrf of the DTZ-W distribution.
Table 5 and Table 6 show the two-sample Bayesian predictions and 95% CIs for a future observation from the DTZ-W distribution under the SE and LINEX loss functions.
4.2. Applications
This subsection is devoted to demonstrating how the proposed methods can be used in practice. Two real-life datasets are used for this purpose. The Kolmogorov–Smirnov (K-S) goodness-of-fit test, carried out in Mathematica 11, is applied to verify that the DTZ-W distribution fits the two real datasets.
Application I:
The first application utilizes a well-known dataset originally presented in [37]. This dataset comprises time-between-failure observations for a repairable item. Such data are fundamental in reliability engineering, as they capture the operational lifespan and failure patterns of components or systems that can be restored to service after a failure event. The context of this dataset is thus directly relevant to the assessment of system longevity, maintenance scheduling, and the overall reliability performance of industrial equipment. Analyzing this type of data with the DTZ-W distribution allows for a more detailed and accurate understanding of reliability, particularly if there are inherent truncation points or operational limits on the observable failure times.
The data are 1.43, 0.11, 0.71, 0.77, 2.63, 1.49, 3.46, 2.46, 0.59, 0.74, 1.23, 0.94, 4.36, 0.40, 1.74, 4.73, 2.23, 0.45, 0.70, 1.06, 1.46, 0.30, 1.82, 2.37, 0.63, 1.23, 1.24, 1.97, 1.86 and 1.17.
Application II:
The second application employs a dataset from [38], a foundational text in reliability and life data analysis. This dataset comprises the number of cycles (divided by 1000) accumulated until failure for 60 electrical appliances subjected to a life test. In reliability engineering, data from life tests, especially cycle-to-failure data, are critical for assessing product durability, predicting lifespan under operational stress, and optimizing design. This dataset offers valuable insights into the performance and failure characteristics of electrical components, directly informing issues of product reliability and warranty analysis. Applying the DTZ-W distribution to such data enables a more precise understanding of failure patterns, particularly if there are inherent truncation points or specific operational windows within which the appliance failures are observed or reported. This context is crucial for validating the flexibility and applicability of the proposed distribution in real-world engineering scenarios.
The data are 0.014, 0.034, 0.059, 0.061, 0.069, 0.080, 0.123, 0.142, 0.165, 0.210, 0.381, 0.464, 0.479, 0.556, 0.574, 0.839, 0.917, 0.969, 0.991, 1.064, 1.088, 1.091, 1.174, 1.270, 1.275, 1.355, 1.397, 1.477, 1.578, 1.649, 1.702, 1.893, 1.932, 2.001, 2.161, 2.292, 2.326, 2.337, 2.628, 2.785, 2.811, 2.886, 2.993, 3.122, 3.248, 3.715, 3.790, 3.857, 3.912, 4.100, 4.106, 4.116, 4.315, 4.510, 4.580, 5.267, 5.299, 5.583, 6.065 and 9.701.
The K-S goodness-of-fit test is performed to check the validity of the proposed fitted model. The p-values for the two datasets are 0.9560 and 0.9866, respectively. In each case, the p-value shows that the proposed model fits the data very well.
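For instance, the check for Application I could be carried out in R along the following lines; `pdtzw()` (the DTZ-W cdf) and the fitted parameter vector `theta_hat` are hypothetical placeholders, so the reported p-value of 0.9560 is only reproduced with the actual fitted model.

```r
# Sketch of the K-S goodness-of-fit check for the first dataset; pdtzw() and
# theta_hat are hypothetical placeholders for the DTZ-W cdf and its fitted parameters.
data1 <- c(1.43, 0.11, 0.71, 0.77, 2.63, 1.49, 3.46, 2.46, 0.59, 0.74,
           1.23, 0.94, 4.36, 0.40, 1.74, 4.73, 2.23, 0.45, 0.70, 1.06,
           1.46, 0.30, 1.82, 2.37, 0.63, 1.23, 1.24, 1.97, 1.86, 1.17)
ks.test(data1, function(q) pdtzw(q, theta_hat))  # K-S statistic and p-value
```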
Table 7 and Table 8 show the Bayes estimates of the parameters, rf and hrf, together with their standard errors (SEs), under the SE and LINEX loss functions. In addition, the Bayes point predictors and 95% CIs for a future observation from the DTZ-W distribution under two-sample prediction are displayed for the two datasets in Table 9.
Ref. [39] utilized two real-world applications to demonstrate the superior performance of the DTZ-W distribution when compared against several alternative distributions. These alternatives included the Zubair–Weibull, doubly truncated exponentiated inverse Weibull, Truncated Weibull Power Lomax, Truncated Log-Logistic Weibull, and Truncated Exponential Marshall–Olkin Weibull distributions.
The comparative analysis employed established goodness-of-fit criteria: the K-S test and its corresponding p-value, minus twice the maximized log-likelihood (−2lnL), the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the Corrected Akaike Information Criterion (CAIC). Consistent with statistical best practice, the distribution exhibiting the lowest values of −2lnL, AIC, BIC and CAIC, coupled with the largest p-value for the K-S test, was deemed the best fit for the given data.
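As a brief illustration of these criteria, the R sketch below computes −2lnL, AIC, BIC and CAIC from a maximized log-likelihood; `loglik_hat`, the number of parameters `k` and the sample size `n` are generic inputs, not the values reported in [39].

```r
# Generic computation of the model-selection criteria from a maximised log-likelihood;
# loglik_hat, k (number of parameters) and n (sample size) are illustrative inputs.
fit_criteria <- function(loglik_hat, k, n) {
  neg2lnL <- -2 * loglik_hat
  c(`-2lnL` = neg2lnL,
    AIC     = neg2lnL + 2 * k,
    BIC     = neg2lnL + k * log(n),
    CAIC    = neg2lnL + 2 * k + 2 * k * (k + 1) / (n - k - 1))  # corrected AIC
}
```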
The results obtained from both applications, including the ML estimates of the parameters, their SEs, the K-S statistics, p-values, −2lnL statistics and the AIC, BIC and CAIC values, consistently indicated that the DTZ-W distribution provided a superior fit to the observed data compared with the other distributions considered.