Article

Theoretical Aspects for Bayesian Predictions Based on Three-Parameter Burr-XII Distribution and Its Applications in Climatic Data

by
Mustafa M. Hasaballah
1,*,
Abdulhakim A. Al-Babtain
2,
Md. Moyazzem Hossain
3 and
Mahmoud E. Bakr
2
1
Marg Higher Institute for Engineering and Modern Technology, Cairo 11511, Egypt
2
Department of Statistics and Operations Research, College of Science, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia
3
School of Mathematics, Statistics & Physics, Newcastle University, Newcastle upon Tyne NE1 7RU, UK
*
Author to whom correspondence should be addressed.
Symmetry 2023, 15(8), 1552; https://doi.org/10.3390/sym15081552
Submission received: 12 June 2023 / Revised: 27 July 2023 / Accepted: 29 July 2023 / Published: 7 August 2023

Abstract:
Symmetry and asymmetry play vital roles in prediction. Symmetrical data, which follow a predictable pattern, are easier to predict than asymmetrical data, which lack such a pattern. Symmetry helps identify patterns within data that can be exploited in predictive models, while asymmetry aids in identifying outliers or anomalies that should be accounted for in the predictive model. Among the various factors associated with storms and their impact on surface temperatures, wind speed stands out as a significant one. This paper focuses on predicting wind speed using unified hybrid censored data from the three-parameter Burr-XII distribution. Bayesian prediction bounds for future observations are obtained using both one-sample and two-sample prediction techniques. As explicit expressions for the one-sample and two-sample Bayesian predictions are unavailable, we propose the use of the Gibbs sampling process within the Markov chain Monte Carlo framework to obtain estimated predictive distributions. Furthermore, we present an application to climatic data to demonstrate the developed procedures. Additionally, a simulation study is carried out to examine and compare the effectiveness of the suggested methods. The results reveal that the Bayes estimates of the parameters outperform the maximum likelihood estimators.

1. Introduction

Wind is the force that converts air pressure into air movement, so wind speed decreases as air pressure increases. When a mass of moving air slows down, its kinetic energy, or momentum, is converted into static atmospheric pressure. This relationship indicates that higher wind speeds correspond to lower air pressure measurements. In addition to transporting hot or cold air, wind introduces moisture into the atmosphere, resulting in changes in weather patterns. Therefore, changes in wind conditions directly impact the weather. The direction of the wind is influenced by differences in air pressure: wind flows from areas of high pressure to low-pressure zones, and the wind's speed determines the degree of cooling. The unified hybrid censoring scheme (UHCS), which generalises the Type-I and Type-II hybrid censoring schemes, was first introduced by Balakrishnan et al. [1]. It is defined as follows. Let $T_1, T_2 \in (0,\infty)$ with $T_2 > T_1$, and let $k$ and $r$ be integers such that $k < r < n$. If the $k$th failure occurs before time $T_1$, the test ends at $\min\{\max\{Z_{r:n}, T_1\}, T_2\}$, where $Z_{r:n}$ denotes the failure time of the $r$th unit. If the $k$th failure occurs between $T_1$ and $T_2$, the test ends at $\min\{Z_{r:n}, T_2\}$. If the $k$th failure occurs after time $T_2$, the experiment is terminated at $Z_{k:n}$, the failure time of the $k$th unit. By employing this censoring scheme, we can ensure that the experiment concludes within a maximum duration of $T_2$ with at least $k$ failures; in the extreme case, exactly $k$ failures are observed. The Burr-XII distribution was first developed by Burr [2] and has been applied successfully to observational data in many different fields; see Shao [3], Wu et al. [4] and Silva et al. [5] for more information on its applications. There are several reasons for choosing the Burr distribution. Firstly, it is supported on the positive real line, making it particularly suitable for modeling hydrological or meteorological data. Secondly, it possesses two shape parameters, which allow it to cover a wide range of skewness and kurtosis values and hence adapt to different samples; see Ganora and Laio [6] for further discussion. Thirdly, the Burr-XII family is extensive and includes various sub-models, such as the log-logistic distribution. Cook and Johnson [7] utilized the Burr model to achieve improved fits for a uranium survey dataset, while Zimmer et al. [8] explored the statistical and probabilistic properties of the Burr-XII distribution and its relationship with other distributions commonly used in reliability analysis. Tadikamalla [9] extended the two-parameter Burr-XII distribution by introducing an additional scale parameter, resulting in the three-parameter Burr-XII distribution (TPBXIID). Since then, applications of the Burr-XII distribution have received increased attention. Tadikamalla also established mathematical relationships among Burr-related distributions, showing that the Lomax distribution is a special case of the Burr-XII distribution and that the compound Weibull distribution generalizes the Burr distribution. Furthermore, Tadikamalla showed that the Weibull, logistic, log-logistic, normal, and lognormal distributions can be considered as special cases of the Burr-XII distribution through appropriate parameter choices.
The TPBXIID offers significant flexibility by incorporating two shape parameters and one scale parameter into its distribution function, allowing for a wide range of distribution shapes. The TPBXIID is defined by the following cdf:
$$F(z;\alpha,\theta,\gamma)=1-\left[1+\left(\frac{z}{\alpha}\right)^{\theta}\right]^{-\gamma},\qquad z>0,\ \alpha,\theta,\gamma>0,$$
and the pdf is given by:
$$f(z;\alpha,\theta,\gamma)=\frac{\theta\gamma}{\alpha^{\theta}}\,z^{\theta-1}\left[1+\left(\frac{z}{\alpha}\right)^{\theta}\right]^{-(\gamma+1)},\qquad z>0,\ \alpha,\theta,\gamma>0.$$
The survival function S ( z ) can be obtained as:
$$S(z)=\left[1+\left(\frac{z}{\alpha}\right)^{\theta}\right]^{-\gamma},\qquad z>0,$$
and the failure rate function h ( z ) is given by:
$$h(z)=\frac{\theta\gamma}{\alpha^{\theta}}\,z^{\theta-1}\left[1+\left(\frac{z}{\alpha}\right)^{\theta}\right]^{-1},\qquad z>0,$$
where the parameters $\gamma$ and $\theta$ determine the shape of the density function, while $\alpha$ determines its scale. When $\theta>1$, the density is unimodal with an upside-down-bathtub shape, and the mode occurs at $z=\alpha\left[\frac{\theta-1}{\theta\gamma+1}\right]^{1/\theta}$. When $\theta=1$, the density is L-shaped.
The TPBXIID can be reduced to well-known distributions as follows:
  • Setting θ = 1 in Equation (1) results in the Lomax distribution.
  • Setting α = 1 in Equation (1) leads to the Burr-XII distribution.
  • Setting α = 1 and γ = 1 in Equation (1) yields the log-logistic distribution.
The shapes of the pdf, cdf, survival and failure rate functions of the TPBXIID for different values of the parameters $\alpha$, $\theta$ and $\gamma$ are given in Figure 1, Figure 2, Figure 3 and Figure 4.

Properties of the TPBXIID

  • The $r$th moment about the origin of a random variable $Z$ following the TPBXIID, denoted by $\mu'_r$, is the expected value of $Z^{r}$; symbolically,
$$\mu'_r=E(Z^{r})=\frac{\alpha^{r}\,\Gamma\!\left(\frac{r}{\theta}+1\right)\Gamma\!\left(\gamma-\frac{r}{\theta}\right)}{\Gamma(\gamma)},$$
    where $\gamma-\frac{r}{\theta}\neq 0,-1,-2,\ldots$, so that $\Gamma\!\left(\gamma-\frac{r}{\theta}\right)$ is defined.
  • The variance of the TPBXIID can be written as
$$\sigma^{2}=\frac{\alpha^{2}\,\Gamma(\gamma)\,\Gamma\!\left(\frac{2}{\theta}+1\right)\Gamma\!\left(\gamma-\frac{2}{\theta}\right)-\alpha^{2}\,\Gamma\!\left(\frac{1}{\theta}+1\right)^{2}\Gamma\!\left(\gamma-\frac{1}{\theta}\right)^{2}}{\Gamma(\gamma)^{2}}.$$
  • The $q$th quantile $z_q$ of the TPBXIID, which also underlies the sampling sketch given after this list, is
$$z_q=\alpha\left[(1-q)^{-1/\gamma}-1\right]^{1/\theta}.$$
Belaghi and Asl [10] conducted research on estimating the Burr-XII distribution using both non-Bayesian and Bayesian methods. Another study, by Nasir et al. [11], introduced a new class of distributions, called the Burr-XII power series class, which has a strong physical basis and combines the exponentiated Burr-XII and power series distributions. Additionally, Jamal et al. [12] proposed a modified version of the Burr-XII distribution with flexible hazard rate shapes. Related models and censoring schemes have been examined by multiple authors, including Sen et al. [13], Dutta and Kayal [14], Dutta et al. [15], and Sagrillo et al. [16].
Prediction plays a significant role in inferential statistics, and practical domains such as meteorology, economics, engineering, and education rely heavily on it for making informed decisions. Many life-testing experiments involve predicting future observations. Several researchers, including Balakrishnan and Shafay [17], AL-Hussaini and Ahmad [18], Shafay and Balakrishnan [19] and Shafay [20,21], have explored Bayesian prediction methods for future observations using various types of observed data.
Recently, Ateya et al. [22] studied the prediction of future failure times under a UHCS for the Burr-X model, with specific emphasis on engineering applications. In this article, we address the same problem using the UHCS but with additional considerations. Let $Z_{1:n}<Z_{2:n}<\cdots<Z_{n:n}$ denote the order statistics from a random sample of size $n$ from an absolutely continuous distribution. Under the UHCS, one of the following six situations occurs:
(I)   $0<z_{k:n}<z_{r:n}<T_1<T_2$,
(II)  $0<z_{k:n}<T_1<z_{r:n}<T_2$,
(III) $0<z_{k:n}<T_1<T_2<z_{r:n}$,
(IV)  $0<T_1<z_{k:n}<z_{r:n}<T_2$,
(V)   $0<T_1<z_{k:n}<T_2<z_{r:n}$,
(VI)  $0<T_1<T_2<z_{k:n}<z_{r:n}$.
In each situation, the experiment is terminated at $T_1$, $z_{r:n}$, $T_2$, $z_{r:n}$, $T_2$, and $z_{k:n}$, respectively. Thus, the likelihood function based on the UHCS $\underline{Z}=(Z_{1:n}<Z_{2:n}<\cdots<Z_{W:n})$ can be expressed as:
$$L(\underline{z},\theta)=\frac{n!}{(n-W)!}\prod_{i=1}^{W}f(z_i)\,\bigl[1-F(B)\bigr]^{n-W},$$
$$(W,B)=\begin{cases}(G_1,T_1) & \text{for I},\\ (r,z_{r:n}) & \text{for II and IV},\\ (G_2,T_2) & \text{for III and V},\\ (k,z_{k:n}) & \text{for VI}.\end{cases}$$
In this context, W represents the cumulative number of failures observed in the experiment up to time B (the stopping time point), and G 1 and G 2 indicate the number of failures that have occurred before time points T 1 and T 2 , respectively.
Moreover, using Equations (1), (2) and (8), we can express the likelihood function as:
$$L(\underline{z};\alpha,\theta,\gamma)=K\,\alpha^{-W\theta}\,\theta^{W}\gamma^{W}\prod_{i=1}^{W}z_i^{\theta-1}\prod_{i=1}^{W}\left[1+\left(\frac{z_i}{\alpha}\right)^{\theta}\right]^{-(\gamma+1)}\left[1+\left(\frac{B}{\alpha}\right)^{\theta}\right]^{-\gamma(n-W)},$$
where $K=\dfrac{n!}{(n-W)!}$.
The log-likelihood function for Equation (10), can be expressed as follows:
$$l(\alpha,\theta,\gamma)=\ln K+W\ln\theta-W\theta\ln\alpha+W\ln\gamma+(\theta-1)\sum_{i=1}^{W}\ln z_i-(\gamma+1)\sum_{i=1}^{W}\ln\!\left[1+\left(\frac{z_i}{\alpha}\right)^{\theta}\right]-\gamma(n-W)\ln\!\left[1+\left(\frac{B}{\alpha}\right)^{\theta}\right].$$
To obtain the parameter estimates, we calculate the first derivatives of Equation (11) as follows:
$$-\frac{W\hat{\theta}}{\hat{\alpha}}+(\hat{\gamma}+1)\sum_{i=1}^{W}\frac{\hat{\theta}\,z_i\,(z_i/\hat{\alpha})^{\hat{\theta}-1}}{\hat{\alpha}^{2}\left[1+(z_i/\hat{\alpha})^{\hat{\theta}}\right]}+\hat{\gamma}(n-W)\frac{\hat{\theta}\,B\,(B/\hat{\alpha})^{\hat{\theta}-1}}{\hat{\alpha}^{2}\left[1+(B/\hat{\alpha})^{\hat{\theta}}\right]}=0,$$
$$\frac{W}{\hat{\theta}}-W\ln\hat{\alpha}+\sum_{i=1}^{W}\ln z_i-(\hat{\gamma}+1)\sum_{i=1}^{W}\frac{(z_i/\hat{\alpha})^{\hat{\theta}}\ln(z_i/\hat{\alpha})}{1+(z_i/\hat{\alpha})^{\hat{\theta}}}-\hat{\gamma}(n-W)\frac{(B/\hat{\alpha})^{\hat{\theta}}\ln(B/\hat{\alpha})}{1+(B/\hat{\alpha})^{\hat{\theta}}}=0,$$
and
$$\frac{W}{\hat{\gamma}}-\sum_{i=1}^{W}\ln\!\left[1+(z_i/\hat{\alpha})^{\hat{\theta}}\right]-(n-W)\ln\!\left[1+(B/\hat{\alpha})^{\hat{\theta}}\right]=0.$$
From (14), we obtain the MLE of $\gamma$ as
$$\hat{\gamma}=W\left\{\sum_{i=1}^{W}\ln\!\left[1+(z_i/\hat{\alpha})^{\hat{\theta}}\right]+(n-W)\ln\!\left[1+(B/\hat{\alpha})^{\hat{\theta}}\right]\right\}^{-1}.$$
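The likelihood equations for $\alpha$ and $\theta$ have to be solved numerically. The following is a minimal sketch of one such computation (assuming SciPy is available): the closed-form $\hat{\gamma}$ of Equation (15) is profiled into the log-likelihood (11), which is then maximized over $(\alpha,\theta)$. The data values and starting points below are purely illustrative, loosely mimicking Case I of the data application in Section 6; they are not the paper's reported fit.

```python
import numpy as np
from scipy.optimize import minimize

def profile_gamma(alpha, theta, z, n, B):
    """Closed-form MLE of gamma given (alpha, theta), Equation (15)."""
    W = len(z)
    s = np.sum(np.log1p((z / alpha) ** theta)) + (n - W) * np.log1p((B / alpha) ** theta)
    return W / s

def neg_profile_loglik(par, z, n, B):
    """Negative log-likelihood (11), up to a constant, with gamma profiled out."""
    alpha, theta = par
    if alpha <= 0.0 or theta <= 0.0:
        return np.inf
    W = len(z)
    gamma = profile_gamma(alpha, theta, z, n, B)
    ll = (W * np.log(theta) - W * theta * np.log(alpha) + W * np.log(gamma)
          + (theta - 1.0) * np.sum(np.log(z))
          - (gamma + 1.0) * np.sum(np.log1p((z / alpha) ** theta))
          - gamma * (n - W) * np.log1p((B / alpha) ** theta))
    return -ll

# Illustrative UHCS sample: W = 18 observed failures, n = 24 units, stopping time B.
z = np.array([2.3, 2.7, 3.2, 3.7, 3.9, 4.3, 4.5, 4.8, 4.8, 4.9, 5.1, 5.2,
              5.5, 5.5, 5.8, 6.4, 6.5, 6.8])
fit = minimize(neg_profile_loglik, x0=[5.0, 3.0], args=(z, 24, 6.8), method="Nelder-Mead")
alpha_hat, theta_hat = fit.x
gamma_hat = profile_gamma(alpha_hat, theta_hat, z, 24, 6.8)
```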
Because Equations (12) and (13) cannot be solved in closed form, the MLEs of $\alpha$ and $\theta$ must be obtained numerically, as above. Turning to the Bayesian analysis, we assume that the parameters have independent gamma prior distributions, as follows:
$$\pi_1(\alpha)\propto\alpha^{a_1-1}e^{-b_1\alpha},\qquad\alpha>0,$$
$$\pi_2(\theta)\propto\theta^{a_2-1}e^{-b_2\theta},\qquad\theta>0,$$
$$\pi_3(\gamma)\propto\gamma^{a_3-1}e^{-b_3\gamma},\qquad\gamma>0.$$
The expression
$$\pi(\alpha,\theta,\gamma)\propto\alpha^{a_1-1}\theta^{a_2-1}\gamma^{a_3-1}e^{-b_1\alpha-b_2\theta-b_3\gamma}$$
represents the joint prior distribution of $\alpha$, $\theta$ and $\gamma$. The joint posterior density function is obtained from (8) and (16) as follows:
$$\pi^{*}(\alpha,\theta,\gamma\mid\underline{z})\propto\alpha^{a_1-W\theta-1}\,\theta^{a_2+W-1}\,\gamma^{a_3+W-1}\,e^{-\theta\left[b_2-\sum_{i=1}^{W}\ln z_i\right]}\,e^{-\gamma\left[b_3+\sum_{i=1}^{W}\ln\left(1+(z_i/\alpha)^{\theta}\right)\right]}\times e^{-b_1\alpha-\sum_{i=1}^{W}\ln\left(1+(z_i/\alpha)^{\theta}\right)}\left[1+\left(\frac{B}{\alpha}\right)^{\theta}\right]^{-\gamma(n-W)}.$$
From (17), the conditional posterior density functions of $\alpha$, $\theta$, and $\gamma$ are, respectively,
$$\pi_1^{*}(\alpha\mid\theta,\gamma,\underline{z})\propto\alpha^{a_1-W\theta-1}\,e^{-b_1\alpha-\sum_{i=1}^{W}\ln\left(1+(z_i/\alpha)^{\theta}\right)}\left[1+\left(\frac{B}{\alpha}\right)^{\theta}\right]^{-\gamma(n-W)},$$
$$\pi_2^{*}(\theta\mid\alpha,\gamma,\underline{z})\propto\theta^{a_2+W-1}\,\alpha^{a_1-W\theta-1}\,e^{-\theta\left[b_2-\sum_{i=1}^{W}\ln z_i\right]}\,e^{-\sum_{i=1}^{W}\ln\left(1+(z_i/\alpha)^{\theta}\right)},$$
and
$$\pi_3^{*}(\gamma\mid\alpha,\theta,\underline{z})\propto\gamma^{a_3+W-1}\,e^{-\gamma\left[b_3+\sum_{i=1}^{W}\ln\left(1+(z_i/\alpha)^{\theta}\right)\right]}.$$
Generating samples of $\gamma$ is straightforward using any routine that produces random numbers from a gamma distribution. However, the conditional posterior density of $\alpha$ given $\theta$ and $\gamma$ in (18), and that of $\theta$ given $\alpha$ and $\gamma$ in (19), do not correspond to known distributions that allow direct sampling with conventional techniques. Nevertheless, plots of both conditional posteriors show that they resemble normal distributions, as depicted in Figure 5. We therefore employ the Metropolis–Hastings algorithm with a normal proposal distribution to generate random numbers from these distributions, following Metropolis et al. [23]. The remainder of this paper is organized as follows. Section 2 derives approximate confidence intervals for the maximum likelihood estimates. Sections 3 and 4 develop one-sample and two-sample Bayesian prediction intervals for the TPBXIID under the UHCS. Section 5 describes the MCMC technique used to obtain the Bayes estimates and to evaluate the prediction intervals. Section 6 analyses a real dataset for illustrative purposes, Section 7 reports a simulation study, and Section 8 offers concluding remarks.

2. Approximate Confidence Interval

The asymptotic variance–covariance matrix of the MLEs of the parameters $\alpha$, $\theta$, and $\gamma$ can be determined from the elements of the negative of the Fisher information matrix, denoted $I_{ij}$. These elements are defined as follows:
$$I_{ij}=-E\!\left[\frac{\partial^{2}l}{\partial\chi_i\,\partial\chi_j}\right],\qquad i,j=1,2,3,\quad(\chi_1,\chi_2,\chi_3)=(\alpha,\theta,\gamma).$$
Exact closed-form expressions for these expectations are difficult to obtain. The variance–covariance matrix is therefore approximated by the inverse of the observed information matrix evaluated at the MLEs:
$$I^{-1}(\alpha,\theta,\gamma)=\left.\begin{pmatrix}-\frac{\partial^{2}l}{\partial\alpha^{2}} & -\frac{\partial^{2}l}{\partial\alpha\,\partial\theta} & -\frac{\partial^{2}l}{\partial\alpha\,\partial\gamma}\\ -\frac{\partial^{2}l}{\partial\theta\,\partial\alpha} & -\frac{\partial^{2}l}{\partial\theta^{2}} & -\frac{\partial^{2}l}{\partial\theta\,\partial\gamma}\\ -\frac{\partial^{2}l}{\partial\gamma\,\partial\alpha} & -\frac{\partial^{2}l}{\partial\gamma\,\partial\theta} & -\frac{\partial^{2}l}{\partial\gamma^{2}}\end{pmatrix}^{-1}\right|_{(\alpha,\theta,\gamma)=(\hat{\alpha},\hat{\theta},\hat{\gamma})}=\begin{pmatrix}\widehat{\mathrm{var}}(\hat{\alpha}) & \mathrm{cov}(\hat{\alpha},\hat{\theta}) & \mathrm{cov}(\hat{\alpha},\hat{\gamma})\\ \mathrm{cov}(\hat{\theta},\hat{\alpha}) & \widehat{\mathrm{var}}(\hat{\theta}) & \mathrm{cov}(\hat{\theta},\hat{\gamma})\\ \mathrm{cov}(\hat{\gamma},\hat{\alpha}) & \mathrm{cov}(\hat{\gamma},\hat{\theta}) & \widehat{\mathrm{var}}(\hat{\gamma})\end{pmatrix},$$
where $\widehat{\mathrm{var}}(\hat{\alpha})$, $\widehat{\mathrm{var}}(\hat{\theta})$, and $\widehat{\mathrm{var}}(\hat{\gamma})$ denote the estimated variances of $\hat{\alpha}$, $\hat{\theta}$, and $\hat{\gamma}$, respectively, while $\mathrm{cov}(\hat{\alpha},\hat{\theta})$, $\mathrm{cov}(\hat{\alpha},\hat{\gamma})$, and $\mathrm{cov}(\hat{\theta},\hat{\gamma})$ denote the estimated covariances between the corresponding parameters. The second derivatives are given in Appendix B. Substituting the estimated values $\hat{\alpha}$, $\hat{\theta}$, and $\hat{\gamma}$ into the matrix expression, we obtain the asymptotic variance–covariance matrix. Finally, the $(1-\eta)100\%$ confidence intervals for the parameters $\alpha$, $\theta$, and $\gamma$ can be calculated as
$$\hat{\alpha}\pm Z_{\eta/2}\sqrt{\widehat{\mathrm{var}}(\hat{\alpha})},\qquad\hat{\theta}\pm Z_{\eta/2}\sqrt{\widehat{\mathrm{var}}(\hat{\theta})},\qquad\hat{\gamma}\pm Z_{\eta/2}\sqrt{\widehat{\mathrm{var}}(\hat{\gamma})},$$
where $Z_{\eta/2}$ is the upper $\eta/2$ quantile of the standard normal distribution.
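A sketch of how the intervals above can be computed in practice: the observed information matrix is approximated here by a finite-difference Hessian of the log-likelihood (a common shortcut; the paper itself works with the analytical second derivatives of Appendix B), inverted, and combined with the standard normal quantile. The helper names are ours.

```python
import numpy as np
from scipy.stats import norm

def numerical_hessian(f, x, h=1e-4):
    """Central finite-difference Hessian of a scalar function f at the point x."""
    x = np.asarray(x, dtype=float)
    p = len(x)
    H = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            ei, ej = np.zeros(p), np.zeros(p)
            ei[i], ej[j] = h, h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h * h)
    return H

def approx_confidence_intervals(loglik, mle, level=0.95):
    """(1 - eta)100% normal-approximation intervals for (alpha, theta, gamma)."""
    obs_info = -numerical_hessian(loglik, mle)   # observed information at the MLEs
    cov = np.linalg.inv(obs_info)                # asymptotic variance-covariance matrix
    se = np.sqrt(np.diag(cov))
    z = norm.ppf(1.0 - (1.0 - level) / 2.0)      # Z_{eta/2}
    return np.column_stack([mle - z * se, mle + z * se])

# 'loglik' should evaluate l(alpha, theta, gamma) of Equation (11) at a length-3 vector;
# 'mle' is the vector of ML estimates obtained above.
```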

3. One-Sample Bayesian Prediction

In this section, we introduce a general approach for computing interval predictions for the future order statistic $Z_{c:n}$, the $c$th observation, within the TPBXIID framework. These predictions are based on the observed UHCS $\underline{Z}=(Z_{1:n}<Z_{2:n}<\cdots<Z_{W:n})$, where $W<c\le n$. For a more comprehensive discussion of Bayesian prediction, see Shafay [20,21]. The conditional density function of $Z_{c:n}$ given the UHCS $\underline{z}$ can be expressed as
$$f(z_c\mid\underline{z})=\begin{cases}f_1(z_c\mid\underline{z}) & \text{if }(W,B)=(G_1,T_1),\ \text{for I},\\ f_2(z_c\mid\underline{z}) & \text{if }(W,B)=(r,z_{r:n}),\ \text{for II and IV},\\ f_3(z_c\mid\underline{z}) & \text{if }(W,B)=(G_2,T_2),\ \text{for III and V},\\ f_4(z_c\mid\underline{z}) & \text{if }(W,B)=(k,z_{k:n}),\ \text{for VI},\end{cases}$$
where
$$f_1(z_c\mid\underline{z})=\frac{1}{P(r\le G_1\le c-1)}\sum_{g=r}^{c-1}f(z_c\mid\underline{z},G_1=g)\,P(G_1=g)$$
$$=\sum_{g=r}^{c-1}\frac{(n-g)!\,\phi_g(T_1)}{(c-g-1)!\,(n-c)!}\times\frac{\left[F(z_c)-F(T_1)\right]^{c-g-1}\left[1-F(z_c)\right]^{n-c}f(z_c)}{\left[1-F(T_1)\right]^{n-g}},$$
with $\underline{z}=(z_1,\ldots,z_{G_1})$, $z_c>T_1$ and $\phi_g(T_1)=P(G_1=g)\big/\sum_{j=r}^{c-1}P(G_1=j)$. From (25), we get
$$f_1(z_c\mid\underline{z})=\sum_{g=r}^{c-1}\sum_{\omega=0}^{c-g-1}\sum_{q=0}^{n-c}\tau_1\,[F(z_c)]^{c-g-\omega+q-1}\,[F(T_1)]^{\omega+g}\,f(z_c)\,\nu_j(T_1),$$
and, for $z_c>z_r$, we get
$$f_2(z_c\mid\underline{z})=f_2(z_c\mid z_r)=\frac{(n-r)!}{(c-r-1)!\,(n-c)!}\times\frac{\left[F(z_c)-F(z_r)\right]^{c-r-1}\left[1-F(z_c)\right]^{n-c}f(z_c)}{\left[1-F(z_r)\right]^{n-r}},$$
with $\underline{z}=(z_1,\ldots,z_r)$, so we can get
$$f_2(z_c\mid z_r)=\sum_{\omega=0}^{c-r-1}\sum_{q=0}^{n-c}\tau_2\,[F(z_c)]^{c-r-\omega+q-1}\,[F(z_r)]^{\omega}\,\frac{f(z_c)}{\left[1-F(z_r)\right]^{n-r}};$$
also, for $z_c>T_2$, we have
$$f_3(z_c\mid\underline{z})=\frac{1}{P(k\le G_2\le r^{*}-1)}\sum_{g=k}^{r^{*}-1}f(z_c\mid\underline{z},G_2=g)\,P(G_2=g)$$
$$=\sum_{g=k}^{r^{*}-1}\frac{(n-g)!\,\phi_g(T_2)}{(c-g-1)!\,(n-c)!}\times\frac{\left[F(z_c)-F(T_2)\right]^{c-g-1}\left[1-F(z_c)\right]^{n-c}f(z_c)}{\left[1-F(T_2)\right]^{n-g}},$$
with $\underline{z}=(z_1,\ldots,z_{G_2})$ and $\phi_g(T_2)=P(G_2=g)\big/\sum_{j=k}^{r^{*}-1}P(G_2=j)$, so that
$$f_3(z_c\mid\underline{z})=\sum_{g=k}^{r^{*}-1}\sum_{\omega=0}^{c-g-1}\sum_{q=0}^{n-c}\tau_3\,[F(z_c)]^{c-g-\omega+q-1}\,[F(T_2)]^{\omega+g}\,f(z_c)\,\nu_j(T_2).$$
Finally, for $z_c>z_k$, we have
$$f_4(z_c\mid\underline{z})=f_4(z_c\mid z_k)=\frac{(n-k)!}{(c-k-1)!\,(n-c)!}\times\frac{\left[F(z_c)-F(z_k)\right]^{c-k-1}\left[1-F(z_c)\right]^{n-c}f(z_c)}{\left[1-F(z_k)\right]^{n-k}},$$
with $\underline{z}=(z_1,\ldots,z_k)$, so we can get
$$f_4(z_c\mid z_k)=\sum_{\omega=0}^{c-k-1}\sum_{q=0}^{n-c}\tau_4\,[F(z_c)]^{c-k-\omega+q-1}\,[F(z_k)]^{\omega}\,\frac{f(z_c)}{\left[1-F(z_k)\right]^{n-k}}.$$
The conditional density functions of Z c : n , considering the UHCS, can be derived by substituting Equations (1) and (2) into Equations (26)–(29). The resulting expressions are as follows:
$$f_1(z_c\mid\underline{z})=\sum_{g=r}^{c-1}\sum_{\omega=0}^{c-g-1}\sum_{q=0}^{n-c}\tau_1\,\frac{\theta\gamma}{\alpha^{\theta}}\,z_c^{\theta-1}\left[1+\left(\tfrac{z_c}{\alpha}\right)^{\theta}\right]^{-(\gamma+1)}\left\{1-\left[1+\left(\tfrac{z_c}{\alpha}\right)^{\theta}\right]^{-\gamma}\right\}^{c-g-\omega+q-1}\times\left\{1-\left[1+\left(\tfrac{T_1}{\alpha}\right)^{\theta}\right]^{-\gamma}\right\}^{\omega+g}\nu_j(T_1),$$
$$f_2(z_c\mid z_r)=\sum_{\omega=0}^{c-r-1}\sum_{q=0}^{n-c}\tau_2\,\frac{\theta\gamma}{\alpha^{\theta}}\,z_c^{\theta-1}\left[1+\left(\tfrac{z_c}{\alpha}\right)^{\theta}\right]^{-(\gamma+1)}\left\{1-\left[1+\left(\tfrac{z_c}{\alpha}\right)^{\theta}\right]^{-\gamma}\right\}^{c-r-\omega+q-1}\times\left\{1-\left[1+\left(\tfrac{z_r}{\alpha}\right)^{\theta}\right]^{-\gamma}\right\}^{\omega}\left[1+\left(\tfrac{z_r}{\alpha}\right)^{\theta}\right]^{\gamma(n-r)},$$
$$f_3(z_c\mid\underline{z})=\sum_{g=k}^{r^{*}-1}\sum_{\omega=0}^{c-g-1}\sum_{q=0}^{n-c}\tau_3\,\frac{\theta\gamma}{\alpha^{\theta}}\,z_c^{\theta-1}\left[1+\left(\tfrac{z_c}{\alpha}\right)^{\theta}\right]^{-(\gamma+1)}\left\{1-\left[1+\left(\tfrac{z_c}{\alpha}\right)^{\theta}\right]^{-\gamma}\right\}^{c-g-\omega+q-1}\times\left\{1-\left[1+\left(\tfrac{T_2}{\alpha}\right)^{\theta}\right]^{-\gamma}\right\}^{\omega+g}\nu_j(T_2),$$
and
$$f_4(z_c\mid z_k)=\sum_{\omega=0}^{c-k-1}\sum_{q=0}^{n-c}\tau_4\,\frac{\theta\gamma}{\alpha^{\theta}}\,z_c^{\theta-1}\left[1+\left(\tfrac{z_c}{\alpha}\right)^{\theta}\right]^{-(\gamma+1)}\left\{1-\left[1+\left(\tfrac{z_c}{\alpha}\right)^{\theta}\right]^{-\gamma}\right\}^{c-k-\omega+q-1}\times\left\{1-\left[1+\left(\tfrac{z_k}{\alpha}\right)^{\theta}\right]^{-\gamma}\right\}^{\omega}\left[1+\left(\tfrac{z_k}{\alpha}\right)^{\theta}\right]^{\gamma(n-k)},$$
where $\tau_1$, $\tau_2$, $\tau_3$ and $\tau_4$ are given in Appendix C. The Bayesian predictive density function of $Z_{c:n}$ can be obtained as follows:
$$f^{*}(z_c\mid\underline{z})=\begin{cases}f_1^{*}(z_c\mid\underline{z}) & \text{if }(W,B)=(G_1,T_1),\ \text{for I},\\ f_2^{*}(z_c\mid\underline{z}) & \text{if }(W,B)=(r,z_{r:n}),\ \text{for II and IV},\\ f_3^{*}(z_c\mid\underline{z}) & \text{if }(W,B)=(G_2,T_2),\ \text{for III and V},\\ f_4^{*}(z_c\mid\underline{z}) & \text{if }(W,B)=(k,z_{k:n}),\ \text{for VI},\end{cases}$$
where, for $z_c>T_1$,
$$f_1^{*}(z_c\mid\underline{z})=\int_0^{\infty}\!\!\int_0^{\infty}\!\!\int_0^{\infty}f_1(z_c\mid\underline{z})\,\pi^{*}(\alpha,\theta,\gamma\mid\underline{z})\,d\alpha\,d\theta\,d\gamma=\sum_{g=r}^{c-1}\sum_{\omega=0}^{c-g-1}\sum_{q=0}^{n-c}\int_0^{\infty}\!\!\int_0^{\infty}\!\!\int_0^{\infty}\tau_1\,\frac{\theta\gamma}{\alpha^{\theta}}\,z_c^{\theta-1}\left[1+\left(\tfrac{z_c}{\alpha}\right)^{\theta}\right]^{-(\gamma+1)}\left\{1-\left[1+\left(\tfrac{z_c}{\alpha}\right)^{\theta}\right]^{-\gamma}\right\}^{c-g-\omega+q-1}\left\{1-\left[1+\left(\tfrac{T_1}{\alpha}\right)^{\theta}\right]^{-\gamma}\right\}^{\omega+g}\nu_j(T_1)\,\pi^{*}(\alpha,\theta,\gamma\mid\underline{z})\,d\alpha\,d\theta\,d\gamma,$$
with $\underline{z}=(z_1,\ldots,z_{G_1})$. For $z_c>z_r$,
$$f_2^{*}(z_c\mid\underline{z})=\int_0^{\infty}\!\!\int_0^{\infty}\!\!\int_0^{\infty}f_2(z_c\mid\underline{z})\,\pi^{*}(\alpha,\theta,\gamma\mid\underline{z})\,d\alpha\,d\theta\,d\gamma=\sum_{\omega=0}^{c-r-1}\sum_{q=0}^{n-c}\int_0^{\infty}\!\!\int_0^{\infty}\!\!\int_0^{\infty}\tau_2\,\frac{\theta\gamma}{\alpha^{\theta}}\,z_c^{\theta-1}\left[1+\left(\tfrac{z_c}{\alpha}\right)^{\theta}\right]^{-(\gamma+1)}\left\{1-\left[1+\left(\tfrac{z_c}{\alpha}\right)^{\theta}\right]^{-\gamma}\right\}^{c-r-\omega+q-1}\left[1+\left(\tfrac{z_r}{\alpha}\right)^{\theta}\right]^{\gamma(n-r)}\left\{1-\left[1+\left(\tfrac{z_r}{\alpha}\right)^{\theta}\right]^{-\gamma}\right\}^{\omega}\pi^{*}(\alpha,\theta,\gamma\mid\underline{z})\,d\alpha\,d\theta\,d\gamma,$$
with $\underline{z}=(z_1,\ldots,z_r)$. For $z_c>T_2$,
$$f_3^{*}(z_c\mid\underline{z})=\int_0^{\infty}\!\!\int_0^{\infty}\!\!\int_0^{\infty}f_3(z_c\mid\underline{z})\,\pi^{*}(\alpha,\theta,\gamma\mid\underline{z})\,d\alpha\,d\theta\,d\gamma=\sum_{g=k}^{r^{*}-1}\sum_{\omega=0}^{c-g-1}\sum_{q=0}^{n-c}\int_0^{\infty}\!\!\int_0^{\infty}\!\!\int_0^{\infty}\tau_3\,\frac{\theta\gamma}{\alpha^{\theta}}\,z_c^{\theta-1}\left[1+\left(\tfrac{z_c}{\alpha}\right)^{\theta}\right]^{-(\gamma+1)}\left\{1-\left[1+\left(\tfrac{z_c}{\alpha}\right)^{\theta}\right]^{-\gamma}\right\}^{c-g-\omega+q-1}\left\{1-\left[1+\left(\tfrac{T_2}{\alpha}\right)^{\theta}\right]^{-\gamma}\right\}^{\omega+g}\nu_j(T_2)\,\pi^{*}(\alpha,\theta,\gamma\mid\underline{z})\,d\alpha\,d\theta\,d\gamma,$$
with $\underline{z}=(z_1,\ldots,z_{G_2})$, and for $z_c>z_k$,
$$f_4^{*}(z_c\mid\underline{z})=\int_0^{\infty}\!\!\int_0^{\infty}\!\!\int_0^{\infty}f_4(z_c\mid\underline{z})\,\pi^{*}(\alpha,\theta,\gamma\mid\underline{z})\,d\alpha\,d\theta\,d\gamma=\sum_{\omega=0}^{c-k-1}\sum_{q=0}^{n-c}\int_0^{\infty}\!\!\int_0^{\infty}\!\!\int_0^{\infty}\tau_4\,\frac{\theta\gamma}{\alpha^{\theta}}\,z_c^{\theta-1}\left[1+\left(\tfrac{z_c}{\alpha}\right)^{\theta}\right]^{-(\gamma+1)}\left\{1-\left[1+\left(\tfrac{z_c}{\alpha}\right)^{\theta}\right]^{-\gamma}\right\}^{c-k-\omega+q-1}\left[1+\left(\tfrac{z_k}{\alpha}\right)^{\theta}\right]^{\gamma(n-k)}\left\{1-\left[1+\left(\tfrac{z_k}{\alpha}\right)^{\theta}\right]^{-\gamma}\right\}^{\omega}\pi^{*}(\alpha,\theta,\gamma\mid\underline{z})\,d\alpha\,d\theta\,d\gamma,$$
with $\underline{z}=(z_1,\ldots,z_k)$. The quantities $\nu_j(T_1)$ and $\nu_j(T_2)$ are given in Appendix C.
The integrals in (34) are difficult to evaluate analytically. Therefore, to approximate $f_i^{*}(z_c\mid\underline{z})$, we use MCMC samples generated by a Gibbs-within-Metropolis–Hastings sampler. The two-sided equi-tailed $100(1-\rho)\%$ Bayesian prediction interval for $z_{c:n}$, where $W<c\le n$, is obtained by solving the following two equations:
$$\hat{F}_i^{*}(L_{Z_{c:n}}\mid\underline{z})=1-\frac{\rho}{2}\qquad\text{and}\qquad\hat{F}_i^{*}(U_{Z_{c:n}}\mid\underline{z})=\frac{\rho}{2},$$
where $\hat{F}_i^{*}(z_c\mid\underline{z})$ is computed using the expression
$$\hat{F}_i^{*}(z_c\mid\underline{z})=\frac{1}{N-M}\sum_{j=M+1}^{N}f_i(z_c\mid\alpha_j,\theta_j,\gamma_j,\underline{z}),\qquad i=1,2,3,4.$$
Here, $L_{Z_{c:n}}$ and $U_{Z_{c:n}}$ denote the lower and upper bounds of the interval, respectively.
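In practice, the two equations above are solved numerically from the MCMC output: the predictive quantity on the left-hand side is approximated by averaging over the retained draws, and the bounds are then found by root search. The sketch below uses our own helper names and assumes a user-supplied function cond_surv that returns the conditional predictive survival value for the relevant case (I–VI); it is not the paper's code.

```python
import numpy as np
from scipy.optimize import brentq

def predictive_value(z_c, draws, cond_surv, data):
    """Average the conditional predictive survival value over the retained
    MCMC draws (alpha_j, theta_j, gamma_j), as in the estimate above."""
    return np.mean([cond_surv(z_c, a, t, g, data) for (a, t, g) in draws])

def prediction_bounds(draws, cond_surv, data, lo, hi, rho=0.05):
    """Solve the two equi-tailed equations for the lower and upper bounds.
    'lo' should be just above the last observed value (or stopping time),
    where the predictive survival value is close to 1; 'hi' is a large value
    at which it is close to 0, so that each root is bracketed."""
    f = lambda z, p: predictive_value(z, draws, cond_surv, data) - p
    lower = brentq(f, lo, hi, args=(1.0 - rho / 2.0,))
    upper = brentq(f, lo, hi, args=(rho / 2.0,))
    return lower, upper

# 'draws' is the list of posterior samples (alpha_j, theta_j, gamma_j) kept after
# burn-in; 'data' carries the observed UHCS sample together with (W, B).
```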

4. Two-Sample Bayesian Prediction

We propose a general procedure for calculating interval predictions for the $c$th order statistic $Y_{c:m}$, where $1\le c\le m$, of a future sample from the TPBXIID, using the observed UHCS. The marginal density function of the $c$th order statistic from a sample of size $m$ drawn from a continuous distribution with cdf $F(z)$ and pdf $f(z)$ can be expressed as:
$$f_{Y_{c:m}}(y_c\mid\alpha,\theta,\gamma)=\frac{m!}{(c-1)!\,(m-c)!}\,[F(y_c)]^{c-1}\,[1-F(y_c)]^{m-c}\,f(y_c)$$
$$=\sum_{q=0}^{m-c}(-1)^{q}\binom{m-c}{q}\frac{m!}{(c-1)!\,(m-c)!}\,[F(y_c)]^{c+q-1}\,f(y_c),$$
where $y_c>0$ and $1\le c\le m$; the derivation can be found in Arnold et al. [24].
Substituting the cdf and pdf of the TPBXIID from Equations (1) and (2) into the above expression, the marginal density function of $Y_{c:m}$ becomes:
$$f_{Y_{c:m}}(y_c\mid\alpha,\theta,\gamma)=\sum_{q=0}^{m-c}(-1)^{q}\binom{m-c}{q}\frac{m!}{(c-1)!\,(m-c)!}\,\frac{\theta\gamma}{\alpha^{\theta}}\,y_c^{\theta-1}\left[1+\left(\tfrac{y_c}{\alpha}\right)^{\theta}\right]^{-(\gamma+1)}\times\left\{1-\left[1+\left(\tfrac{y_c}{\alpha}\right)^{\theta}\right]^{-\gamma}\right\}^{c+q-1}.$$
We can derive the Bayesian predictive density function of $Y_{c:m}$ as follows:
$$f_{Y_{c:m}}^{*}(y_c\mid\underline{z})=\int_0^{\infty}\!\!\int_0^{\infty}\!\!\int_0^{\infty}f_{Y_{c:m}}(y_c\mid\alpha,\theta,\gamma)\,\pi^{*}(\alpha,\theta,\gamma\mid\underline{z})\,d\alpha\,d\theta\,d\gamma,$$
$$f_{Y_{c:m}}^{*}(y_c\mid\underline{z})=\sum_{q=0}^{m-c}(-1)^{q}\binom{m-c}{q}\frac{m!}{(c-1)!\,(m-c)!}\int_0^{\infty}\!\!\int_0^{\infty}\!\!\int_0^{\infty}\frac{\theta\gamma}{\alpha^{\theta}}\,y_c^{\theta-1}\left[1+\left(\tfrac{y_c}{\alpha}\right)^{\theta}\right]^{-(\gamma+1)}\times\left\{1-\left[1+\left(\tfrac{y_c}{\alpha}\right)^{\theta}\right]^{-\gamma}\right\}^{c+q-1}\pi^{*}(\alpha,\theta,\gamma\mid\underline{z})\,d\alpha\,d\theta\,d\gamma.$$
Equation (43) cannot be evaluated analytically, so no closed-form solution is available. We therefore use MCMC samples generated by the Gibbs-within-Metropolis–Hastings sampler to approximate $f_{iY_{c:m}}^{*}(y_c\mid\underline{z})$ as
$$\hat{F}_{iY_{c:m}}^{*}(y_c\mid\underline{z})=\frac{1}{N-M}\sum_{j=M+1}^{N}f_{iY_{c:m}}(y_c\mid\alpha_j,\theta_j,\gamma_j,\underline{z}),\qquad i=1,2,3,4.$$
By solving the following two equations, the two-sided equi-tailed $100(1-\rho)\%$ Bayesian prediction interval for $y_{c:m}$, where $1\le c\le m$, is obtained:
$$\hat{F}_{Y_{c:m}}^{*}(L_{Y_{c:m}}\mid\underline{z})=1-\frac{\rho}{2}\qquad\text{and}\qquad\hat{F}_{Y_{c:m}}^{*}(U_{Y_{c:m}}\mid\underline{z})=\frac{\rho}{2},$$
where $\hat{F}_{Y_{c:m}}^{*}(\cdot\mid\underline{z})$ is given in (44), and $L_{Y_{c:m}}$ and $U_{Y_{c:m}}$ denote the lower and upper bounds, respectively.

5. MCMC Method

In this section, we investigate the application of the MCMC method to obtain samples of α , θ , and  γ from the posterior density function (17). Specifically, we will concentrate on the M-H-within-Gibbs sampling technique, which is explained as follows.

5.1. Estimation Based on Squared Error (SE) Loss Function

The SE loss function is defined as:
$$\xi_{SE}(\Delta)=a\Delta^{2}=a\,[u(\theta)-\hat{u}(\theta)]^{2},$$
where $a$ is a positive constant, typically set to 1, $\Delta=\hat{u}(\theta)-u(\theta)$, $u(\theta)$ is the function of $\theta$ to be estimated, and $\hat{u}(\theta)$ is its SE estimate. The Bayes estimator under this quadratic loss function is the mean of the posterior distribution:
$$\hat{u}(\theta)_{SE}=E[u(\theta)\mid\underline{z}]=\int_{\theta}u(\theta)\,\pi^{*}(\theta\mid\underline{z})\,d\theta,$$
where $\pi^{*}(\theta\mid\underline{z})$ denotes the posterior distribution. The SE loss function is widely used in the literature and is the most popular loss function. It is symmetric, treating overestimation and underestimation of a parameter equally. However, in life-testing scenarios, one type of estimation error may be more critical than the other.

5.2. Estimation Based on Linear Exponential (LINEX) Loss Function

The LINEX loss function, denoted by ξ L I N E X ( Δ ) , is defined as follows:
$$\xi_{LINEX}(\Delta)\propto e^{a\Delta}-a\Delta-1,\qquad a\neq 0,$$
where $\Delta=\hat{u}(\theta)-u(\theta)$ represents the difference between the LINEX estimate $\hat{u}(\theta)$ and the true value $u(\theta)$, as defined previously. The shape parameter $a$ governs the direction and degree of asymmetry. The loss was introduced by Varian [25], and its properties were further explored by Zellner [26]. When $a>0$, overestimation leads to more severe consequences than underestimation, and vice versa. When $a$ is near zero, the LINEX loss behaves similarly to the symmetric SE loss function. For $a=1$, the function is highly asymmetric, with overestimation incurring greater loss than underestimation. Conversely, for $a<0$, the loss increases almost exponentially when $\Delta=\hat{u}(\theta)-u(\theta)<0$ and decreases approximately linearly when $\Delta=\hat{u}(\theta)-u(\theta)>0$.
The posterior expectation of the LINEX loss (48) is
$$E\bigl[\xi_{LINEX}(\hat{u}(\theta)-u(\theta))\mid\underline{z}\bigr]\propto e^{a\hat{u}(\theta)}\,E\bigl[e^{-a\,u(\theta)}\mid\underline{z}\bigr]-a\bigl(\hat{u}(\theta)-E[u(\theta)\mid\underline{z}]\bigr)-1.$$
Using the LINEX loss function, the Bayes estimate of $u(\theta)$ is obtained as
$$\hat{u}(\theta)_{LINEX}=-\frac{1}{a}\ln E\bigl(e^{-a\,u(\theta)}\mid\underline{z}\bigr),$$
provided that $E\bigl(e^{-a\,u(\theta)}\mid\underline{z}\bigr)$ exists and is finite.
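Once posterior draws of a parameter are available (Section 5 describes how they are generated), the SE and LINEX Bayes estimates reduce to simple averages over those draws. A small sketch follows; the placeholder draws are synthetic, included only to make the snippet runnable.

```python
import numpy as np

def bayes_se(draws):
    """SE-loss Bayes estimate: the posterior mean."""
    return float(np.mean(draws))

def bayes_linex(draws, a):
    """LINEX-loss Bayes estimate: -(1/a) * ln E[exp(-a * u) | data],
    approximated by the sample average over the posterior draws."""
    draws = np.asarray(draws, dtype=float)
    return float(-np.log(np.mean(np.exp(-a * draws))) / a)

# Placeholder posterior draws (e.g., of alpha) retained after burn-in.
rng = np.random.default_rng(1)
alpha_draws = rng.gamma(shape=2.0, scale=0.5, size=5000)
print(bayes_se(alpha_draws), bayes_linex(alpha_draws, a=-4), bayes_linex(alpha_draws, a=4))
```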

5.3. Estimation Based on General Entropy (GE) Loss Function

Basu and Ebrahimi [27] introduced a modified LINEX loss function. An alternative loss function that can be considered a viable substitute for the modified LINEX loss is the GE loss, which is defined as
$$\xi_{GE}\bigl(\hat{u}(\theta),u(\theta)\bigr)\propto\left[\frac{\hat{u}(\theta)}{u(\theta)}\right]^{a}-a\ln\!\left[\frac{\hat{u}(\theta)}{u(\theta)}\right]-1,$$
where the symbol u ^ ( θ ) denotes an estimation of the parameter u ( θ ) . It is crucial to note that for a > 0 , a positive error carries greater consequences than a negative error. Conversely, when a < 0 , a negative error results in more serious implications than a positive error.
The Bayes estimator u ^ ( θ ) G E under the GE loss function is expressed as follows:
$$\hat{u}(\theta)_{GE}=\Bigl[E\bigl(u(\theta)^{-a}\mid\underline{z}\bigr)\Bigr]^{-1/a},$$
provided that $E\bigl(u(\theta)^{-a}\mid\underline{z}\bigr)$ exists and is finite. It can be shown that for $a=-1$, the Bayes estimate (52) coincides with the Bayes estimate under the SE loss function. We use the Metropolis–Hastings method with a normal proposal distribution to generate random numbers from the conditional posterior distributions of $\alpha$ and $\theta$ (see Metropolis et al. [23]). We now illustrate the steps of the Metropolis–Hastings-within-Gibbs sampling procedure (Algorithm 1):
Algorithm 1: Metropolis–Hastings within Gibbs sampling
  • Start with initial guesses of α , θ , and γ , denoted as α ( 0 ) , θ ( 0 ) , and γ ( 0 ) respectively. Set M as the burn-in period.
  • Initialize j as 1.
  • Generate a sample for $\gamma^{(j)}$ from a gamma distribution with shape parameter $a_3+W$ and rate parameter $b_3+\sum_{i=1}^{W}\ln\bigl[1+(z_i/\alpha)^{\theta}\bigr]$.
  • Use the Metropolis–Hastings algorithm to generate samples for α ( j ) and θ ( j ) from their respective conditional posterior density functions π 1 * ( α | θ , γ , z ̲ ) and π 2 * ( θ | α , γ , z ̲ ) . The proposal distributions for α ( j ) and θ ( j ) are normal distributions with means α ( j 1 ) and θ ( j 1 ) , and variances v a r ( α ) and v a r ( θ ) respectively, which are obtained from the variance–covariance matrix.
    (i) Compute the acceptance probability as:
$$r_1=\min\left\{1,\ \frac{\pi_1^{*}(\alpha^{*}\mid\theta^{(j-1)},\gamma^{(j)},\underline{z})}{\pi_1^{*}(\alpha^{(j-1)}\mid\theta^{(j-1)},\gamma^{(j)},\underline{z})}\right\},$$
$$r_2=\min\left\{1,\ \frac{\pi_2^{*}(\theta^{*}\mid\alpha^{(j)},\gamma^{(j)},\underline{z})}{\pi_2^{*}(\theta^{(j-1)}\mid\alpha^{(j)},\gamma^{(j)},\underline{z})}\right\}.$$
    (ii) Generate random numbers u 1 and u 2 from a Uniform distribution between 0 and 1.
    (iii) If u 1 r 1 , accept the proposal and set α ( j ) = α * ; otherwise, keep α ( j ) = α ( j 1 ) .
    (iv) If u 2 r 2 , accept the proposal and set θ ( j ) = θ * ; otherwise, keep θ ( j ) = θ ( j 1 ) .
  • Increment j by 1.
  • Repeat Steps 3 to 6 for a total of $N$ iterations and, after discarding the first $M$ burn-in iterations, retain the samples $\alpha^{(j)}$, $\theta^{(j)}$, $\gamma^{(j)}$, $S^{(j)}(t)$, and $h^{(j)}(t)$ for $j=M+1,\ldots,N$.
  • $f_i^{*}(z_c\mid\underline{z})$ is approximated by
$$\hat{F}_i^{*}(z_c\mid\underline{z})=\frac{1}{N-M}\sum_{j=M+1}^{N}f_i(z_c\mid\alpha_j,\theta_j,\gamma_j,\underline{z}),\qquad i=1,2,3,4.$$
  • $f_{iY_{c:m}}^{*}(y_c\mid\underline{z})$ is approximated by
$$\hat{F}_{iY_{c:m}}^{*}(y_c\mid\underline{z})=\frac{1}{N-M}\sum_{j=M+1}^{N}f_{iY_{c:m}}(y_c\mid\alpha_j,\theta_j,\gamma_j,\underline{z}),\qquad i=1,2,3,4.$$
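The following is a compact sketch of Algorithm 1 (a simplified implementation, not the authors' code): the target is the joint posterior (17) written on the log scale, $\gamma$ is drawn from the gamma full conditional implied by (17) (for a complete sample, $W=n$, this coincides with the step stated in Algorithm 1), and $\alpha$ and $\theta$ are updated with normal random-walk Metropolis steps whose standard deviations are treated here as tuning constants rather than taken from the variance–covariance matrix.

```python
import numpy as np

def log_joint_posterior(alpha, theta, gamma, z, n, B, a, b):
    """Log of the joint posterior (17), up to an additive constant.
    a = (a1, a2, a3) and b = (b1, b2, b3) are the gamma-prior hyper-parameters."""
    if min(alpha, theta, gamma) <= 0.0:
        return -np.inf
    W = len(z)
    s = np.sum(np.log1p((z / alpha) ** theta))
    return ((a[0] - 1.0) * np.log(alpha) + (a[1] + W - 1.0) * np.log(theta)
            + (a[2] + W - 1.0) * np.log(gamma)
            - b[0] * alpha - b[1] * theta - b[2] * gamma
            - W * theta * np.log(alpha) + (theta - 1.0) * np.sum(np.log(z))
            - (gamma + 1.0) * s - gamma * (n - W) * np.log1p((B / alpha) ** theta))

def gibbs_mh(z, n, B, a, b, n_iter=10_000, burn=2_000, step=(0.2, 0.2), seed=0):
    """Metropolis-Hastings-within-Gibbs sampler for (alpha, theta, gamma)."""
    rng = np.random.default_rng(seed)
    alpha, theta, gamma = 5.0, 3.0, 1.0          # arbitrary starting values
    W, kept = len(z), []
    for j in range(n_iter):
        # gamma | alpha, theta, data: gamma full conditional implied by (17)
        rate = (b[2] + np.sum(np.log1p((z / alpha) ** theta))
                + (n - W) * np.log1p((B / alpha) ** theta))
        gamma = rng.gamma(shape=a[2] + W, scale=1.0 / rate)
        # alpha and theta: random-walk Metropolis steps against the log posterior
        for idx, sd in zip((0, 1), step):
            prop = [alpha, theta]
            prop[idx] += rng.normal(0.0, sd)
            log_r = (log_joint_posterior(prop[0], prop[1], gamma, z, n, B, a, b)
                     - log_joint_posterior(alpha, theta, gamma, z, n, B, a, b))
            if np.log(rng.uniform()) <= log_r:
                alpha, theta = prop
        if j >= burn:
            kept.append((alpha, theta, gamma))
    return np.array(kept)

# Example call with illustrative Case-I style data and weak priors (values are ours):
# chain = gibbs_mh(z, n=24, B=6.8, a=(0.5, 0.5, 0.5), b=(0.5, 0.5, 0.5))
```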

6. Applications

In this section, we examine actual datasets to demonstrate the practical implementation of the prediction methods discussed earlier. These datasets were obtained from the National Climatic Data Center (NCDC) in Asheville, USA and contain measurements of wind speeds in knots over a 30-day period. Our analysis specifically concentrates on the daily average wind speeds recorded in Cairo city from 1 December 2015 to 30 December 2015. Within this timeframe, we collected a total of 24 observations as follows:
2.3, 2.7, 3.2, 3.7, 3.9, 4.3, 4.5, 4.8, 4.8, 4.9, 5.1, 5.2, 5.5, 5.5, 5.8,
6.4, 6.5, 6.8, 6.9, 7.0, 7.3, 7.4, 7.7, 7.9.
The Kolmogorov–Smirnov (K-S) test was employed to assess the goodness of fit of the TPBXIID to these data. The K-S distance was calculated to be 0.0785975, which is smaller than the critical value of 0.24170 at a significance level of 5% for a sample size of 24, and the corresponding p-value was determined to be 0.985348. Based on these results, we do not reject the null hypothesis that the data follow the TPBXIID; the high p-value suggests a good fit. Figure 6 displays the empirical and fitted survival functions (denoted $S(t)$) for visual comparison. The TPBXIID therefore serves as an appropriate model for this dataset.
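The goodness-of-fit check can be reproduced along the following lines (a sketch; the parameter values plugged into the fitted cdf below are placeholders, not the estimates reported in the paper, so the printed statistic will differ from the value quoted above):

```python
import numpy as np
from scipy.stats import kstest

wind = np.array([2.3, 2.7, 3.2, 3.7, 3.9, 4.3, 4.5, 4.8, 4.8, 4.9, 5.1, 5.2,
                 5.5, 5.5, 5.8, 6.4, 6.5, 6.8, 6.9, 7.0, 7.3, 7.4, 7.7, 7.9])

def tpbxii_cdf(z, alpha, theta, gamma):
    """cdf of the TPBXIID, Equation (1)."""
    return 1.0 - (1.0 + (z / alpha) ** theta) ** (-gamma)

# Placeholder parameter values; in the paper the fitted (ML) estimates are used here.
alpha_hat, theta_hat, gamma_hat = 6.5, 4.0, 1.2
stat, p_value = kstest(wind, lambda z: tpbxii_cdf(z, alpha_hat, theta_hat, gamma_hat))
print(f"K-S distance = {stat:.4f}, p-value = {p_value:.4f}")
```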
Now, we consider the six cases, as follows:
I:   $T_1=6.6$, $T_2=7.2$, $k=16$, $r=17$; $W=18$, $B=T_1=6.8$.
II:  $T_1=6.6$, $T_2=7.2$, $k=17$, $r=19$; $W=19$, $B=z_{r:n}=6.9$.
III: $T_1=7$, $T_2=7.2$, $k=19$, $r=22$; $W=21$, $B=T_2=7.3$.
IV:  $T_1=7$, $T_2=7.75$, $k=21$, $r=22$; $W=22$, $B=z_{r:n}=7.4$.
V:   $T_1=7$, $T_2=7.6$, $k=22$, $r=24$; $W=23$, $B=T_2=7.7$.
VI:  $T_1=7$, $T_2=7.6$, $k=24$, $r=25$; $W=24$, $B=z_{k:n}=7.9$.
The results obtained in Section 3 were utilized to construct 95% one-sample Bayesian prediction intervals for the future order statistics $Z_{c:n}$, where $c=25,26,27,28,29,30$, using the same sample. Additionally, 95% two-sample Bayesian prediction intervals were constructed for the future order statistics $Y_{c:m}$, where $c=1,2,\ldots,10$, based on a future unobserved sample of size $m=10$. To assess the sensitivity of the Bayesian prediction intervals to the hyperparameters $(a_i,b_i)$, $i=1,2,3$, two different priors were considered. Firstly, non-informative priors were employed with $a_i=0$ and $b_i=0$. Secondly, informative priors were used with $a_i=1.5$ and $b_i=2.0$. The results of the one-sample predictions are displayed in Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6, while the results of the two-sample predictions can be found in Table 7, Table 8, Table 9, Table 10, Table 11 and Table 12.

7. Simulation

This section reports simulation results that compare the effectiveness of the approaches put forward in this study. The main objective is to compare the performance of the ML estimates with that of the Bayesian estimates of the unknown parameters of the TPBXIID. Additionally, three distinct loss functions are used to evaluate the estimates of the survival and hazard functions, with performance assessed in terms of MSEs, CP, and interval length. The steps of the simulation study are as follows:
  • Random values of α , θ , and γ are generated from the respective distributions defined in Equations (18)–(20), using given hyperparameters a 1 , b 1 , a 2 , b 2 , a 3 , and b 3 .
  • Based on the parameter values drawn in Step 1, random samples are generated using the inverse cumulative distribution function of the TPBXIID; these samples are then arranged in ascending order.
  • The ML estimates of α , θ , and γ are obtained by numerically solving the three nonlinear Equations (12)–(14). Additionally, using the observed Fisher information matrix, 95% CIs are calculated.
  • The Bayesian estimates of α , θ , and γ are computed, along with their 95% CRIs, using the MCMC method with 10,000 observations. The estimations are performed under three different loss functions: SE loss function (47), LINEX loss function (50), and GE loss function (52).
  • The values ( ϕ ϕ ^ ) are calculated, where ϕ ^ denotes a ϕ estimate (ML estimate or Bayesian estimate).
  • A sample is generated using TPBXIID with the following parameter values: α = 6.5780 , θ = 8.4568 , γ = 0.5849 , and n = 100 . Steps 1–6 are performed at least 1000 times. The simulation is run with various values for k, r, T 1 , and T 2 . α , θ , γ , S ( t ) , and h ( t ) , are estimated using ML estimations, and the MSEs, CP, and length of CIs are calculated for T 2 = 12 and T 1 = 9.5 . Table 13, Table 14 and Table 15, show the results.
  • Bayesian estimates are used to estimate α , θ , γ , S ( t ) , and h ( t ) under the SE, LINEX, and GE loss functions. Informative gamma priors are used for the shape and scale parameters, with specific hyperparameters ( a 1 = 0.55 , b 1 = 0.34 , a 2 = 0.44 , b 2 = 1.550 , a 3 = 0.38 , and b 3 = 0.22 ) when T 2 = 12 and T 1 = 9.5 . The results, including 95% CRIs, MSEs, CP, and length, are displayed in Table 13, Table 14 and Table 15.
  • Furthermore, the MSE of each estimate is calculated using the following formula (see the sketch after this list):
$$\mathrm{MSE}(\hat{\phi})=\frac{1}{1000}\sum_{i=1}^{1000}\bigl(\hat{\phi}_i-\phi\bigr)^{2}.$$
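The Monte Carlo summaries reported in Tables 13–15 (MSE, average interval length, and coverage probability) can be computed from the replication results as sketched below; estimates, lowers, and uppers stand for the values collected over the (at least) 1000 replications, and the routine that produces them for a single simulated UHCS sample is not shown.

```python
import numpy as np

def simulation_summary(true_value, estimates, lowers, uppers):
    """MSE, average interval length and coverage probability over the replications."""
    estimates, lowers, uppers = map(np.asarray, (estimates, lowers, uppers))
    mse = np.mean((estimates - true_value) ** 2)
    avg_length = np.mean(uppers - lowers)
    coverage = np.mean((lowers <= true_value) & (true_value <= uppers))
    return mse, avg_length, coverage

# Example: summarising the estimates of alpha with the true value used in the study.
# mse, length, cp = simulation_summary(6.5780, alpha_hats, alpha_lowers, alpha_uppers)
```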

8. Conclusions

By employing the UHCS for the TPBXIID, we derived Bayesian prediction intervals for future observations using both one-sample and two-sample prediction techniques. The model incorporates prior beliefs through independent gamma priors for the scale and shape parameters. The Bayesian prediction intervals are computed with the Gibbs sampling technique used to generate the MCMC samples, under both non-informative and informative priors. The results are illustrated with a real dataset. In addition, we performed a simulation study to evaluate and compare how well the suggested approaches perform for various choices of $(r,k)$ and the different scenarios (I, II, III, IV, V, and VI). The main findings are summarized below.
  • The results presented in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11 and Table 12 reveal that the length of the prediction intervals increases with higher values of c. Specifically, Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6 indicate that the lower bounds are relatively insensitive to hyper-parameter specifications, while the upper bounds exhibit some sensitivity. Conversely, Table 7, Table 8, Table 9, Table 10, Table 11 and Table 12 demonstrate that both the lower and upper bounds are relatively insensitive to the specification of the hyper-parameters.
  • Table 13, Table 14 and Table 15, clearly demonstrate that the Bayes estimates for α , θ , and γ outperform the MLEs in terms of MSEs.
  • For cases (I, II, III), it is observed from Table 13, Table 14 and Table 15 that the MSEs and lengths decrease while the CP increases as T 1 and r increase, keeping T 2 and k fixed, for α , θ , and γ .
  • For cases (IV, V, VI), it is evident from Table 13, Table 14 and Table 15 that the MSEs and lengths decrease while the CP increases as T 2 and k increase, keeping T 1 and r fixed, for α , θ , and γ .
  • Table 13, Table 14 and Table 15 reveal that the length of the CRIs for the Bayes estimates of α , θ , and γ are smaller than the corresponding lengths of the CIs of the MLEs. Additionally, the CP of the Bayes estimates are greater than the corresponding CP of the MLEs.

Author Contributions

Methodology, M.M.H. (Mustafa M. Hasaballah); Software, M.M.H. (Mustafa M. Hasaballah); Validation, M.E.B.; Formal analysis, A.A.A.-B.; Resources, A.A.A.-B. and M.M.H. (Md. Moyazzem Hossain); Data curation, A.A.A.-B., M.M.H. (Md. Moyazzem Hossain) and M.E.B.; Writing—original draft, M.M.H. (Mustafa M. Hasaballah); Writing—review & editing, M.E.B. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deputyship for Research and Innovation, Ministry of Education in Saudi Arabia for funding this research work through the project no. (IFKSUOR3–058–1).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All datasets are reported within the article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Mathematical notations used in this paper.
$\alpha,\theta,\gamma$: Parameters of the three-parameter Burr-XII distribution
$\mu'_r$: $r$th moment about the origin
$\sigma^2$: Variance of the three-parameter Burr-XII distribution
$z_q$: $q$th quantile (inverse of the cumulative distribution function)
$W\in\{G_1,G_2,r,k\}$: Total number of failures observed in the test up to time $B$
$B\in\{T_1,T_2,z_{r:n},z_{k:n}\}$: The stopping time point
$I_{ij}$: Fisher information matrix
$a_1,a_2,a_3,b_1,b_2,b_3$: Hyper-parameters
Table A2. Abbreviations used in this paper.
UHCS: Unified Hybrid Censoring Scheme
TPBXIID: Three-Parameter Burr-XII Distribution
MCMC: Markov Chain Monte Carlo
pdf: Probability Density Function
cdf: Cumulative Distribution Function
MLEs: Maximum Likelihood Estimators
ML: Maximum Likelihood
CIs: Confidence Intervals
CRIs: Credible Intervals
SE: Squared Error Loss Function
LINEX: Linear Exponential Loss Function
GE: General Entropy Loss Function
MSE: Mean Squared Error
MAE: Mean Absolute Error
CP: Coverage Probability
K-S: Kolmogorov–Smirnov

Appendix B

$$\frac{\partial^{2}l(\underline{z};\alpha,\theta,\gamma)}{\partial\alpha^{2}}=\frac{W\theta}{\alpha^{2}}-(\gamma+1)\sum_{i=1}^{W}\frac{\theta\left(\frac{z_i}{\alpha}\right)^{\theta}\left[\theta+1+\left(\frac{z_i}{\alpha}\right)^{\theta}\right]}{\alpha^{2}\left[1+\left(\frac{z_i}{\alpha}\right)^{\theta}\right]^{2}}-\gamma(n-W)\frac{\theta\left(\frac{B}{\alpha}\right)^{\theta}\left[\theta+1+\left(\frac{B}{\alpha}\right)^{\theta}\right]}{\alpha^{2}\left[1+\left(\frac{B}{\alpha}\right)^{\theta}\right]^{2}},$$
$$\frac{\partial^{2}l(\underline{z};\alpha,\theta,\gamma)}{\partial\theta^{2}}=-\frac{W}{\theta^{2}}-(\gamma+1)\sum_{i=1}^{W}\frac{\left(\frac{z_i}{\alpha}\right)^{\theta}\left[\ln\left(\frac{z_i}{\alpha}\right)\right]^{2}}{\left[1+\left(\frac{z_i}{\alpha}\right)^{\theta}\right]^{2}}-\gamma(n-W)\frac{\left(\frac{B}{\alpha}\right)^{\theta}\left[\ln\left(\frac{B}{\alpha}\right)\right]^{2}}{\left[1+\left(\frac{B}{\alpha}\right)^{\theta}\right]^{2}},$$
$$\frac{\partial^{2}l(\underline{z};\alpha,\theta,\gamma)}{\partial\gamma^{2}}=-\frac{W}{\gamma^{2}},$$
$$\frac{\partial^{2}l(\underline{z};\alpha,\theta,\gamma)}{\partial\alpha\,\partial\theta}=\frac{\partial^{2}l(\underline{z};\alpha,\theta,\gamma)}{\partial\theta\,\partial\alpha}=-\frac{W}{\alpha}+(\gamma+1)\sum_{i=1}^{W}\frac{\left(\frac{z_i}{\alpha}\right)^{\theta}\left[1+\left(\frac{z_i}{\alpha}\right)^{\theta}+\theta\ln\left(\frac{z_i}{\alpha}\right)\right]}{\alpha\left[1+\left(\frac{z_i}{\alpha}\right)^{\theta}\right]^{2}}+\gamma(n-W)\frac{\left(\frac{B}{\alpha}\right)^{\theta}\left[1+\left(\frac{B}{\alpha}\right)^{\theta}+\theta\ln\left(\frac{B}{\alpha}\right)\right]}{\alpha\left[1+\left(\frac{B}{\alpha}\right)^{\theta}\right]^{2}},$$
$$\frac{\partial^{2}l(\underline{z};\alpha,\theta,\gamma)}{\partial\gamma\,\partial\alpha}=\frac{\partial^{2}l(\underline{z};\alpha,\theta,\gamma)}{\partial\alpha\,\partial\gamma}=\sum_{i=1}^{W}\frac{\theta\,z_i\left(\frac{z_i}{\alpha}\right)^{\theta-1}}{\alpha^{2}\left[1+\left(\frac{z_i}{\alpha}\right)^{\theta}\right]}+(n-W)\frac{\theta\,B\left(\frac{B}{\alpha}\right)^{\theta-1}}{\alpha^{2}\left[1+\left(\frac{B}{\alpha}\right)^{\theta}\right]},$$
$$\frac{\partial^{2}l(\underline{z};\alpha,\theta,\gamma)}{\partial\gamma\,\partial\theta}=\frac{\partial^{2}l(\underline{z};\alpha,\theta,\gamma)}{\partial\theta\,\partial\gamma}=-\sum_{i=1}^{W}\frac{\left(\frac{z_i}{\alpha}\right)^{\theta}\ln\left(\frac{z_i}{\alpha}\right)}{1+\left(\frac{z_i}{\alpha}\right)^{\theta}}-(n-W)\frac{\left(\frac{B}{\alpha}\right)^{\theta}\ln\left(\frac{B}{\alpha}\right)}{1+\left(\frac{B}{\alpha}\right)^{\theta}}.$$

Appendix C

$$\tau_1=\frac{(-1)^{\omega+q}\,(n-g)!\,\binom{n}{g}\binom{c-g-1}{\omega}\binom{n-c}{q}}{(c-g-1)!\,(n-c)!},$$
and
$$\nu_j(T_1)=\left[\sum_{j=r}^{c-1}\binom{n}{j}\,F(T_1)^{j}\,\bigl[1-F(T_1)\bigr]^{n-j}\right]^{-1},$$
$$\tau_2=\frac{(-1)^{\omega+q}\,(n-r)!\,\binom{c-r-1}{\omega}\binom{n-c}{q}}{(c-r-1)!\,(n-c)!},$$
$$\tau_3=\frac{(-1)^{\omega+q}\,(n-g)!\,\binom{n}{g}\binom{c-g-1}{\omega}\binom{n-c}{q}}{(c-g-1)!\,(n-c)!},$$
and
$$\nu_j(T_2)=\left[\sum_{j=k}^{r^{*}-1}\binom{n}{j}\,F(T_2)^{j}\,\bigl[1-F(T_2)\bigr]^{n-j}\right]^{-1},$$
$$\tau_4=\frac{(-1)^{\omega+q}\,(n-k)!\,\binom{c-k-1}{\omega}\binom{n-c}{q}}{(c-k-1)!\,(n-c)!}.$$
In terms of the TPBXIID,
$$\nu_j(T_1)=\left\{\sum_{j=r}^{c-1}\binom{n}{j}\left[1-\left(1+\left(\tfrac{T_1}{\alpha}\right)^{\theta}\right)^{-\gamma}\right]^{j}\left[\left(1+\left(\tfrac{T_1}{\alpha}\right)^{\theta}\right)^{-\gamma}\right]^{n-j}\right\}^{-1},$$
$$\nu_j(T_2)=\left\{\sum_{j=k}^{r^{*}-1}\binom{n}{j}\left[1-\left(1+\left(\tfrac{T_2}{\alpha}\right)^{\theta}\right)^{-\gamma}\right]^{j}\left[\left(1+\left(\tfrac{T_2}{\alpha}\right)^{\theta}\right)^{-\gamma}\right]^{n-j}\right\}^{-1}.$$

References

  1. Balakrishnan, N.; Rasouli, A.; Sanjari Farsipour, N. Exact likelihood inference based on an unified hybrid censored sample from the exponential distribution. J. Stat. Comput. Simul. 2008, 78, 475–788.
  2. Burr, I.W. Cumulative frequency functions. Ann. Math. Stat. 1942, 13, 215–232.
  3. Shao, Q. Notes on maximum likelihood estimation for the three-parameter Burr-XII distribution. Comput. Stat. Data Anal. 2004, 45, 675–687.
  4. Wu, S.J.; Chen, Y.J.; Chang, C.T. Statistical inference based on progressively censored samples with random removals from the Burr type XII distribution. J. Stat. Comput. Simul. 2007, 77, 19–27.
  5. Silva, G.O.; Ortega, E.M.M.; Garibay, V.C.; Barreto, M.L. Log-Burr-XII regression models with censored data. Comput. Stat. Data Anal. 2008, 52, 3820–3842.
  6. Ganora, D.; Laio, F. Hydrological applications of the Burr distribution: Practical method for parameter estimation. J. Hydrol. Eng. 2015, 20, 04015024.
  7. Cook, R.D.; Johnson, M.E. Generalized Burr-Pareto-logistic distribution with application to a uranium exploration data set. Technometrics 1986, 28, 123–131.
  8. Zimmer, W.J.; Keats, J.B.; Wang, F.K. The Burr-XII distribution in reliability analysis. J. Qual. Technol. 1998, 30, 386–394.
  9. Tadikamalla, P.R. A look at the Burr and related distributions. Int. Stat. Rev. 1980, 48, 337–344.
  10. Belaghi, R.A.; Asl, M.N. Estimation based on progressively Type-I hybrid censored data from the Burr-XII distribution. Stat. Pap. 2019, 60, 761–803.
  11. Nasir, A.; Yousof, H.M.; Jamal, F.; Korkmaz, M.Ç. The exponentiated Burr-XII power series distribution: Properties and applications. Stats 2019, 2, 15–31.
  12. Jamal, F.; Chesneau, C.; Nasir, M.A.; Saboor, A.; Altun, E.; Khan, M.A. On a modified Burr-XII distribution having flexible hazard rate shapes. Math. Slovaca 2020, 70, 193–212.
  13. Sen, T.; Bhattacharya, R.; Pradhan, B.; Tripathi, Y.M. Statistical inference and Bayesian optimal life-testing plans under Type-II unified hybrid censoring scheme. Qual. Reliab. Eng. Int. 2021, 37, 78–89.
  14. Dutta, S.; Kayal, S. Bayesian and non-Bayesian inference of Weibull lifetime model based on partially observed competing risks data under unified hybrid censoring scheme. Qual. Reliab. Eng. Int. 2022, 38, 3867–3891.
  15. Dutta, S.; Ng, H.K.T.; Kayal, S. Inference for a general family of inverted exponentiated distributions under unified hybrid censoring with partially observed competing risks data. J. Comput. Appl. Math. 2023, 422, 114934.
  16. Sagrillo, M.; Guerra, R.R.; Machado, R.; Bayer, F.M. A generalized control chart for anomaly detection in SAR imagery. Comput. Ind. Eng. 2023, 177, 109030.
  17. Balakrishnan, N.; Shafay, A.R. One- and two-sample Bayesian prediction intervals based on Type-II hybrid censored data. Commun. Stat. Theory Methods 2012, 41, 1511–1531.
  18. AL-Hussaini, E.K.; Ahmad, A.A. On Bayesian predictive distributions of generalized order statistics. Metrika 2003, 57, 165–176.
  19. Shafay, A.R.; Balakrishnan, N. One- and two-sample Bayesian prediction intervals based on Type-I hybrid censored data. Commun. Stat. Simul. Comput. 2012, 41, 65–88.
  20. Shafay, A.R. Bayesian estimation and prediction based on generalized Type-II hybrid censored sample. J. Stat. Comput. Simul. 2015, 86, 1970–1988.
  21. Shafay, A.R. Bayesian estimation and prediction based on generalized Type-I hybrid censored sample. Commun. Stat. Theory Methods 2016, 46, 4870–4887.
  22. Ateya, S.F.; Alghamdi, A.S.; Mousa, A.A.A. Future failure time prediction based on a unified hybrid censoring scheme for the Burr-X model with engineering applications. Mathematics 2022, 10, 1450.
  23. Metropolis, N.; Rosenbluth, A.W.; Rosenbluth, M.N.; Teller, A.H.; Teller, E. Equation of state calculations by fast computing machines. J. Chem. Phys. 1953, 21, 1087–1091.
  24. Arnold, B.C.; Balakrishnan, N.; Nagaraja, H.N. A First Course in Order Statistics; Wiley: New York, NY, USA, 1992.
  25. Varian, H.R. A Bayesian Approach to Real Estate Assessment; North Holland: Amsterdam, The Netherlands, 1975; pp. 195–208.
  26. Zellner, A. Bayesian estimation and prediction using asymmetric loss functions. J. Am. Stat. Assoc. 1986, 81, 446–451.
  27. Basu, A.P.; Ebrahimi, N. Bayesian approach to life testing and reliability estimation using asymmetric loss function. J. Stat. Plan. Inference 1991, 29, 21–31.
Figure 1. (a) The pdf of TPBXIID with α = 7.0 , γ = 6.0 and for various values of the shape parameter θ . (b) The pdf of TPBXIID with θ = 7.0 , γ = 6.0 and for various values of the scale parameter α . (c) The pdf of TPBXIID with α = 7.0 , θ = 7.0 and for various values of the shape parameter γ .
Figure 2. (a) The cdf of TPBXIID with α = 7.0 , θ = 11 and for various values of the shape parameter γ . (b) The cdf of TPBXIID with θ = 7.0 , γ = 8.0 and for various values of the scale parameter α . (c) The cdf of TPBXIID with α = 7.0 , γ = 8.0 and for various values of the shape parameter θ .
Figure 3. (a) The survival function of TPBXIID with α = 10 , γ = 8.0 and for various values of the shape parameter θ . (b) The survival function of TPBXIID with θ = 7.0 , γ = 8.0 and for various values of the scale parameter α . (c) The survival function of TPBXIID with α = 10 , θ = 7.0 and for various values of the shape parameter γ .
Figure 4. (a) The hazard rate function of TPBXIID with α = 10 , γ = 2.0 and for various values of the shape parameter θ . (b) The hazard rate function of TPBXIID with θ = 4.0 , γ = 2.0 and for various values of the scale parameter α . (c) The hazard rate function of TPBXIID with α = 10 , θ = 3.0 and for various values of the shape parameter γ .
Figure 5. (a) Posterior density function for α and (b) Posterior density function for θ .
Figure 6. Empirical and Fitted Survival Functions.
Table 1. The 95% one-sample Bayesian prediction intervals for $Z_{c:30}$, where $c=25,\ldots,30$, for Case I.
Columns: s | Non-informative prior: Lower, Upper, Length | Informative prior: Lower, Upper, Length
253.87368.53334.65964.855610.59625.7405
263.28258.70005.41743.200011.40148.2014
273.2195311.90838.68883.2676911.85208.5843
284.560213.15138.59107.199715.9058.70527
295.202516.549311.34674.9080614.70779.79967
306.917718.900011.98236.9538818.900011.9461
Table 2. The 95% one-sample Bayesian prediction intervals for $Z_{c:30}$, where $c=25,\ldots,30$, for Case II.
Columns: s | Non-informative prior: Lower, Upper, Length | Informative prior: Lower, Upper, Length
257.78699.06001.27314.80099.54514.7441
264.14578.69564.54984.22239.76655.5442
274.866310.27155.40524.200011.11296.9129
284.200010.50006.30004.389311.50007.1107
295.222012.52207.300010.232120.787910.5558
309.212319.775610.563311.201123.483312.2822
Table 3. The 95% one-sample Bayesian prediction intervals for $Z_{c:30}$, where $c=25,\ldots,30$, for Case III.
Columns: s | Non-informative prior: Lower, Upper, Length | Informative prior: Lower, Upper, Length
255.7030811.10375.400596.11537.72941.6140
263.383012.26108.87804.08689.47815.3912
274.200014.422210.22224.974310.70005.7257
285.1967317.843412.64678.968622.56313.5944
295.904719.945114.04037.990724.379316.3885
306.000023.259917.25996.142725.867719.7250
Table 4. The 95% one-sample Bayesian prediction intervals for $Z_{c:30}$, where $c=25,\ldots,30$, for Case IV.
Columns: s | Non-informative prior: Lower, Upper, Length | Informative prior: Lower, Upper, Length
253.73968.70004.96043.48768.70005.2124
265.598412.61377.01524.93009.00914.0791
276.278113.57897.30076.612113.94627.3341
286.600013.95137.35126.700017.40569.8056
297.660015.97008.31008.661118.973310.3122
309.233120.255611.022510.212123.224113.0120
Table 5. The 95% one-sample Bayesian prediction intervals for $Z_{c:30}$, where $c=25,\ldots,30$, for Case V.
Columns: s | Non-informative prior: Lower, Upper, Length | Informative prior: Lower, Upper, Length
255.536177.780962.244795.36438.11472.7504
265.600310.70005.09975.600011.74926.1491
275.200012.20467.00455.527613.43467.9069
285.440015.500010.06005.440315.34449.9041
299.122322.255613.133310.100022.992012.8920
3010.002223.876613.874410.954824.547013.5922
Table 6. The 95% one-sample Bayesian prediction intervals for $Z_{c:30}$, where $c=25,\ldots,30$, for Case VI.
Columns: s | Non-informative prior: Lower, Upper, Length | Informative prior: Lower, Upper, Length
254.321668.70004.378344.400013.16788.7677
265.100010.51785.41776.152211.47335.3211
275.958211.50295.54475.623512.95467.3311
286.6033313.86867.26526.604513.95887.3543
297.660016.35348.69337.661115.97338.3122
3010.200033.593423.393410.536425.789615.2532
Table 7. The 95% two-sample Bayesian prediction intervals for $Y_{c:10}$, where $c=1,\ldots,10$, for Case I.
Columns: s | Non-informative prior: Lower, Upper, Length | Informative prior: Lower, Upper, Length
10.24863.39453.14590.32543.54133.2159
20.35504.87254.51750.58014.02413.4440
30.45705.15104.69370.58454.22413.6396
40.45735.45134.99401.62955.35403.7245
54.022011.86677.84461.902511.86679.9642
65.454713.96878.51392.342613.968711.6261
75.480914.96739.48642.374214.967312.5931
85.555015.27869.72352.403515.278612.8751
95.638815.742710.10392.4902515.742713.2524
106.669119.859613.19044.5697219.859615.2899
Table 8. The 95% two-sample Bayesian prediction intervals for $Y_{c:10}$, where $c=1,\ldots,10$, for Case II.
Columns: s | Non-informative prior: Lower, Upper, Length | Informative prior: Lower, Upper, Length
10.10311.63221.52910.18642.01451.8281
20.25484.65844.40360.30155.12364.8221
30.36475.66435.29960.36475.489845.1251
40.69597.42276.72680.98236.25485.2725
51.214412.866711.65231.145811.265910.1201
61.367513.968712.60121.367513.968712.6012
71.378914.967313.58841.378914.967313.5884
81.456315.278613.82231.456315.278613.8223
91.513415.742714.22931.443516.578015.1345
104.6947519.859615.16494.539019.859615.3206
Table 9. The 95% two-sample Bayesian prediction intervals for $Y_{c:10}$, where $c=1,\ldots,10$, for Case III.
Columns: s | Non-informative prior: Lower, Upper, Length | Informative prior: Lower, Upper, Length
10.20313.32153.11840.18453.18533.0008
20.25445.12214.86770.23455.12034.8858
30.27395.56705.29310.27395.401025.1271
40.29428.65488.36060.35427.95497.6007
51.130912.489211.35830.998411.985610.9872
61.198713.348612.14991.123614.698513.5749
71.205614.334513.12891.201515.011413.8099
81.254415.445314.19091.254415.445314.1909
91.356715.548314.19162.143616.015313.8717
102.057720.112518.05484.457020.112515.6555
Table 10. The 95% two-sample Bayesian prediction intervals for $Y_{c:10}$, where $c=1,\ldots,10$, for Case IV.
Columns: s | Non-informative prior: Lower, Upper, Length | Informative prior: Lower, Upper, Length
10.11751.64371.52620.18242.08431.9019
20.15485.45395.29910.35464.98754.6329
30.22125.59945.37820.42566.12355.6979
40.27128.55048.27920.69437.39426.6999
51.294712.337811.04311.220011.199610.0896
61.356813.223211.86641.356813.223211.8664
71.578914.117712.53881.654815.321413.6666
81.789614.459612.67001.789615.899714.1101
91.808716.147814.33912.469517.325814.8563
103.384220.012516.62824.2465520.012515.7659
Table 11. The 95% two-sample Bayesian prediction intervals for $Y_{c:10}$, where $c=1,\ldots,10$, for Case V.
Columns: s | Non-informative prior: Lower, Upper, Length | Informative prior: Lower, Upper, Length
10.12343.64003.51660.12341.86541.7420
20.21115.22225.01110.21034.87564.6653
30.25485.28705.03220.25485.08334.8285
40.26846.88946.62100.31245.98745.6750
51.032512.365411.33290.998811.9879109891
61.194313.356412.16211.104513.987612.8831
71.253414.623113.36971.253414.623113.3697
81.323015.623114.30011.276514.984913.7089
91.542316.684415.14211.294815.681314.3865
103.1157420.369817.25412.899218.369815.4705
Table 12. The 95% two-sample Bayesian prediction intervals for $Y_{c:10}$, where $c=1,\ldots,10$, for Case VI.
Columns: s | Non-informative prior: Lower, Upper, Length | Informative prior: Lower, Upper, Length
10.21111.28961.07850.22452.01111.7866
20.22144.45644.23500.24563.23542.9898
30.26365.21404.95040.27123.31473.0435
40.32147.45647.13520.27453.32573.0512
50.468611.898611.43000.998712.849311.8506
61.210913.687912.47701.356813.223211.8664
71.265314.236412.97111.578914.117712.5388
81.334514.448613.11411.789615.899714.1101
91.464416.458714.99431.808716.147814.3391
103.039020.769517.73053.858220.012516.1542
Table 13. Evaluation of MSE, CP, and length of the estimates of the parameter $\alpha$ at $T_2=12$ and $T_1=9.5$.
First block ($T_2=12$ fixed, $T_1$ varying). Columns: Cases | r | k | $T_1$ | MLE: MSE, Length, CP | MCMC: SE, LINEX ($a=-4$), LINEX ($a=4$), GE ($a=-4$), GE ($a=4$), Length, CP
I777510.000.40310.78540.82300.43210.43280.43180.43170.43150.00550.933
797510.500.21540.55240.8500.38770.38790.38750.38700.38660.00350.922
II847510.600.52780.78870.8630.52780.52790.52750.52650.52140.00340.935
887510.700.50300.75420.8690.36370.36350.36340.36380.36320.00200.970
III907511.000.39880.71900.8640.39800.39770.39700.39660.37110.00220.928
957511.500.22480.22490.8950.22540.22110.22330.22140.22000.00120.955
Second block ($T_1=9.5$ fixed, $T_2$ varying). Columns: Cases | r | k | $T_2$ | MLE: MSE, Length, CP | MCMC: SE, LINEX ($a=-4$), LINEX ($a=4$), GE ($a=-4$), GE ($a=4$), Length, CP
IV968013.000.32210.53210.8710.37400.37440.36300.36220.36210.00290.923
968513.500.26780.78540.8230.26220.25340.25320.25310.24320.00250.933
V969012.100.45320.56860.8660.44420.44320.43210.43120.42540.00440.911
969212.200.26540.65470.8570.26210.25470.25440.25330.25220.00220.933
VI969311.000.55540.75800.8440.55230.55220.55120.54210.53450.00200.970
969311.500.28970.52290.8880.29770.29760.29700.29440.28540.00300.961
Table 14. Evaluation of MSE, CP, and length of the estimates of the parameter $\theta$ at $T_2=12$ and $T_1=9.5$.
First block ($T_2=12$ fixed, $T_1$ varying). Columns: Cases | r | k | $T_1$ | MLE: MSE, Length, CP | MCMC: SE, LINEX ($a=-4$), LINEX ($a=4$), GE ($a=-4$), GE ($a=4$), Length, CP
I777510.000.74350.85770.8770.70440.66220.62550.62400.60010.09700.900
797510.500.71770.56540.8530.55540.54470.54320.53240.51000.0830.978
II847510.600.75770.58560.8500.54770.54230.53220.45500.54200.05410.945
887510.700.65220.55410.8650.48600.46500.45680.43210.42580.04520.972
III9075110.62450.92000.8520.55820.56440.53330.54220.51200.03590.923
957511.500.54440.88400.8430.55550.54450.45550.44520.45420.02300.976
Second block ($T_1=9.5$ fixed, $T_2$ varying). Columns: Cases | r | k | $T_2$ | MLE: MSE, Length, CP | MCMC: SE, LINEX ($a=-4$), LINEX ($a=4$), GE ($a=-4$), GE ($a=4$), Length, CP
IV968013.000.98770.88880.8210.96950.92600.91600.91570.91350.07570.929
968513.500.83980.82280.8650.82770.81290.80200.80020.70390.04880.948
V969012.100.764000.88490.8460.75480.75440.74410.74230.73900.04450.989
969212.200.64700.51790.8690.66870.66470.65470.64670.63210.04270.992
VI969311.000.56470.45120.8330.56200.52300.52200.52120.50480.07800.915
969511.500.45870.84000.8780.44870.43210.42130.41250.41140.06580.949
Table 15. Evaluation of MSE, CP, and length of the estimates of the parameter $\gamma$ at $T_2=12$ and $T_1=9.5$.
First block ($T_2=12$ fixed, $T_1$ varying). Columns: Cases | r | k | $T_1$ | MLE: MSE, Length, CP | MCMC: SE, LINEX ($a=-4$), LINEX ($a=4$), GE ($a=-4$), GE ($a=4$), Length, CP
I777510.000.77710.66000.8590.52600.53100.52400.49200.38280.07500.931
797510.500.73600.42760.8650.49240.52510.44840.44300.39910.06120.988
II847510.600.53330.84110.8410.53120.45550.43880.42560.41230.08800.942
887510.700.46740.69780.8580.48600.42350.49120.38000.37880.07700.989
III907511.000.23440.43200.8000.32100.23010.23000.22870.22540.07400.954
957511.500.21540.42150.8220.22360.22210.22140.22010.21980.21120.987
Second block ($T_1=9.5$ fixed, $T_2$ varying). Columns: Cases | r | k | $T_2$ | MLE: MSE, Length, CP | MCMC: SE, LINEX ($a=-4$), LINEX ($a=4$), GE ($a=-4$), GE ($a=4$), Length, CP
IV968013.000.32450.35800.8870.32140.32120.32110.32100.31990.02310.919
968512.50032100.34590.8880.31540.31240.31220.31120.31110.02110.942
V969012.100.44890.48880.8630.44650.44560.44230.43590.43540.02000.939
969212.200.42990.42110.8020.41250.41120.41090.41050.41020.41000.962
VI969311.000.35990.45990.8280.33540.32690.32150.32110.32100.02650.978
969511.500.36980.54800.8440.35480.35440.35220.34690.33540.02350.987
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
