Article

On Joint Progressively Censored Gumbel Type-II Distributions: (Non-) Bayesian Estimation with an Application to Physical Data

by Mustafa M. Hasaballah 1,*, Mahmoud E. Bakr 2, Oluwafemi Samson Balogun 3 and Arwa M. Alshangiti 2

1 Department of Basic Sciences, Marg Higher Institute of Engineering and Modern Technology, Cairo 11721, Egypt
2 Department of Statistics and Operations Research, College of Science, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia
3 Department of Computing, University of Eastern Finland, FI-70211 Kuopio, Finland
* Author to whom correspondence should be addressed.
Axioms 2025, 14(7), 544; https://doi.org/10.3390/axioms14070544
Submission received: 2 June 2025 / Revised: 13 July 2025 / Accepted: 15 July 2025 / Published: 20 July 2025

Abstract

This paper presents a comprehensive statistical analysis of the Gumbel Type-II distribution based on joint progressive Type-II censoring. It derives the maximum likelihood estimators for the distribution parameters and constructs their asymptotic confidence intervals. It investigates Bayesian estimation using non-informative and informative priors under the squared error loss function and the LINEX loss function, applying Markov Chain Monte Carlo methods. A detailed simulation study evaluates the estimators’ performance in terms of average estimates, mean squared errors, and average confidence interval lengths. Results show that Bayesian estimators can outperform maximum likelihood estimators, especially with informative priors. A real data example demonstrates the practical use of the proposed methods. The analysis confirms that the Gumbel Type-II distribution with joint progressive censoring provides a flexible and effective model for lifetime data, enabling more accurate reliability assessment and risk analysis in engineering and survival studies.

1. Introduction

The Gumbel Type-II (G-II) distribution, introduced by the German mathematician Emil Gumbel [1], serves as an essential model for the extreme value analysis of events such as floods, earthquakes, and other natural disasters. It is also used in a number of disciplines, including rainfall studies, hydrology, and life expectancy tables. In comparative longevity tests, Gumbel emphasized its remarkable ability to replicate the anticipated lifespan of products. Moreover, this distribution plays a crucial role in forecasting the likelihood of climatic events and natural catastrophes.
Over the years, numerous researchers have advanced statistical inference techniques for the G-II distribution, such as Mousa et al. [2], Malinowska and Szynal [3], Miladinovic and Tsokos [4], Nadarajah and Kotz [5], Feroze and Aslam [6], Abbas et al. [7], Feroze and Aslam [8], Reyad and Ahmed [9], Sindhu et al. [10], Abbas et al. [11], and Qiu and Gui [12]. For θ > 0 and σ > 0 , the cumulative distribution function (CDF) of the G-II distribution is given as follows:
F(x) = e^{-\theta x^{-\sigma}}, \quad x > 0, \ \theta > 0, \ \sigma > 0. (1)
Moreover, the corresponding probability density function (PDF) is provided as follows:
f(x) = \theta \sigma x^{-(\sigma+1)} e^{-\theta x^{-\sigma}}, \quad x > 0, \ \theta > 0, \ \sigma > 0. (2)
Here, the shape and scale parameters are denoted by σ and θ , respectively. The probability density function (PDF) and cumulative distribution function (CDF) of the G-II distribution for different values of the parameters θ and σ are shown in Figure 1 and Figure 2.
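For reference, the CDF and PDF above translate directly into code; the following is a minimal sketch (function names are illustrative, not from the paper):

```python
import math

def gumbel2_cdf(x, theta, sigma):
    """CDF of the Gumbel Type-II distribution: F(x) = exp(-theta * x**(-sigma))."""
    return math.exp(-theta * x ** (-sigma))

def gumbel2_pdf(x, theta, sigma):
    """PDF of the Gumbel Type-II distribution:
    f(x) = theta * sigma * x**(-(sigma + 1)) * exp(-theta * x**(-sigma))."""
    return theta * sigma * x ** (-(sigma + 1)) * math.exp(-theta * x ** (-sigma))
```

A quick sanity check is that the numerical derivative of `gumbel2_cdf` recovers `gumbel2_pdf`.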
Censored data are prevalent in various fields, including science, public health, and medicine, making it essential to implement an appropriate censoring scheme. One of the most widely recognized methods is progressive Type-II censoring. In this approach, the number of observed failures, denoted as m (where m \le n), is predetermined along with a censoring scheme R = (R_1, \ldots, R_m). At each i-th failure, a specified number of remaining units are randomly removed from the experiment. This method offers the dual advantage of reducing both cost and time by allowing the removal of surviving units during the trial. Due to these benefits, progressive Type-II censoring has gained significant attention and has been extensively studied in the literature [13,14,15,16,17].
Implementing censoring strategies on a single group can present several challenges. Although progressive Type-II censoring allows for the removal of certain data points, obtaining a sufficient number of observations remains costly. Moreover, studying group reliability and interactions, which cannot be adequately captured through experiments involving only one group, is often a key objective. To address these limitations, Rasouli and Balakrishnan [18] proposed the Joint Progressive Censoring Scheme (Joint PCS). This approach enables the occurrence of failures in two groups simultaneously, effectively halving the time required to collect the same volume of data. Additionally, the Joint PCS facilitates the comparison of failure times between the two groups under identical experimental conditions, enhancing the robustness and applicability of the results.
Initially, groups A and B contain m and n units, respectively. The Joint PCS method originates from a life-testing experiment involving these two groups. The expected number of failures, denoted as ϵ , is predetermined. The failure points are represented by φ 1 , , φ ϵ . At each failure time, s i surviving units from group A and t i units from group B are randomly removed. Thus, at the i-th failure, a total of R i = s i + t i units are eliminated. Additionally, a second set of random variables, ν 1 , , ν ϵ , is introduced, where each variable takes a value of either 1 or 0. These values indicate the source of the failure as follows:
\nu_i = \begin{cases} 1, & \text{if } \varphi_i \text{ is from group A}, \\ 0, & \text{if } \varphi_i \text{ is from group B}. \end{cases}
The censored sample is represented as ((\varphi_1, \nu_1, s_1), \ldots, (\varphi_\epsilon, \nu_\epsilon, s_\epsilon)). Here, \epsilon_1 = \sum_{i=1}^{\epsilon} \nu_i represents the total number of failures from group A. Similarly, \epsilon_2 = \sum_{i=1}^{\epsilon} (1 - \nu_i) = \epsilon - \epsilon_1 indicates the number of failures from group B.
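The sampling mechanism described above can be simulated directly; the following is a minimal sketch (function and variable names are illustrative, and the withdrawal scheme is assumed feasible, i.e., it never removes more survivors than remain):

```python
import math
import random

def gumbel2_rvs(theta, sigma, size, rng):
    """Draw G-II lifetimes by inverting F(x) = exp(-theta * x**(-sigma)):
    x = (theta / (-ln U))**(1/sigma) for U ~ Uniform(0, 1)."""
    return [(theta / (-math.log(rng.random()))) ** (1.0 / sigma) for _ in range(size)]

def joint_pcs_sample(xs, ys, s_scheme, t_scheme, rng):
    """Generate a joint progressively censored sample from pools xs (group A)
    and ys (group B); s_scheme[i] and t_scheme[i] are the numbers of survivors
    withdrawn from A and B after the i-th observed failure."""
    a, b = sorted(xs), sorted(ys)          # surviving units, kept sorted
    sample = []
    for s_i, t_i in zip(s_scheme, t_scheme):
        if a and (not b or a[0] < b[0]):   # next failure is the smallest survivor
            phi, nu = a.pop(0), 1
        else:
            phi, nu = b.pop(0), 0
        for _ in range(s_i):               # random withdrawals from group A
            a.pop(rng.randrange(len(a)))
        for _ in range(t_i):               # random withdrawals from group B
            b.pop(rng.randrange(len(b)))
        sample.append((phi, nu, s_i, t_i))
    return sample
```

Because each observed failure is the minimum of the remaining pooled units, the recorded failure times \varphi_1 \le \cdots \le \varphi_\epsilon come out nondecreasing by construction.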
The Joint PCS has generated substantial interest among researchers. Several authors have examined Joint PCS and its related inference techniques, contributing to a rich body of literature. Across various applications, numerous scholars have explored a wide range of methodologies and heterogeneous lifetime models. For further reading, the works of [19,20,21,22,23,24,25,26,27,28,29,30,31] are recommended.
This manuscript makes significant contributions to the estimation of unknown parameters for the G-II distribution under a joint PCS, which is widely used in reliability analysis and lifetime data modeling. It employs both classical and Bayesian inference methods, utilizing maximum likelihood estimation (MLE) for point and interval estimation in the frequentist framework and Bayesian estimation (BE) via Markov Chain Monte Carlo (MCMC) methods, specifically the Metropolis–Hastings (M–H) algorithm, to handle complex posterior computations. Asymptotic confidence intervals (ACIs) and highest posterior density (HPD) credible intervals (CRIs) are derived to provide a comprehensive uncertainty assessment. The study explores the impact of different loss functions on Bayesian estimators. Through extensive Monte Carlo simulations, it evaluates the efficiency of the proposed estimation techniques by analyzing point estimates, interval estimates, and mean squared errors (MSE), demonstrating that Bayesian methods provide more precise and efficient estimates than MLE, particularly when informative priors are available. Additionally, the practical effectiveness of the proposed methodologies is validated through a real dataset, reinforcing their applicability in reliability analysis. Despite these contributions, the study encounters challenges in handling complex likelihood functions analytically, necessitating computational methods for parameter estimation. The selection of prior distributions in Bayesian inference and the computational burden of MCMC methods also pose difficulties. Moreover, while joint progressive censoring enhances efficiency in data collection, achieving a balance between cost, time, and sufficient observations remains a challenge. 
Overall, this study advances statistical inference for extreme value distributions by enhancing parameter estimation under joint PCS and offering a robust methodological framework applicable across various scientific and engineering disciplines.
The main goal of this study is to develop and compare estimation methods for the G-II distribution under a joint PCS by employing both classical MLE and Bayesian inference with different loss functions. The study introduces novelty by being the first to provide comprehensive parameter estimation for this distribution using joint progressive censoring, incorporating ACIs and highest posterior density CRIs, and applying squared error and LINEX loss functions in the Bayesian framework. It also uses the M–H MCMC algorithm to handle complex posterior computations and validates the proposed methods with real-world failure data, demonstrating their effectiveness in lifetime and reliability analysis.
The remainder of the paper is structured as follows. Section 2 develops the model, derives point estimates using the MLE method, and obtains ACIs. Section 3 presents BEs of the parameters under different loss functions, along with the proposed MCMC algorithm. Section 4 includes a simulation study to compare the proposed methods. Section 5 demonstrates the application of the inference procedures using a real-life dataset. Finally, Section 6 provides concluding remarks.

2. Classical Inference

Classical inference, also known as frequentist inference, is a fundamental approach in statistical analysis that relies on probability models to draw conclusions about unknown parameters. It is based on the concept of repeated sampling, where estimators are evaluated in terms of their long-run properties, such as unbiasedness, consistency, and efficiency. Methods like MLE and the Method of Moments are commonly used to estimate parameters, while hypothesis testing and confidence intervals provide a framework for decision-making. Unlike Bayesian inference, classical inference does not incorporate prior information and relies solely on observed data. This approach is widely used in scientific research due to its well-established theoretical foundations and objective interpretation of results.

2.1. Maximum Likelihood Estimation

Let X_1, \ldots, X_m denote the observations from group A and Y_1, \ldots, Y_n those from group B. The observed data, denoted as (\varphi_1, \nu_1, s_1, t_1), \ldots, (\varphi_\epsilon, \nu_\epsilon, s_\epsilon, t_\epsilon), follow a specified joint PCS (R_1, \ldots, R_\epsilon). Throughout the subsequent sections, the term “data” refers to this censored dataset. The likelihood function is formulated as follows:
L(\sigma_1, \sigma_2, \theta_1, \theta_2 \mid \text{data}) = \prod_{i=1}^{\epsilon} [f(\varphi_i)]^{\nu_i} [w(\varphi_i)]^{1-\nu_i} [\bar{F}(\varphi_i)]^{s_i} [\bar{W}(\varphi_i)]^{t_i}, (3)
where \varphi_1 \le \varphi_2 \le \cdots \le \varphi_\epsilon, \bar{F} = 1 - F, \bar{W} = 1 - W, and \sum_{i=1}^{\epsilon} s_i + \sum_{i=1}^{\epsilon} t_i = \sum_{i=1}^{\epsilon} R_i.
By incorporating the CDF and PDF derived from Equations (1) and (2), respectively, into the likelihood equation in (3), the following result is obtained.
L(\sigma_1, \sigma_2, \theta_1, \theta_2 \mid \text{data}) = \sigma_1^{\epsilon_1} \theta_1^{\epsilon_1} \sigma_2^{\epsilon_2} \theta_2^{\epsilon_2} \, e^{-(\sigma_1+1)\sum_{i=1}^{\epsilon} \nu_i \ln \varphi_i} \, e^{-(\sigma_2+1)\sum_{i=1}^{\epsilon} (1-\nu_i) \ln \varphi_i} \, e^{-\theta_1 \sum_{i=1}^{\epsilon} \nu_i \varphi_i^{-\sigma_1}} \times e^{-\theta_2 \sum_{i=1}^{\epsilon} (1-\nu_i) \varphi_i^{-\sigma_2}} \, e^{\sum_{i=1}^{\epsilon} s_i \ln\left(1 - e^{-\theta_1 \varphi_i^{-\sigma_1}}\right)} \, e^{\sum_{i=1}^{\epsilon} t_i \ln\left(1 - e^{-\theta_2 \varphi_i^{-\sigma_2}}\right)}. (4)
The log-likelihood function, \ell(\sigma_1, \sigma_2, \theta_1, \theta_2 \mid \text{data}), is defined as the natural logarithm of the likelihood function L(\sigma_1, \sigma_2, \theta_1, \theta_2 \mid \text{data}) and is expressed as follows:
\ell(\sigma_1, \sigma_2, \theta_1, \theta_2 \mid \text{data}) = \epsilon_1 \ln \sigma_1 + \epsilon_1 \ln \theta_1 + \epsilon_2 \ln \sigma_2 + \epsilon_2 \ln \theta_2 - (\sigma_1+1) \sum_{i=1}^{\epsilon} \nu_i \ln \varphi_i - (\sigma_2+1) \sum_{i=1}^{\epsilon} (1-\nu_i) \ln \varphi_i - \theta_1 \sum_{i=1}^{\epsilon} \nu_i \varphi_i^{-\sigma_1} - \theta_2 \sum_{i=1}^{\epsilon} (1-\nu_i) \varphi_i^{-\sigma_2} + \sum_{i=1}^{\epsilon} s_i \ln\left(1 - e^{-\theta_1 \varphi_i^{-\sigma_1}}\right) + \sum_{i=1}^{\epsilon} t_i \ln\left(1 - e^{-\theta_2 \varphi_i^{-\sigma_2}}\right). (5)
By taking the partial derivatives of Equation (5) with respect to the unknown parameters \sigma_1, \sigma_2, \theta_1, and \theta_2 and setting the resulting expressions to zero, we obtain the normal equations. Solving these equations yields the estimators \hat{\sigma}_1, \hat{\sigma}_2, \hat{\theta}_1, and \hat{\theta}_2 for the respective parameters.
\frac{\partial \ell}{\partial \sigma_1} = \frac{\epsilon_1}{\sigma_1} - \sum_{i=1}^{\epsilon} \nu_i \ln \varphi_i + \theta_1 \sum_{i=1}^{\epsilon} \nu_i \varphi_i^{-\sigma_1} \ln \varphi_i - \sum_{i=1}^{\epsilon} s_i \frac{\theta_1 \varphi_i^{-\sigma_1} \ln \varphi_i \, e^{-\theta_1 \varphi_i^{-\sigma_1}}}{1 - e^{-\theta_1 \varphi_i^{-\sigma_1}}} = 0, (6)
\frac{\partial \ell}{\partial \sigma_2} = \frac{\epsilon_2}{\sigma_2} - \sum_{i=1}^{\epsilon} (1-\nu_i) \ln \varphi_i + \theta_2 \sum_{i=1}^{\epsilon} (1-\nu_i) \varphi_i^{-\sigma_2} \ln \varphi_i - \sum_{i=1}^{\epsilon} t_i \frac{\theta_2 \varphi_i^{-\sigma_2} \ln \varphi_i \, e^{-\theta_2 \varphi_i^{-\sigma_2}}}{1 - e^{-\theta_2 \varphi_i^{-\sigma_2}}} = 0, (7)
\frac{\partial \ell}{\partial \theta_1} = \frac{\epsilon_1}{\theta_1} - \sum_{i=1}^{\epsilon} \nu_i \varphi_i^{-\sigma_1} + \sum_{i=1}^{\epsilon} s_i \frac{\varphi_i^{-\sigma_1} e^{-\theta_1 \varphi_i^{-\sigma_1}}}{1 - e^{-\theta_1 \varphi_i^{-\sigma_1}}} = 0, (8)
\frac{\partial \ell}{\partial \theta_2} = \frac{\epsilon_2}{\theta_2} - \sum_{i=1}^{\epsilon} (1-\nu_i) \varphi_i^{-\sigma_2} + \sum_{i=1}^{\epsilon} t_i \frac{\varphi_i^{-\sigma_2} e^{-\theta_2 \varphi_i^{-\sigma_2}}}{1 - e^{-\theta_2 \varphi_i^{-\sigma_2}}} = 0. (9)
Since the resulting system of nonlinear equations lacks closed-form solutions, the first derivatives of the log-likelihood function with respect to each parameter, as given in Equations (6)–(9), cannot be determined analytically. Consequently, the MLEs must be computed using iterative numerical approximation methods. One effective approach for obtaining these estimates is the Newton–Raphson iteration method.
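As a numerical illustration, one convenient alternative to a hand-coded Newton–Raphson iteration is to maximize the log-likelihood of Equation (5) directly with a derivative-free optimizer. The sketch below assumes NumPy and SciPy are available; the starting point and the use of Nelder–Mead are illustrative choices, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, phi, nu, s, t):
    """Negative of the joint log-likelihood (Equation (5)) for the G-II model
    under joint progressive censoring. Here s1, s2 denote sigma_1, sigma_2,
    while s, t are the censoring-scheme vectors."""
    s1, s2, th1, th2 = params
    if min(params) <= 0:                       # all parameters must stay positive
        return np.inf
    phi, nu, s, t = map(np.asarray, (phi, nu, s, t))
    e1, e2 = nu.sum(), (1 - nu).sum()
    a1 = np.exp(-th1 * phi ** (-s1))
    a2 = np.exp(-th2 * phi ** (-s2))
    ll = (e1 * np.log(s1) + e1 * np.log(th1) + e2 * np.log(s2) + e2 * np.log(th2)
          - (s1 + 1) * np.sum(nu * np.log(phi))
          - (s2 + 1) * np.sum((1 - nu) * np.log(phi))
          - th1 * np.sum(nu * phi ** (-s1))
          - th2 * np.sum((1 - nu) * phi ** (-s2))
          + np.sum(np.where(s > 0, s * np.log1p(-a1), 0.0))
          + np.sum(np.where(t > 0, t * np.log1p(-a2), 0.0)))
    return -ll

def fit_mle(phi, nu, s, t, start=(1.0, 1.0, 1.0, 1.0)):
    """Maximize the log-likelihood numerically from a crude starting point."""
    res = minimize(neg_loglik, start, args=(phi, nu, s, t), method="Nelder-Mead")
    return res.x
```

The `np.where` guards keep the censoring terms at zero whenever s_i = 0 or t_i = 0, avoiding 0 x (-inf) artifacts in floating point.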

Existence and Uniqueness of MLEs

This part analyzes the existence and uniqueness of the MLEs for the unknown model parameters. Define
G_{\sigma_1}(\sigma_1) = \frac{\partial \ell}{\partial \sigma_1}, \quad G_{\sigma_2}(\sigma_2) = \frac{\partial \ell}{\partial \sigma_2}, \quad G_{\theta_1}(\theta_1) = \frac{\partial \ell}{\partial \theta_1}, \quad G_{\theta_2}(\theta_2) = \frac{\partial \ell}{\partial \theta_2}.
From Equation (6), if \theta_1 > 0 is fixed, we find
\lim_{\sigma_1 \to 0^+} G_{\sigma_1}(\sigma_1) = +\infty,
and
\lim_{\sigma_1 \to \infty} G_{\sigma_1}(\sigma_1) = -\sum_{i=1}^{\epsilon} \nu_i \ln \varphi_i + \lim_{\sigma_1 \to \infty} \left[ \theta_1 \sum_{i=1}^{\epsilon} \nu_i \varphi_i^{-\sigma_1} \ln \varphi_i - \sum_{i=1}^{\epsilon} s_i \frac{\theta_1 \varphi_i^{-\sigma_1} \ln \varphi_i \, e^{-\theta_1 \varphi_i^{-\sigma_1}}}{1 - e^{-\theta_1 \varphi_i^{-\sigma_1}}} \right] < 0.
Similarly, using Equation (7), we obtain
\lim_{\sigma_2 \to 0^+} G_{\sigma_2}(\sigma_2) = +\infty \quad \text{and} \quad \lim_{\sigma_2 \to \infty} G_{\sigma_2}(\sigma_2) < 0.
From Equation (8), if \sigma_1 > 0 is fixed, we find
\lim_{\theta_1 \to 0^+} G_{\theta_1}(\theta_1) = +\infty,
and, since e^{-\theta_1 \varphi_i^{-\sigma_1}} \to 0 as \theta_1 \to \infty,
\lim_{\theta_1 \to \infty} G_{\theta_1}(\theta_1) = -\sum_{i=1}^{\epsilon} \nu_i \varphi_i^{-\sigma_1} < 0.
Similarly, using Equation (9), we obtain
\lim_{\theta_2 \to 0^+} G_{\theta_2}(\theta_2) = +\infty \quad \text{and} \quad \lim_{\theta_2 \to \infty} G_{\theta_2}(\theta_2) = -\sum_{i=1}^{\epsilon} (1-\nu_i) \varphi_i^{-\sigma_2} < 0.
Moreover, the second derivatives satisfy G'_{\sigma_1}(\sigma_1) = \partial^2 \ell / \partial \sigma_1^2 < 0, G'_{\sigma_2}(\sigma_2) = \partial^2 \ell / \partial \sigma_2^2 < 0, G'_{\theta_1}(\theta_1) = \partial^2 \ell / \partial \theta_1^2 < 0, and G'_{\theta_2}(\theta_2) = \partial^2 \ell / \partial \theta_2^2 < 0, so each of G_{\sigma_1}, G_{\sigma_2}, G_{\theta_1}, and G_{\theta_2} is continuous and strictly decreasing on (0, \infty), tending from +\infty to a negative limit. Each function therefore crosses zero exactly once, ensuring the existence and uniqueness of the maximum likelihood estimators of \sigma_1, \sigma_2, \theta_1, and \theta_2.

2.2. Asymptotic Normal Confidence Intervals

To construct the ACIs, the asymptotic variance–covariance matrix is required. This matrix is obtained by inverting the Fisher information matrix (FIM). The FIM I ( ζ ) is given below, and the maximum likelihood estimator (MLE) of ζ = ( σ 1 , σ 2 , θ 1 , θ 2 ) is expressed as ζ ^ = ( σ 1 ^ , σ 2 ^ , θ 1 ^ , θ 2 ^ ) .
I(\zeta) = -E \begin{pmatrix} \frac{\partial^2 \ell}{\partial \sigma_1^2} & \frac{\partial^2 \ell}{\partial \sigma_1 \partial \sigma_2} & \frac{\partial^2 \ell}{\partial \sigma_1 \partial \theta_1} & \frac{\partial^2 \ell}{\partial \sigma_1 \partial \theta_2} \\ \frac{\partial^2 \ell}{\partial \sigma_2 \partial \sigma_1} & \frac{\partial^2 \ell}{\partial \sigma_2^2} & \frac{\partial^2 \ell}{\partial \sigma_2 \partial \theta_1} & \frac{\partial^2 \ell}{\partial \sigma_2 \partial \theta_2} \\ \frac{\partial^2 \ell}{\partial \theta_1 \partial \sigma_1} & \frac{\partial^2 \ell}{\partial \theta_1 \partial \sigma_2} & \frac{\partial^2 \ell}{\partial \theta_1^2} & \frac{\partial^2 \ell}{\partial \theta_1 \partial \theta_2} \\ \frac{\partial^2 \ell}{\partial \theta_2 \partial \sigma_1} & \frac{\partial^2 \ell}{\partial \theta_2 \partial \sigma_2} & \frac{\partial^2 \ell}{\partial \theta_2 \partial \theta_1} & \frac{\partial^2 \ell}{\partial \theta_2^2} \end{pmatrix}.
The covariance matrix of the parameter estimates ( ζ ) is given by the inverse of the FIM:
\mathrm{Cov}(\hat{\zeta}) = I^{-1}(\hat{\zeta}) = \begin{pmatrix} \mathrm{Var}(\hat{\sigma}_1) & \mathrm{Cov}(\hat{\sigma}_1, \hat{\sigma}_2) & \mathrm{Cov}(\hat{\sigma}_1, \hat{\theta}_1) & \mathrm{Cov}(\hat{\sigma}_1, \hat{\theta}_2) \\ \mathrm{Cov}(\hat{\sigma}_2, \hat{\sigma}_1) & \mathrm{Var}(\hat{\sigma}_2) & \mathrm{Cov}(\hat{\sigma}_2, \hat{\theta}_1) & \mathrm{Cov}(\hat{\sigma}_2, \hat{\theta}_2) \\ \mathrm{Cov}(\hat{\theta}_1, \hat{\sigma}_1) & \mathrm{Cov}(\hat{\theta}_1, \hat{\sigma}_2) & \mathrm{Var}(\hat{\theta}_1) & \mathrm{Cov}(\hat{\theta}_1, \hat{\theta}_2) \\ \mathrm{Cov}(\hat{\theta}_2, \hat{\sigma}_1) & \mathrm{Cov}(\hat{\theta}_2, \hat{\sigma}_2) & \mathrm{Cov}(\hat{\theta}_2, \hat{\theta}_1) & \mathrm{Var}(\hat{\theta}_2) \end{pmatrix}.
We use the following asymptotic normality result to compute the ACIs:
(\hat{\zeta} - \zeta) \xrightarrow{d} N(0, I^{-1}(\zeta)).
Under specific regularity conditions, the distribution of ζ ^ is approximately normal with mean ζ and covariance matrix I 1 ( ζ ) .
It is difficult to derive a precise closed-form algebraic expression for I(\zeta). Utilizing the consistency of the MLE, we approximate I^{-1}(\zeta) by I^{-1}(\hat{\zeta}) to estimate the ACIs for the unknown parameters \zeta. This yields the following confidence intervals:
\hat{\sigma}_1 \pm Z_{\eta/2} \sqrt{\widehat{\mathrm{var}}(\hat{\sigma}_1)}, \quad \hat{\sigma}_2 \pm Z_{\eta/2} \sqrt{\widehat{\mathrm{var}}(\hat{\sigma}_2)}, \quad \hat{\theta}_1 \pm Z_{\eta/2} \sqrt{\widehat{\mathrm{var}}(\hat{\theta}_1)}, \quad \text{and} \quad \hat{\theta}_2 \pm Z_{\eta/2} \sqrt{\widehat{\mathrm{var}}(\hat{\theta}_2)}.
The estimated variances var ^ ( σ 1 ^ ) , var ^ ( σ 2 ^ ) , var ^ ( θ 1 ^ ) , and var ^ ( θ 2 ^ ) correspond to the diagonal elements of I 1 ( ζ ^ ) .
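In practice, I^{-1}(\hat{\zeta}) can be approximated numerically when analytic second derivatives are unwieldy. The following is a minimal generic sketch (not tied to the paper's code) that builds the observed information from a finite-difference Hessian of the log-likelihood at the MLE:

```python
import numpy as np

def normal_acis(loglik, mle, z=1.96, h=1e-4):
    """Asymptotic normal CIs: approximate the observed information matrix
    (negative Hessian of the log-likelihood at the MLE) by central finite
    differences, invert it, and read estimated variances off the diagonal."""
    x = np.asarray(mle, dtype=float)
    p = len(x)
    H = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            ei = np.eye(p)[i] * h
            ej = np.eye(p)[j] * h
            H[i, j] = (loglik(x + ei + ej) - loglik(x + ei - ej)
                       - loglik(x - ei + ej) + loglik(x - ei - ej)) / (4 * h * h)
    cov = np.linalg.inv(-H)                   # I^{-1} evaluated at the MLE
    se = np.sqrt(np.diag(cov))
    return [(m - z * s, m + z * s) for m, s in zip(x, se)]
```

For a quadratic log-likelihood the finite-difference Hessian is exact, so the helper reproduces the textbook normal-mean interval, which makes it easy to validate before applying it to the four-parameter G-II likelihood.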

3. Bayesian Estimation

BE is a cornerstone of modern statistical inference, offering a robust framework for updating beliefs in light of new evidence. At its core, Bayesian methods utilize Bayes’ theorem to combine prior knowledge with observed data, yielding posterior distributions that reflect updated probabilities. This approach is particularly valuable in situations where prior information is available or when dealing with complex models that require flexible parameter estimation. The importance of BE lies in its ability to quantify uncertainty in a coherent and intuitive manner, making it a powerful tool for decision-making and predictive modeling. It is widely applied in diverse fields such as machine learning, epidemiology, finance, and social sciences, where incorporating prior expertise can significantly enhance the accuracy of inferences. Moreover, Bayesian methods excel in handling small datasets, as they leverage prior distributions to compensate for limited data. By providing a probabilistic interpretation of parameters, BE bridges the gap between theoretical statistics and real-world applications, enabling researchers to draw meaningful conclusions while accounting for uncertainty. Its adaptability and interpretability make it an indispensable tool in the statistician’s arsenal.

3.1. Prior Distribution

The prior distribution, which represents our pre-existing beliefs or prior knowledge about a parameter before we see any data, is crucial to the field of Bayesian statistics. This distribution serves as the foundation upon which the posterior distribution is constructed, combining prior knowledge with new evidence through Bayes’ theorem. The importance of the prior distribution lies in its ability to influence the inference process, especially in scenarios with limited data, where it can guide the analysis towards more plausible conclusions. However, the choice of prior must be made judiciously, as an overly restrictive or biased prior can lead to misleading results. Conversely, a well-chosen prior can enhance the accuracy and robustness of statistical inferences, making it an indispensable tool in Bayesian analysis. Whether informative or non-informative, the prior distribution bridges the gap between subjective assumptions and objective data, enabling a coherent framework for updating beliefs in light of new information.
In this section, we present the Bayesian estimates for the unknown parameters, along with their corresponding CRIs, utilizing the Joint PCS as outlined earlier. Our main focus is on the squared error loss (SEL) and LINEX loss functions.
The prior distributions for σ 1 , σ 2 , θ 1 , and θ 2 will now be established. To ensure that the prior and posterior densities belong to comparable families, it is ideal for the model parameters to be independent. These chosen priors allow the posterior distribution to be handled analytically and computed more efficiently. For the priors of σ 1 , σ 2 , θ 1 , and θ 2 , it is appropriate to assume that these four parameters follow independent gamma distributions. The probability density functions for these distributions are given by:
\sigma_1 \sim \mathrm{Gamma}(a_1, b_1), \quad \pi_1(\sigma_1) \propto \sigma_1^{a_1 - 1} e^{-b_1 \sigma_1}, \quad \sigma_1 > 0, \ a_1, b_1 > 0, (15)
\sigma_2 \sim \mathrm{Gamma}(a_2, b_2), \quad \pi_2(\sigma_2) \propto \sigma_2^{a_2 - 1} e^{-b_2 \sigma_2}, \quad \sigma_2 > 0, \ a_2, b_2 > 0, (16)
\theta_1 \sim \mathrm{Gamma}(a_3, b_3), \quad \pi_3(\theta_1) \propto \theta_1^{a_3 - 1} e^{-b_3 \theta_1}, \quad \theta_1 > 0, \ a_3, b_3 > 0, (17)
\theta_2 \sim \mathrm{Gamma}(a_4, b_4), \quad \pi_4(\theta_2) \propto \theta_2^{a_4 - 1} e^{-b_4 \theta_2}, \quad \theta_2 > 0, \ a_4, b_4 > 0. (18)
To incorporate prior information about σ 1 , σ 2 , θ 1 , and θ 2 , the parameters a i and b i (where i = 1 , 2 , 3 , 4 ) are selected.
The joint prior density for σ 1 , σ 2 , θ 1 , and θ 2 is derived by combining the prior distributions specified in Equations (15)–(18) as follows:
\pi(\sigma_1, \sigma_2, \theta_1, \theta_2) \propto \sigma_1^{a_1-1} \sigma_2^{a_2-1} \theta_1^{a_3-1} \theta_2^{a_4-1} e^{-b_1 \sigma_1 - b_2 \sigma_2 - b_3 \theta_1 - b_4 \theta_2}. (19)

3.2. Posterior Distribution

The posterior distribution is a fundamental concept in Bayesian statistics, representing the updated beliefs about a parameter after observing new data. It combines prior information, expressed through the prior distribution, with the likelihood of the observed data to produce a comprehensive view of the parameter’s possible values. This integration is achieved through Bayes’ theorem, which mathematically formalizes how evidence modifies prior beliefs. The importance of the posterior distribution lies in its ability to provide credible intervals and point estimates that reflect uncertainty in parameter estimation. Additionally, it serves as a basis for making predictions and decisions in various fields, including medicine, finance, and machine learning. By incorporating past knowledge with empirical data, the posterior distribution allows statisticians and researchers to make more informed conclusions and enhance the robustness of their analyses.
The joint posterior density is expressed as:
\pi(\sigma_1, \sigma_2, \theta_1, \theta_2 \mid \text{data}) = \frac{L(\zeta; \text{data}) \, \pi(\sigma_1, \sigma_2, \theta_1, \theta_2)}{\int_0^\infty \int_0^\infty \int_0^\infty \int_0^\infty L(\zeta; \text{data}) \, \pi(\sigma_1, \sigma_2, \theta_1, \theta_2) \, d\sigma_1 \, d\sigma_2 \, d\theta_1 \, d\theta_2}.
Equations (4) and (19) are combined to derive the joint posterior density function for σ 1 , σ 2 , θ 1 , and θ 2 , given by:
\pi(\sigma_1, \sigma_2, \theta_1, \theta_2 \mid \text{data}) \propto \sigma_1^{\epsilon_1 + a_1 - 1} \sigma_2^{\epsilon_2 + a_2 - 1} \theta_1^{\epsilon_1 + a_3 - 1} \theta_2^{\epsilon_2 + a_4 - 1} e^{-b_1 \sigma_1 - b_2 \sigma_2 - b_3 \theta_1 - b_4 \theta_2} \times e^{-(\sigma_1+1) \sum_{i=1}^{\epsilon} \nu_i \ln \varphi_i} \, e^{-(\sigma_2+1) \sum_{i=1}^{\epsilon} (1-\nu_i) \ln \varphi_i} \, e^{-\theta_1 \sum_{i=1}^{\epsilon} \nu_i \varphi_i^{-\sigma_1}} \times e^{-\theta_2 \sum_{i=1}^{\epsilon} (1-\nu_i) \varphi_i^{-\sigma_2}} \, e^{\sum_{i=1}^{\epsilon} s_i \ln\left(1 - e^{-\theta_1 \varphi_i^{-\sigma_1}}\right)} \, e^{\sum_{i=1}^{\epsilon} t_i \ln\left(1 - e^{-\theta_2 \varphi_i^{-\sigma_2}}\right)}. (20)
It can be challenging to derive explicit formulas for the marginal posterior distributions, as shown in Equation (20). To address this, the MCMC approach can be used to generate samples from Equation (20). The conditional posterior density functions for θ 1 , θ 2 , σ 1 , and σ 2 are given below:
\pi_1(\sigma_1 \mid \sigma_2, \theta_1, \theta_2) \propto \sigma_1^{\epsilon_1 + a_1 - 1} \, e^{-\sigma_1 \left[ b_1 + \sum_{i=1}^{\epsilon} \nu_i \ln \varphi_i \right]} \, e^{-\theta_1 \sum_{i=1}^{\epsilon} \nu_i \varphi_i^{-\sigma_1}} \, e^{\sum_{i=1}^{\epsilon} s_i \ln\left(1 - e^{-\theta_1 \varphi_i^{-\sigma_1}}\right)}, (21)
\pi_2(\sigma_2 \mid \sigma_1, \theta_1, \theta_2) \propto \sigma_2^{\epsilon_2 + a_2 - 1} \, e^{-\sigma_2 \left[ b_2 + \sum_{i=1}^{\epsilon} (1-\nu_i) \ln \varphi_i \right]} \, e^{-\theta_2 \sum_{i=1}^{\epsilon} (1-\nu_i) \varphi_i^{-\sigma_2}} \, e^{\sum_{i=1}^{\epsilon} t_i \ln\left(1 - e^{-\theta_2 \varphi_i^{-\sigma_2}}\right)}, (22)
\pi_3(\theta_1 \mid \sigma_1, \sigma_2, \theta_2) \propto \theta_1^{\epsilon_1 + a_3 - 1} \, e^{-\theta_1 \left[ b_3 + \sum_{i=1}^{\epsilon} \nu_i \varphi_i^{-\sigma_1} \right]} \, e^{\sum_{i=1}^{\epsilon} s_i \ln\left(1 - e^{-\theta_1 \varphi_i^{-\sigma_1}}\right)}, (23)
\pi_4(\theta_2 \mid \sigma_1, \sigma_2, \theta_1) \propto \theta_2^{\epsilon_2 + a_4 - 1} \, e^{-\theta_2 \left[ b_4 + \sum_{i=1}^{\epsilon} (1-\nu_i) \varphi_i^{-\sigma_2} \right]} \, e^{\sum_{i=1}^{\epsilon} t_i \ln\left(1 - e^{-\theta_2 \varphi_i^{-\sigma_2}}\right)}. (24)
Equations (21)–(24) demonstrate issues related to mathematical tractability. To estimate the unknown parameters, we develop Bayes estimators utilizing the SEL and the LINEX loss functions.

3.3. Loss Functions

Loss functions play a crucial role in statistical estimation, influencing the accuracy and robustness of parameter estimates. Two widely used loss functions in BE are the squared error loss (SEL) and the LINEX loss function. Each has distinct properties and applications depending on the nature of the estimation problem.

3.3.1. SEL Function

The SEL function is defined as:
L(\Theta, \theta) = (\Theta - \theta)^2,
where Θ is the estimate of the true parameter θ . SEL penalizes deviations symmetrically, giving equal weight to overestimation and underestimation. This function is commonly used due to its mathematical simplicity and optimality under the mean squared error criterion. However, it assumes that over- and under-estimation have identical consequences, which may not always hold in real-world applications.

3.3.2. LINEX Loss Function

The LINEX loss function proposed by Varian [32] is expressed as:
L(\Theta, \theta) = e^{\varepsilon(\Theta - \theta)} - \varepsilon(\Theta - \theta) - 1,
where ε is a constant controlling the asymmetry of the loss function. When ε > 0 , overestimation is penalized more heavily than underestimation, and when ε < 0 , the opposite occurs. The LINEX function is particularly useful in applications where the cost of overestimating a parameter is not equal to the cost of underestimating it, such as reliability analysis and survival studies.
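Given a set of posterior draws (for instance, from an MCMC run), both loss functions lead to simple sample-based estimates; the following is a minimal sketch with illustrative function names:

```python
import math

def bayes_sel(draws):
    """Bayes estimate under squared error loss: the posterior mean."""
    return sum(draws) / len(draws)

def bayes_linex(draws, eps):
    """Bayes estimate under LINEX loss with asymmetry parameter eps:
    -(1/eps) * ln E[exp(-eps * theta)], with the posterior expectation
    approximated by an average over the draws."""
    m = sum(math.exp(-eps * d) for d in draws) / len(draws)
    return -math.log(m) / eps
```

By Jensen's inequality, for eps > 0 the LINEX estimate sits below the posterior mean, reflecting the heavier penalty on overestimation described above.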

3.4. MCMC Method

MCMC is a fundamental tool in statistical estimation, especially in Bayesian inference, where direct sampling from complex probability distributions is often infeasible. MCMC generates dependent samples from a target distribution by constructing a Markov chain that converges to the desired distribution over time. This allows for approximating posterior distributions, estimating model parameters, and solving high-dimensional problems. The M–H algorithm is used when the target distribution is complex and cannot be sampled directly (see Metropolis et al. [33] and Hastings [34]). It works by proposing candidate samples and accepting or rejecting them based on an acceptance probability that ensures convergence to the correct distribution. Gibbs sampling is a special case of MCMC used when conditional distributions of parameters are available and easier to sample from directly. It is particularly useful for high-dimensional problems where parameters can be updated sequentially, reducing computational complexity. M–H is preferred when conditional distributions are intractable, while Gibbs sampling is efficient when conditional distributions have closed-form solutions.
M–H algorithm
  • Step 1: The process begins with the initial values σ 1 ( 0 ) , σ 2 ( 0 ) , θ 1 ( 0 ) , θ 2 ( 0 ) , where K represents the burn-in period.
  • Step 2: Set j = 1 .
  • Step 3: Equations (21)–(24) are used to generate \sigma_1^{(j)}, \sigma_2^{(j)}, \theta_1^{(j)}, and \theta_2^{(j)} through the M–H algorithm. The normal proposal distributions for this step are N(\sigma_1^{(j-1)}, \mathrm{var}(\sigma_1)), N(\sigma_2^{(j-1)}, \mathrm{var}(\sigma_2)), N(\theta_1^{(j-1)}, \mathrm{var}(\theta_1)), and N(\theta_2^{(j-1)}, \mathrm{var}(\theta_2)).
    (I)
    Generate the proposed values \sigma_1^*, \sigma_2^*, \theta_1^*, and \theta_2^* from their corresponding normal proposal distributions.
    (II)
    Using the steps listed below, determine the probability of acceptance.
    r_1 = \min\left\{1, \frac{\pi_1(\sigma_1^* \mid \sigma_2^{(j-1)}, \theta_1^{(j-1)}, \theta_2^{(j-1)}, \text{data})}{\pi_1(\sigma_1^{(j-1)} \mid \sigma_2^{(j-1)}, \theta_1^{(j-1)}, \theta_2^{(j-1)}, \text{data})}\right\},
    r_2 = \min\left\{1, \frac{\pi_2(\sigma_2^* \mid \sigma_1^{(j-1)}, \theta_1^{(j-1)}, \theta_2^{(j-1)}, \text{data})}{\pi_2(\sigma_2^{(j-1)} \mid \sigma_1^{(j-1)}, \theta_1^{(j-1)}, \theta_2^{(j-1)}, \text{data})}\right\},
    r_3 = \min\left\{1, \frac{\pi_3(\theta_1^* \mid \sigma_1^{(j)}, \sigma_2^{(j)}, \theta_2^{(j-1)}, \text{data})}{\pi_3(\theta_1^{(j-1)} \mid \sigma_1^{(j)}, \sigma_2^{(j)}, \theta_2^{(j-1)}, \text{data})}\right\},
    r_4 = \min\left\{1, \frac{\pi_4(\theta_2^* \mid \sigma_1^{(j)}, \sigma_2^{(j)}, \theta_1^{(j-1)}, \text{data})}{\pi_4(\theta_2^{(j-1)} \mid \sigma_1^{(j)}, \sigma_2^{(j)}, \theta_1^{(j-1)}, \text{data})}\right\}.
    (III)
    Select a uniformly distributed random variable u with values ranging from 0 to 1.
    (IV)
    If u \le r_1, accept the proposal and set \sigma_1^{(j)} = \sigma_1^*; otherwise, set \sigma_1^{(j)} = \sigma_1^{(j-1)}.
    (V)
    If u \le r_2, accept the proposal and set \sigma_2^{(j)} = \sigma_2^*; otherwise, set \sigma_2^{(j)} = \sigma_2^{(j-1)}.
    (VI)
    If u \le r_3, accept the proposal and set \theta_1^{(j)} = \theta_1^*; otherwise, set \theta_1^{(j)} = \theta_1^{(j-1)}.
    (VII)
    If u \le r_4, accept the proposal and set \theta_2^{(j)} = \theta_2^*; otherwise, set \theta_2^{(j)} = \theta_2^{(j-1)}.
  • Step 4: Set j = j + 1.
  • Step 5: Repeat Steps 2 through 4 N times, where N denotes the total number of MCMC iterations. Under the SEL function, the estimated posterior mean of each component \beta of \zeta = (\sigma_1, \sigma_2, \theta_1, \theta_2) is calculated as follows:
    \hat{\beta}_{BS} = E[\beta \mid \underline{x}] = \frac{1}{N - K} \sum_{j=K+1}^{N} \beta^{(j)}.
    Finally, the Bayesian estimate of \beta under the LINEX loss function is
    \hat{\beta}_{BL} = -\frac{1}{\varepsilon} \ln\left[ \frac{1}{N - K} \sum_{j=K+1}^{N} e^{-\varepsilon \beta^{(j)}} \right].
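The M–H steps above can be sketched as a generic component-wise random-walk sampler. This is a minimal illustration (function names, step sizes, and the test target are placeholders, not the paper's settings); updating one coordinate at a time against the joint posterior is equivalent to working with the full conditionals, since the other coordinates are held fixed:

```python
import math
import random

def mh_chain(logpost, init, step, n_iter, burn, rng):
    """Component-wise random-walk Metropolis-Hastings sampler.
    logpost: log of the (unnormalized) joint posterior density;
    init: starting vector; step: proposal standard deviations;
    returns the draws collected after the burn-in period."""
    x = list(init)
    cur = logpost(x)
    draws = []
    for it in range(n_iter):
        for k in range(len(x)):
            prop = x[:]
            prop[k] = rng.gauss(x[k], step[k])        # normal proposal
            lp = logpost(prop)
            # accept with probability min(1, pi(prop) / pi(x)) on the log scale
            if math.log(rng.random() + 1e-300) <= lp - cur:
                x, cur = prop, lp
        if it >= burn:
            draws.append(x[:])
    return draws
```

The tiny `1e-300` offset only guards against `log(0)` when the uniform draw returns exactly zero; the acceptance rule is otherwise the standard M–H ratio of Step 3.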

4. Real Data Analysis

4.1. Example 1

The dataset represents the oil breakdown times of insulating fluid exposed to various constant elevated test voltages. Originally reported by Nelson [35], the data were collected under different stress levels. In this study, we focus on the dataset recorded at a stress level of 30 kilovolts (kV), representing normal use conditions, and at 32 kV, representing accelerated conditions. To simplify computations, each observed value in the original dataset has been divided by 10.
We employed various goodness-of-fit tests to assess the model’s performance, including the following:
  • Anderson–Darling (A–D) test.
  • Cramér–von Mises (C–M) test.
  • Kuiper test.
  • Kolmogorov–Smirnov (K–S) test.
Since the p-values in Table 1 are greater than 0.05, the null hypothesis that the data follow the G-II distribution cannot be rejected. This indicates that both datasets follow the G-II distribution, demonstrating the applicability of the proposed model to real data. To further illustrate this, Figure 3 presents the fitted and empirical survival functions of the G-II distribution for both datasets, showing strong agreement between them.
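As an illustration of how such a distributional check can be coded, the following minimal K–S sketch tests a sample against a G-II CDF. The parameter values and the synthetic quantile grid are stand-ins; in the analysis above, the observed breakdown data and the fitted MLEs would be used instead (the A–D, C–M, and Kuiper tests are applied analogously):

```python
import numpy as np
from scipy import stats

def gumbel2_cdf(x, theta, sigma):
    """CDF of the Gumbel Type-II distribution, F(x) = exp(-theta * x**(-sigma))."""
    return np.exp(-theta * np.asarray(x, dtype=float) ** (-sigma))

theta, sigma = 2.0, 1.5                          # hypothetical parameter values
u = (np.arange(1, 51) - 0.5) / 50                # probability grid
sample = (theta / (-np.log(u))) ** (1 / sigma)   # G-II quantiles via the inverse CDF
res = stats.kstest(sample, lambda x: gumbel2_cdf(x, theta, sigma))
```

Because the synthetic sample sits exactly on the theoretical quantiles, the K–S statistic is tiny and the p-value is near one; a real dataset compatible with the fitted G-II model would similarly yield a p-value above 0.05.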
To further validate the model’s fit, Figure 4 and Figure 5 display the quantile plots and the observed versus expected probability plots, respectively. Figure 6 and Figure 7 present the smooth histogram and box-and-whisker chart for Dataset-I and Dataset-II. Figure 8 and Figure 9 illustrate the unimodal behavior of the profile log-likelihood function, which peaks within specific parameter ranges: 1.0 to 1.2 for σ 1 , 2.5 to 3.5 for θ 1 , 0.4 to 0.6 for σ 2 , and 0.5 to 0.7 for θ 2 . These distinct peaks confirm the existence of unique MLEs for θ 1 , σ 1 , θ 2 , and σ 2 . Finally, Figure 10 presents the histograms for both datasets, providing further evidence of the distribution’s suitability.
The censoring method outlined below was used to generate a Joint PCS sample based on the previously provided datasets. To perform Joint PCS with r = 10 , the first sample was set to m = 11 and the second to n = 15 . The censoring vectors are defined as follows:
S = ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 6 ) , R = ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 16 ) , T = ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 10 ) .
The datasets created are as follows:
w = ( 0.027 , 0.04 , 0.069 , 0.079 , 0.275 , 0.391 , 0.774 , 0.988 , 1.395 , 1.593 ) , z = ( 0 , 0 , 0 , 0 , 0 , 0 , 1 , 0 , 0 , 0 ) .
To determine estimates for \sigma_1, \sigma_2, \theta_1, and \theta_2, we used the MLE approach on the data described above. Table 2 displays the point estimates, while Tables 3 and 4 give the 95% ACIs for \sigma_1, \sigma_2, \theta_1, and \theta_2. For the Bayesian estimation, we ran 10,000 iterations of the MCMC method, discarding the first 2000 as burn-in to guarantee convergence. For the prior distributions, the hyperparameters a_i and b_i were set to 10^{-3}, which is near zero. The corresponding point estimates are shown in Table 2. Bayesian estimates were constructed for \sigma_1, \sigma_2, \theta_1, and \theta_2 using both the SEL and LINEX loss functions. The 95% CRIs for \sigma_1, \sigma_2, \theta_1, and \theta_2 are also included in Tables 3 and 4.

4.2. Example 2

This section presents a real dataset to illustrate the practical use of the proposed methods. The dataset was originally obtained from Zimmer et al. [36] and includes the following:
Data III Failure times (in hours) for 15 devices: 0.19, 0.78, 0.96, 1.31, 2.78, 3.16, 4.15, 4.76, 4.85, 6.5, 7.35, 8.01, 8.27, 12.06, 31.75.
Data IV First failure times (in months) for 20 electronic cards: 0.9, 1.5, 2.3, 3.2, 3.9, 5.0, 6.2, 7.5, 8.3, 10.4, 11.1, 12.6, 15.0, 16.3, 19.3, 22.6, 24.8, 31.5, 38.1, 53.0.
We applied several goodness-of-fit tests to evaluate the performance of the model, including the following:
  • A–D test.
  • C–M test.
  • Kuiper test.
  • K–S test.
As the p-values in Table 5 exceed 0.05, the null hypothesis cannot be rejected. This suggests that both datasets follow the G-II distribution, confirming the model’s applicability to real-world data. Figure 11 further supports this by comparing the fitted and empirical survival functions for both datasets, which show a strong alignment.
To further verify the model’s fit, Figure 12 and Figure 13 show the quantile plots and the observed versus expected probability plots. Figure 14 and Figure 15 display the smooth histograms and box-and-whisker plots for Dataset-III and Dataset-IV. Figure 16 and Figure 17 illustrate the unimodal shape of the profile log-likelihood functions, with peaks observed in the following parameter ranges: 0.7–0.8 for σ 1 , 1.4–1.8 for θ 1 , 0.8–1.2 for σ 2 , and 4.0–5.0 for θ 2 . These distinct peaks confirm the presence of unique maximum likelihood estimates for θ 1 , σ 1 , θ 2 , and σ 2 . Finally, Figure 18 presents the histograms of both datasets, offering additional evidence of the distribution’s suitability.
The censoring scheme described below was used to generate a joint PCS sample based on the previously provided datasets. To implement the joint PCS with r = 15 , the first sample size was set to m = 15 and the second to n = 20 . The censoring vectors are specified as follows:
S = ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 8 ) , R = ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 20 ) , T = ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 12 ) .
The generated datasets are as follows:
w = ( 0.19 , 0.78 , 0.9 , 0.96 , 1.31 , 1.5 , 2.3 , 2.78 , 3.16 , 3.2 , 3.9 , 4.15 , 4.76 , 4.85 , 5 ) , z = ( 1 , 1 , 0 , 1 , 1 , 0 , 0 , 1 , 1 , 0 , 0 , 1 , 1 , 1 , 0 ) .
Estimates for σ 1 , σ 2 , θ 1 , and θ 2 were obtained by applying the MLE method to the joint progressively censored data. Table 6 presents the corresponding point estimates, while Table 7 and Table 8 provide the 95% ACIs for these parameters. For the Bayesian estimation, we performed 12,000 MCMC iterations, discarding the first 3000 as burn-in to ensure convergence. The hyperparameters a i and b i were set to 10^{-3}, close to zero, for the prior distributions. The results are summarized in Table 6. Bayesian estimates for σ 1 , σ 2 , θ 1 , and θ 2 were obtained under both the SEL and LINEX loss functions. Table 7 and Table 8 also include the 95% CRIs for these parameters.
To further validate the applicability of the proposed model, we conducted an in-depth analysis using the newly added datasets. The evaluation included multiple goodness-of-fit tests, such as the A–D, C–M, K–S, and Kuiper tests. In addition, graphical diagnostics were used, including fitted versus empirical survival functions, quantile plots, and probability plots. The analysis results confirmed that the proposed Gumbel Type-II model under joint progressive censoring provides improved fitting accuracy and narrower CRIs when compared with classical estimation methods. These findings support the practical advantages of the proposed approach in modeling lifetime and reliability data.

5. Simulation Study

This section evaluates the performance of the proposed estimation methods under Joint PCS using Monte Carlo simulations. The simulations were conducted using Mathematica version 10. The comparison focuses on point estimators for lifetime parameters based on the MSE:
MSE( Θ̂ ) = (1/1000) · Σ_{i=1}^{1000} ( Θ̂_i − Θ )²,
where Θ̂_i denotes the estimate obtained in the ith replication and Θ is the true parameter value.
A lower MSE indicates improved estimation accuracy. All results are based on 1000 replications. Additionally, average confidence lengths (ACLs) are examined, where a smaller ACL suggests better interval estimation performance. We selected two distinct censoring schemes, detailed below, for different failure counts ( r = 15 , 20 , 25 , 35 , 40 , 50 ) and sample sizes ( m = 15 , 30 , 40 ; n = 20 , 40 , 60 ).
  • Scheme I: R 1 = R 2 = = R m 1 = 0 and R m = n m .
  • Scheme II: R 1 = n m and R 2 = R 3 = = R m 1 = 0 .
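Progressive Type-II censored samples under schemes such as these can be drawn with the simulational algorithm of Balakrishnan and Sandhu [15]. The sketch below assumes the G-II CDF F(x) = exp(-σ x^{-θ}) for the inverse-CDF step; parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def progressive_sample(n, m, R, inv_cdf, rng):
    """Balakrishnan-Sandhu algorithm: a progressive Type-II censored
    sample of m failures from n units with removal scheme R
    (sum(R) must equal n - m)."""
    assert len(R) == m and sum(R) == n - m
    W = rng.uniform(size=m)
    V = np.empty(m)
    for i in range(1, m + 1):
        # exponent i + R_m + R_{m-1} + ... + R_{m-i+1}
        e = i + sum(R[m - i:])
        V[i - 1] = W[i - 1] ** (1.0 / e)
    # ordered uniform progressive censored sample
    U = 1.0 - np.cumprod(V[::-1])
    return inv_cdf(U)

# Assumed G-II quantile function for F(x) = exp(-sigma * x**(-theta)).
inv = lambda u, sigma=1.5, theta=1.5: (sigma / (-np.log(u))) ** (1.0 / theta)

# Scheme I: all removals occur at the last failure (R_m = n - m).
n, m = 20, 15
R = [0] * (m - 1) + [n - m]
x = progressive_sample(n, m, R, inv, rng)
print(np.round(x, 3))
```

Scheme II is obtained by moving all removals to the first failure, i.e., R = [n - m] + [0] * (m - 1).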
The parameter values for both datasets are σ 1 = 1.5 , σ 2 = 2.5 , θ 1 = 1.5 , and θ 2 = 2.5 . We computed the MLEs and 95% confidence intervals for σ 1 , σ 2 , θ 1 , and θ 2 . The mean values of the MLEs and their corresponding confidence interval lengths were determined after 1000 repetitions. The results are presented in Table 9, Table 10, Table 11 and Table 12.
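The replication summaries can be sketched as follows; the replicate estimates and the known-width intervals below are hypothetical stand-ins for the actual MLEs and ACIs, used only to show how the MSE and ACL are computed.

```python
import numpy as np

# Hypothetical replicate estimates of a parameter with true value 1.5,
# standing in for 1000 Monte Carlo MLEs.
true_theta = 1.5
estimates = np.random.default_rng(3).normal(1.52, 0.1, size=1000)

# MSE over replications: lower is better.
mse = np.mean((estimates - true_theta) ** 2)

# Average confidence length (ACL) from per-replicate interval endpoints;
# here an assumed known-sigma 95% normal interval of fixed half-width.
half_width = 1.96 * 0.1
acl = np.mean((estimates + half_width) - (estimates - half_width))
print(round(mse, 4), round(acl, 4))
```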
Additionally, within the BE framework under the SEL and LINEX loss functions, we employed informative gamma priors for σ 1 , σ 2 , θ 1 , and θ 2 . The hyperparameters were set as a i = 0.05 and b i = 0.07 for i = 1 , 2 , 3 , 4 . The values ε = 6 and ε = −6 represent overestimation and underestimation, respectively, where
  • ε = 6 penalizes overestimation more heavily, reflecting situations in which estimates that are too high are the costlier error;
  • ε = −6 penalizes underestimation more heavily, reflecting situations in which estimates that are too low are the costlier error.
These choices test how the estimators perform when errors in one direction are systematically more costly than in the other.
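Given a set of posterior draws, the SEL estimate is the posterior mean, while the LINEX estimate is θ̂ = −(1/ε) ln E[exp(−ε θ)]. A sketch with mock normal posterior draws (an assumption purely for illustration, not the paper's actual posteriors):

```python
import numpy as np

rng = np.random.default_rng(1)
# Mock posterior draws centered near the true value 1.5 used in the simulation.
draws = rng.normal(1.5, 0.05, size=10_000)

def linex_estimate(theta, eps):
    """Bayes estimator under LINEX loss: -(1/eps) * log E[exp(-eps*theta)]."""
    return -np.log(np.mean(np.exp(-eps * theta))) / eps

sel = draws.mean()                   # SEL estimate (posterior mean)
over = linex_estimate(draws, 6.0)    # eps = 6: overestimation penalized
under = linex_estimate(draws, -6.0)  # eps = -6: underestimation penalized
print(sel, over, under)
```

With ε = 6 the estimate is pulled below the posterior mean, and with ε = −6 it is pushed above it, matching the intended asymmetry of the loss.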
Using the MCMC approach, we obtained Bayesian estimates with 95% CRIs for σ 1 , σ 2 , θ 1 , and θ 2 based on 1000 simulations with 21,000 samples each, discarding the initial 5000 iterations as burn-in to ensure convergence. The performance of the estimators for σ 1 , σ 2 , θ 1 , and θ 2 was evaluated using MSE. After 1000 repetitions, we computed the mean values of the Bayes estimates and the lengths of their credible intervals. The results are presented in Table 9, Table 10, Table 11 and Table 12.
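The Metropolis–Hastings step with burn-in can be sketched as follows. The standard-normal log-posterior here is a stand-in for the actual conditional posteriors of the G-II parameters, and the step size is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(2)

def metropolis_hastings(log_post, x0, n_iter=21_000, burn_in=5_000, step=0.1):
    """Random-walk Metropolis-Hastings with a Gaussian proposal;
    the first `burn_in` draws are discarded, as in the study."""
    chain = np.empty(n_iter)
    x, lp = x0, log_post(x0)
    for t in range(n_iter):
        prop = x + step * rng.normal()
        lp_prop = log_post(prop)
        # accept with probability min(1, post(prop)/post(x))
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain[t] = x
    return chain[burn_in:]

# Illustrative target: standard normal log-density up to a constant.
draws = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, step=1.0)
ci = np.percentile(draws, [2.5, 97.5])   # 95% credible interval
print(draws.mean(), ci)
```

The retained draws give the SEL estimate (their mean) and the 95% CRI (their 2.5th and 97.5th percentiles), mirroring the summaries reported in the tables.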
The key findings of the simulation study are as follows:
  • BE methods generally outperform classical MLE, particularly in terms of MSE and interval estimation precision.
  • Joint PCS is shown to be an effective strategy for balancing cost and time while maintaining estimation accuracy.
  • Bayesian estimators provide shorter CRIs compared with ACIs, making them more precise in uncertainty quantification.
  • The LINEX loss function results in more efficient Bayesian estimators compared with the SEL function.
  • The proposed methods effectively model real-world failure data, validating their use in reliability analysis.
  • Extensive Monte Carlo simulations confirm that Bayesian estimates exhibit lower MSE and narrower CRIs, especially when informative priors are used.
  • The M–H algorithm successfully approximates complex posterior distributions, ensuring efficient parameter estimation.

6. Conclusions

This study investigated the estimation of the unknown parameters of the G-II distribution under joint progressive censoring using both classical and Bayesian methods. Estimation techniques were developed and compared, their performance was evaluated through simulations, and they were applied to real data. Classical estimation used maximum likelihood for point estimates and approximate confidence intervals to quantify uncertainty, showing that estimates improve with larger samples; because the likelihood is complex, numerical methods are required for parameter estimation. Bayesian estimation applied MCMC, specifically the Metropolis–Hastings algorithm, to approximate the posterior distributions, incorporating the squared error and LINEX loss functions to compute Bayes estimators and obtain highest posterior density intervals.

Simulations revealed that Bayesian methods, by exploiting prior information, generally yield lower mean squared errors and more precise intervals than classical methods. Monte Carlo comparisons under different censoring schemes and parameter values showed that increasing the sample size reduces the mean squared error and improves precision, with Bayesian credible intervals shorter than classical confidence intervals. Analysis of a real dataset demonstrated the practical use of the proposed methods, confirming that joint progressive censoring models failure data effectively with cost and time benefits and that Bayesian estimation provides more robust results, especially with informative priors.

Overall, the study improves statistical inference for extreme value distributions by refining parameter estimation under joint progressive censoring and underscores that the choice of estimation method should depend on the data and the goals of the analysis. Future research could apply these approaches to other heavy-tailed distributions, explore more flexible priors, and enhance computational techniques for high-dimensional problems.

Author Contributions

Conceptualization, M.M.H.; Methodology, M.M.H.; Software, M.M.H.; Validation, A.M.A., O.S.B. and M.E.B.; Formal analysis, A.M.A. and O.S.B.; Investigation, M.E.B.; Resources, M.E.B.; Data curation, A.M.A. and O.S.B.; Writing—original draft, M.M.H.; Writing—review and editing, M.M.H.; Visualization, M.E.B.; Funding acquisition, A.M.A. All authors have read and agreed to the published version of the manuscript.

Funding

The study was funded by the Ongoing Research Funding program (ORF-2025-538), King Saud University, Riyadh, Saudi Arabia.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All datasets are reported within the article.

Acknowledgments

The authors would like to thank the Ongoing Research Funding program (ORF-2025-538), King Saud University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gumbel, E. Statistics of Extremes; Columbia University Press: New York, NY, USA, 1958. [Google Scholar]
  2. Mousa, M.A.; Jaheen, Z.; Ahmad, A. Bayesian estimation, prediction and characterization for the Gumbel model based on records. Stat. J. Theor. Appl. Stat. 2002, 36, 65–74. [Google Scholar] [CrossRef]
  3. Malinowska, I.; Szynal, D. On a family of Bayesian estimators and predictors for a Gumbel model based on the kth lower records. Appl. Math. 2004, 1, 107–115. [Google Scholar] [CrossRef]
  4. Miladinovic, B.; Tsokos, C.P. Ordinary, Bayes, empirical Bayes, and non-parametric reliability analysis for the modified Gumbel failure model. Nonlinear Anal. Theory Methods Appl. 2009, 71, e1426–e1436. [Google Scholar] [CrossRef]
  5. Nadarajah, S.; Kotz, S. The beta Gumbel distribution. Math. Probl. Eng. 2004, 2004, 323–332. [Google Scholar] [CrossRef]
  6. Feroze, N.; Aslam, M. Bayesian Analysis Of Gumbel Type II Distribution Under Doubly Censored Samples Using Different Loss Functions. Casp. J. Appl. Sci. Res. 2012, 1, 26–43. [Google Scholar]
  7. Abbas, K.; Fu, J.; Tang, Y. Bayesian estimation of Gumbel type-II distribution. Data Sci. J. 2013, 12, 33–46. [Google Scholar] [CrossRef]
  8. Feroze, N.; Aslam, M. Bayesian estimation of two-component mixture of Gumbel type II distribution under informative priors. Int. J. Adv. Sci. Technol. 2013, 53, 11–30. [Google Scholar]
  9. Reyad, H.M.; Ahmed, S.O. E-Bayesian analysis of the Gumbel type-II distribution under type-II censored scheme. Int. J. Adv. Math. Sci. 2015, 3, 108–120. [Google Scholar] [CrossRef]
  10. Sindhu, T.N.; Feroze, N.; Aslam, M. Study of the left censored data from the Gumbel type II distribution under a Bayesian approach. J. Mod. Appl. Stat. Methods 2016, 15, 10–31. [Google Scholar] [CrossRef]
  11. Abbas, K.; Hussain, Z.; Rashid, N.; Ali, A.; Taj, M.; Khan, S.A.; Manzoor, S.; Khalil, U.; Khan, D.M. Bayesian estimation of Gumbel type-II distribution under type-II censoring with medical applications. Comput. Math. Methods Med. 2020, 2020, 1876073. [Google Scholar] [CrossRef]
  12. Qiu, Y.; Gui, W. Statistical Inference for Two Gumbel Type-II Distributions under Joint Type-II Censoring Scheme. Axioms 2023, 12, 572. [Google Scholar] [CrossRef]
  13. Balakrishnan, N.; Aggarwala, R. Progressive Censoring: Theory, Methods, and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2000. [Google Scholar]
  14. Balakrishnan, N.; Cramer, E. The Art of Progressive Censoring; Statistics for Industry and Technology; Birkhäuser: Boston, MA, USA, 2014. [Google Scholar]
  15. Balakrishnan, N.; Sandhu, R.A. A simple simulational algorithm for generating progressive Type-II censored samples. Am. Stat. 1995, 49, 229–230. [Google Scholar] [CrossRef]
  16. Singh, K.; Kumar Mahto, A.; Mani Tripathi, Y.; Wang, L. Estimation in a multicomponent stress-strength model for progressive censored lognormal distribution. Proc. Inst. Mech. Eng. Part O J. Risk Reliab. 2024, 238, 622–642. [Google Scholar]
  17. Hasaballah, M.M.; Tashkandy, Y.A.; Balogun, O.S.; Bakr, M.E. Bayesian and Classical Inference of the Process Capability Index under Progressive Type-II Censoring Scheme. Phys. Scr. 2024, 99, 055241. [Google Scholar] [CrossRef]
  18. Rasouli, A.; Balakrishnan, N. Exact Likelihood Inference for Two Exponential Populations Under Joint Progressive Type-II Censoring. Commun. Stat.-Theory Methods 2010, 39, 2172–2191. [Google Scholar] [CrossRef]
  19. Pandey, R.; Srivastava, P. Bayesian inference for two log-logistic populations under joint progressive type II censoring schemes. Int. J. Syst. Assur. Eng. Manag. 2022, 13, 2981–2991. [Google Scholar] [CrossRef]
  20. Qiao, Y.; Gui, W. Statistical Inference of Weighted Exponential Distribution under Joint Progressive Type-II Censoring. Symmetry 2022, 14, 2031. [Google Scholar] [CrossRef]
  21. Abdel-Aty, Y.; Kayid, M.; Alomani, G. Generalized Bayes Estimation Based on a Joint Type-II Censored Sample from K-Exponential Populations. Mathematics 2023, 11, 2190. [Google Scholar] [CrossRef]
  22. Long, B.; Jiang, Z. Estimation and prediction for two-parameter Pareto distribution based on progressively double Type-II hybrid censored data. AIMS Math. 2023, 8, 15332–15351. [Google Scholar] [CrossRef]
  23. Ren, H.; Hu, X. Estimation for inverse Weibull distribution under progressive type-II censoring scheme. AIMS Math. 2023, 8, 22808–22829. [Google Scholar] [CrossRef]
  24. Hu, X.; Ren, H. Statistical inference of the stress-strength reliability for inverse Weibull distribution under an adaptive progressive type-II censored sample. AIMS Math. 2023, 8, 28465–28487. [Google Scholar] [CrossRef]
  25. Panahi, H. Reliability estimation and order-restricted inference based on joint type-II progressive censoring scheme with application to splashing data in atomization process. Proc. Inst. Mech. Eng. Part O J. Risk Reliab. 2024, 239. [Google Scholar] [CrossRef]
  26. Hasaballah, M.M.; Balogun, O.S.; Bakr, M.E. Non-Bayesian and Bayesian estimation for Lomax distribution under randomly censored with application. AIP Adv. 2024, 14, 025318. [Google Scholar] [CrossRef]
  27. Hasaballah, M.M.; Tashkandy, Y.A.; Bakr, M.E.; Balogun, O.S.; Ramadan, D.A. Classical and Bayesian inference of inverted modified Lindley distribution based on progressive type-II censoring for modeling engineering data. AIP Adv. 2024, 14, 035021. [Google Scholar] [CrossRef]
  28. Hasaballah, M.M.; Tashkandy, Y.A.; Balogun, O.S.; Bakr, M.E. Statistical Inference of Unified Hybrid Censoring Scheme for Generalized Inverted Exponential Distribution with Application to COVID-19 Data. AIP Adv. 2024, 14, 045111. [Google Scholar] [CrossRef]
  29. Hasaballah, M.M.; Tashkandy, Y.A.; Balogun, O.S.; Bakr, M.E. Reliability analysis for two populations Nadarajah-Haghighi distribution under joint progressive type-II censoring. AIMS Math. 2024, 9, 10333–10352. [Google Scholar] [CrossRef]
  30. Hasaballah, M.M.; Balogun, O.S.; Bakr, M.E. Point and interval estimation based on joint progressive censoring data from two Rayleigh-Weibull distribution with applications. Phys. Scr. 2024, 99, 8. [Google Scholar] [CrossRef]
  31. Hasaballah, M.M.; Balogun, O.S.; Bakr, M.E. Frequentist and Bayesian approach for the generalized logistic lifetime model with applications to air-conditioning system failure times under joint progressive censoring data. AIMS Math. 2024, 9, 29346–29369. [Google Scholar] [CrossRef]
  32. Varian, H.R. A Bayesian approach to real estate assessment. In Studies in Bayesian Econometrics and Statistics; In Honor of Leonard J. Savage; Stephen, E.F., Zellner, A., Eds.; North-Holland: Amsterdam, The Netherlands, 1975; pp. 195–208. [Google Scholar]
  33. Metropolis, N.; Rosenbluth, A.W.; Rosenbluth, M.N.; Teller, A.H.; Teller, E. Equation of State Calculations by Fast Computing Machines. J. Chem. Phys. 1953, 21, 1087–1092. [Google Scholar] [CrossRef]
  34. Hastings, W.K. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 1970, 57, 97–109. [Google Scholar] [CrossRef]
  35. Nelson, W.B. Accelerated Testing: Statistical Models, Test Plans, and Data Analysis; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
  36. Zimmer, W.J.; Keats, J.B.; Wang, F.K. The Burr-XII distribution in reliability analysis. J. Qual. Technol. 1998, 30, 386–394. [Google Scholar] [CrossRef]
Figure 1. The PDF of the G-II distribution for various σ values with θ fixed.
Figure 2. The CDF of the G-II distribution for various σ values with θ fixed.
Figure 3. (a) Empirical and fitted survival function for Dataset-I. (b) Empirical and fitted survival function for Dataset-II.
Figure 4. (a) Probability plot for Dataset-I. (b) Probability plot for Dataset-II.
Figure 5. (a) Quantile plot for Dataset-I. (b) Quantile plot for Dataset-II.
Figure 6. (a) Smooth histogram for Dataset-I. (b) Smooth histogram for Dataset-II.
Figure 7. (a) Box-and-whisker chart for Dataset-I. (b) Box-and-whisker chart for Dataset-II.
Figure 8. (a) Profile log-likelihood function of σ 1 for Dataset-I. (b) Profile log-likelihood function of θ 1 for Dataset-II.
Figure 9. (a) Profile log-likelihood function of σ 2 for Dataset-I. (b) Profile log-likelihood function of θ 2 for Dataset-II.
Figure 10. (a) Histogram for Dataset-I. (b) Histogram for Dataset-II.
Figure 11. (a) Empirical and fitted survival function for Dataset-III. (b) Empirical and fitted survival function for Dataset-IV.
Figure 12. (a) Probability plot for Dataset-III. (b) Probability plot for Dataset-IV.
Figure 13. (a) Quantile plot for Dataset-III. (b) Quantile plot for Dataset-IV.
Figure 14. (a) Smooth histogram for Dataset-III. (b) Smooth histogram for Dataset-IV.
Figure 15. (a) Box-and-whisker chart for Dataset-III. (b) Box-and-whisker chart for Dataset-IV.
Figure 16. (a) Profile log-likelihood function of σ 1 for Dataset-III. (b) Profile log-likelihood function of θ 1 for Dataset-IV.
Figure 17. (a) Profile log-likelihood function of σ 2 for Dataset-III. (b) Profile log-likelihood function of θ 2 for Dataset-IV.
Figure 18. (a) Histogram for Dataset-III. (b) Histogram for Dataset-IV.
Table 1. Statistical measures for goodness-of-fit analysis of observed data.
Model | A–D Statistic | A–D p-Value | C–M Statistic | C–M p-Value | Kuiper Statistic | Kuiper p-Value | K–S Statistic | K–S p-Value
Dataset-I | 0.4346 | 0.8101 | 0.0633 | 0.7925 | 0.2646 | 0.4331 | 0.2004 | 0.6982
Dataset-II | 0.5394 | 0.7034 | 0.0865 | 0.6549 | 0.2298 | 0.4959 | 0.1673 | 0.7349
Table 2. MLE and Bayes estimates for ( σ 1 , σ 2 , θ 1 , θ 2 ) .
Parameter | MLE | Bayes (SEL) | Bayes (LINEX, ε = −6.0) | Bayes (LINEX, ε = 0.00001) | Bayes (LINEX, ε = 6.0)
σ 1 | 0.6771 | 0.8955 | 0.8865 | 0.8844 | 0.8750
σ 2 | 0.3623 | 0.5367 | 0.5252 | 0.5242 | 0.5232
θ 1 | 2.5611 | 2.0495 | 2.0411 | 2.0369 | 2.0253
θ 2 | 0.8840 | 1.1352 | 1.1231 | 1.1122 | 1.1112
Table 3. 95% CI and CRI for the parameters ( σ 1 , σ 2 ) .
Method | σ 1 Lower | σ 1 Upper | σ 1 Length | σ 2 Lower | σ 2 Upper | σ 2 Length
ACI | −0.4752 | 1.8294 | 2.3047 | 0.1787 | 0.5458 | 0.3671
CRI | 0.89474 | 0.8959 | 0.0011 | 0.5365 | 0.5367 | 0.0002
Table 4. 95% CI and CRI for the parameters ( θ 1 , θ 2 ) .
Method | θ 1 Lower | θ 1 Upper | θ 1 Length | θ 2 Lower | θ 2 Upper | θ 2 Length
ACI | 0.4156 | 4.7066 | 4.2910 | 0.3842 | 1.3836 | 0.9994
CRI | 2.0494 | 2.0496 | 0.0002 | 1.1351 | 1.1356 | 0.0005
Table 5. Statistical measures for goodness-of-fit analysis of observed data.
Model | A–D Statistic | A–D p-Value | C–M Statistic | C–M p-Value | Kuiper Statistic | Kuiper p-Value | K–S Statistic | K–S p-Value
Dataset-III | 1.54062 | 0.167366 | 0.0862 | 0.6563 | 0.2121 | 0.6018 | 0.1500 | 0.8403
Dataset-IV | 0.5607 | 0.6832 | 0.0869 | 0.6529 | 0.1992 | 0.5618 | 0.1331 | 0.8250
Table 6. MLE and Bayes estimates for ( σ 1 , σ 2 , θ 1 , θ 2 ) .
Parameter | MLE | Bayes (SEL) | Bayes (LINEX, ε = −3.0) | Bayes (LINEX, ε = 0.0001) | Bayes (LINEX, ε = 3.0)
σ 1 | 0.5631 | 0.4430 | 0.4431 | 0.4442 | 0.4433
σ 2 | 0.7383 | 0.7620 | 0.7618 | 0.7619 | 0.7621
θ 1 | 1.8161 | 2.3171 | 2.3139 | 2.3133 | 2.3135
θ 2 | 3.6324 | 3.8268 | 3.8265 | 3.8266 | 3.8264
Table 7. 95% CI and CRI for the parameters ( σ 1 , σ 2 ) .
Method | σ 1 Lower | σ 1 Upper | σ 1 Length | σ 2 Lower | σ 2 Upper | σ 2 Length
ACI | 0.2981 | 0.8280 | 0.5298 | 0.2853 | 1.1911 | 0.9057
CRI | 0.4429 | 0.4434 | 0.0005 | 0.7617 | 0.7623 | 0.0006
Table 8. 95% CI and CRI for the parameters ( θ 1 , θ 2 ) .
Method | θ 1 Lower | θ 1 Upper | θ 1 Length | θ 2 Lower | θ 2 Upper | θ 2 Length
ACI | 0.9428 | 2.6893 | 1.7464 | 1.3873 | 5.8774 | 4.4901
CRI | 2.3167 | 2.3175 | 0.0007 | 3.8262 | 3.8268 | 0.0006
Table 9. (Non-) Bayesian estimates for the parameter θ 1 are presented with their associated interval lengths and corresponding MSEs (shown in parentheses) for each censoring scheme.
(n, m) | r | Scheme | MLE (non-Bayesian) | ACI Length | SEL | LINEX (ε = −6.0) | LINEX (ε = 6.0) | CRI Length
(20, 15) | 15 | I | 1.6084 (0.5117) | 1.4635 | 1.9269 (0.1823) | 1.9276 (0.1829) | 1.9263 (0.1817) | 0.3445
(20, 15) | 15 | II | 0.9592 (0.4924) | 1.3668 | 1.6625 (0.1264) | 1.6847 (0.1261) | 1.6379 (0.1256) | 0.2843
(20, 15) | 20 | I | 1.3605 (0.4195) | 1.2461 | 1.6499 (0.1225) | 1.6759 (0.1232) | 1.6213 (0.1222) | 0.2775
(20, 15) | 20 | II | 1.7207 (0.3887) | 1.2356 | 2.6959 (0.1219) | 2.7921 (0.1214) | 2.5759 (0.1201) | 0.2503
(40, 30) | 25 | I | 1.4244 (0.3057) | 0.9497 | 1.0638 (0.1203) | 1.0675 (0.1187) | 1.0597 (0.1139) | 0.1146
(40, 30) | 25 | II | 2.3456 (0.2715) | 2.8587 | 0.7612 (0.1183) | 0.8702 (0.1098) | 0.6795 (0.1058) | 0.2492
(40, 30) | 35 | I | 1.3074 (0.2371) | 0.9765 | 1.2671 (0.1079) | 1.2688 (0.1044) | 1.2655 (0.1024) | 0.1905
(40, 30) | 35 | II | 1.2406 (0.2273) | 1.1871 | 1.3266 (0.0987) | 1.3293 (0.0975) | 1.3238 (0.0954) | 0.1800
(60, 40) | 40 | I | 1.5542 (0.2029) | 1.0459 | 1.8465 (0.0954) | 1.8469 (0.0954) | 1.8461 (0.0912) | 0.1754
(60, 40) | 40 | II | 1.4427 (0.1985) | 1.1912 | 1.2979 (0.0458) | 1.3079 (0.0452) | 1.2875 (0.0362) | 0.1654
(60, 40) | 50 | I | 1.4838 (0.1875) | 0.9309 | 1.2828 (0.0412) | 1.2859 (0.0411) | 1.2794 (0.0310) | 0.1585
(60, 40) | 50 | II | 1.2199 (0.1560) | 0.9232 | 0.9069 (0.0251) | 0.9134 (0.0223) | 0.8995 (0.0211) | 0.1521
Table 10. (Non-) Bayesian estimates for the parameter θ 2 are presented with their associated interval lengths and corresponding MSEs (shown in parentheses) for each censoring scheme.
(n, m) | r | Scheme | MLE (non-Bayesian) | ACI Length | SEL | LINEX (ε = −6.0) | LINEX (ε = 6.0) | CRI Length
(20, 15) | 15 | I | 5.7710 (2.6991) | 9.7029 | 6.7954 (1.4506) | 7.0503 (1.4050) | 4.7627 (1.1197) | 2.6867
(20, 15) | 15 | II | 6.4384 (1.1269) | 5.7124 | 4.5516 (1.1092) | 4.5512 (1.1088) | 4.5411 (1.1010) | 2.2046
(20, 15) | 20 | I | 3.0795 (0.8358) | 3.4908 | 3.4936 (0.5872) | 3.4936 (0.5741) | 3.4936 (0.5633) | 1.6985
(20, 15) | 20 | II | 4.0218 (0.5157) | 2.1976 | 6.8997 (0.3572) | 6.8995 (0.3421) | 6.6954 (0.3410) | 1.3280
(40, 30) | 25 | I | 2.4029 (0.4094) | 1.8423 | 2.9380 (0.3112) | 2.2263 (0.125) | 2.7550 (0.3110) | 0.6685
(40, 30) | 25 | II | 1.9033 (0.3560) | 1.6478 | 2.1290 (0.3056) | 2.9699 (0.2593) | 1.2844 (0.2776) | 0.62022
(40, 30) | 35 | I | 2.3505 (0.3224) | 1.5502 | 2.3035 (0.2386) | 2.3340 (0.2276) | 1.3733 (0.2695) | 0.4392
(40, 30) | 35 | II | 2.8469 (0.2204) | 1.5351 | 2.1712 (0.2199) | 2.3805 (0.2181) | 1.2798 (0.2088) | 0.4093
(60, 40) | 40 | I | 2.1920 (0.2549) | 1.1448 | 1.5169 (0.2068) | 1.5479 (0.2011) | 1.3955 (0.1965) | 0.3292
(60, 40) | 40 | II | 2.5244 (0.1854) | 1.1117 | 1.2239 (0.1432) | 1.3002 (0.1365) | 1.4930 (0.1221) | 0.3258
(60, 40) | 50 | I | 3.0745 (0.1785) | 1.0014 | 2.1366 (0.1321) | 2.1320 (0.1311) | 2.1125 (0.1254) | 0.3014
(60, 40) | 50 | II | 3.1202 (0.1658) | 1.0010 | 2.0984 (0.1246) | 2.8174 (0.1252) | 1.4298 (0.1250) | 0.2903
Table 11. (Non-) Bayesian estimates for the parameter σ 1 are presented with their associated interval lengths and corresponding MSEs (shown in parentheses) for each censoring scheme.
(n, m) | r | Scheme | MLE (non-Bayesian) | ACI Length | SEL | LINEX (ε = −6.0) | LINEX (ε = 6.0) | CRI Length
(20, 15) | 15 | I | 1.1904 (0.7792) | 1.1792 | 1.7127 (0.3698) | 1.4508 (0.3654) | 1.5189 (0.3421) | 0.6262
(20, 15) | 15 | II | 1.8472 (0.7206) | 1.0069 | 1.1445 (0.2827) | 1.2444 (0.2814) | 1.3539 (0.2749) | 0.5939
(20, 15) | 20 | I | 1.7716 (0.6737) | 1.4521 | 1.3945 (0.2452) | 1.3993 (0.2421) | 1.0231 (0.2369) | 0.4276
(20, 15) | 20 | II | 1.4137 (0.5574) | 1.4134 | 1.8666 (0.2337) | 1.2327 (0.2329) | 1.2516 (0.2258) | 0.4177
(40, 30) | 25 | I | 1.3128 (0.5218) | 1.0120 | 1.7053 (0.2298) | 1.3147 (0.2244) | 1.1884 (0.2236) | 0.4122
(40, 30) | 25 | II | 1.5758 (0.2126) | 1.2926 | 1.4152 (0.1995) | 1.3223 (0.1888) | 1.5572 (0.1176) | 0.3987
(40, 30) | 35 | I | 1.3606 (0.1994) | 1.0629 | 1.5859 (0.1384) | 1.4140 (0.1654) | 1.7215 (0.1123) | 0.3158
(40, 30) | 35 | II | 1.4462 (0.1779) | 1.0906 | 1.6554 (0.1079) | 1.3353 (0.1054) | 1.6833 (0.1025) | 0.2496
(60, 40) | 40 | I | 1.3571 (0.1564) | 1.1578 | 1.7165 (0.0964) | 1.4014 (0.0932) | 1.3079 (0.0929) | 0.2215
(60, 40) | 40 | II | 1.4066 (0.1426) | 1.0223 | 1.8079 (0.0864) | 1.2263 (0.0845) | 1.8775 (0.0810) | 0.2156
(60, 40) | 50 | I | 1.5750 (0.1354) | 0.9593 | 1.6747 (0.0123) | 1.4209 (0.0121) | 1.3221 (0.0112) | 0.2145
(60, 40) | 50 | II | 1.2473 (0.1236) | 0.8020 | 1.0935 (0.0114) | 1.3269 (0.0110) | 1.8507 (0.0089) | 0.2011
Table 12. (Non-) Bayesian estimates for the parameter σ 2 are presented with their associated interval lengths and corresponding MSEs (shown in parentheses) for each censoring scheme.
(n, m) | r | Scheme | MLE (non-Bayesian) | ACI Length | SEL | LINEX (ε = −6.0) | LINEX (ε = 6.0) | CRI Length
(20, 15) | 15 | I | 4.8712 (0.6227) | 5.4443 | 5.4383 (0.5637) | 5.5541 (0.5624) | 4.1223 (0.4096) | 4.6096
(20, 15) | 15 | II | 4.8177 (0.5655) | 3.5947 | 5.5393 (0.3373) | 5.2884 (0.3996) | 4.9038 (0.3195) | 4.0509
(20, 15) | 20 | I | 5.7156 (0.4465) | 3.5646 | 6.1677 (0.3231) | 5.4956 (0.3125) | 3.7353 (0.2396) | 3.5691
(20, 15) | 20 | II | 4.0839 (0.3731) | 3.0552 | 5.0408 (0.2556) | 5.2383 (0.2315) | 5.9063 (0.2149) | 2.9548
(40, 30) | 25 | I | 2.6930 (0.3373) | 2.4775 | 2.0491 (0.1903) | 2.6319 (0.1945) | 2.9109 (0.1864) | 2.7726
(40, 30) | 25 | II | 2.6852 (0.3215) | 1.7497 | 1.3210 (0.1865) | 1.2650 (0.1823) | 1.1965 (0.1895) | 1.5091
(40, 30) | 35 | I | 2.3404 (0.2954) | 1.5220 | 1.4873 (0.1564) | 1.5317 (0.1452) | 5.2177 (0.1266) | 0.2865
(40, 30) | 35 | II | 2.0133 (0.2548) | 1.0137 | 1.9689 (0.1369) | 1.2226 (0.1364) | 1.8171 (0.1124) | 0.2254
(60, 40) | 40 | I | 3.2673 (0.2487) | 1.9053 | 1.0482 (0.1265) | 1.6244 (0.1264) | 1.5626 (0.1214) | 0.2110
(60, 40) | 40 | II | 3.2736 (0.2321) | 1.0798 | 1.1569 (0.1540) | 1.2447 (0.1523) | 0.8470 (0.1139) | 0.2042
(60, 40) | 50 | I | 2.6226 (0.2214) | 1.0850 | 1.7142 (0.1258) | 1.5540 (0.1202) | 1.0223 (0.1054) | 0.1982
(60, 40) | 50 | II | 2.4912 (0.1952) | 1.0234 | 1.3244 (0.1162) | 1.1122 (0.1102) | 1.0765 (0.1001) | 0.1101