Article

Bayesian Analysis of the Maxwell Distribution Under Progressively Type-II Random Censoring

by Rajni Goel 1, Mahmoud M. Abdelwahab 2 and Mustafa M. Hasaballah 3,*

1 Department of Mathematics, School of Physical Sciences, DIT University, Dehradun 248009, India
2 Department of Mathematics and Statistics, Faculty of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
3 Department of Basic Sciences, Marg Higher Institute of Engineering and Modern Technology, Cairo 11721, Egypt
* Author to whom correspondence should be addressed.
Axioms 2025, 14(8), 573; https://doi.org/10.3390/axioms14080573
Submission received: 7 June 2025 / Revised: 19 July 2025 / Accepted: 21 July 2025 / Published: 25 July 2025

Abstract

Accurate modeling of product lifetimes is vital in reliability analysis and engineering to ensure quality and maintain competitiveness. This paper proposes the progressively randomly censored Maxwell distribution, which incorporates both progressive Type-II and random censoring within the Maxwell distribution framework. The model allows for the planned removal of surviving units at specific stages of an experiment, accounting for both deliberate and random censoring events. It is assumed that survival and censoring times each follow a Maxwell distribution, though with distinct parameters. Both frequentist and Bayesian approaches are employed to estimate the model parameters. In the frequentist approach, maximum likelihood estimators and their corresponding confidence intervals are derived. In the Bayesian approach, Bayes estimators are obtained using an inverse gamma prior and evaluated through a Markov Chain Monte Carlo (MCMC) method under the squared error loss function (SELF). A Monte Carlo simulation study evaluates the performance of the proposed estimators. The practical relevance of the methodology is demonstrated using a real data set.

1. Introduction

In engineering and reliability analysis, assessing product quality and reliability relies primarily on lifetime data obtained from life-test experiments and field failure observations. Product longevity is becoming increasingly important in today's competitive marketplace, and manufacturers face growing pressure to deliver reliable, high-quality products on time. As a result, it is essential to reduce the time between the start of product testing and its market launch, which requires effective strategies to streamline testing and ensure reliable data collection.
Censoring has emerged as a valuable method for optimizing the life-testing process, reducing both the time and costs associated with the collection of lifetime data. This approach enables researchers to concentrate on relevant survival data without requiring complete failure time observations.
Frequently used censoring methods include Type-I and Type-II censoring schemes, which offer structured approaches for handling right-censored data in life experiments. In Type-I censoring, the experiment concludes at a predetermined time, whereas in Type-II censoring, the experiment ends after a specified number of failures are observed. More recently, progressive censoring schemes have gained attention for their ability to systematically remove surviving units at various stages during the experiment. This flexibility not only optimizes resource management but also enhances the reliability of statistical estimates. The progressive censoring scheme was initially introduced in a Ph.D. dissertation by [1]. Progressive censoring and its applications have been extensively studied, with significant contributions found in the works of [2,3]. More recent studies, including those by [4,5,6,7,8,9], have further improved understanding and developments in this field.
Over the span of two decades, the random censoring scheme has gained significant popularity for handling censored units (or dropouts) in reliability and survival analysis. This approach more accurately reflects real-world scenarios, where the occurrence of censored observations can be influenced by various stochastic factors, such as participant dropout, equipment failure, or administrative reasons. Random censoring has been widely explored in the statistical literature, with its applications extending to diverse fields such as biostatistics, engineering, and economics. Significant contributions to this topic can be found in the works of [10,11,12,13,14,15,16,17].
Recently, the Progressive Type-II Random Censoring Scheme (PRCS) was introduced by [18] for two Lindley populations. Subsequently, Ref. [19] extended the PRCS to Burr Type XII distributions. This innovative approach combines the intentional removal of data with random censoring, addressing situations where units may be unintentionally lost due to factors like equipment failure or participant dropout. This modeling technique improves the realism of lifetime assessments and enhances the effectiveness of statistical analyses.
Simultaneously, the Maxwell distribution (MXD), known for its application in modeling particle velocities in gases, provides a flexible and effective framework for analyzing lifetime data. By applying the MXD within the context of progressively randomly censored data, this study aims to deepen the understanding of product lifetimes and address the challenges of censoring. Prior research on the MXD has been conducted by several scholars [20,21,22,23,24]. This research seeks to integrate the principles of progressive random censoring with the MXD, contributing to the development of statistical methodologies in reliability engineering and enhancing decision-making processes in product development and quality assurance. The primary advantage of the MXD over common lifetime models such as the Lindley and Burr XII distributions lies in its combination of a unimodal hazard function and model parsimony. The Lindley distribution's hazard rate is always increasing, so it cannot model critical real-world scenarios where the risk of failure peaks at an intermediate time before declining, a pattern characteristic of phenomena like post-operative mortality or specific component wear-out profiles. The more complex Burr XII distribution can exhibit a unimodal hazard rate; the MXD's advantage here is its simplicity. As a one-parameter model in its standard form, the MXD provides a more stable and less data-intensive alternative for capturing this specific failure pattern, reducing the risk of overfitting that can accompany more heavily parameterized models like the Burr XII. Therefore, the Maxwell distribution occupies a vital role: it is the most straightforward, and often the most appropriate, choice for modeling unimodal hazard phenomena, offering superior interpretability over complex alternatives and providing a modeling capability that simpler monotonic distributions lack.
This paper develops a modeling framework that applies the Progressive Type-II Random Censoring Scheme (PRCS) to the Maxwell distribution, where the lifetime and censoring times are independently modeled using Maxwell distributions with different parameters, β and θ . The model parameters are estimated using both classical and Bayesian approaches. In the classical framework, maximum likelihood estimators are derived, and asymptotic confidence intervals are constructed using the Fisher Information Matrix. In the Bayesian setting, inverse gamma priors are assumed, and the Bayes estimates are obtained via Gibbs sampling with the Metropolis-Hastings algorithm under the squared error loss function.
A Monte Carlo simulation study is carried out to compare the performance of the maximum likelihood and Bayes estimators under various censoring schemes. The practical usefulness of the model is demonstrated using a censored survival dataset, with model fit assessed through comparison with the Kaplan–Meier estimator and validated using the Kolmogorov–Smirnov and Anderson–Darling tests.
The structure of the paper is as follows. Section 2 describes the proposed model under progressive random censoring. Section 3 presents the maximum likelihood estimation and confidence intervals. Section 4 covers the Bayesian estimation procedure using MCMC techniques. Section 5 provides the results of the simulation study. Section 6 analyzes a real dataset to demonstrate the method’s applicability.

2. Mathematical Model

A life-testing study is performed on a sample of n identically distributed and mutually independent units under the PRCS. The lifetimes of these items are assumed to follow a MXD, denoted MXD(β). Let X_i denote the failure time of the i-th item, with f_X(x) and F_X(x) being its probability density function (PDF) and cumulative distribution function (CDF), respectively.
The PDF and CDF are defined as follows:
f(x;\beta) = \frac{4}{\sqrt{\pi}}\,\beta^{-3/2}\,x^{2}\,e^{-x^{2}/\beta}, \quad x \ge 0,\ \beta > 0,
F(x;\beta) = \gamma\!\left(\tfrac{3}{2}, \frac{x^{2}}{\beta}\right).
Here, γ(s, x) denotes the regularized lower incomplete gamma function, defined as
\gamma(s, x) = \frac{1}{\Gamma(s)}\int_{0}^{x} u^{s-1} e^{-u}\, du.
Let T_j denote the censoring time of the j-th test unit, assumed to follow MXD(θ), with PDF g_T(t) and CDF G_T(t):
g(t;\theta) = \frac{4}{\sqrt{\pi}}\,\theta^{-3/2}\,t^{2}\,e^{-t^{2}/\theta}, \quad t \ge 0,\ \theta > 0,
G(t;\theta) = \gamma\!\left(\tfrac{3}{2}, \frac{t^{2}}{\theta}\right).
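For illustration, the PDF and CDF above can be evaluated in R, where pgamma(q, shape) is exactly the regularized lower incomplete gamma function used here (a minimal sketch; the names dmaxwell and pmaxwell are illustrative, not from the paper):

    # Maxwell PDF and CDF under the parameterization above
    dmaxwell <- function(x, beta) 4 / sqrt(pi) * beta^(-1.5) * x^2 * exp(-x^2 / beta)
    pmaxwell <- function(x, beta) pgamma(x^2 / beta, shape = 1.5)

    # sanity check: integrating the density reproduces the CDF
    integrate(dmaxwell, lower = 0, upper = 2, beta = 0.8)$value  # compare with pmaxwell(2, 0.8)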
We assume that X_i and T_i are independent and follow Maxwell distributions with distinct parameters. Furthermore, let Z_i = min(X_i, T_i), the time at which the first event (failure or censoring) occurs for the i-th unit, and define the indicator variable
\Delta_i = \begin{cases} 1, & \text{if } X_i \le T_i, \\ 0, & \text{otherwise}. \end{cases}
The probability p = P[X < T] can be expressed as
p = P[X < T] = \frac{4}{\sqrt{\pi}}\,\theta^{-3/2}\int_{0}^{\infty} t^{2}\, e^{-t^{2}/\theta}\, \gamma\!\left(\tfrac{3}{2}, \frac{t^{2}}{\beta}\right) dt.
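This probability has no simple closed form, but it is easily obtained by numerical integration; the following R sketch implements the integral above (p_xt is an illustrative name, and the parameter values are arbitrary examples):

    # P[X < T] by numerical integration of the expression above
    p_xt <- function(beta, theta) {
      integrand <- function(t) 4 / sqrt(pi) * theta^(-1.5) * t^2 * exp(-t^2 / theta) *
        pgamma(t^2 / beta, shape = 1.5)
      integrate(integrand, lower = 0, upper = Inf)$value
    }
    p_xt(beta = 1, theta = 2)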
Let i = 1, 2, …, n, and n = \sum_{i=1}^{m} S_i + m, where S = (S_1, S_2, \ldots, S_m) denotes the censoring scheme and m \le n is a fixed integer. The scheme S is fixed before the experiment commences, and the PRCS experiment proceeds according to these specifications.
Throughout the experiment, at the first event time z_1 (either a failure or a random censoring event), S_1 of the surviving units are withdrawn from the test. At the second event time z_2, S_2 units are withdrawn from the remaining n - S_1 - 2 surviving units. This pattern continues until the m-th event time z_m, which ends the experiment; at this point the remaining S_m = n - m - S_1 - S_2 - \cdots - S_{m-1} units are also removed from the test, as illustrated in Figure 1.
The observed data can be represented as (z, \delta) = \{(z_1, \delta_1), (z_2, \delta_2), \ldots, (z_m, \delta_m)\}, where z = (z_1, \ldots, z_m) and \delta = (\delta_1, \ldots, \delta_m) are the realized values of Z = (Z_1, \ldots, Z_m) and \Delta = (\Delta_1, \ldots, \Delta_m), respectively.
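To make the sampling mechanism concrete, the following R sketch generates one progressively Type-II randomly censored sample under the scheme described above (a hedged illustration; rmaxwell, sim_prcs and the chosen scheme are assumptions, not part of the paper):

    # draw Maxwell variates: X^2/beta ~ Gamma(3/2, 1)
    rmaxwell <- function(n, beta) sqrt(beta * rgamma(n, shape = 1.5))

    sim_prcs <- function(n, S, beta, theta) {
      m <- length(S)                          # number of observed events; m + sum(S) must equal n
      x <- rmaxwell(n, beta)                  # latent failure times
      t <- rmaxwell(n, theta)                 # latent random censoring times
      z_all <- pmin(x, t); d_all <- as.integer(x <= t)
      alive <- seq_len(n)                     # units still on test
      z <- numeric(m); delta <- integer(m)
      for (i in seq_len(m)) {
        j <- alive[which.min(z_all[alive])]   # next event among the surviving units
        z[i] <- z_all[j]; delta[i] <- d_all[j]
        alive <- setdiff(alive, j)
        if (S[i] > 0) {                       # withdraw S_i surviving units at random
          drop <- alive[sample.int(length(alive), S[i])]
          alive <- setdiff(alive, drop)
        }
      }
      list(z = z, delta = delta, S = S)
    }

    dat <- sim_prcs(n = 40, S = c(rep(0, 29), 10), beta = 0.8, theta = 0.5)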
The joint PDF of (Z, \Delta) under the Maxwell lifetime models is given by
f_{Z,\Delta}(z,\delta;\beta,\theta) = \left[\frac{4}{\sqrt{\pi}}\,\beta^{-3/2} z^{2} e^{-z^{2}/\beta}\left(1-\gamma\!\left(\tfrac{3}{2},\tfrac{z^{2}}{\theta}\right)\right)\right]^{\delta} \left[\frac{4}{\sqrt{\pi}}\,\theta^{-3/2} z^{2} e^{-z^{2}/\theta}\left(1-\gamma\!\left(\tfrac{3}{2},\tfrac{z^{2}}{\beta}\right)\right)\right]^{1-\delta}; \quad z \ge 0,\ \beta, \theta > 0,\ \delta = 0, 1.
Lastly, the marginal CDF of Z = min(X, T) can be expressed as
F_Z(z) = 1 - \left(1-\gamma\!\left(\tfrac{3}{2},\tfrac{z^{2}}{\beta}\right)\right)\left(1-\gamma\!\left(\tfrac{3}{2},\tfrac{z^{2}}{\theta}\right)\right), \quad z \ge 0.

3. Maximum Likelihood Estimation

3.1. Point Estimation

Given the observed data (Z, \Delta, S), the likelihood function is
L(Z, \Delta, S) = D \prod_{i=1}^{m} \left[f_X(z_i)\,\bar{F}_T(z_i)\right]^{\delta_i} \left[f_T(z_i)\,\bar{F}_X(z_i)\right]^{1-\delta_i} \left[\bar{F}_X(z_i)\,\bar{F}_T(z_i)\right]^{S_i}, \quad 0 \le z_i < \infty,\ 0 \le S_i < n,
where D = n\,(n - S_1 - 1)\,(n - S_1 - S_2 - 2)\cdots\left(n - \sum_{i=1}^{m-1} S_i - m + 1\right). Under the MXD, using Equations (1) and (5), the likelihood function in Equation (7) becomes
L(Z, \Delta, S; \beta, \theta) = C\,\beta^{-3d_1/2}\,\theta^{-3d_2/2}\, \exp\!\left(-\frac{1}{\beta}\sum_{i=1}^{m}\delta_i z_i^{2}\right) \exp\!\left(-\frac{1}{\theta}\sum_{i=1}^{m}(1-\delta_i) z_i^{2}\right) \prod_{i=1}^{m} z_i^{2} \times \prod_{i=1}^{m}\left(1-\gamma\!\left(\tfrac{3}{2},\tfrac{z_i^{2}}{\beta}\right)\right)^{(1-\delta_i)+S_i} \prod_{i=1}^{m}\left(1-\gamma\!\left(\tfrac{3}{2},\tfrac{z_i^{2}}{\theta}\right)\right)^{\delta_i+S_i},
where d_1 = \sum_{i=1}^{m}\delta_i and d_2 = \sum_{i=1}^{m}(1-\delta_i) = m - d_1. The log-likelihood function corresponding to (8) is
\log L = \log C - \frac{3d_1}{2}\log\beta - \frac{3d_2}{2}\log\theta - \frac{1}{\beta}\sum_{i=1}^{m}\delta_i z_i^{2} - \frac{1}{\theta}\sum_{i=1}^{m}(1-\delta_i) z_i^{2} + 2\sum_{i=1}^{m}\log z_i + \sum_{i=1}^{m}\big((1-\delta_i)+S_i\big)\log\!\left(1-\gamma\!\left(\tfrac{3}{2},\tfrac{z_i^{2}}{\beta}\right)\right) + \sum_{i=1}^{m}\big(\delta_i+S_i\big)\log\!\left(1-\gamma\!\left(\tfrac{3}{2},\tfrac{z_i^{2}}{\theta}\right)\right).
Differentiating Equation (9) with respect to the parameters and equating to zero, we have
\frac{\partial \log L}{\partial \beta} = -\frac{3d_1}{2\beta} + \frac{1}{\beta^{2}}\sum_{i=1}^{m}\delta_i z_i^{2} + \psi_\beta = 0,
and
\frac{\partial \log L}{\partial \theta} = -\frac{3d_2}{2\theta} + \frac{1}{\theta^{2}}\sum_{i=1}^{m}(1-\delta_i) z_i^{2} + \psi_\theta = 0,
where
\psi_\beta = \frac{3}{2\beta}\sum_{i=1}^{m}\big((1-\delta_i)+S_i\big)\, \frac{\gamma\!\left(\tfrac{3}{2},\tfrac{z_i^{2}}{\beta}\right) - \gamma\!\left(\tfrac{5}{2},\tfrac{z_i^{2}}{\beta}\right)}{1-\gamma\!\left(\tfrac{3}{2},\tfrac{z_i^{2}}{\beta}\right)}
and
\psi_\theta = \frac{3}{2\theta}\sum_{i=1}^{m}\big(\delta_i+S_i\big)\, \frac{\gamma\!\left(\tfrac{3}{2},\tfrac{z_i^{2}}{\theta}\right) - \gamma\!\left(\tfrac{5}{2},\tfrac{z_i^{2}}{\theta}\right)}{1-\gamma\!\left(\tfrac{3}{2},\tfrac{z_i^{2}}{\theta}\right)}.
Equations (10) and (11) cannot be solved analytically, so iterative numerical methods are required. In this study, the nlm function in R (version 4.4.2) was employed to perform the calculations.
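A minimal R sketch of this step is given below; it codes the log-likelihood in Equation (9) directly (negloglik is an illustrative name, and z, delta, S denote an observed PRCS sample, e.g., the output of the simulation sketch in Section 2):

    negloglik <- function(par, z, delta, S) {
      beta <- par[1]; theta <- par[2]
      if (beta <= 0 || theta <= 0) return(1e10)     # keep the search inside the parameter space
      d1 <- sum(delta); d2 <- sum(1 - delta)
      ll <- -1.5 * d1 * log(beta) - 1.5 * d2 * log(theta) -
            sum(delta * z^2) / beta - sum((1 - delta) * z^2) / theta +
            2 * sum(log(z)) +
            sum(((1 - delta) + S) * log(1 - pgamma(z^2 / beta, shape = 1.5))) +
            sum((delta + S) * log(1 - pgamma(z^2 / theta, shape = 1.5)))
      -ll                                           # nlm() minimizes
    }

    fit <- nlm(negloglik, p = c(1, 1), z = z, delta = delta, S = S, hessian = TRUE)
    mle <- fit$estimate   # (beta_hat, theta_hat); starting values may need adjustment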

3.2. Fisher’s Information Matrix

The Fisher information matrix (FIM), denoted by I(β, θ), quantifies the amount of information that the observed data provide about the unknown parameters β and θ. It is defined as the expected value of the negative second-order partial derivatives of \log L (given in Equation (9)) with respect to the model parameters.
To begin, we compute the second-order partial derivatives of \log L. Using the expressions in Equations (10) and (11), the components of the FIM take the following forms:
I(\beta,\beta) = -E\!\left[\frac{\partial^{2}\log L}{\partial\beta^{2}}\right] = E\!\left[\frac{2}{\beta^{3}}\sum_{i=1}^{m}\delta_i z_i^{2} - \frac{3d_1}{2\beta^{2}} - \psi_\beta'\right],
where \psi_\beta' denotes the partial derivative of \psi_\beta with respect to \beta.
Similarly, for \theta, the second derivative gives
I(\theta,\theta) = -E\!\left[\frac{\partial^{2}\log L}{\partial\theta^{2}}\right] = E\!\left[\frac{2}{\theta^{3}}\sum_{i=1}^{m}(1-\delta_i) z_i^{2} - \frac{3d_2}{2\theta^{2}} - \psi_\theta'\right].
For the off-diagonal terms, the mixed second derivative vanishes because \log L separates additively into a function of \beta and a function of \theta:
I(\beta,\theta) = -E\!\left[\frac{\partial^{2}\log L}{\partial\beta\,\partial\theta}\right] = I(\theta,\beta) = 0.
Therefore, the FIM is given by
I(\beta,\theta) = \begin{pmatrix} I(\beta,\beta) & 0 \\ 0 & I(\theta,\theta) \end{pmatrix}.
The inverse of the FIM provides the asymptotic covariance matrix of the Maximum Likelihood Estimators (MLEs) for β and θ .

3.3. Asymptotic Confidence Intervals

The asymptotic normality of the MLEs states that, as the sample size increases, the distribution of the MLEs converges to a normal distribution, where the true parameters serve as the mean and the inverse of the FIM represents the covariance matrix.
Denote by \hat\beta and \hat\theta the MLEs of the parameters \beta and \theta, respectively. The asymptotic covariance matrix is obtained as the inverse of the FIM:
\mathrm{Cov}(\hat\beta, \hat\theta) = I(\beta, \theta)^{-1}.
The asymptotic confidence interval (CI) for \theta can be constructed as
\left(\hat\theta - \phi_{\alpha/2}\,\mathrm{SE}(\hat\theta),\ \hat\theta + \phi_{\alpha/2}\,\mathrm{SE}(\hat\theta)\right).
Here, \phi_{\alpha/2} is the upper \alpha/2 quantile of the standard normal distribution (for a 95% CI, \phi_{\alpha/2} = 1.96), and \mathrm{SE}(\hat\theta) denotes the standard error of \hat\theta, obtained from the corresponding diagonal element of the inverse FIM. The CI for \beta can be expressed as
\left(\hat\beta - \phi_{\alpha/2}\,\mathrm{SE}(\hat\beta),\ \hat\beta + \phi_{\alpha/2}\,\mathrm{SE}(\hat\beta)\right).
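In practice the expectation in the FIM is not available in closed form, and the observed information (the Hessian of the negative log-likelihood at the MLEs) is a standard plug-in; the following hedged sketch continues the nlm illustration from Section 3.1 (it is not the paper's exact code):

    cov_hat <- solve(fit$hessian)                       # approximate asymptotic covariance matrix
    se <- sqrt(diag(cov_hat))                           # standard errors of (beta_hat, theta_hat)
    ci_beta  <- fit$estimate[1] + c(-1, 1) * qnorm(0.975) * se[1]
    ci_theta <- fit$estimate[2] + c(-1, 1) * qnorm(0.975) * se[2]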

4. Bayesian Estimation

Here, the Bayes estimates of the unknown model parameters are computed. Bayesian methods are well established for lifetime models and typically employ prior distributions that reflect known or assumed properties of the parameters. For the PRCS MXD, the priors for β and θ are taken to be independent inverted gamma distributions, defined as
g_1(\beta) = \frac{b_1^{a_1}}{\Gamma(a_1)}\,\beta^{-(a_1+1)}\exp\!\left(-\frac{b_1}{\beta}\right), \quad \beta > 0,\ a_1, b_1 > 0,
g_2(\theta) = \frac{b_2^{a_2}}{\Gamma(a_2)}\,\theta^{-(a_2+1)}\exp\!\left(-\frac{b_2}{\theta}\right), \quad \theta > 0,\ a_2, b_2 > 0.
The posterior distribution of \theta and \beta given the data (z, \Delta, S) is formulated as
\pi(\theta, \beta \mid z, \Delta, S) = K^{-1}\, \beta^{-\left(\frac{3d_1}{2}+a_1+1\right)} \exp\!\left(-\frac{\sum_{i=1}^{m}\delta_i z_i^{2}+b_1}{\beta}\right) \prod_{i=1}^{m}\left(1-\gamma\!\left(\tfrac{3}{2},\tfrac{z_i^{2}}{\beta}\right)\right)^{(1-\delta_i)+S_i} \times \theta^{-\left(\frac{3d_2}{2}+a_2+1\right)} \exp\!\left(-\frac{\sum_{i=1}^{m}(1-\delta_i) z_i^{2}+b_2}{\theta}\right) \prod_{i=1}^{m}\left(1-\gamma\!\left(\tfrac{3}{2},\tfrac{z_i^{2}}{\theta}\right)\right)^{\delta_i+S_i},
where K is the normalizing constant and \theta, \beta > 0, a_1, a_2, b_1, b_2 > 0.
In the Bayesian framework, we consider the squared error loss function (SELF) for obtaining the Bayes estimates. For a parameter \theta and an estimator \hat\theta, this loss function is defined as
L(\theta, \hat\theta) = (\hat\theta - \theta)^{2}.
Under the SELF, the Bayes estimator is the posterior mean of the parameter.
Here, we observe that the Bayesian estimates for θ and β are typically computed numerically due to the complexity of the posterior distribution. Therefore, we apply the MCMC approximation method.

4.1. MCMC Sample Generation

Gibbs sampling is a specific instance of the MCMC method where each parameter’s full conditional density (FCD) is iteratively sampled, given the values of the other model parameters [25,26].
Since the full conditional distributions do not follow standard forms, we employ the Metropolis–Hastings (M-H) algorithm within Gibbs sampling. For both β and θ, the proposals are random-walk steps with standard normal increments, ε_β ∼ N(0, 1) and ε_θ ∼ N(0, 1). The proposed values are obtained by adding these increments to the current parameter values:
\beta_{\mathrm{prop}} = \beta^{(k-1)} + \epsilon_\beta, \qquad \theta_{\mathrm{prop}} = \theta^{(k-1)} + \epsilon_\theta,
ensuring positivity via reflection if necessary.
This choice is supported by the approximate normal shape of the posterior densities, as shown in Figure 2.
From the joint posterior, the full conditional density (FCD) of \beta given \theta and the data (z, \Delta, S) is
\pi(\beta \mid \theta, z, \Delta, S) \propto \beta^{-\left(\frac{3d_1}{2}+a_1+1\right)} \exp\!\left(-\frac{\sum_{i=1}^{m}\delta_i z_i^{2}+b_1}{\beta}\right) \prod_{i=1}^{m}\left(1-\gamma\!\left(\tfrac{3}{2},\tfrac{z_i^{2}}{\beta}\right)\right)^{(1-\delta_i)+S_i}.
Similarly, the FCD of \theta given \beta and the data is
\pi(\theta \mid \beta, z, \Delta, S) \propto \theta^{-\left(\frac{3d_2}{2}+a_2+1\right)} \exp\!\left(-\frac{\sum_{i=1}^{m}(1-\delta_i) z_i^{2}+b_2}{\theta}\right) \prod_{i=1}^{m}\left(1-\gamma\!\left(\tfrac{3}{2},\tfrac{z_i^{2}}{\theta}\right)\right)^{\delta_i+S_i}.
The Metropolis–Hastings within Gibbs sampling procedure for generating MCMC samples is summarized as follows:
  • Initialize the parameters as (β^(0), θ^(0)).
  • At iteration k:
    Generate ε_β ∼ N(0, 1) and compute β_prop = β^(k−1) + ε_β. If β_prop ≤ 0, reflect it about zero (replace it by |β_prop|) to ensure positivity.
    Compute the acceptance ratio
    r_\beta = \frac{\pi(\beta_{\mathrm{prop}} \mid \theta^{(k-1)}, \mathrm{data})}{\pi(\beta^{(k-1)} \mid \theta^{(k-1)}, \mathrm{data})}.
    Accept β_prop with probability min(1, r_β); otherwise retain β^(k−1).
    Repeat the same steps for θ, using a standard normal increment and the updated value β^(k).
  • Repeat step 2 for K iterations, discarding the first M_0 iterations as burn-in.
Thus, the retained MCMC samples for β and θ are represented as
\{(\beta^{(k)}, \theta^{(k)})\}_{k=M_0+1}^{K},
where the first M_0 samples are discarded as the burn-in phase.
Under the SELF, the Bayes estimates of β and θ are computed as the posterior means of the retained samples:
\hat\beta = \frac{1}{K-M_0}\sum_{k=M_0+1}^{K}\beta^{(k)}, \qquad \hat\theta = \frac{1}{K-M_0}\sum_{k=M_0+1}^{K}\theta^{(k)}.
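The sampler described above can be sketched in R as follows (a hedged illustration; the function names, the hyperparameter defaults taken from the simulation settings, and the fixed unit step size are assumptions, and the step size and initial values typically need tuning for a given dataset):

    # log full conditional densities of beta and theta (up to additive constants)
    log_fcd_beta <- function(beta, theta, z, delta, S, a1 = 1e-4, b1 = 1e-4) {
      if (beta <= 0) return(-Inf)
      -(1.5 * sum(delta) + a1 + 1) * log(beta) - (sum(delta * z^2) + b1) / beta +
        sum(((1 - delta) + S) * log(1 - pgamma(z^2 / beta, shape = 1.5)))
    }
    log_fcd_theta <- function(theta, beta, z, delta, S, a2 = 1e-4, b2 = 1e-4) {
      if (theta <= 0) return(-Inf)
      -(1.5 * sum(1 - delta) + a2 + 1) * log(theta) - (sum((1 - delta) * z^2) + b2) / theta +
        sum((delta + S) * log(1 - pgamma(z^2 / theta, shape = 1.5)))
    }

    gibbs_mh <- function(z, delta, S, K = 10000, M0 = 2000, init = c(1, 1)) {
      draws <- matrix(NA_real_, K, 2, dimnames = list(NULL, c("beta", "theta")))
      beta <- init[1]; theta <- init[2]
      for (k in seq_len(K)) {
        prop <- abs(beta + rnorm(1))                 # N(0,1) increment, reflected at zero
        if (log(runif(1)) < log_fcd_beta(prop, theta, z, delta, S) -
                            log_fcd_beta(beta, theta, z, delta, S)) beta <- prop
        prop <- abs(theta + rnorm(1))
        if (log(runif(1)) < log_fcd_theta(prop, beta, z, delta, S) -
                            log_fcd_theta(theta, beta, z, delta, S)) theta <- prop
        draws[k, ] <- c(beta, theta)
      }
      draws[-(1:M0), ]                               # discard the burn-in samples
    }

    chain <- gibbs_mh(z, delta, S)
    colMeans(chain)                                  # Bayes estimates under the SELF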
Convergence diagnostics, including trace plots and Heidelberger–Welch tests, were applied to assess the stability of the chains.

4.2. HPD Credible Intervals

To assess the uncertainty in the estimates of β and θ, we construct Highest Posterior Density (HPD) credible intervals from the MCMC samples. For a given parameter ψ ∈ {β, θ}, we order the retained MCMC samples as
\psi_{(1)} < \psi_{(2)} < \cdots < \psi_{(K-M_0)}.
Then, a 100(1-\alpha)\% HPD credible interval for ψ is given by
\left(\psi_{(k)},\ \psi_{(k+[(1-\alpha)(K-M_0)])}\right), \quad k = 1, 2, \ldots, (K-M_0) - [(1-\alpha)(K-M_0)].
Here, [·] denotes the greatest integer (floor) function, and k is chosen so that the interval length is minimized.
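A compact R sketch of this construction, applied to the retained draws from the sampler above, is (hpd is an illustrative name):

    hpd <- function(draws, level = 0.95) {
      s <- sort(draws)
      n <- length(s)
      w <- floor(level * n)                         # number of ordered draws spanned by the interval
      k <- which.min(s[(w + 1):n] - s[1:(n - w)])   # index of the shortest such interval
      c(lower = s[k], upper = s[k + w])
    }
    hpd(chain[, "beta"]); hpd(chain[, "theta"])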

5. Simulation Study

This section presents a Monte Carlo simulation study aimed at evaluating the performance and efficiency of the theoretical results derived for the PRCS Maxwell population. The accuracy of the point estimates is evaluated using two metrics: average bias (AB) and mean squared error (MSE). For the interval estimates, we examine the average length (AL) and the coverage probability (CP). To streamline the notation for the PRCS scheme, we use a compact representation; for example, S = (0^{(2)}, 1, (0, 2)^{(2)}) denotes the sequence S_1 = 0, S_2 = 0, S_3 = 1, S_4 = 0, S_5 = 2, S_6 = 0, S_7 = 2.
The unknown model parameters are estimated under the squared error loss function (SELF) within the Bayesian framework, for various combinations of θ and β. The hyperparameters are set to a_1 = b_1 = a_2 = b_2 = 0.0001. Bayes estimates of the parameters are obtained through Markov Chain Monte Carlo (MCMC) methods, with K = 10,000 samples and a burn-in period of M_0 = 2000. The simulation is executed for N = 10,000 trials, and for each simulated sample, both the Bayesian estimates and the MLEs of (θ, β) are calculated. To evaluate the efficiency of the point estimators, the AB and MSE are computed for each parameter; for θ, they are defined as
\mathrm{AB} = \frac{1}{N}\sum_{i=1}^{N}(\hat\theta_i - \theta), \qquad \mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}(\hat\theta_i - \theta)^{2},
with analogous definitions for β.
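For concreteness, these Monte Carlo summaries reduce to the following R one-liners, where est is the vector of N simulated estimates of a parameter with true value true (illustrative names, continuing the sketches above):

    ab  <- function(est, true) mean(est - true)       # average bias over the N replications
    mse <- function(est, true) mean((est - true)^2)   # mean squared error over the N replications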
In addition to the scenario with θ = 0.5 , β = 0.8 , and P = 0.6456 , a second set of simulations is conducted with θ = 1.5 , β = 2 , and P = 0.5906 to validate the generality of the estimation techniques. The censoring schemes and their corresponding labels (C1–C14) are listed in Table 1, and the results of AB, MSE, AL, and CP for both sets of parameters are presented in Table 2, Table 3, Table 4 and Table 5.
The simulation results for the new parameter set further confirm the consistency of the findings. Specifically, Bayesian estimators outperform MLEs across all schemes with regard to lower AB and MSE values. This trend is especially evident under heavy censoring schemes (e.g., C4, C5, C9, and C10), where the advantage of Bayesian estimation is more prominent. Moreover, the credible intervals under Bayesian estimation consistently maintain better coverage probabilities and shorter average lengths compared to confidence intervals based on MLEs.
These findings emphasize that the superiority of Bayesian estimation is not confined to a specific parameter configuration, but it holds under varying conditions, affirming the robustness and efficiency of the Bayesian approach in estimating Maxwell model parameters under progressive Type-II censoring.
Based on the results of the Monte Carlo simulation study, the following key conclusions can be drawn regarding the performance of parameters θ and β using different sample sizes:
  • Bayesian estimation generally shows lower average bias (AB) compared to MLE for both parameters θ and β , especially as the sample size increases.
  • MLE tends to have higher bias, particularly for smaller sample sizes, and the bias decreases more slowly compared to Bayesian estimation.
  • Bayesian estimation also tends to outperform MLE in terms of mean squared error (MSE), indicating more precise and less variable estimates.
  • Bayesian estimation consistently shows narrower credible intervals for both θ and β compared to MLE, suggesting that Bayesian intervals tend to be more precise.
  • Bayesian estimation produces more reliable credible intervals, as evidenced by higher coverage probabilities (CP) for both θ and β across the considered scenarios.

6. Real Data Analysis

To demonstrate the theoretical results developed in this article, we analyze a real dataset originally used in [27], which records the survival times (in months) of 24 patients with Dukes' C colorectal cancer. A progressively Type-II randomly censored sample is constructed from this dataset as follows (and entered in R in the sketch after this list):
  • n = 24, m = 18
  • Z = {3, 6, 6, 6, 8, 8, 12, 12, 15, 16, 18, 20, 22, 24, 28, 28, 30, 30}
  • Δ = {0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0}
  • S = {0^{(3)}, 1, 0^{(2)}, 1, 0^{(3)}, 1, 0^{(3)}, 1, 0^{(2)}, 2}
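Entered in R, this sample can be passed directly to the estimation sketches given earlier (a hedged illustration; the starting values for nlm are arbitrary):

    z     <- c(3, 6, 6, 6, 8, 8, 12, 12, 15, 16, 18, 20, 22, 24, 28, 28, 30, 30)
    delta <- c(0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0)
    S     <- c(0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 2)
    fit   <- nlm(negloglik, p = c(100, 150), z = z, delta = delta, S = S, hessian = TRUE)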
To assess the goodness-of-fit of the MXD to the PRCS dataset, we compare the estimated theoretical survival function with the nonparametric Kaplan–Meier (KM) estimator. The KM estimator is appropriate for censored data and provides a nonparametric estimate of the survival function.
Here it is important to mention that the theoretical cumulative distribution function is evaluated using the analytical form provided in Equation (6). The values of the parameters β and θ are estimated using the maximum likelihood method. These MLEs are then substituted into the CDF expression to obtain the fitted theoretical distribution for comparison with the data.
Figure 3 shows the Kaplan–Meier survival curve alongside the fitted survival curve based on the MXD using maximum likelihood estimates. The graphical agreement between the two curves indicates a good fit.
To further validate the fit, we conducted the Kolmogorov–Smirnov (KS) test using the ks.test function in R. The KS statistic is D = 0.2234 with a p-value of 0.2863, suggesting no significant deviation from the theoretical distribution.
Additionally, the Anderson–Darling (A-D) test yielded a statistic of A = 0.46417 with a p-value of 0.2249, further supporting the adequacy of the model.
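The fitted CDF used in these checks is the marginal CDF of Z in Equation (6) evaluated at the MLEs; a hedged R sketch of the KS comparison, continuing the fit above, is (pz is an illustrative name, and ties in z will produce the usual ks.test warning):

    pz <- function(q, beta, theta)
      1 - (1 - pgamma(q^2 / beta, shape = 1.5)) * (1 - pgamma(q^2 / theta, shape = 1.5))
    beta_hat <- fit$estimate[1]; theta_hat <- fit$estimate[2]
    ks.test(z, function(q) pz(q, beta_hat, theta_hat))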
The parameters β and θ were estimated using both maximum likelihood and Bayesian methods, as follows:
  • MLE Results: β̂ = 98.1995 with 95% CI (61.11319, 135.2859), and θ̂ = 175.1148 with 95% CI (110.5791, 239.6504). Figure 4 shows the log-likelihood surface, which exhibits a clear peak, confirming the existence and uniqueness of the MLEs for θ and β.
  • Bayesian Estimates: β̂ = 98.22188 with a 95% credible interval (96.3308, 100.1712), and θ̂ = 175.1201 with a 95% credible interval (173.1834, 176.9799).
To validate the convergence of the Gibbs sampling algorithm used for Bayesian estimation, we applied the Heidelberger–Welch diagnostic test, autocorrelation plots, histograms, and trace plots.
Heidelberger–Welch Test: Both parameters passed the stationarity test starting from iteration 1, indicating no burn-in was required. The p-values were 0.645 for β and 0.747 for θ , suggesting convergence.
Figure 5 confirms that the MCMC chains mix well and converge for both parameters. Histograms show symmetric posterior distributions, and trace plots are stable across iterations.
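These diagnostics are readily reproduced with the coda package on the matrix of retained draws returned by the sampler sketch of Section 4.1 (a hedged illustration; the object name chain and its column names are assumptions):

    library(coda)
    ch <- mcmc(chain)                  # (K - M0) x 2 matrix of beta/theta draws
    heidel.diag(ch)                    # Heidelberger-Welch stationarity and halfwidth tests
    autocorr.plot(ch); traceplot(ch)   # autocorrelation and trace plots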
In conclusion, the Maxwell distribution under the proposed censoring scheme offers a reliable fit for the dataset, and both MLE and Bayesian methods provide robust parameter estimates.

7. Conclusions

In this study, we have proposed a Bayesian and classical framework for estimating the parameters of the Maxwell distribution under a Progressive Type-II Random Censoring Scheme (PRCS). The novelty of the approach lies in modeling both failure and censoring times using independent Maxwell distributions with different parameters. Maximum Likelihood Estimators (MLEs) were derived, and Bayesian estimators were obtained using MCMC methods under the Squared Error Loss Function. The performance of these estimators was evaluated through simulation studies, and the proposed model was applied to a real censored survival dataset. Model adequacy was assessed through goodness-of-fit tests and a visual comparison with the Kaplan–Meier estimator.
The results indicate that Bayesian estimation provides slightly more accurate and stable parameter estimates compared to classical MLEs, particularly under heavy censoring. The use of PRCS provides a flexible framework for handling incomplete data, which frequently arise in reliability and biomedical studies.
Future work may extend this approach in several directions. First, the PRCS–Maxwell model can be generalized to other lifetime distributions such as Weibull, Log-normal, or flexible mixtures. Second, covariates may be introduced into the model via accelerated failure time or proportional hazards frameworks. Third, extensions can be considered for bivariate, multivariate, or competing risks scenarios under PRCS. Finally, alternative loss functions (e.g., LINEX or entropy-based) and estimation methods such as E-Bayesian or hierarchical Bayesian inference can be developed to further improve the robustness of the estimates.

Author Contributions

Conceptualization, M.M.A. and M.M.H.; Methodology, R.G. and M.M.H.; Software, R.G.; Validation, R.G. and M.M.H.; Formal analysis, M.M.H.; Resources, R.G.; Data curation, R.G. and M.M.A.; Writing—original draft, R.G.; Writing—review and editing, M.M.H.; Visualization, M.M.A.; Funding acquisition, M.M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-DDRSP2502).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The colorectal cancer survival dataset analyzed during the current study is publicly available in McIllmurray and Turkie [27].

Acknowledgments

The authors gratefully acknowledge the support provided by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) through the Research Group project IMSIU-DDRSP2502.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Herd, G.R. Estimation of the Parameters of a Population from a Multi-Censored Sample. Ph.D. Dissertation, Iowa State University, Ames, IA, USA, 1956.
  2. Balakrishnan, N.; Aggarwala, R. Progressive Censoring: Theory, Methods, and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2000.
  3. Balakrishnan, N.; Cramer, E. The Art of Progressive Censoring, Volume 138; Springer: Berlin/Heidelberg, Germany, 2014.
  4. Kuş, C.; Kaya, M.F. Estimation for the parameters of the Pareto distribution under progressive censoring. Commun. Stat.-Theory Methods 2007, 36, 1359–1365.
  5. Almetwaly, E.M.; Almongy, H.M. Estimation of the generalized power Weibull distribution parameters using progressive censoring schemes. Int. J. Prob. Stat. 2018, 7, 51–61.
  6. Kumar, I.; Kumar, K.; Ghosh, I. Reliability estimation in inverse Pareto distribution using progressively first failure censored data. Am. J. Math. Manag. Sci. 2023, 42, 126–147.
  7. Dey, S.; Elshahhat, A.; Nassar, M. Analysis of progressive Type-II censored gamma distribution. Comput. Stat. 2023, 38, 481–508.
  8. Tian, Y.; Liang, Y.; Gui, W. Inference and optimal censoring scheme for a competing-risks model with Type-II progressive censoring. Math. Popul. Stud. 2024, 31, 1–39.
  9. Goel, R.; Krishna, H. A study of accidental breakages in progressively Type-II censored lifetime experiments. Int. J. Syst. Assur. Eng. Manag. 2024, 15, 2105–2119.
  10. Şirvanci, M. Comparison of two Weibull distributions under random censoring. Commun. Stat.-Theory Methods 1986, 15, 1819–1836.
  11. Danish, M.Y.; Aslam, M. Bayesian inference for the randomly censored Weibull distribution. J. Stat. Comput. Simul. 2014, 84, 215–230.
  12. Atem, F.D.; Matsouaka, R.A.; Zimmern, V.E. Cox regression model with randomly censored covariates. Biom. J. 2019, 61, 1020–1032.
  13. Feng, H.; Luo, Q. A weighted quantile regression for nonlinear models with randomly censored data. Commun. Stat.-Theory Methods 2021, 50, 4167–4179.
  14. Krishna, H.; Goel, R. Inferences based on correlated randomly censored Gumbel's Type-I bivariate exponential distribution. Ann. Data Sci. 2024, 11, 1185–1207.
  15. Ayushee; Kumar, N.; Goyal, M. A nonparametric test for randomly censored data. Ann. Data Sci. 2024, 11, 2059–2075.
  16. Hasaballah, M.M.; Balogun, O.S.; Bakr, M.E. On a randomly censoring scheme for generalized logistic distribution with applications. Symmetry 2024, 16, 1240.
  17. Hasaballah, M.M.; Abdelwahab, M.M. Statistical analysis under a random censoring scheme with applications. Symmetry 2025, 17, 1048.
  18. Goel, R.; Krishna, H. Progressive Type-II random censoring scheme with Lindley failure and censoring time distributions. Int. J. Agric. Stat. Sci. 2020, 16, 23–34.
  19. Goel, R.; Kumar, K.; Ng, H.K.T.; Kumar, I. Statistical inference in Burr Type XII lifetime model based on progressive randomly censored data. Qual. Eng. 2024, 36, 150–165.
  20. Bekker, A.; Roux, J. Reliability characteristics of the Maxwell distribution: A Bayes estimation study. Commun. Stat.-Theory Methods 2005, 34, 2169–2178.
  21. Krishna, H.; Vivekanand; Kumar, K. Estimation in Maxwell distribution with randomly censored data. J. Stat. Comput. Simul. 2015, 85, 3560–3578.
  22. Tomer, S.K.; Panwar, M. Estimation procedures for Maxwell distribution under Type-I progressive hybrid censoring scheme. J. Stat. Comput. Simul. 2015, 85, 339–356.
  23. Al-Omari, A.I.; Alanzi, A.R.A. Inverse length biased Maxwell distribution: Statistical inference with an application. Comput. Syst. Sci. Eng. 2021, 39, 147–164.
  24. Godase, D.G.; Mahadik, S.B.; Rakitzis, A.C. The SPRT control charts for the Maxwell distribution. Qual. Reliab. Eng. Int. 2022, 38, 1713–1728.
  25. Metropolis, N.; Rosenbluth, A.W.; Rosenbluth, M.N.; Teller, A.H.; Teller, E. Equation of state calculations by fast computing machines. J. Chem. Phys. 1953, 21, 1087–1092.
  26. Hastings, W.K. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 1970, 57, 97–109.
  27. McIllmurray, M.; Turkie, W. Controlled trial of gamma linolenic acid in Dukes' C colorectal cancer. Br. Med. J. (Clin. Res. Ed.) 1987, 294, 1260.
Figure 1. Progressive random censoring scheme.
Figure 2. Posterior density of β when θ = 0.5, β = 1, n = 40, m = 30.
Figure 3. Kaplan–Meier estimator versus theoretical survival function.
Figure 4. Existence and uniqueness of MLE for β and θ.
Figure 5. Convergence diagnostics for Gibbs sampling (β and θ).
Table 1. Different censoring schemes used in the study.

Code   Censoring scheme (n, m, S)
C1     (40, 30, (0^(29), 10))
C2     (40, 30, (0, 1, 0)^(10))
C3     (40, 30, (10, 0^(29)))
C4     (40, 24, (2, 1, 1, 0, 2, 1, 0, 0, 0, 2, 1, 0, 2, 1, 0, 0, 0, 1, 2, 0, 1, 0, 1, 0))
C5     (40, 16, (3, 2, 2, 1, 2, 2, 1, 0, 2, 2, 1, 1, 2, 2, 1, 0))
C6     (40, 40, (0^(39)))
C7     (30, 20, (1, 0)^(10))
C8     (30, 20, (0^(19), 10))
C9     (30, 18, (2, 1, 0, 1, 2, 0, 1, 1, 0, 2, 1, 0, 1, 0, 2, 1, 0, 2))
C10    (30, 12, (3, 2, 2, 2, 1, 2, 1, 1, 1, 2, 2, 1))
C11    (30, 30, (0^(39)))
C12    (30, 25, (5, 0^(24)))
C13    (30, 25, (1, 0, 0, 0, 0)^(5))
C14    (30, 25, (0^(24), 5))
Table 2. AB and MSE of MLE and Bayes estimates for different censoring schemes (θ = 0.5, β = 0.8, p = 0.6456).

Scheme    MLE θ             MLE β             Bayes θ           Bayes β
          AB      MSE       AB      MSE       AB      MSE       AB      MSE
C1        0.678   0.045     0.875   0.020     0.565   0.008     0.872   0.019
C2        0.574   0.015     0.743   0.013     0.521   0.002     0.744   0.013
C3        0.489   0.014     0.674   0.059     0.500   0.000     0.777   0.058
C4        0.611   0.018     0.771   0.014     0.524   0.002     0.765   0.014
C5        0.713   0.030     0.844   0.019     0.549   0.005     0.838   0.018
C6        0.559   0.012     0.753   0.011     0.518   0.001     0.756   0.011
C7        0.548   0.016     0.707   0.023     0.517   0.002     0.706   0.022
C8        0.695   0.057     0.893   0.032     0.574   0.011     0.883   0.029
C9        0.582   0.020     0.778   0.015     0.511   0.002     0.770   0.014
C10       0.701   0.034     0.825   0.024     0.540   0.005     0.810   0.022
C11       0.563   0.015     0.752   0.014     0.520   0.002     0.754   0.014
C12       0.469   0.012     0.735   0.037     0.498   0.001     0.638   0.036
C13       0.539   0.013     0.718   0.020     0.514   0.001     0.720   0.019
C14       0.630   0.031     0.830   0.016     0.545   0.005     0.827   0.015
Table 3. Average Length (AL) and Coverage Probability (CP) for MLE and Bayes estimates (θ = 0.5, β = 0.8, p = 0.6456).

Scheme    MLE θ              MLE β              Bayes θ            Bayes β
          AL      CP         AL      CP         AL      CP         AL      CP
C1        0.4237  0.9250     0.5008  0.9610     0.4039  0.9650     0.4656  0.9520
C2        0.3368  0.9020     0.4033  0.9760     0.2263  0.9550     0.3766  0.9750
C3        0.2376  0.9190     0.3026  0.9350     0.1732  0.9450     0.2878  0.9700
C4        0.3578  0.9270     0.4459  0.9540     0.2434  0.9560     0.4211  0.9490
C5        0.4333  0.9035     0.5107  0.9450     0.3028  0.9440     0.4684  0.9400
C6        0.3151  0.9390     0.3951  0.9130     0.2025  0.9620     0.3701  0.9410
C7        0.3767  0.9420     0.4466  0.9140     0.2507  0.9520     0.4163  0.9150
C8        1.3554  0.9020     0.6018  0.9540     0.4609  0.9365     0.5538  0.9390
C9        0.3601  0.9180     0.4380  0.9510     0.2412  0.9560     0.4081  0.9485
C10       0.4740  0.9050     0.5067  0.9410     0.3016  0.9405     0.4553  0.9370
C11       0.3661  0.9060     0.4562  0.9010     0.2431  0.9465     0.4256  0.9430
C12       0.3069  0.8920     0.3862  0.8990     0.1972  0.9565     0.3662  0.9230
C13       0.3590  0.9210     0.4415  0.9310     0.2288  0.9615     0.4129  0.9432
C14       0.4392  0.9242     0.5318  0.9670     0.3448  0.9465     0.4917  0.9660
Table 4. AB and MSE of MLE and Bayes estimates for different censoring schemes (θ = 1.5, β = 2, p = 0.5906).

Scheme    MLE θ              MLE β              Bayes θ            Bayes β
          AB      MSE        AB      MSE        AB      MSE        AB      MSE
C1        1.8962  0.2392     2.2272  0.1402     1.5391  0.0308     2.1935  0.1063
C2        1.6264  0.0796     1.9036  0.0717     1.5146  0.0091     1.9057  0.0633
C3        1.3143  0.1313     1.9572  0.3480     1.5036  0.0077     2.0479  0.0229
C4        1.7431  0.1651     2.1342  0.1345     1.5187  0.0168     2.0901  0.1015
C5        1.8116  0.1983     2.2064  0.1580     1.5245  0.0209     2.1402  0.1102
C6        1.5887  0.0678     1.9152  0.0612     1.5348  0.0073     1.9195  0.0545
C7        1.5416  0.0822     1.9231  0.1185     1.5202  0.0085     2.2568  0.1042
C8        1.9462  0.3304     2.2617  0.2074     1.5177  0.0378     2.1987  0.0136
C9        1.6912  0.1153     2.1146  0.1215     1.5112  0.0148     2.0837  0.0972
C10       1.8296  0.1752     2.2846  0.1483     1.5256  0.0257     2.2150  0.1170
C11       1.5951  0.0905     2.3117  0.0934     1.5389  0.0103     2.1123  0.0793
C12       1.3313  0.1030     1.9878  0.2165     1.5462  0.0074     2.1464  0.1057
C13       1.5321  0.0834     2.2832  0.1167     1.5183  0.0084     1.9837  0.1025
C14       1.7883  0.1802     2.1333  0.1231     1.5021  0.0227     2.1027  0.0898
Table 5. Average Length (AL) and Coverage Probability (CP) for MLE and Bayes estimates (θ = 1.5, β = 2, p = 0.5906).

Scheme    MLE θ              MLE β              Bayes θ            Bayes β
          AL      CP         AL      CP         AL      CP         AL      CP
C1        1.1562  0.9190     1.2854  0.9600     0.9483  0.9450     1.1387  0.9480
C2        0.9338  0.9310     1.0429  0.9200     0.6824  0.9650     0.9657  0.9320
C3        0.6760  0.8949     0.7767  0.9700     0.6327  0.9445     0.7454  0.9342
C4        0.9627  0.9180     1.1046  0.9480     0.7352  0.9490     1.0067  0.9450
C5        1.0173  0.9050     1.1761  0.9410     0.7863  0.9380     1.0484  0.9370
C6        0.8786  0.9370     1.0130  0.9300     0.6353  0.9764     0.9374  0.9490
C7        1.0383  0.9340     1.1624  0.9854     0.7698  0.9510     1.0669  0.9588
C8        1.4050  0.9240     1.5376  0.9600     1.0928  0.9437     1.3157  0.9560
C9        1.0084  0.9260     1.1269  0.9530     0.7294  0.9500     1.0150  0.9480
C10       1.1759  0.9130     1.2961  0.9450     0.8410  0.9420     1.1393  0.9400
C11       1.0189  0.9340     1.1688  0.9170     0.7530  0.9550     1.0657  0.9340
C12       0.8570  0.9050     0.9984  0.9615     0.7034  0.9560     0.9414  0.9667
C13       0.9982  0.9140     1.1370  0.9040     0.7429  0.9775     1.0463  0.9879
C14       1.2189  0.9070     1.3781  0.9750     0.9328  0.9534     1.2083  0.9474
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
