Axioms · Article · Open Access · 21 September 2025

Bayesian Stochastic Inference and Statistical Reliability Modeling of Maxwell–Boltzmann Model Under Improved Progressive Censoring for Multidisciplinary Applications

1 Department of Mathematical Sciences, College of Science, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
2 Department of Statistics, Faculty of Commerce, Zagazig University, Zagazig 44519, Egypt
3 Faculty of Technology and Development, Zagazig University, Zagazig 44519, Egypt
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Stochastic and Statistical Analyses in Natural Sciences, Second Edition

Abstract

The Maxwell–Boltzmann (MB) distribution is important because it provides the statistical foundation for connecting microscopic particle motion to macroscopic gas properties by statistically describing molecular speeds and energies, making it essential for understanding and predicting the behavior of classical ideal gases. This study advances the statistical modeling of lifetime distributions by developing a comprehensive reliability analysis of the MB distribution under an improved adaptive progressive censoring framework. The proposed scheme strategically enhances experimental flexibility by dynamically adjusting censoring protocols, thereby preserving more information from test samples compared to conventional designs. Maximum likelihood estimation, interval estimation, and Bayesian inference are rigorously derived for the MB parameters, with asymptotic properties established to ensure methodological soundness. To address computational challenges, Markov chain Monte Carlo algorithms are employed for efficient Bayesian implementation. A detailed exploration of reliability measures—including hazard rate, mean residual life, and stress–strength models—demonstrates the MB distribution’s suitability for complex reliability settings. Extensive Monte Carlo simulations validate the efficiency and precision of the proposed inferential procedures, highlighting significant gains over traditional censoring approaches. Finally, the utility of the methodology is showcased through real-world applications to physics and engineering datasets, where the MB distribution coupled with such censoring yields superior predictive performance. This real-data examination uses two datasets (the failure times of aircraft windshields, capturing degradation under extreme environmental and operational stress, and the failure times of mechanical components) that represent recurrent challenges in industrial systems. This work contributes a unified statistical framework that broadens the applicability of the Maxwell–Boltzmann model in reliability contexts and provides practitioners with a powerful tool for decision making under censored data environments.

1. Introduction

In the latter half of the nineteenth century, James Clerk Maxwell in 1860 and Ludwig Boltzmann in 1871 formulated the probability law governing the distribution of molecular speeds in a gas at a given temperature. This formulation, now known as the Maxwell–Boltzmann (MB( θ )) distribution, constitutes a fundamental statistical model in classical kinetic theory, describing the distribution of molecular speeds in an ideal gas at thermal equilibrium. Beyond kinetic theory, the MB law underpins key thermodynamic relations, such as pressure, diffusion, and thermal transport. Owing to these properties, the distribution finds extensive applications across physics, chemistry, and statistical mechanics, as well as in emerging domains such as engineering reliability analysis, where it serves as a flexible model for lifetime and degradation data (see Rowlinson [1] for details).
Let $Y$ be a non-negative continuous random variable representing a lifetime. If $Y \sim \mathrm{MB}(\theta)$, where $\theta > 0$ denotes a scale parameter, then the probability density function (PDF; $g(\cdot)$) and cumulative distribution function (CDF; $G(\cdot)$) of $Y$ are given, respectively, by
$$g(y;\theta)=\frac{4}{\sqrt{\pi}}\,\frac{y^{2}}{\theta^{3/2}}\,e^{-y^{2}/\theta},\quad y>0,\tag{1}$$
and
$$G(y;\theta)=\Gamma_{3/2}\!\left(\frac{y^{2}}{\theta}\right),\tag{2}$$
where
$$\Gamma_{\alpha}(\beta)=\frac{1}{\Gamma(\alpha)}\int_{0}^{\beta}y^{\alpha-1}e^{-y}\,dy\quad\text{and}\quad\Gamma(\alpha)=\int_{0}^{\infty}y^{\alpha-1}e^{-y}\,dy$$
denote the regularized lower incomplete-gamma and the (complete) gamma functions, respectively.
Function (2) admits the following equivalent form:
$$G(y;\theta)=\operatorname{erf}\!\left(\frac{y}{\sqrt{\theta}}\right)-\frac{2}{\sqrt{\pi}}\,\frac{y}{\sqrt{\theta}}\,e^{-y^{2}/\theta}=1-\frac{4}{\sqrt{\pi}}\,\frac{1}{\theta^{3/2}}\,I(y,2,\theta),\tag{3}$$
where $\operatorname{erf}(\alpha)=\frac{2}{\sqrt{\pi}}\int_{0}^{\alpha}e^{-t^{2}}\,dt$ and $I(y,\alpha,\theta)=\int_{y}^{\infty}t^{\alpha}e^{-t^{2}/\theta}\,dt$.
Moreover, we consider two time-dependent reliability metrics—namely the reliability function (RF; $R(\cdot)$) and the hazard rate function (HRF; $h(\cdot)$) of the MB($\theta$) distribution—at a given mission time $x > 0$, which are given, respectively, by
$$R(x;\theta)=\frac{4}{\sqrt{\pi}}\,\frac{1}{\theta^{3/2}}\,I(x,2,\theta)\tag{4}$$
and
$$h(x;\theta)=\frac{x^{2}\,e^{-x^{2}/\theta}}{I(x,2,\theta)}.\tag{5}$$
Taking several configurations of $\theta$, Figure 1 indicates that the MB’s PDF (1) is right-skewed and unimodal, while the MB’s HRF (5) is strictly increasing in $y$.
Figure 1. The PDF (left) and HRF (right) shapes of the MB( θ ) distribution.
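To make the above formulas concrete, the following minimal R sketch (our own illustration, not code from the paper; the function names dMB, pMB, RMB, and hMB are hypothetical) evaluates (1)–(5) using the identity $G(y;\theta)=\Gamma_{3/2}(y^{2}/\theta)$, which R exposes through pgamma:

# MB(theta) density, CDF, reliability, and hazard rate
dMB <- function(y, theta) (4 / sqrt(pi)) * y^2 * theta^(-3/2) * exp(-y^2 / theta)
pMB <- function(y, theta) pgamma(y^2 / theta, shape = 3/2)   # CDF (2)
RMB <- function(x, theta) 1 - pMB(x, theta)                  # RF (4)
hMB <- function(x, theta) dMB(x, theta) / RMB(x, theta)      # HRF (5)

# e.g., reproduce the shapes in Figure 1:
# curve(dMB(x, 0.5), 0, 3); curve(hMB(x, 0.5), 0, 3)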
Over roughly the last decade, the MB model has evolved into a well-established tool in reliability research, attracting wide-ranging investigations under diverse censoring and sampling schemes. For example, Krishna and Malik [2,3] analyzed the model under conventional Type-II censoring (T2-C) and Type-II progressive censoring (T2-PC), respectively. Tomer and Panwar [4] considered its application under a Type-I hybrid progressive censoring (T1-HPC) scheme, while Chaudhary and Tomer [5] examined stress–strength reliability settings with T2-PC data. Pathak et al. [6] studied the distribution in the context of the T2-PC plan incorporating binomial removals under step-stress partially accelerated life testing. More recently, Elshahhat et al. [7] discussed the MB reliability metrics using samples created from a generalized T2-HPC strategy. These studies, among others, illustrate the model’s adaptability to a variety of experimental designs and reinforce its value as a versatile lifetime distribution for modern engineering reliability analysis.
The T2-PC has become a well-established tool in both reliability engineering and survival analysis, primarily because it offers greater operational flexibility than the T2-C framework. A key advantage of T2-PC is its capacity to progressively withdraw functioning units during the course of the experiment, which is particularly valuable in large-scale industrial testing or long-term clinical investigations. In a standard T2-PC arrangement, $m$ failures are targeted from $n$ identical experimental units, with a predetermined censoring plan $S = (S_1, S_2, \ldots, S_m)$ fixed prior to commencement. Following the first observed failure $Y_{1:m:n}$, a randomly chosen set of $S_1$ surviving units is removed from the remaining $n-1$ items. Upon the second failure $Y_{2:m:n}$, a further $S_2$ units are withdrawn from the updated risk set of $n-2-S_1$ items, and this process then continues. Once the $m$-th failure occurs, all remaining $S_m$ survivors are censored, marking the end of the experiment [8].
Within the broader class of hybrid censoring schemes, Kundu and Joarder [9] introduced the T1-HPC plan. In this setting, $n$ test units are placed under a progressive censoring scheme $S$, and the termination time is defined as $T = \min(t, Y_{m:m:n})$, where $t$ is a fixed inspection time. While this approach limits the maximum test duration, it can lead to a small number of observed failures, thereby reducing the precision of statistical inference. To address this drawback, Ng et al. [10] developed the adaptive Type-II progressive censoring (AT2-PC) strategy, where the number of failures $m$ is fixed in advance but the censoring vector $S$ can be adaptively modified during the experiment. The AT2-PC retains the sequential removal structure of T2-PC while improving estimation efficiency.
However, for test items with exceptionally long lifetimes, the AT2-PC framework may still lead to excessively prolonged experiments. To mitigate this, Yan et al. [11] recently proposed the improved AT2-PC (IAT2-PC) model. This generalization of both T1-HPC and AT2-PC incorporates two finite time thresholds, $\tau_1$ and $\tau_2$ with $0 < \tau_1 < \tau_2 < \infty$, to control the total test duration. Under IAT2-PC, $m$ failures are planned from $n$ starting units, with removals specified by $S$. After each failure $Y_{i:m:n}$, a designated number $S_i$ of surviving units is randomly withdrawn from the remaining set. The numbers of observed failures by times $\tau_1$ and $\tau_2$ are recorded as $d_1$ and $d_2$, respectively, providing additional structure to the stopping rules while ensuring sufficient data for analysis.
Subsequently, the observed data collected by the practitioner will be one of the following censoring fashions:
  • Case-1: If $Y_{m:m:n} < \tau_1$, stop the test at $Y_{m:m:n}$.
  • Case-2: If $\tau_1 < Y_{m:m:n} < \tau_2$, reset $S$ as $S_{d_1+1} = \cdots = S_{m-1} = 0$ and stop the test at $Y_{m:m:n}$.
  • Case-3: If $\tau_2 < Y_{m:m:n}$, reset $S$ as $S_{d_1+1} = \cdots = S_{d_2-1} = 0$ and stop the test at $\tau_2$.
Let $\mathbf{y} = \{(Y_1, S_1), \ldots, (Y_{d_1}, S_{d_1}), (Y_{d_1+1}, 0), \ldots, (Y_{d_2-1}, 0), (Y_{d_2}, S^{*})\}$ denote a sample of size $d_2$, which is obtained under an IAT2-PC from a continuous population characterized by PDF $g(\cdot)$ and CDF $G(\cdot)$. The joint likelihood function (LF) for this sample can be expressed as
$$L(\theta\mid\mathbf{y})\propto\prod_{i=1}^{\Psi_{2}}g(y_{i};\theta)\prod_{i=1}^{\Psi_{1}}\left[1-G(y_{i};\theta)\right]^{S_{i}}\left[1-G(T;\theta)\right]^{S^{*}},\tag{6}$$
where $y_i \equiv Y_{i:m:n}$ for simplicity. For clarity, Table 1 summarizes the censoring quantities $T$, $S^{*}$, and $\Psi_i$ ($i = 1, 2$) that appear in the LF (6). For further clarification, a flowchart of the IAT2-PC plan is designed in Figure 2.
Table 1. Censoring levels of $T$, $S^{*}$, $\Psi_i$ ($i = 1, 2$), and $(y_i, S_i)$.
Figure 2. Flowchart of the IAT2-PC procedure.
For clarity, we provide a simple example to illustrate the choices of $\Psi_1$, $\Psi_2$, and $S^{*}$ in practice (see also the sketch after Table 2). Suppose $\mathbf{y} = (0.1, 0.3, 0.6, 0.8, 1.3, 1.9, 2.4, 2.7, 3.1, 3.5)$ is a T2-PC sample with $m = 10$, obtained using $S = (1, 1, 1, 1, 1, 1, 1, 1, 1, 1)$ from a continuous lifetime population with $n = 20$. Consequently, the practitioner records one of the three IAT2-PC cases depending on the choices of $\tau_1$ and $\tau_2$, as summarized in Table 2.
Table 2. Censoring levels of $T$, $S^{*}$, $\Psi_i$ ($i = 1, 2$), and $(y_i, S_i)$ for the illustrative example under several choices of $\tau_1$ and $\tau_2$.
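To see how the recorded case and the counters $d_1$ and $d_2$ follow from $\tau_1$ and $\tau_2$ in this example, the short R helper below (a hypothetical illustration, not the authors' code) classifies a completed T2-PC sample:

# Classify the IAT2-PC case and count d1, d2 for given thresholds
iat2pc.case <- function(y, tau1, tau2) {
  m  <- length(y)
  d1 <- sum(y <= tau1)                      # failures observed by tau1
  d2 <- sum(y <= tau2)                      # failures observed by tau2
  case <- if (y[m] < tau1) 1 else if (y[m] < tau2) 2 else 3
  list(case = case, d1 = d1, d2 = d2)
}
# iat2pc.case(c(0.1, 0.3, 0.6, 0.8, 1.3, 1.9, 2.4, 2.7, 3.1, 3.5), 1, 2)
# -> Case 3 with d1 = 4 and d2 = 6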
It should be noted here that the IAT2-PC design can be considered an extension of several common censoring schemes, including the following:
  • When $\tau_1 = \tau_2 = t$, the T1-HPC scheme by Kundu and Joarder [9] applies;
  • When $\tau_2 \to \infty$, the AT2-PC scheme by Ng et al. [10] applies;
  • When $\tau_1 \to \infty$, the T2-PC scheme by Balakrishnan and Cramer [8] applies;
  • When $\tau_1 \to 0$, $S_i = 0$ for $i = 1, 2, \ldots, m-1$, and $S_m = n - m$, the T2-C scheme by Bain and Engelhardt [12] applies.
Remark 1. 
The IAT2-PC scheme can be considered an extension of the T1-HPC, AT2-PC, T2-PC, and T2-C schemes.
Proof. 
See Appendix A.    □
In the IAT2-PC mechanism, the first threshold $\tau_1$ functions as an early warning for the elapsed experimental time, signaling that the study has reached a predefined interim stage. In contrast, the second threshold $\tau_2$ defines the absolute maximum duration of the experiment. Should $\tau_2$ be attained before the occurrence of the predetermined number of failures $m$, the study is forcibly terminated at time $\tau_2$. This refinement directly addresses the limitation of the AT2-PC design (by Ng et al. [10]), where no explicit upper bound on the total test duration was guaranteed. By imposing the constraint $\tau_2 < \infty$, the improved censoring scheme ensures that the experiment cannot exceed the designated maximum time, thereby enhancing its practical applicability. In the reliability literature, using likelihood-based inference built on (6), analogous approaches have been applied to various lifetime distributions, including the Weibull [13], Burr Type-III [14], Nadarajah–Haghighi [15], power half-normal [16], and inverted XLindley [17] models.
In reliability theory and lifetime analysis, the development of efficient statistical models under complex censoring schemes has become increasingly important for modern engineering applications. Classical censoring methods often fail to balance experimental cost, information retention, and statistical precision, particularly when dealing with large-scale life tests in engineering systems. The MB distribution, though originally arising in statistical physics, has recently attracted attention in reliability modeling due to its flexibility in characterizing diverse failure mechanisms with monotonic hazard structures. However, its potential remains underexplored in censored reliability contexts. On the other hand, the IAT2-PC scheme has emerged as a promising design for life-testing experiments, allowing dynamic adjustments in the number of units removed during the test, thereby enhancing efficiency while maintaining the robustness of parameter inference. Motivated by these considerations, this study aims to integrate the MB model with an IAT2-PC plan, thereby providing a more powerful and cost-effective approach to engineering failure modeling and statistical inference. The objectives of this study can be summarized in the following six points:
  • This study is the first to investigate the MB distribution under the IAT2-PC scheme, extending its scope in reliability analysis.
  • Likelihood-based inference is developed for model parameters under IAT2-PC, and the existence and uniqueness of the maximum likelihood estimator (MLE) of MB( θ ) is established with numerical evidence.
  • Bayesian estimation methods are proposed using an inverted-gamma prior and Markov chain Monte Carlo techniques, with derivations of posterior summaries and Bayes credible intervals (BCIs).
  • Several interval estimation strategies were compared, including asymptotic (normal and log-normal) intervals and Bayesian (BCI and HPD) intervals, to provide robust inference tools.
  • A comprehensive Monte Carlo simulation study evaluating the precision and efficiency of the proposed estimators under various censoring designs was conducted, demonstrating the advantages of the Bayesian framework.
  • Two real engineering datasets were analyzed, confirming the practical applicability of the proposed modeling framework in real-world reliability experiments.
The structure of the rest of this study is organized as follows. Section 2 and Section 3 present the frequentist and Bayesian estimation methodologies, respectively, while Section 4 develops the associated interval estimators. Section 5 provides Monte Carlo simulation results. Two genuine engineering applications are analyzed in Section 6. Finally, Section 7 summarizes the main conclusions and key insights derived from this work.

2. Likelihood Inference

This section is devoted to the ML estimation of the MB parameter $\theta$ and the reliability metrics $R(x)$ and $h(x)$. Under the assumption of approximate normality for the corresponding estimators, the $(1-\varepsilon)100\%$ ACI estimators are obtained by employing the observed Fisher information (FI) matrix in combination with the delta approach (see Section 4). Utilizing the expressions in (1), (2), and (6), Equation (6) can be equivalently written as follows:
$$L(\theta\mid\mathbf{y})\propto\theta^{-\frac{3n}{2}}\,e^{-\frac{1}{\theta}\sum_{i=1}^{\Psi_{2}}y_{i}^{2}}\,\left[I(T,2,\theta)\right]^{S^{*}}\prod_{i=1}^{\Psi_{1}}\left[I(y_{i},2,\theta)\right]^{S_{i}}.\tag{7}$$
Equivalently, the log-LF of (7) becomes
$$\log L(\theta\mid\mathbf{y})\propto-\frac{3n}{2}\log\theta-\frac{1}{\theta}\sum_{i=1}^{\Psi_{2}}y_{i}^{2}+S^{*}\log I(T,2,\theta)+\sum_{i=1}^{\Psi_{1}}S_{i}\log I(y_{i},2,\theta).\tag{8}$$
Thereafter, the MLE of θ , represented by θ ^ , is determined by maximizing (8), resulting in the following nonlinear expression:
$$\left[\frac{1}{\theta}\left(\frac{1}{\theta}\sum_{i=1}^{\Psi_{2}}y_{i}^{2}-\frac{3n}{2}\right)+\frac{1}{\theta^{2}}\left(S^{*}\,\frac{I(T,4,\theta)}{I(T,2,\theta)}+\sum_{i=1}^{\Psi_{1}}S_{i}\,\frac{I(y_{i},4,\theta)}{I(y_{i},2,\theta)}\right)\right]_{\theta=\hat{\theta}}=0,\tag{9}$$
where we used the identity $\partial I(y,\alpha,\theta)/\partial\theta=\theta^{-2}\,I(y,\alpha+2,\theta)$.
Remark 2. 
The MLE θ ^ of θ, based on the log-LF in (8), exists and is unique for θ > 0 .
Proof. 
See Appendix B.    □
From Equation (9), it is clear that the MLE $\hat{\theta}$ of $\theta$ does not admit a closed-form solution. Consequently, it becomes important to assess both the existence and uniqueness of $\hat{\theta}$. Given the complexity of the score function in (9), an analytical verification is challenging. To overcome this, we investigated these properties numerically by generating an IAT2-PC sample from the MB distribution with true values $\theta = 0.8$ and $\theta = 1.5$ under the configuration $n = 100$, $m = 50$, $(\tau_1, \tau_2) = (1, 2)$, and a uniform T2-PC scheme with $S_i = 1$ for $i = 1, 2, \ldots, m$. The resulting MLE values of $\theta$ were 0.4833 and 2.4643 for the two respective cases. Figure 3 depicts the log-LF (8) alongside the score function from (9) as functions of $\theta$ over a selected range. As shown, the vertical lines corresponding to the MLE intersect the log-likelihood curve at its maximum and cross the score function at zero. These findings confirm that the MLE of $\theta$ exists and is unique for each considered scenario.
Figure 3. Plots of the log-LF versus its gradient function.
Although providing an analytical proof of concavity is challenging due to the nonlinear structure of the likelihood, the uniqueness of the MLE can be justified under general conditions requiring the observed information matrix to be positive definite. Our numerical experiments, including eigenvalue analyses from different initial values, consistently confirm concavity in practice and thus offer strong evidence for uniqueness (see Figure 3).
After obtaining θ ^ , the MLEs of the RF and HRF—denoted by R ^ ( x ) and h ^ ( x ) , respectively—can be readily determined from Equations (4) and (5) for x > 0 as follows:
$$\hat{R}(x)=\frac{4}{\sqrt{\pi}}\,\frac{1}{\hat{\theta}^{3/2}}\,I(x,2,\hat{\theta})$$
and
$$\hat{h}(x)=\frac{x^{2}\,e^{-x^{2}/\hat{\theta}}}{I(x,2,\hat{\theta})},$$
respectively. The Newton–Raphson (NR) algorithm, implemented through the maxLik package (by Henningsen and Toomet [18]) in the R programming environment, was employed to efficiently compute the fitted estimates $\hat{\theta}$, $\hat{R}(x)$, and $\hat{h}(x)$.
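A minimal R sketch of this computation follows. It assumes (our notation, not the paper's code) that the IAT2-PC data are held as the observed failure times y, removal counts S aligned with y (zero where no unit was removed), the final removal count Sstar, the stopping time T, and the initial sample size n; the log-LF (8) is then maximized with maxLik:

library(maxLik)

# I(y, alpha, theta) = integral from y to Inf of t^alpha * exp(-t^2/theta) dt
I.fun <- function(y, alpha, theta)
  integrate(function(t) t^alpha * exp(-t^2 / theta), y, Inf)$value

# Log-likelihood (8), up to an additive constant
logLik.MB <- function(theta, y, S, Sstar, T, n) {
  if (theta <= 0) return(NA)
  -1.5 * n * log(theta) - sum(y^2) / theta +
    Sstar * log(I.fun(T, 2, theta)) +
    sum(S * log(sapply(y, I.fun, alpha = 2, theta = theta)))
}

# Newton-Raphson maximization, as done via 'maxLik' in the paper:
# fit <- maxLik(logLik.MB, start = c(theta = 0.5), method = "NR",
#               y = y, S = S, Sstar = Sstar, T = T, n = n)
# theta.hat <- coef(fit); vhat <- vcov(fit)[1, 1]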

3. Bayesian Inference

This section is devoted to deriving both point and credible Bayesian estimates for $\theta$, $R(x)$, and $h(x)$. Within the Bayesian framework, the specification of prior distributions and loss functions is fundamental. As highlighted by Bekker and Roux [19] and Elshahhat et al. [7], selecting an appropriate prior is often nontrivial, with no universally accepted guideline. Considering that the MB parameter $\theta$ is strictly positive, the inverted-gamma $\mathrm{Inv.G}(\alpha, \beta)$ distribution offers a natural and versatile prior choice. Accordingly, we assume $\theta \sim \mathrm{Inv.G}(\alpha, \beta)$, where $\alpha, \beta > 0$ denote the hyperparameters. The Inv.G distribution (10) is a flexible prior for positive parameters, with heavier tails than the gamma and less mass near zero. Its shape varies with the parameter values, and it yields simple posterior forms. Other conjugate priors, such as gamma, normal-inverse-gamma, and inverse-gamma-gamma, can also be considered.
To evaluate the sensitivity of the results to prior selection, we additionally considered a conjugate gamma prior. The posterior summaries derived under this alternative prior were nearly indistinguishable from those obtained with the proposed Inv.G prior, indicating that the Bayesian inference is robust to reasonable variations in prior specification.
Then, the prior PDF, denoted by ϑ ( · ) , is given by
$$\vartheta(\theta;\alpha,\beta)=\frac{\beta^{\alpha}}{\Gamma(\alpha)}\,\theta^{-(\alpha+1)}\,e^{-\beta/\theta},\quad\theta>0,\tag{10}$$
where Γ ( · ) denotes the gamma function, and α > 0 and β > 0 are known. From (7) and (10), the posterior PDF (say Θ ( · ) ) of θ is
$$\Theta(\theta\mid\mathbf{y})\propto\theta^{-\left(\frac{3n}{2}+\alpha+1\right)}\,e^{-\frac{1}{\theta}\left(\beta+\sum_{i=1}^{\Psi_{2}}y_{i}^{2}\right)}\,\left[I(T,2,\theta)\right]^{S^{*}}\prod_{i=1}^{\Psi_{1}}\left[I(y_{i},2,\theta)\right]^{S_{i}},\tag{11}$$
where its normalized term (say, Ξ ) is given by
$$\Xi=\int_{0}^{\infty}\vartheta(\theta)\,L(\theta\mid\mathbf{y})\,d\theta.$$
We then adopted the squared-error loss (SEL) function as the primary loss criterion, owing to its widespread use and symmetric nature in Bayesian analysis. Nevertheless, the proposed approach can be readily extended to alternative loss functions. Leveraging the SEL, the Bayes estimate (say $\tilde{\theta}$) of $\theta$ is acquired as the posterior mean:
$$\tilde{\theta}=\Xi^{-1}\int_{0}^{\infty}\theta\,\Theta(\theta\mid\mathbf{y})\,d\theta.$$
Given the posterior distribution in (11) and the nonlinear form of the LF in (7), the Bayes estimates of $\theta$, $R(x)$, and $h(x)$ under SEL are analytically intractable. Therefore, we employed the MCMC method to generate Markovian samples from (11), which were subsequently used to compute the Bayesian estimates and construct the corresponding BCI/HPD intervals for each parameter. Although the posterior density of $\theta$ in (11) does not correspond to any standard continuous distribution, Figure 4 (drawn under the same censoring setting as Figure 3) indicates that its conditional form closely approximates a normal distribution. Consequently, following Algorithm 1, the Metropolis–Hastings (M–H) algorithm is applied to update the posterior samples of $\theta$, after which the Bayes estimates for $\theta$, $R(x)$, and $h(x)$ are obtained.
Algorithm 1 The M–H Algorithm for Sampling $\theta$, $R(x)$, and $h(x)$
1: Input: Initial estimate $\hat{\theta}$, estimated variance $\hat{V}(\hat{\theta})$, total iterations $N$, burn-in $N_0$, confidence level $(1-\varepsilon)$
2: Output: Posterior estimates $\tilde{\theta}$, $\tilde{R}(x)$, and $\tilde{h}(x)$
3: Set $\theta^{(0)} \leftarrow \hat{\theta}$
4: Set $t \leftarrow 1$
5: while $t \le N$ do
6:  Generate $\theta^{\star} \sim N(\theta^{(t-1)}, \hat{V}(\hat{\theta}))$
7:  Compute the acceptance ratio $W = \min\left\{1, \Theta(\theta^{\star}\mid\mathbf{y})/\Theta(\theta^{(t-1)}\mid\mathbf{y})\right\}$
8:  Generate $u \sim U(0,1)$
9:  if $u \le W$ then
10:   Set $\theta^{(t)} \leftarrow \theta^{\star}$
11:  else
12:   Set $\theta^{(t)} \leftarrow \theta^{(t-1)}$
13:  end if
14:  Update $R(x)$ and $h(x)$ using $\theta^{(t)}$ in (4) and (5)
15:  Increment $t \leftarrow t+1$
16: end while
17: Discard the first $N_0$ samples as burn-in
18: Define $\bar{N} = N - N_0$
19: Compute:
$$\tilde{\theta}=\frac{1}{\bar{N}}\sum_{t=N_0+1}^{N}\theta^{(t)},\quad\tilde{R}(x)=\frac{1}{\bar{N}}\sum_{t=N_0+1}^{N}R^{(t)}(x),\quad\tilde{h}(x)=\frac{1}{\bar{N}}\sum_{t=N_0+1}^{N}h^{(t)}(x)$$
Figure 4. Posterior density plots of θ .
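A compact R sketch of Algorithm 1 (a random-walk M–H sampler, assuming the logLik.MB function and the theta.hat/vhat quantities from the earlier sketch, with a and b denoting the Inv.G hyperparameters) is as follows:

# Log-posterior (11), up to an additive constant
log.post <- function(theta, a, b, ...) {
  if (theta <= 0) return(-Inf)              # respect the support theta > 0
  -(a + 1) * log(theta) - b / theta + logLik.MB(theta, ...)
}

mh.MB <- function(N, N0, theta0, sd.prop, ...) {
  draws <- numeric(N)
  cur <- theta0; lp.cur <- log.post(cur, ...)
  for (t in 1:N) {
    prop <- rnorm(1, cur, sd.prop)          # step 6: proposal N(theta^(t-1), Vhat)
    lp.prop <- log.post(prop, ...)
    if (is.finite(lp.prop) && log(runif(1)) <= lp.prop - lp.cur) {
      cur <- prop; lp.cur <- lp.prop        # steps 9-10: accept the proposal
    }
    draws[t] <- cur                         # step 12: otherwise keep current state
  }
  draws[-(1:N0)]                            # step 17: discard burn-in
}
# post <- mh.MB(N = 12000, N0 = 2000, theta0 = theta.hat, sd.prop = sqrt(vhat),
#               a = 0.001, b = 0.001, y = y, S = S, Sstar = Sstar, T = T, n = n)
# theta.tilde <- mean(post)                 # SEL Bayes estimate (step 19)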

4. Interval Estimation

This section develops both asymptotic and credible interval estimates for the parameters θ , R ( x ) , and h ( x ) . The asymptotic intervals—comprising the normal approximation (ACI[NA]) and normal approximation via logarithmic transformation (ACI[NL])—are constructed using the observed Fisher information. In parallel, Bayesian intervals—specifically the BCI and HPD—are obtained from the MCMC samples of the respective parameters.

4.1. Asymptotic Intervals

Beyond the point estimation, it is of considerable practical interest to construct the $100(1-\varepsilon)\%$ ACI estimators for the parameters $\theta$, $R(x)$, and $h(x)$. Relying on the asymptotic properties of the MLE $\hat{\theta}$, these ACIs can be obtained by noting that $\hat{\theta}$ is approximately normally distributed with mean $\theta$ and variance–covariance (VC) matrix $V(\cdot)$, which is typically derived from the FI matrix $I(\cdot)$. However, due to the analytical intractability of the exact Fisher information, a more feasible approach is to approximate $V(\cdot)$ via the observed FI matrix evaluated at $\theta = \hat{\theta}$, which is denoted by $I(\cdot)|_{\theta=\hat{\theta}}$. Hence, the VC matrix can be estimated as follows:
$$V(\hat{\theta})=I^{-1}(\theta)\big|_{\theta=\hat{\theta}},$$
where
$$I(\theta)=-\frac{3n}{2\theta^{2}}+\frac{2}{\theta^{3}}\sum_{i=1}^{\Psi_{2}}y_{i}^{2}+S^{*}\left[\frac{2}{\theta^{3}}\frac{I(T,4,\theta)}{I(T,2,\theta)}-\frac{1}{\theta^{4}}\frac{I(T,2,\theta)\,I(T,6,\theta)-I^{2}(T,4,\theta)}{I^{2}(T,2,\theta)}\right]+\sum_{i=1}^{\Psi_{1}}S_{i}\left[\frac{2}{\theta^{3}}\frac{I(y_{i},4,\theta)}{I(y_{i},2,\theta)}-\frac{1}{\theta^{4}}\frac{I(y_{i},2,\theta)\,I(y_{i},6,\theta)-I^{2}(y_{i},4,\theta)}{I^{2}(y_{i},2,\theta)}\right],$$
which follows from $I(\theta)=-\partial^{2}\log L/\partial\theta^{2}$ together with the identity $\partial I(y,\alpha,\theta)/\partial\theta=\theta^{-2}\,I(y,\alpha+2,\theta)$.
For the construction of the $100(1-\varepsilon)\%$ ACIs corresponding to the RF $R(x)$ and the HRF $h(x)$, it is first necessary to obtain estimates of the variances of their respective estimators, $\hat{R}(x)$ and $\hat{h}(x)$. A common and effective approach for this purpose is the delta method, which provides variance approximations—denoted by $\hat{V}_{\hat{R}}$ and $\hat{V}_{\hat{h}}$, respectively. Following the methodology of Greene [20], the delta method relies on the assumption that $\hat{R}(x)$ is approximately normally distributed with mean $R(x)$ and variance $\hat{V}_{\hat{R}}$, while $\hat{h}(x)$ follows an approximate normal distribution with mean $h(x)$ and variance $\hat{V}_{\hat{h}}$. Accordingly, the associated variances $\hat{V}_{\hat{R}}$ and $\hat{V}_{\hat{h}}$ of $\hat{R}(x)$ and $\hat{h}(x)$ are given by
$$\hat{V}(\hat{R})\simeq\left[A_{R}\,V(\theta)\,A_{R}\right]\big|_{\theta=\hat{\theta}}\quad\text{and}\quad\hat{V}(\hat{h})\simeq\left[A_{h}\,V(\theta)\,A_{h}\right]\big|_{\theta=\hat{\theta}},$$
where
$$A_{R}=\frac{4}{\sqrt{\pi}}\,\frac{1}{\theta^{5/2}}\left[\frac{1}{\theta}\,I(x,4,\theta)-\frac{3}{2}\,I(x,2,\theta)\right]\quad\text{and}\quad A_{h}=h(x;\theta)\left[\frac{x^{2}}{\theta^{2}}-\frac{1}{2\theta}\left(x\,h(x;\theta)+3\right)\right],$$
respectively; the compact form of $A_{h}$ uses the integration-by-parts identity $I(x,4,\theta)=\frac{\theta}{2}x^{3}e^{-x^{2}/\theta}+\frac{3\theta}{2}I(x,2,\theta)$.
Thus, the respective ( 1 ε ) 100 % ACI[NA] estimators of θ , R ( x ) , and h ( x ) (at a significance level ε % ) are given by
$$\hat{\theta}\mp z_{\varepsilon/2}\sqrt{\hat{V}(\hat{\theta})},\quad\hat{R}(x)\mp z_{\varepsilon/2}\sqrt{\hat{V}(\hat{R})},\quad\text{and}\quad\hat{h}(x)\mp z_{\varepsilon/2}\sqrt{\hat{V}(\hat{h})},$$
where z ε 2 is the upper ( ε 2 ) th standard Gaussian percentile point.
A notable drawback of the traditional ACI[NA] approach is that it may produce negative lower bounds for parameters that are intrinsically restricted to positive values. In such situations, it is common practice to truncate negative bounds at zero; however, this constitutes a heuristic adjustment rather than a statistically rigorous solution. To overcome this limitation and enhance the robustness of interval estimation, Meeker and Escobar [21] advocate the use of a log-transformation, leading to the ACI[NL].
Accordingly, the ( 1 ε ) 100 % ACI[NL] estimators of θ , R ( x ) , and h ( x ) are expressed as
$$\hat{\theta}\exp\left(\mp z_{\varepsilon/2}\,\hat{\theta}^{-1}\sqrt{\hat{V}(\hat{\theta})}\right),\quad\hat{R}(x)\exp\left(\mp z_{\varepsilon/2}\,\hat{R}^{-1}(x)\sqrt{\hat{V}(\hat{R})}\right),\quad\text{and}\quad\hat{h}(x)\exp\left(\mp z_{\varepsilon/2}\,\hat{h}^{-1}(x)\sqrt{\hat{V}(\hat{h})}\right),$$
respectively.
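Both interval types reduce to a few lines of R; the sketch below (a hypothetical helper, not the authors' code) assumes est and vhat hold a point estimate and its estimated variance, with the delta-method variances taking the place of vhat for $R(x)$ and $h(x)$:

aci <- function(est, vhat, eps = 0.05) {
  z  <- qnorm(1 - eps / 2)
  se <- sqrt(vhat)
  list(NA.int = c(est - z * se, est + z * se),       # ACI[NA]
       NL.int = est * exp(c(-1, 1) * z * se / est))  # ACI[NL]
}
# aci(theta.hat, vhat)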

4.2. Bayesian Intervals

This part constructs the ( 1 ε ) 100 % BCI and HPD interval estimators for θ , R ( x ) , and h ( x ) based on the MCMC samples generated via Algorithm 1. The detailed steps for implementing the interval estimation procedure are outlined in Algorithm 2.
Algorithm 2 The Bayesian Interval Algorithm for $\theta$, $R(x)$, and $h(x)$
1: Input: Confidence level $(1-\varepsilon)$ and the $\bar{N}$ post-burn-in draws $\theta^{(t)}$, $t = N_0+1, \ldots, N$, from Algorithm 1
2: Output: BCI and HPD intervals of $\theta$, $R(x)$, and $h(x)$
3: Sort $\theta^{(t)}$ for $t = N_0+1, \ldots, N$ in ascending order as $\theta_{(1)}, \ldots, \theta_{(\bar{N})}$
4: Compute the $(1-\varepsilon)100\%$ BCI of $\theta$ as $\left(\theta_{(\bar{N}\varepsilon/2)}, \theta_{(\bar{N}(1-\varepsilon/2))}\right)$
5: Compute the $(1-\varepsilon)100\%$ HPD interval of $\theta$ as $\left(\theta_{(t^{\star})}, \theta_{(t^{\star}+(1-\varepsilon)\bar{N})}\right)$, where $t^{\star}$ is the index that minimizes $\theta_{(t+(1-\varepsilon)\bar{N})} - \theta_{(t)}$ for $t = 1, \ldots, \varepsilon\bar{N}$
6: Redo Steps 3–5 for $R(x)$ and $h(x)$
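In practice, Algorithm 2 amounts to a sorted-quantile computation; a minimal sketch using the coda package (cited in Section 5), with post denoting the retained MCMC draws from Algorithm 1, is:

library(coda)
bci <- quantile(post, probs = c(0.025, 0.975))    # equal-tail 95% BCI (Step 4)
hpd <- HPDinterval(as.mcmc(post), prob = 0.95)    # 95% HPD interval (Step 5)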

5. Monte Carlo Comparisons

To assess the accuracy and practical utility of the estimators for $\theta$, $R(x)$, and $h(x)$ derived in the preceding sections, we conducted a series of Monte Carlo simulation experiments. Following the steps outlined in Algorithm 3, the IAT2-PC procedure was executed 1000 times for each chosen value of the parameter $\theta$ (specifically, $\theta = 0.5$ and $1.5$), enabling the computation of both point estimates and interval estimates for the parameters of interest. For a fixed $x = 0.25$, the corresponding reliability and hazard function values, $(R(x), h(x))$, are $(0.96914, 0.36327)$ for $\theta = 0.5$ and $(0.99375, 0.07411)$ for $\theta = 1.5$. The simulations are carried out under multiple configurations defined by combinations of the threshold parameters $\tau_i$ ($i = 1, 2$), total sample sizes $n$, effective sample sizes $m$, and censoring schemes $S$. In particular, we considered $\tau_1 \in \{1, 2\}$, $\tau_2 \in \{2, 3\}$, and $n \in \{30, 50, 80\}$.
Table 3 lists, for each value of $n$, the selected values of $m$ along with the corresponding T2-PC schemes $S$. For brevity, a notation such as ‘A1[1] $(5^{[4]}, 0^{[6]})$’ indicates that five units are removed at each of the first four censoring stages, after which no further removals are made at the remaining six stages.
Algorithm 3 Simulate IAT2-PC Dataset from MB Lifespan Population
1: Input: Set values for $n$, $m$, $\tau_i$ ($i = 1, 2$), and $S$
2: Set the true value of the MB($\theta$) parameter
3: Generate $m$ independent uniform random variables $a_1, a_2, \ldots, a_m \sim U(0,1)$
4: for $i = 1$ to $m$ do
5:  Compute $\epsilon_i = a_i^{\left(i+\sum_{j=m-i+1}^{m} S_j\right)^{-1}}$
6: end for
7: for $i = 1$ to $m$ do
8:  Compute $U_i = 1 - \prod_{j=m-i+1}^{m} \epsilon_j$
9: end for
10: for $i = 1$ to $m$ do
11:  Compute $y_i = G^{-1}(U_i; \theta)$
12: end for
13: Observe $d_1$ failures at time $\tau_1$
14: Remove observations $y_i$, $i = d_1+2, \ldots, m$
15: Set the truncated sample size $n - d_1 - 1 - \sum_{i=1}^{d_1} S_i$
16: Generate $m - d_1 - 1$ order statistics $y_{d_1+2}, \ldots, y_m$ from the truncated PDF $g(y)/\left[1 - G(y_{d_1+1})\right]$
17: if $y_m < \tau_1$ then
18:  Case 1: end the test at $y_m$
19: else if $\tau_1 < y_m < \tau_2$ then
20:  Case 2: end the test at $y_m$
21: else if $\tau_2 < y_m$ then
22:  Case 3: end the test at $\tau_2$
23: end if
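Steps 3–12 of Algorithm 3 coincide with the classical progressive-sampling transformation (see Balakrishnan and Cramer [8]); the R sketch below (our own illustration) implements exactly those steps for the MB($\theta$) model, omitting the adaptive regeneration steps 13–23 for brevity:

rMB.T2PC <- function(n, m, S, theta) {
  stopifnot(length(S) == m, n == m + sum(S))
  a   <- runif(m)                               # step 3
  eps <- a^(1 / (1:m + cumsum(rev(S))))         # step 5: eps_i
  U   <- 1 - cumprod(rev(eps))                  # step 8: U_i (ascending)
  sqrt(theta * qgamma(U, shape = 3/2))          # step 11: y_i = G^{-1}(U_i; theta)
}
# y <- rMB.T2PC(n = 20, m = 10, S = rep(1, 10), theta = 1.5)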
Table 3. Different T2-PC designs in the Monte Carlo simulations.
To further examine the sensitivity of the proposed estimation procedures within the Bayesian framework, we investigated two distinct sets of hyperparameters for each value of MB($\theta$). Adopting the prior elicitation approach outlined by Kundu [22], the hyperparameters $\alpha$ and $\beta$ in the Inv.G prior PDF are specified. We then adopted two prior groups (PGs) for $(\alpha, \beta)$ for each value of MB($\theta$) as follows:
  • At θ = 0.5 : PG-1:(2.5,6) and PG-2:(5,11);
  • At θ = 1.5 : PG-1:(7.5,6) and PG-2:(15,11).
Bayesian inference always depends on the choice of prior beliefs. A sensitivity analysis checks how much the results change when different reasonable priors are used. This is especially important when prior knowledge is uncertain or partly subjective, or when guidelines require that results be shown to be reliable. To test the strength and trustworthiness of the findings, the sensitivity analysis was performed with four kinds of priors: informative (using PG-2 as a representative case), noninformative (with $\alpha = 1.5$, $\beta = 1$), weakly informative (with $\alpha = \beta = 0.001$), and overdispersed (with $\alpha = 15$, $\beta = 0.01$). Figure 5 illustrates the sensitivity of the posterior estimates for $\theta$ to these four prior types. It shows that posterior means are robust across different prior choices, while posterior uncertainty (credible-interval width and standard deviation) is highly sensitive to prior informativeness. Informative priors yield the most concentrated posterior distributions, whereas weakly informative and overdispersed priors produce wider spreads, reflecting higher uncertainty. For further details on the sensitivity analysis of posterior estimates under two (or more) different prior choices, see Alotaibi et al. [23].
Figure 5. Plot for different prior distributions of θ from MB(1.5).
Once 1000 IAT2-PC datasets were generated, the frequentist estimates and their corresponding 95% ACI[NA] and ACI[NL] estimates for $\theta$, $R(x)$, and $h(x)$ were computed using the maxLik package (Henningsen and Toomet [18]) in the R software environment (version 4.2.2).
For the Bayesian analysis, 12,000 MCMC samples were drawn, with the first 2000 iterations discarded as burn-in. The resulting Bayesian estimates, along with their 95% BCI and HPD interval estimates for $\theta$, $R(x)$, and $h(x)$, were computed using the coda package (Plummer et al. [24]) within the same R environment. For the Bayesian implementation, we employed the M–H algorithm with Gaussian proposal distributions centered at the current state. The proposal variances were tuned in preliminary runs to maintain acceptance rates between 0.80 and 0.95, ensuring good mixing properties. All parameters were restricted to the positive domain, consistent with the support of the MB distribution. Across all simulation settings, the monitored acceptance rates confirmed the stable performance of the chains.
Then, to evaluate the offered estimators obtained for the MB( θ ) parameter, we calculated the following metrics:
  • Mean Point Estimate: $\mathrm{MPE}(\check{\theta})=\frac{1}{1000}\sum_{i=1}^{1000}\check{\theta}^{[i]}$;
  • Root Mean Squared Error: $\mathrm{RMSE}(\check{\theta})=\sqrt{\frac{1}{1000}\sum_{i=1}^{1000}\left(\check{\theta}^{[i]}-\theta\right)^{2}}$;
  • Average Relative Absolute Bias: $\mathrm{ARAB}(\check{\theta})=\frac{1}{1000}\sum_{i=1}^{1000}\frac{1}{\theta}\left|\check{\theta}^{[i]}-\theta\right|$;
  • Average Interval Length: $\mathrm{AIL}_{95\%}(\theta)=\frac{1}{1000}\sum_{i=1}^{1000}\left[U_{\check{\theta}^{[i]}}-L_{\check{\theta}^{[i]}}\right]$;
  • Coverage Percentage: $\mathrm{CP}_{95\%}(\theta)=\frac{1}{1000}\sum_{i=1}^{1000}O_{\left(L_{\check{\theta}^{[i]}};\,U_{\check{\theta}^{[i]}}\right)}(\theta)$,
where θ ˇ [ i ] denotes the estimate of θ obtained from the ith sample, O ( · ) represents the indicator function, and ( L ( · ) , U ( · ) ) denotes the two-sided ( 1 ε ) × 100 % asymptotic (or credible) interval for θ . The same precision measures can analogously be applied to R ( x ) and h ( x ) .
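Each metric translates into one line of R; the sketch below assumes est, L, and U are vectors holding the 1000 simulated point estimates and interval bounds, and theta is the true value:

MPE  <- mean(est)                         # mean point estimate
RMSE <- sqrt(mean((est - theta)^2))       # root mean squared error
ARAB <- mean(abs(est - theta) / theta)    # average relative absolute bias
AIL  <- mean(U - L)                       # average interval length
CP   <- mean(L <= theta & theta <= U)     # coverage percentage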
In Table 4, Table 5 and Table 6, the point estimation results—including mean point estimates (MPEs; 1st column), root mean squared errors (RMSEs; 2nd column), and mean relative absolute biases (ARABs; 3rd column)—for θ , R ( x ) , and h ( x ) are reported. Correspondingly, Table 7, Table 8 and Table 9 present the average interval lengths (AILs; 1st column) and coverage probabilities (CPs; 2nd column) for the same parameters. In the Supplementary File, the simulation results associated with θ , R ( x ) , and h ( x ) when θ = 1.5 are provided. The key findings, emphasizing configurations with the lowest RMSEs, ARABs, and AILs alongside the highest CPs, are summarized as follows:
Table 4. Point estimations of θ .
Table 5. Point estimations of R ( x ) .
Table 6. Point estimations of h ( x ) .
Table 7. Interval estimations of θ .
Table 8. Interval estimations of R ( x ) .
Table 9. Interval estimations of h ( x ) .
  • Across all configurations, the estimation results for θ , R ( x ) , and h ( x ) exhibited satisfactory performance.
  • Increasing the total sample size n (or the effective sample size m) leads to improved estimation accuracy for all parameters. A comparable improvement was observed when the total number of removals, i = 1 m S i , was reduced.
  • Higher threshold values τ i ( i = 1 , 2 ) enhance estimation precision, as evidenced by reductions in RMSEs, ARABs, and AILs, along with corresponding increases in CPs.
  • When the value of θ increases, the following apply:
    Both the RMSEs and ARABs of θ and h ( x ) tend to increase while those of R ( x ) tend to decrease;
    The AILs of θ tend to increase while those of R ( x ) and h ( x ) tend to decrease;
    The CPs of R ( x ) increase while those of θ and h ( x ) decrease.
  • The Bayesian estimators obtained via MCMC, together with their credible intervals, demonstrated greater robustness compared to frequentist estimators, primarily due to the incorporation of informative Inv.G priors.
  • For all considered values of θ , the Bayesian estimator under PG-2 consistently outperformed other approaches. This superiority is attributable to the lower prior variance of PG-2 relative to PG-1. A similar advantage was observed when comparing credible intervals (BCI and HPD) to asymptotic intervals (ACI[NA] and ACI[NL]).
  • A comparative assessment of the T2–PC designs listed in Table 3 revealed the following:
    For θ and h ( x ) , configurations C i [ j ] ( i = 1 , 2 , 3 and j = 3 , 6 ) yielded the most accurate point and interval estimates;
    For R ( x ) , configurations A i [ j ] ( i = 1 , 2 , 3 and j = 1 , 4 ) provided the highest estimation accuracy.
  • Regarding interval estimation methods, the following apply:
    The ACI[NA] method outperforms ACI[NL] for estimating θ and h ( x ) , while the ACI[NL] method performs better for R ( x ) ;
    The HPD method yields superior interval estimates for all parameters compared to BCI;
    Overall, Bayesian interval estimators (BCI or HPD) surpass asymptotic interval estimators (ACI[NA] or ACI[NL]) in performance.
  • In conclusion, for analyzing the Maxwell–Boltzmann lifetimes under the proposed sampling strategy, the Bayesian framework implemented via the MCMC methodology—particularly using the M–H algorithm with Inv.G prior knowledge—is recommended as the most effective approach.

6. Data Analysis

This section aims to illustrate the practical applicability of the proposed estimators of θ , R ( x ) , and h ( x ) , as well as to verify the effectiveness of the developed estimation procedures. For this purpose, two real-world datasets, drawn from the domain of engineering, were analyzed.

6.1. Aircraft Windshield

The aircraft windshield is a critical safety component, protecting pilots and passengers from aerodynamic forces, environmental hazards, and bird strikes, while ensuring clear visibility for navigation. The windshield data reported by Murthy et al. [25] provide a valuable empirical basis for assessing and improving reliability, consisting of eighty-four recorded failure times along with additional censored observations from in-service use.
For numerical stability, the original failure times in the windshield data were divided by a constant factor of ten before being listed in Table 10. This transformation only rescales the parameter estimates without affecting the likelihood shape or the inferential conclusions.
Table 10. The failure times of 84 aircraft windshields.
Firstly, we assessed the validity of the proposed MB ( θ ) model using the point and interval estimates of θ , R ( x ) , and h ( x ) . To achieve this, we examined the MLE of θ , along with its standard error (SEr), and the corresponding 95% ACI[NA] and ACI[NL]—including their interval lengths (Int.Ws). Additionally, we reported the Kolmogorov–Smirnov (KS) test statistic and its associated p-value for goodness-of-fit evaluation.
From Table 10, the results are as follows: MLE(SEr) of $\theta$: 0.0525(0.0046); 95% ACI[NA][Int.W]: (0.0435, 0.0614)[0.0179]; 95% ACI[NL][Int.W]: (0.0442, 0.0622)[0.0180]; and KS statistic (p-value): 0.0604(0.9056). These findings show that the MB model fits the windshield data well, as evidenced by the narrow confidence intervals and the large p-value of the KS test. To benchmark the MB lifetime distribution, the windshield data were also fitted with the lognormal and gamma models. Model performance was assessed based on the maximized log-likelihood (LL) values, the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and the corresponding p-value of the KS goodness-of-fit test. The results, summarized in Table 11, indicate that the MB distribution attained the highest LL and the lowest information criteria while also achieving the largest KS p-value, confirming its superiority among the considered candidates. As a result, the MB model provided the best fit to the observed windshield data, followed by the gamma model, whereas the lognormal model showed poor support.
Table 11. Comparison of the MB, gamma, and lognormal distributions from the windshield dataset.
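As a quick check, the reported KS result can be reproduced in R by testing the rescaled failure times against the fitted MB CDF; the sketch below assumes the data vector is named wind (a hypothetical name):

theta.hat <- 0.0525                       # MLE reported above
ks.test(wind, function(q) pgamma(q^2 / theta.hat, shape = 3/2))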
To visually examine the numerical fitting results of the MB model, Figure 6 displays the estimated and empirical $R(y)$ curves, the estimated and empirical probability–probability (PP) plots, the estimated and empirical quantile–quantile (QQ) plots, the data histogram with the estimated PDF curve, the scaled total time on test (TTT) transform curve, and a combined boxplot-within-violin plot. As anticipated, based on the complete windshield data summarized in Table 10, the subplots in Figure 6a–d indicate a satisfactory fit of the MB model, aligning well with the numerical goodness-of-fit results; Figure 6e shows that the data exhibit an increasing HRF, which is consistent with the theoretical MB failure rate; lastly, Figure 6f appears relatively symmetric with a moderate spread, lacking extreme outliers or secondary peaks, which is indicative of a more uniform underlying process.
Figure 6. Six fitting diagrams of the MB model from the windshield dataset.
Plotting the profile log-likelihood against its normal equation, Figure 7 confirms both the existence and uniqueness of the MLE $\hat{\theta}$. Accordingly, we suggest using $\hat{\theta} \approx 0.0525$ as the initial value in subsequent estimation procedures.
Figure 7. Profile log-likelihood of the MB model from the windshield dataset.
From the complete windshield dataset, three artificial IAT2-PC samples, each with a fixed $m = 44$ but different choices of $S$ and threshold values $\tau_i$ ($i = 1, 2$), were generated and are summarized in Table 12. In the absence of prior knowledge about the MB($\theta$) model for the windshield data, an improper Inv.G prior with hyperparameters $\alpha = \beta = 0.001$ was adopted for updating $\theta$. The frequentist point estimates of $\theta$ were employed as initial values for the proposed MCMC sampler. After discarding the first $N_0 = 5000$ iterations from a total of $N = 30{,}000$, Bayesian point estimates and their corresponding credible interval estimates for $\theta$, $R(x)$, and $h(x)$ were obtained. For each configuration $S^{[i]}$, $i = 1, 2, 3$, Table 13 reports the point estimates (with standard errors) and the associated 95% interval estimates (with widths) of $\theta$, $R(x)$, and $h(x)$ at $x = 0.1$. The results indicate that the likelihood-based estimates of $\theta$, $R(x)$, and $h(x)$ are generally close to their Bayesian counterparts. However, the Bayesian estimates consistently achieve smaller standard errors, and both the 95% BCI and HPD intervals were narrower than the corresponding 95% ACI[NA] and ACI[NL] intervals, reflecting higher precision.
Table 12. Different IAT2-PC samples from the windshield dataset.
Table 13. Estimates of θ , R ( x ) , and h ( x ) from the windshield data.
To examine the existence and uniqueness of the MLE $\hat{\theta}$ of $\theta$, Figure 8 presents the log-likelihood curves together with their corresponding first-derivative (score) functions for all samples $S^{[i]}$, $i = 1, 2, 3$, across varying values of $\theta$. In each configuration, the plots confirm that the MLE $\hat{\theta}$ exists and is unique, in agreement with the numerical results in Table 13, thereby justifying its use as the initial value in subsequent Bayesian inference procedures.
Figure 8. Plots of the log-likelihood/gradient functions of θ from the windshield data.
The convergence behavior of the MCMC samples for θ , R ( x ) , and h ( x ) was evaluated through trace plots and posterior density plots, as shown in Figure 9. For clarity, the posterior mean and the 95 % BCI bounds are indicated by red solid and red dashed lines, respectively. These plots demonstrate satisfactory convergence of the Markov chains, with posterior samples of θ being approximately symmetric, while R ( x ) and h ( x ) exhibit negative and positive skewness, respectively.
Figure 9. Plotting 25,000 MCMC iterations of θ , R ( x ) , and h ( x ) from the windshield dataset.
From the remaining $\bar{N} = 25{,}000$ post-burn-in MCMC iterations, several summary statistics for $\theta$, $R(x)$, and $h(x)$ were computed, including the posterior mean, mode, quartiles $Q_i$ ($i = 1, 2, 3$), standard deviation (SD), and skewness (Sk.) (see Table 14). These results are consistent with those in Table 13 and the visual evidence from Figure 9.
Table 14. Statistics for the 25,000 MCMC iterations of θ , R ( x ) , and h ( x ) from the windshield data.
Finally, Figure 10 compares the 95 % ACI[NA]/ACI[NL] and BCI/HPD boundaries for the reliability indices R ( x ) and h ( x ) across all data points in S [ i ] , i = 1 , 2 , 3 from the windshield dataset. The BCI and HPD methods yielded shorter Int.W findings than the corresponding ACI[NA] and ACI[NL] approaches, confirming the greater precision suggested by the results in Table 13.
Figure 10. Plots for the 95% Int.W results of R ( x ) and h ( x ) from the windshield data.

6.2. Mechanical Components

Mechanical components form the fundamental building blocks of machinery and engineered systems, and their reliability directly influences operational safety, performance efficiency, and maintenance costs across industrial sectors. The failure times of twenty-four mechanical components, originally reported by Murthy et al. [25], constitute a widely used benchmark dataset in reliability and lifetime analysis. The observations, ranging from 10.24 to 51.56 units of time, exhibit a positively skewed distribution with a median of approximately 19, indicating a higher frequency of early failures alongside a smaller number of prolonged lifetimes. Understanding the statistical behavior of these lifetimes enables engineers and researchers to anticipate wear-out patterns, design more durable components, and implement effective preventive maintenance strategies. To facilitate analysis, each time point was divided by one hundred before being listed in Table 15.
Table 15. The times to failure for 24 mechanical components.
Next, we assessed the adequacy of the MB model using the point and interval estimates of $\theta$, $R(x)$, and $h(x)$. Specifically, we considered the MLE of $\theta$, its associated SEr, and the corresponding 95% ACI[NA] and ACI[NL], along with their respective Int.Ws. The KS statistic and its p-value are also reported to evaluate the goodness of fit. Using Table 16, the estimates are as follows: MLE(SEr) of $\theta$: 0.0426(0.0071); 95% ACI[NA][Int.W]: (0.0287, 0.0565)[0.0278]; 95% ACI[NL][Int.W]: (0.0307, 0.0590)[0.0283]; and KS statistic (p-value): 0.1637(0.4904).
Table 16. Comparison of the MB, gamma, and lognormal distributions from the mechanical dataset.
Following the same comparison setup as in Table 11, the full mechanical dataset was analyzed to evaluate the fit of the MB distribution against its competitors, the gamma and lognormal distributions (see Table 16). The model comparison showed that the lognormal distribution provided the best fit, supported by the highest log-likelihood, the lowest AIC/BIC values, and the largest KS p-value. In contrast, the gamma and MB distributions yielded only a moderate fit.
Figure 11 provides a visual assessment of the MB model fit. For the complete mechanical data summarized in Table 15, panels Figure 11a–d show close agreement between the model and the observed data, corroborating the numerical goodness-of-fit results. Panel Figure 11e indicates an increasing HRF, consistent with the theoretical MB failure rate. Panel Figure 11f reveals a moderately skewed distribution with a primary concentration of values around 0.18 to 0.22, a secondary mode near 0.5, and a few high-value outliers, indicating possible heterogeneity or substructure in the data. The profile log-likelihood in Figure 12 further confirms the existence and uniqueness of the MLE $\hat{\theta}$, motivating the choice of $\hat{\theta} \approx 0.042$ as the starting value for subsequent estimation.
Figure 11. Six fitting diagrams of the MB model from the mechanical dataset.
Figure 12. Profile log-likelihood of the MB model from the mechanical dataset.
From Table 15, three artificial IAT2-PC samples were generated to evaluate the performance of the proposed inferential procedures. Each sample was constructed with a fixed sample size m = 12 but employed distinct selections of the censoring scheme S and threshold values τ i ( i = 1 , 2 ) , as summarized in Table 17. In situations where no substantive prior information regarding the MB( θ ) model parameters for the mechanical data is available, it is reasonable to adopt a noninformative prior to ensure minimal subjective influence. Accordingly, an improper Inv.G prior, Inv . G ( α , β ) , with hyperparameters α = β = 0.001 was specified for θ .
Table 17. The different IAT2-PC samples from the mechanical dataset.
To initiate the Bayesian analysis, the maximum likelihood estimates of $\theta$ obtained under the frequentist approach were employed as starting values for the MCMC sampler, facilitating more rapid convergence. Setting $N = 30{,}000$ and $N_0 = 5000$ in the MCMC algorithm, the posterior findings, including point estimates and corresponding credible intervals, were computed for $\theta$, $R(x)$, and $h(x)$. For each experimental configuration $S^{[i]}$, $i = 1, 2, 3$, Table 18 reports the Bayesian posterior means together with standard errors, as well as the 95% BCI and HPD bounds, alongside the analogous frequentist asymptotic confidence intervals ACI[NA] and ACI[NL]. The evaluation point was set to $x = 0.05$ for both $R(x)$ and $h(x)$. A close agreement was observed between the likelihood-based and Bayesian point estimates across all parameters, suggesting robustness of the estimation framework. Nevertheless, the Bayesian approach systematically yielded smaller standard errors, and its corresponding 95% BCI and HPD intervals were consistently shorter than the ACI[NA] and ACI[NL] counterparts, reflecting superior precision in parameter estimation.
Table 18. Estimates of θ , R ( x ) , and h ( x ) from the mechanical data.
To verify both the existence and uniqueness of the MLE $\hat{\theta}$, for each sample configuration $S^{[i]}$ with $i = 1, 2, 3$, the curves plotted in Figure 13 reveal a single, well-defined peak of the likelihood function. This confirms that the obtained MLE exists and is unique, thereby ensuring the well-posedness of the estimation problem.
Figure 13. Plots of the log-likelihood/gradient functions of θ from the mechanical data.
All of the subplots in Figure 13 further support the selection of θ ^ as a robust initial value in subsequent Bayesian computations. Figure 14 shows that the posterior distribution of θ appears approximately symmetric, whereas those of R ( x ) and h ( x ) display slight left and right skewness, respectively. Using 25 , 000 post-burn-in MCMC samples, posterior summaries were computed for all three quantities of interest. These results, documented in Table 19, align closely with the earlier findings in Table 18 and are consistent with the graphical convergence diagnostics in Figure 14. Together, these numerical and graphical results confirm the reliability, stability, and internal consistency of the Bayesian estimation framework adopted in this study.
Figure 14. Plotting 25,000 MCMC iterations of θ , R ( x ) , and h ( x ) from the mechanical dataset.
Table 19. Statistics for 25,000 MCMC iterations of θ , R ( x ) , and h ( x ) from the mechanical data.
Figure 15 displays the 95 % confidence and credible interval bounds for the reliability measures R ( x ) and h ( x ) , which were constructed using the ACI[NA]/ACI[NL] methods, as well as the BCI/HPD approaches, for each mechanical sample configuration S [ i ] , i = 1 , 2 , 3 . Across all configurations, the Bayesian intervals, whether expressed as BCI or HPD, were consistently and substantially narrower than their classical asymptotic counterparts. This marked reduction in interval length reflects the increased estimation precision afforded by the Bayesian procedure. These graphical observations are fully consistent with the numerical results reported in Table 18, thereby reinforcing the conclusion that the Bayesian framework provides a more efficient and informative inference mechanism for R ( x ) and h ( x ) compared to traditional asymptotic methods.
Figure 15. Plots for the 95% Int.W results of R ( x ) and h ( x ) from the mechanical data.
In summary, the analyses performed on datasets derived from the improved Type-II adaptive progressive censoring scheme—encompassing both the aircraft windshield measurements and the mechanical components data—offer a thorough empirical assessment of the Maxwell–Boltzmann lifetime model. The empirical findings not only corroborate the conclusions drawn from the earlier simulation studies, but also underscore the practical utility and robustness of the proposed inferential framework for addressing real-world engineering reliability problems.

7. Concluding Remarks

In this study, we developed a comprehensive reliability analysis framework for engineering failure data under the Maxwell–Boltzmann distribution, incorporating the improved Type-II adaptive progressive censoring plan. The main contributions of this work can be summarized as follows. First, the Maxwell–Boltzmann distribution was shown to provide a flexible and analytically tractable model for characterizing engineering lifetimes, particularly in contexts where the underlying failure mechanism follows a monotonic hazard rate. Second, by embedding the proposed censoring scheme, we addressed the practical challenges of time and resource limitations in life-testing experiments, thereby enhancing the efficiency and adaptability of the censoring mechanism compared to traditional fixed schemes.

From a methodological perspective, we derived the maximum likelihood and Bayes estimators for the unknown parameters of the Maxwell–Boltzmann model under the proposed sampling scheme and examined their asymptotic behavior. Approximate confidence intervals and Bayesian credible intervals were constructed to assess inferential accuracy, with interval estimation supported through the estimated Fisher information and MCMC approaches, respectively. Extensive Monte Carlo simulations demonstrated the superiority of the proposed scheme in terms of parameter estimation precision and reliability assessment when compared with conventional censoring approaches. The numerical study emphasized the reduction in bias and mean squared error, as well as the improved coverage probabilities of the interval estimates. Furthermore, two real engineering datasets were analyzed to illustrate the practical relevance of the proposed methodology.

While the proposed Bayesian framework exhibited strong performance, several limitations merit attention. These include its sensitivity to the specification of the time thresholds ($\tau_1$, $\tau_2$), the computational burden associated with MCMC implementation, and increased variability in very small samples. Such limitations highlight promising directions for future research, including data-driven estimation of the thresholds, the development of more computationally efficient algorithms, and the incorporation of small-sample corrections. Collectively, these findings position the Maxwell–Boltzmann distribution as a versatile lifetime model under complex censoring, bridging statistical physics with reliability theory. Future research may extend this framework to multivariate reliability settings, competing risks, and accelerated life-testing designs, further enhancing its role in engineering decision-making and predictive reliability analytics.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/axioms14090712/s1, Table S1: Point estimations of θ when θ = 1.5 ; Table S2: Point estimations of R ( x ) when θ = 1.5 ; Table S3: Point estimations of h ( x ) when θ = 1.5 ; Table S4: Interval estimations of θ when θ = 1.5 ; Table S5: Interval estimations of R ( x ) when θ = 1.5 ; and Table S6: Interval estimations of h ( x ) when θ = 1.5 .

Author Contributions

Methodology, H.S.M., O.E.A.-K. and A.E.; Funding Acquisition, H.S.M.; Software, A.E.; Supervision O.E.A.-K.; Writing—Original Draft, H.S.M., O.E.A.-K. and A.E.; Writing—Review and Editing H.S.M. and A.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R175), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Data Availability Statement

The authors confirm that the data supporting the findings of this study are available within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Special Cases of IAT2-PC

The IAT2-PC scheme is proposed as a general framework from which common censoring schemes, such as T1-HPC, AT2-PC, T2-PC, and T2-C, can be derived as special cases of the LF in (6) as follows:
  • T1-HPC: When $\tau_1 = \tau_2 = t$, the stopping point becomes the T1-HPC rule; in such a case, $T = \min(t, Y_{m:m:n})$ and the LF (6) specializes by replacing $T$ with $t$ and setting the corresponding $S^{*}$ and $\Psi$-counts according to the T1-HPC pattern. In this case, the $I(T,2,\theta)^{S^{*}}$ term is evaluated at $T = t$.
  • AT2-PC: When $\tau_2 \to \infty$, the forced truncation at $\tau_2$ is removed. In such a case, the IAT2-PC scheme reduces to the AT2-PC form by setting $S^{*} \to 0$ and $T \to Y_{m:m:n}$ in the LF (6).
  • T2-PC: When $\tau_1 \to \infty$, the classical T2-PC scheme is obtained. This corresponds to changing the counting indices so that all removals $S_i$ (for $i = 1, \ldots, m-1$) remain active up to the $m$-th failure and the $\Psi$-counts adjust to the T2-PC pattern (see Case 1 in Table 2).
  • T2-C: When $\tau_1 \to 0$, $S_i = 0$ for $i = 1, \ldots, m-1$, and $S_m = n - m$, no progressive removals occur during the test except the final censoring of the remaining $n - m$ units at the $m$-th failure.

Appendix B. Existence and Uniqueness of θ ^

Proof. 
To prove the existence and uniqueness of $\hat{\theta}$, we first rewrite the log-LF in (8) as
$$\ell(\theta):=-\frac{3n}{2}\log\theta-\frac{A}{\theta}+S^{*}\log I_{1}(\theta)+\sum_{i=1}^{\Psi_{1}}S_{i}\log I_{2i}(\theta),$$
where
$$A=\sum_{i=1}^{\Psi_{2}}y_{i}^{2},\quad I_{1}(\theta)=\int_{T}^{\infty}t^{2}e^{-t^{2}/\theta}\,dt,\quad\text{and}\quad I_{2i}(\theta)=\int_{y_{i}}^{\infty}t^{2}e^{-t^{2}/\theta}\,dt.$$
(a) Domain and Differentiability.
The function $\ell(\theta)$ is well-defined and twice continuously differentiable for all $\theta \in (0, \infty)$. The integrals $I_{1}(\theta)$ and $I_{2i}(\theta)$ are finite for $\theta > 0$ and differentiable under the integral sign due to smooth, positive integrands with exponential decay.
(b) Behavior at the Boundaries.
As $\theta \to 0^{+}$, the term $-A/\theta$ diverges to $-\infty$ and the integrals decay exponentially, so their logarithms also diverge to $-\infty$; together these dominate the term $-\frac{3n}{2}\log\theta \to +\infty$. As $\theta \to \infty$, each integral grows like $\theta^{3/2}$, so the positive logarithmic terms grow at most like $\frac{3}{2}\left(S^{*}+\sum_{i}S_{i}\right)\log\theta$, which is dominated by the negative term $-\frac{3n}{2}\log\theta$. Therefore, $\ell(\theta) \to -\infty$ at both ends, implying that a maximum exists in $(0, \infty)$.
(c) Concavity and Uniqueness.
We compute the second derivative as follows:
$$\ell''(\theta)=\frac{3n}{2\theta^{2}}-\frac{2A}{\theta^{3}}+S^{*}\,\frac{d^{2}}{d\theta^{2}}\log I_{1}(\theta)+\sum_{i=1}^{\Psi_{1}}S_{i}\,\frac{d^{2}}{d\theta^{2}}\log I_{2i}(\theta).$$
The first two terms are strictly negative for sufficiently large A > 0 . To analyze the remaining terms, we observed that, for integrals of the form
$$I(\theta)=\int_{a}^{\infty}g(t)\,e^{-t^{2}/\theta}\,dt,$$
where $g(t) > 0$, the function $\log I(\theta)$ is concave in $\theta$. This follows since
$$\frac{d^{2}}{d\theta^{2}}\log I(\theta)=\frac{I''(\theta)\,I(\theta)-\left[I'(\theta)\right]^{2}}{I^{2}(\theta)}<0,$$
implying that each $\log I_{j}(\theta)$ is concave. Therefore, $\ell''(\theta) < 0$ for all $\theta > 0$, and $\ell(\theta)$ is strictly concave.
(d) Conclusion.
The function $\ell(\theta)$ is continuous on $(0, \infty)$, tends to $-\infty$ at the boundaries, and is strictly concave. Hence, it admits a unique global maximizer $\hat{\theta} \in (0, \infty)$, which is the MLE. □

References

  1. Rowlinson, J.S. The Maxwell–Boltzmann distribution. Mol. Phys. 2005, 103, 2821–2828.
  2. Krishna, H.; Malik, M. Reliability estimation in Maxwell distribution with Type-II censored data. Int. J. Qual. Reliab. Manag. 2009, 26, 184–195.
  3. Krishna, H.; Malik, M. Reliability estimation in Maxwell distribution with progressively type-II censored data. J. Stat. Comput. Simul. 2012, 82, 623–641.
  4. Tomer, S.K.; Panwar, M.S. Estimation procedures for Maxwell distribution under type-I progressive hybrid censoring scheme. J. Stat. Comput. Simul. 2015, 85, 339–356.
  5. Chaudhary, S.; Tomer, S.K. Estimation of stress–strength reliability for Maxwell distribution under progressive type-II censoring scheme. Int. J. Syst. Assur. Eng. Manag. 2018, 9, 1107–1119.
  6. Pathak, A.; Kumar, M.; Singh, S.K.; Singh, U.; Tiwari, M.K.; Kumar, S. Bayesian inference for Maxwell Boltzmann distribution on step-stress partially accelerated life test under progressive type-II censoring with binomial removals. Int. J. Syst. Assur. Eng. Manag. 2022, 13, 1976–2010.
  7. Elshahhat, A.; Abo-Kasem, O.E.; Mohammed, H.S. Reliability analysis and applications of generalized type-II progressively hybrid Maxwell–Boltzmann censored data. Axioms 2023, 12, 618.
  8. Balakrishnan, N.; Cramer, E. The Art of Progressive Censoring; Springer Birkhäuser: New York, NY, USA, 2014.
  9. Kundu, D.; Joarder, A. Analysis of Type-II progressively hybrid censored data. Comput. Stat. Data Anal. 2006, 50, 2509–2528.
  10. Ng, H.K.T.; Kundu, D.; Chan, P.S. Statistical Analysis of Exponential Lifetimes under an Adaptive Type-II Progressive Censoring Scheme. Nav. Res. Logist. 2009, 56, 687–698.
  11. Yan, W.; Li, P.; Yu, Y. Statistical inference for the reliability of Burr-XII distribution under improved adaptive Type-II progressive censoring. Appl. Math. Model. 2021, 95, 38–52.
  12. Bain, L.J.; Engelhardt, M. Statistical Analysis of Reliability and Life-Testing Models, 2nd ed.; Marcel Dekker: New York, NY, USA, 1991.
  13. Nassar, M.; Elshahhat, A. Estimation procedures and optimal censoring schemes for an improved adaptive progressively type-II censored Weibull distribution. J. Appl. Stat. 2024, 51, 1664–1688.
  14. Asadi, S.; Panahi, H.; Anwar, S.; Lone, S.A. Reliability Estimation of Burr Type III Distribution under Improved Adaptive Progressive Censoring with Application to Surface Coating. Maint. Reliab./Eksploat. Niezawodn. 2023, 25, 1–10.
  15. El-Sherpieny, E.S.A.; Elshahhat, A.; Abdallah, N.M. Statistical analysis of improved Type-II adaptive progressive hybrid censored NH data. Sankhya A 2024, 86, 721–754.
  16. Alqasem, O.A.; Elshahhat, A. Reliability Analysis of the Newly Power Half-Normal Model via Improving Adaptive Progressive Type II Censoring and Its Applications. Qual. Reliab. Eng. Int. 2025, in press.
  17. Alotaibi, R.; Nassar, M.; Elshahhat, A. Reliability Analysis of Improved Type-II Adaptive Progressively Inverse XLindley Censored Data. Axioms 2025, 14, 437.
  18. Henningsen, A.; Toomet, O. maxLik: A package for maximum likelihood estimation in R. Comput. Stat. 2011, 26, 443–458.
  19. Bekker, A.; Roux, J.J.J. Reliability characteristics of the Maxwell distribution: A Bayes estimation study. Commun. Stat. Theory Methods 2005, 34, 2169–2178.
  20. Greene, W.H. Econometric Analysis, 4th ed.; Prentice-Hall: New York, NY, USA, 2000.
  21. Meeker, W.Q.; Escobar, L.A. Statistical Methods for Reliability Data; John Wiley and Sons: New York, NY, USA, 2014.
  22. Kundu, D. Bayesian inference and life testing plan for the Weibull distribution in presence of progressive censoring. Technometrics 2008, 50, 144–154.
  23. Alotaibi, R.; Nassar, M.; Elshahhat, A. Estimation of Inverted Weibull Competing Risks Model Using Improved Adaptive Progressive Type-II Censoring Plan with Application to Radiobiology Data. Symmetry 2025, 17, 1044.
  24. Plummer, M.; Best, N.; Cowles, K.; Vines, K. CODA: Convergence diagnosis and output analysis for MCMC. R News 2006, 6, 7–11.
  25. Murthy, D.N.P.; Xie, M.; Jiang, R. Weibull Models; John Wiley: Hoboken, NJ, USA, 2004.