Article

Inferences for the GKME Distribution Under Progressive Type-I Interval Censoring with Random Removals and Its Application to Survival Data

by Ela Verma 1, Mahmoud M. Abdelwahab 2, Sanjay Kumar Singh 1 and Mustafa M. Hasaballah 3,*

1 Department of Statistics, Banaras Hindu University, Varanasi 221005, India
2 Department of Mathematics and Statistics, Faculty of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
3 Department of Basic Sciences, Marg Higher Institute of Engineering and Modern Technology, Cairo 11721, Egypt
* Author to whom correspondence should be addressed.
Axioms 2025, 14(10), 769; https://doi.org/10.3390/axioms14100769
Submission received: 28 August 2025 / Revised: 2 October 2025 / Accepted: 16 October 2025 / Published: 17 October 2025

Abstract

The analysis of lifetime data under censoring schemes plays a vital role in reliability studies and survival analysis, where complete information is often difficult to obtain. This work focuses on the estimation of the parameters of the recently proposed generalized Kavya–Manoharan exponential (GKME) distribution under progressive Type-I interval censoring, a censoring scheme that frequently arises in medical and industrial life-testing experiments. Estimation procedures are developed under both classical and Bayesian paradigms, providing a comprehensive framework for inference. In the Bayesian setting, parameter estimation is carried out using Markov Chain Monte Carlo (MCMC) techniques under two distinct loss functions: the squared error loss function (SELF) and the general entropy loss function (GELF). For interval estimation, asymptotic confidence intervals as well as highest posterior density (HPD) credible intervals are constructed. The performance of the proposed estimators is systematically evaluated through a Monte Carlo simulation study in terms of mean squared error (MSE) and the average lengths of the interval estimates. The practical usefulness of the developed methodology is further demonstrated through the analysis of a real dataset on survival times of guinea pigs exposed to virulent tubercle bacilli. The findings indicate that the proposed methods provide flexible and efficient tools for analyzing progressively interval-censored lifetime data.

1. Introduction

In reliability and survival analysis, selecting an appropriate statistical model is crucial for accurately describing lifetime data. For data exhibiting increasing or decreasing hazard rates, numerous lifetime distributions are available in the literature, such as the Weibull, exponentiated exponential, and gamma distributions. In a recent contribution, Verma et al. [1] proposed a new two-parameter generalized Kavya–Manoharan exponential (GKME) distribution. It was applied to model the survival times of guinea pigs, the fatigue-fracture life of Kevlar 373/epoxy, and the times between successive failures of the air conditioning system of jet airplanes. The GKME distribution exhibited a better fit than several well-established models, including the Weibull, exponentiated exponential, gamma, and alpha power transformed Weibull distributions. Despite its flexibility and superiority over other models, no further methodological development or inferential work has been carried out for the GKME distribution. Verma et al. [1] explored several of its statistical properties and estimated its parameters within the classical framework using the method of maximum likelihood; however, their analysis was limited to complete data. In practical applications, particularly in reliability and survival studies, the available data are often subject to censoring. Recognizing this limitation, the present study seeks to bridge this gap by developing a comprehensive inferential framework for the GKME distribution under censored data.
The nature and structure of censoring depend on the experimental protocol and data collection procedures. For example, in many practical scenarios, experiments may be terminated before the failure of all test units due to time constraints, budgetary limitations, or other logistical challenges. This has led to the development of various censoring mechanisms, among which the classical Type-I and Type-II censoring schemes are the most frequently used (see Sirvanci and Yang [2]; Basu et al. [3]; Singh et al. [4]). Another important consideration in life-testing arises when the exact failure times of units are not directly observable. In many experimental settings, continuous monitoring of all units is infeasible due to limited manpower or technical constraints. Instead, the experiment is structured around periodic inspections, during which only the number of failures occurring within each time interval is recorded. This scenario gives rise to interval censoring, wherein failure times are known only to fall within certain intervals. Interval-censored data are frequently encountered in biomedical studies, especially in longitudinal trials and clinical follow-ups (Finkelstein [5]). As noted by Jianguo [6], such data structures are also prevalent in demography, epidemiology, sociology, finance, and engineering disciplines. Guure et al. [7] investigated Bayesian inference for survival data subject to interval censoring under the log-logistic model, and Sharma et al. [8] later utilized Lindley's approximation for Bayesian estimation of interval-censored lifetime data based on the Lindley distribution.
However, these approaches generally do not permit the withdrawal of experimental units before the study reaches completion. In real-world applications, maintaining all units under observation until the end is often impractical or even undesirable. Factors such as cost, time, ethical concerns, or logistical limitations may necessitate the withdrawal of units during the course of the experiment. To address this, Balakrishnan and Aggarwala [9] have developed a progressive censoring scheme that provides a structured yet flexible way to remove subjects partway through the study. Under this approach, the withdrawal of units is planned in advance and carried out systematically, either immediately following certain observed failures or at specified points in time.
Taking this limitation into account, Aggarwala [10] proposed a hybrid censoring scheme, known as progressive Type-I interval (PITI) censoring, which integrates the features of Type-I interval censoring with those of progressive censoring. This approach has been widely applied, particularly in clinical trial settings. Consider a longitudinal study involving $n$ bladder cancer patients, where the objective is to track the duration of remission. Rather than observing each patient continuously, medical evaluations are conducted at predetermined time points $T_1, T_2, \ldots, T_m$. At the first check-up (time $T_1$), only $n - r_1$ individuals attend the visit, implying that $r_1$ participants have exited the study within the interval $(0, T_1]$. Of the $n - r_1$ patients examined at this stage, the recurrence of cancer is confirmed in $d_1$ cases; the precise time of recurrence for these individuals is not known, only that the event occurred within the given time interval. Subsequently, at time $T_2$, the remaining cohort of $n - r_1 - d_1$ patients is considered, and once again a portion, say $r_2$, departs during the interval $(T_1, T_2]$. The remaining $n - r_1 - d_1 - r_2$ patients are evaluated, and $d_2$ cases of recurrence are detected. This process continues through $m$ successive inspection times, and at the final visit ($T_m$), all surviving patients who have neither dropped out nor shown recurrence are removed from the trial, effectively concluding the experiment. In prior work, Chen and Lio [11] proposed an approach for parameter estimation under this censoring scheme using the generalized exponential distribution. Their method assumes that the proportions $p_1, p_2, \ldots, p_m$ of dropouts at each interval are fixed in advance; more precisely, they set $r_i = n_i p_i$, where $n_i$ denotes the number of subjects at risk at the beginning of the $i$-th interval.
While mathematically tractable, this deterministic assumption about dropouts may not reflect the inherent unpredictability of clinical trials. Patients often withdraw from treatment due to unforeseen events such as death from unrelated causes, side effects, personal preferences, or dissatisfaction with medical care, all of which lie outside the experimenter's control. Recognizing these practical challenges, several researchers have advocated modeling the number of removals as random variables. For example, Tse et al. [12] introduced censoring schemes with binomially distributed removals. Ashour and Afify [13] extended this framework by applying PITI censoring with binomial removals under the assumption that the exact lifetimes are observable. In the present study, however, we consider a more realistic scenario in which exact failure times are unobservable and only the number of failures occurring within specified intervals is available. We adopt a binomial model for the dropout mechanism, wherein the number of removals at the $i$-th stage, denoted $r_i$, follows a binomial distribution with parameters that depend on the number of remaining patients and a dropout probability $p_i$. Under this more flexible framework, we aim to develop statistical estimators for the unknown shape and rate parameters of the underlying distribution, assuming a progressively Type-I interval censored structure with binomially distributed random removals and unobserved exact failure times. Several studies have extended PITI censoring to different lifetime models. For instance, Kaushik et al. [14,15] discussed Bayesian and classical estimation procedures under PITI censoring for the generalized exponential and Weibull distributions. Lodhi and Tripathi [16] investigated inference for the truncated normal distribution in the presence of PITI censoring, while Alotaibi et al. [17] examined Bayesian estimation for the Dagum distribution. More recently, Roy et al. [18] studied the inverse Gaussian distribution under PITI censoring using the method of moments, maximum likelihood estimation, and Bayesian methods, and also proposed optimal censoring designs. Hasan et al. [19] considered four strategies for determining inspection times (predefined, equally spaced, optimally spaced, and equal probability) and derived optimal censoring plans by evaluating expected removals or their proportions. Building on these contributions, the present work aims to develop inferential procedures for the GKME distribution under the PITI censoring scheme.
Let X be a non-negative continuous random variable that follows the generalized Kavya–Manoharan exponential (GKME) distribution. Adopting the same notations as used in Verma et al. [1], the cumulative distribution function (CDF) of the GKME distribution is defined as
$$F(x) = \begin{cases} \dfrac{\lambda}{\lambda - 1}\left[1 - \lambda^{-(1 - e^{-\beta x})}\right], & \lambda > 0,\ \lambda \neq 1 \\[4pt] 1 - e^{-\beta x}, & \lambda = 1 \end{cases}$$
where $\lambda$ and $\beta$ are the shape and rate parameters, respectively. Henceforth, the GKME distribution with parameters $\lambda$ and $\beta$ is denoted by GKME$(\lambda, \beta)$. The probability density function (PDF) and hazard function of the GKME distribution are, respectively,
$$f(x) = \begin{cases} \dfrac{\lambda \log\lambda}{\lambda - 1}\, \beta e^{-\beta x}\, \lambda^{-(1 - e^{-\beta x})}, & \lambda > 0,\ \lambda \neq 1 \\[4pt] \beta e^{-\beta x}, & \lambda = 1 \end{cases}$$
and
$$h(x) = \begin{cases} \dfrac{\ln\lambda\, \lambda^{-G(x)+1}}{\lambda^{-G(x)+1} - 1}\, g(x), & \lambda > 0,\ \lambda \neq 1 \\[4pt] \dfrac{g(x)}{1 - G(x)}, & \lambda = 1 \end{cases}$$
where $G(x) = 1 - e^{-\beta x}$ and $g(x) = \beta e^{-\beta x}$ denote the CDF and PDF of the baseline exponential distribution.
Figure 1 displays the PDF and hazard function of the GKME distribution for various parameter settings. When λ < 1 , the PDF takes a unimodal form, and the peak of the curve becomes progressively flatter as β increases. In contrast, for λ > 1 , the PDF resembles an exponential distribution. The behavior of the hazard function also depends on λ : it increases monotonically when λ < 1 , whereas it decreases monotonically when λ > 1 .
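As a quick numerical sanity check of the expressions above, the following Python sketch (with arbitrarily chosen illustrative parameter values) implements the CDF, PDF, and hazard function of GKME$(\lambda, \beta)$ and verifies that the PDF agrees with the numerical derivative of the CDF and that the hazard equals $f(x)/(1 - F(x))$:

```python
import math

def gkme_cdf(x, lam, beta):
    """CDF of GKME(lam, beta); the lam = 1 branch is the plain exponential."""
    g = 1.0 - math.exp(-beta * x)          # baseline exponential CDF G(x)
    if lam == 1.0:
        return g
    return (lam / (lam - 1.0)) * (1.0 - lam ** (-g))

def gkme_pdf(x, lam, beta):
    g = 1.0 - math.exp(-beta * x)
    base = beta * math.exp(-beta * x)      # baseline exponential PDF g(x)
    if lam == 1.0:
        return base
    return (lam * math.log(lam) / (lam - 1.0)) * base * lam ** (-g)

def gkme_hazard(x, lam, beta):
    g = 1.0 - math.exp(-beta * x)
    base = beta * math.exp(-beta * x)
    if lam == 1.0:
        return base / (1.0 - g)
    return math.log(lam) * lam ** (1.0 - g) / (lam ** (1.0 - g) - 1.0) * base

# Sanity checks: pdf ~ d/dx cdf (central difference) and hazard = pdf/(1 - cdf)
lam, beta, x, h = 0.5, 1.2, 0.7, 1e-6
num_deriv = (gkme_cdf(x + h, lam, beta) - gkme_cdf(x - h, lam, beta)) / (2 * h)
print(abs(num_deriv - gkme_pdf(x, lam, beta)) < 1e-6)
print(abs(gkme_hazard(x, lam, beta)
          - gkme_pdf(x, lam, beta) / (1 - gkme_cdf(x, lam, beta))) < 1e-9)
```

The $\lambda = 1$ branch reduces to the exponential distribution with rate $\beta$, matching the limiting case in the definitions above.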
The remainder of this article is organized as follows. Section 2 provides a detailed description of the PITI scheme with binomial removals and presents the observed likelihood function under this censoring scheme. Section 3 focuses on the estimation of model parameters based on PITI-censored data using the ML method. Bayesian estimation using informative priors under SELF and GELF is discussed in Section 4. A Monte Carlo simulation is conducted in Section 5 to evaluate how various estimators perform when data are subject to PITI censoring. In Section 6, to illustrate the applicability of the proposed method, a real-world dataset involving survival times of guinea pigs is examined. Finally, conclusions are given in Section 7.

2. Censoring Scheme and Likelihood Function

To better understand the mechanism of the PITI censoring scheme, suppose that a total of $n$ units are placed under observation at the initial time $T_0 = 0$. The experiment proceeds through a series of pre-specified inspection intervals $(T_{i-1}, T_i]$, for $i = 1, 2, \ldots, m$, during which the experimenter records two pieces of information: the numbers of failures $D_i = (d_1, d_2, \ldots, d_i)$ and the numbers of removals (or dropouts) $R_i = (r_1, r_2, \ldots, r_i)$. These observations are made on the cohort of units still active at the beginning of each interval, in accordance with the progressive Type-I interval censoring scheme described previously. It is important to emphasize that the dropouts occurring during the $i$-th interval, $(T_{i-1}, T_i]$, are assumed to be randomly determined and to occur independently across units. The likelihood of a unit dropping out in this interval is governed by a dropout probability $p_i$, where $0 < p_i < 1$. Consequently, the number of dropouts at the first inspection point, $r_1$, has a binomial distribution with parameters $(n, p_1)$, i.e., $r_1 \sim \mathrm{Binomial}(n, p_1)$. For the subsequent intervals ($i = 2, 3, \ldots, m$), the number of units at risk at the start of the $i$-th interval is $n - \sum_{j=1}^{i-1}(d_j + r_j)$, and the number of removals $r_i$ at the $i$-th interval therefore follows $r_i \sim \mathrm{Binomial}\left(n - \sum_{j=1}^{i-1}(d_j + r_j),\ p_i\right)$, for $i = 2, 3, \ldots, m$. Thus, for given $p_i$,
$$P(r_1 \mid D_1, p_1) = \binom{n - d_1}{r_1}\, p_1^{r_1}\,(1 - p_1)^{n - d_1 - r_1}$$
$$P(r_2 \mid R_1, D_2, p_2) = \binom{n - d_1 - d_2 - r_1}{r_2}\, p_2^{r_2}\,(1 - p_2)^{n - d_1 - d_2 - r_1 - r_2}$$
$$P(r_i \mid R_{i-1}, D_i, p_i) = \binom{n - \sum_{j=1}^{i} d_j - \sum_{j=1}^{i-1} r_j}{r_i}\, p_i^{r_i}\,(1 - p_i)^{n - \sum_{j=1}^{i} d_j - \sum_{j=1}^{i} r_j}$$
for $i = 1, 2, \ldots, m$ and $r_m = n - \sum_{j=1}^{m} d_j - \sum_{j=1}^{m-1} r_j$. Figure 2 illustrates the process involved in this type of censored data. Now the likelihood function for the observed data can be written as
$$L(\lambda, \beta \mid R, D, T) \propto \prod_{i=1}^{m}\left[F(T_i) - F(T_{i-1})\right]^{d_i}\left[1 - F(T_i)\right]^{r_i} P(r_i \mid R_{i-1}, D_i, p_i)$$
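For concreteness, the stage-wise removal probabilities above can be evaluated directly. The Python sketch below (all counts are hypothetical illustrations, not from the paper) computes $P(r_i \mid R_{i-1}, D_i, p_i)$ for one stage:

```python
from math import comb

def removal_pmf(n, d_seen, r_seen, r_i, p_i):
    """P(r_i | R_{i-1}, D_i, p_i): binomial over the units still on test,
    i.e., n minus all failures d_1..d_i and earlier removals r_1..r_{i-1}."""
    at_risk = n - sum(d_seen) - sum(r_seen)
    return comb(at_risk, r_i) * p_i ** r_i * (1 - p_i) ** (at_risk - r_i)

# Stage i = 2 with n = 30 units, observed d_1 = 4, d_2 = 5, r_1 = 3 (hypothetical):
prob = removal_pmf(30, d_seen=[4, 5], r_seen=[3], r_i=2, p_i=0.2)
print(prob)
```

Summed over all feasible values of $r_i$ (here $0$ to $18$), these probabilities add to one, as a binomial law must.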

3. Maximum Likelihood Estimation

For estimating the parameters $\lambda$ and $\beta$ by the ML method, the above likelihood function for the GKME distribution is written as
$$L(\lambda, \beta \mid R, D, T) = \prod_{i=1}^{m}\left[\frac{\lambda}{\lambda-1}\left(\lambda^{-(1-e^{-\beta T_{i-1}})} - \lambda^{-(1-e^{-\beta T_i})}\right)\right]^{d_i}\left[\frac{\lambda}{\lambda-1}\,\lambda^{-(1-e^{-\beta T_i})}\right]^{r_i} \times \binom{n - \sum_{j=1}^{i} d_j - \sum_{j=1}^{i-1} r_j}{r_i}\, p_i^{r_i}(1-p_i)^{n - \sum_{j=1}^{i} d_j - \sum_{j=1}^{i} r_j}$$
The above expression can be decomposed as
$$L(\lambda, \beta \mid R, D, T) = L_1(\lambda, \beta \mid R, D, T)\, L_2(P \mid R, D, T)$$
where
$$L_1(\lambda, \beta \mid R, D, T) = \prod_{i=1}^{m}\left[F(T_i) - F(T_{i-1})\right]^{d_i}\left[1 - F(T_i)\right]^{r_i}$$
It can be observed that $L_2(\cdot)$ does not involve the parameters $\lambda$ and $\beta$. Therefore, for the purpose of obtaining the ML estimates of $\lambda$ and $\beta$, it suffices to consider only the component $L_1(\cdot)$. The associated log-likelihood function is then given by:
$$\log L_1 = \sum_{i=1}^{m} d_i\left[\log\frac{\lambda}{\lambda-1} + \log\left(\lambda^{-(1-e^{-\beta T_{i-1}})} - \lambda^{-(1-e^{-\beta T_i})}\right)\right] + \sum_{i=1}^{m} r_i\left[\log\frac{\lambda}{\lambda-1} - (1-e^{-\beta T_i})\log\lambda\right]$$
Hence, the ML estimates $\hat{\lambda}$ and $\hat{\beta}$ of the parameters are determined by jointly solving the following pair of nonlinear equations:
$$\frac{\partial \log L_1}{\partial \lambda} = \sum_{i=1}^{m} d_i\left[-\frac{1}{\lambda(\lambda-1)} + \frac{(1-e^{-\beta T_i})\,\lambda^{-(2-e^{-\beta T_i})} - (1-e^{-\beta T_{i-1}})\,\lambda^{-(2-e^{-\beta T_{i-1}})}}{\lambda^{-(1-e^{-\beta T_{i-1}})} - \lambda^{-(1-e^{-\beta T_i})}}\right] + \sum_{i=1}^{m} r_i\left[-\frac{1}{\lambda(\lambda-1)} - \frac{1-e^{-\beta T_i}}{\lambda}\right] = 0.$$
$$\frac{\partial \log L_1}{\partial \beta} = \sum_{i=1}^{m} d_i\left[\frac{T_i e^{-\beta T_i}\,\lambda^{-(1-e^{-\beta T_i})}\log\lambda - T_{i-1} e^{-\beta T_{i-1}}\,\lambda^{-(1-e^{-\beta T_{i-1}})}\log\lambda}{\lambda^{-(1-e^{-\beta T_{i-1}})} - \lambda^{-(1-e^{-\beta T_i})}}\right] - \sum_{i=1}^{m} r_i\, T_i e^{-\beta T_i}\log\lambda = 0.$$
The above equations do not admit closed-form solutions, and the exact sampling distributions of the estimators $\hat{\lambda}$ and $\hat{\beta}$ are analytically intractable, so exact confidence intervals for these parameters cannot be derived. To address this, we utilize the asymptotic normality property of ML estimators to construct $100(1-\gamma)\%$ ACIs for $\lambda$ and $\beta$. These intervals are derived using the asymptotic variance-covariance matrix associated with the MLEs. The joint asymptotic distribution of $(\lambda, \beta)$ follows a bivariate normal (BN) distribution, $(\lambda, \beta) \sim BN\left((\hat{\lambda}, \hat{\beta}),\ I^{-1}(\hat{\lambda}, \hat{\beta})\right)$, where $I(\hat{\lambda}, \hat{\beta})$ represents the observed Fisher information matrix, given by
$$I(\hat{\lambda}, \hat{\beta}) = \begin{pmatrix} -\dfrac{\partial^2 \log L}{\partial \lambda^2} & -\dfrac{\partial^2 \log L}{\partial \lambda\, \partial \beta} \\[6pt] -\dfrac{\partial^2 \log L}{\partial \beta\, \partial \lambda} & -\dfrac{\partial^2 \log L}{\partial \beta^2} \end{pmatrix}_{\lambda = \hat{\lambda},\ \beta = \hat{\beta}}$$
The required second-order partial derivatives of the log-likelihood are given by
$$\frac{\partial^2 \log L_1}{\partial \lambda^2} = \frac{n(2\lambda - 1)}{\lambda^2(\lambda-1)^2} + \sum_{i=1}^{m} d_i\, \frac{A_1 A_{2\lambda} - A_2 A_{1\lambda}}{A_1^2} + \frac{1}{\lambda^2}\sum_{i=1}^{m} r_i\,(1 - e^{-\beta T_i})$$
$$\frac{\partial^2 \log L_1}{\partial \beta^2} = \sum_{i=1}^{m} d_i\, \frac{A_1 A_{1\beta\beta} - (A_{1\beta})^2}{A_1^2} + \log\lambda \sum_{i=1}^{m} r_i\, T_i^2 e^{-\beta T_i}.$$
$$\frac{\partial^2 \log L_1}{\partial \lambda\, \partial \beta} = \sum_{i=1}^{m} d_i\, \frac{A_1 A_{2\beta} - A_2 A_{1\beta}}{A_1^2} - \frac{1}{\lambda}\sum_{i=1}^{m} r_i\, T_i e^{-\beta T_i}.$$
where
  • $A_1 = \lambda^{-(1-e^{-\beta T_{i-1}})} - \lambda^{-(1-e^{-\beta T_i})}$
  • $A_2 = (1-e^{-\beta T_i})\,\lambda^{-(2-e^{-\beta T_i})} - (1-e^{-\beta T_{i-1}})\,\lambda^{-(2-e^{-\beta T_{i-1}})}$
  • $A_{1\lambda} = (1-e^{-\beta T_i})\,\lambda^{-(2-e^{-\beta T_i})} - (1-e^{-\beta T_{i-1}})\,\lambda^{-(2-e^{-\beta T_{i-1}})}$
  • $A_{1\beta} = T_i e^{-\beta T_i}\,\lambda^{-(1-e^{-\beta T_i})}\log\lambda - T_{i-1} e^{-\beta T_{i-1}}\,\lambda^{-(1-e^{-\beta T_{i-1}})}\log\lambda$
  • $A_{1\beta\beta} = T_{i-1}^2\,\lambda^{-(1-e^{-\beta T_{i-1}})} e^{-\beta T_{i-1}}\log\lambda\,(1 + e^{-\beta T_{i-1}}\log\lambda) - T_i^2\,\lambda^{-(1-e^{-\beta T_i})} e^{-\beta T_i}\log\lambda\,(1 + e^{-\beta T_i}\log\lambda)$
  • $A_{2\lambda} = (1-e^{-\beta T_{i-1}})(2-e^{-\beta T_{i-1}})\,\lambda^{-(3-e^{-\beta T_{i-1}})} - (1-e^{-\beta T_i})(2-e^{-\beta T_i})\,\lambda^{-(3-e^{-\beta T_i})}$
  • $A_{2\beta} = T_i\,\lambda^{-(2-e^{-\beta T_i})} e^{-\beta T_i}\left[1 - (1-e^{-\beta T_i})\log\lambda\right] - T_{i-1}\,\lambda^{-(2-e^{-\beta T_{i-1}})} e^{-\beta T_{i-1}}\left[1 - (1-e^{-\beta T_{i-1}})\log\lambda\right].$
Based on the observed Fisher information matrix, the two-sided symmetric $100(1-\gamma)\%$ ACIs for $\lambda$ and $\beta$ are obtained as
$$\hat{\lambda} \pm z_{\gamma/2}\sqrt{\widehat{\mathrm{var}}(\hat{\lambda})} \quad \text{and} \quad \hat{\beta} \pm z_{\gamma/2}\sqrt{\widehat{\mathrm{var}}(\hat{\beta})}$$
where $z_{\gamma/2}$ denotes the upper $\gamma/2$ percentile of the standard normal distribution, and $\widehat{\mathrm{var}}(\hat{\lambda})$ and $\widehat{\mathrm{var}}(\hat{\beta})$ are the estimated variances of $\hat{\lambda}$ and $\hat{\beta}$, taken from the principal diagonal of the asymptotic variance-covariance matrix, i.e., the inverse of the observed Fisher information matrix $I(\hat{\lambda}, \hat{\beta})$.
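For illustration, the ML procedure can be sketched in Python on a small hypothetical PITI dataset (the inspection times, failure counts, and removal counts below are invented, not from the paper). To stay dependency-free, the sketch maximizes $\log L_1$, in the generic form $\prod_i [F(T_i)-F(T_{i-1})]^{d_i}[1-F(T_i)]^{r_i}$, by a crude pattern search rather than Newton-type iterations, and approximates the observed information matrix by finite differences to form the ACIs:

```python
import math

def gkme_cdf(x, lam, beta):
    """GKME CDF; the branch near lam = 1 is the exponential limit."""
    g = 1.0 - math.exp(-beta * x)
    if abs(lam - 1.0) < 1e-9:
        return g
    return (lam / (lam - 1.0)) * (1.0 - lam ** (-g))

# Hypothetical PITI-censored data: inspection times T_i, failures d_i in
# (T_{i-1}, T_i], removals r_i at T_i (illustrative values only).
T = [1.0, 2.0, 3.0, 4.0, 5.0]
d = [5, 7, 6, 4, 3]
r = [1, 1, 1, 0, 3]

def neg_loglik(lam, beta):
    """Negative log of L1 = prod [F(T_i)-F(T_{i-1})]^{d_i} [1-F(T_i)]^{r_i}."""
    if lam <= 0 or beta <= 0:
        return float("inf")
    ll, prev = 0.0, 0.0
    for Ti, di, ri in zip(T, d, r):
        pi = gkme_cdf(Ti, lam, beta) - gkme_cdf(prev, lam, beta)
        si = 1.0 - gkme_cdf(Ti, lam, beta)
        if pi <= 0.0 or si <= 0.0:
            return float("inf")
        ll += di * math.log(pi) + ri * math.log(si)
        prev = Ti
    return -ll

# Crude pattern search for the MLE (a stand-in for Newton-type iterations).
lam_hat, beta_hat = 1.0, 0.5
best, step = neg_loglik(lam_hat, beta_hat), 0.6
for _ in range(60):
    improved = False
    for dl in (-step, 0.0, step):
        for db in (-step, 0.0, step):
            v = neg_loglik(lam_hat + dl, beta_hat + db)
            if v < best - 1e-12:
                best, lam_hat, beta_hat, improved = v, lam_hat + dl, beta_hat + db, True
    if not improved:
        step *= 0.5

# Observed information by central finite differences, then 95% ACIs.
def second_deriv(f, a, b, ia, ib, h=1e-3):
    if ia == ib:
        da, db_ = (h, 0.0) if ia == 0 else (0.0, h)
        return (f(a + da, b + db_) - 2 * f(a, b) + f(a - da, b - db_)) / (h * h)
    return (f(a + h, b + h) - f(a + h, b - h)
            - f(a - h, b + h) + f(a - h, b - h)) / (4 * h * h)

I11 = second_deriv(neg_loglik, lam_hat, beta_hat, 0, 0)
I22 = second_deriv(neg_loglik, lam_hat, beta_hat, 1, 1)
I12 = second_deriv(neg_loglik, lam_hat, beta_hat, 0, 1)
det = I11 * I22 - I12 * I12
var_lam, var_beta = I22 / det, I11 / det    # diagonal of the inverse matrix
z = 1.959964                                # z_{gamma/2} for gamma = 0.05
print("lambda_hat:", round(lam_hat, 4), "ACI half-width:", round(z * math.sqrt(var_lam), 4))
print("beta_hat:  ", round(beta_hat, 4), "ACI half-width:", round(z * math.sqrt(var_beta), 4))
```

In practice one would solve the score equations directly (the paper's analyses use R's numerical routines); the pattern search here only keeps the sketch self-contained.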

4. Bayesian Estimation

In this section, we present the Bayes estimation of the unknown parameters $\lambda$ and $\beta$ based on the PITI-censored sample. Unlike classical estimation methods that rely only on the likelihood function, the Bayesian paradigm combines prior information about the parameters with the observed data through Bayes' theorem. This results in the posterior distribution, which serves as the basis for inference. The Bayes estimator of a parameter is then obtained as a functional (such as the mean, median, or mode) of its posterior distribution, depending on the choice of loss function.
The loss function plays a crucial role, as it quantifies the penalty for the difference between the estimated and true values of the parameter. Different loss functions lead to different forms of Bayes estimators. In this study, we consider two widely used loss functions: the SELF and the GELF. The SELF is symmetric and treats overestimation and underestimation equally, whereas the GELF is asymmetric, making it more realistic for situations where the consequences of overestimation and underestimation are not the same.
SELF: In keeping with its name, the SELF assigns a loss equal to the square of the estimation error. If the true value $\nu$ is estimated by $\hat{\nu}$, the estimation error is $(\nu - \hat{\nu})$, and accordingly, the incurred loss is
$$L(\nu, \hat{\nu}) = (\nu - \hat{\nu})^2$$
From (17), it is clear that the greater the divergence from the true value, the heavier the penalty. The Bayes estimator under the SELF is the posterior mean, i.e., $\hat{\nu}_{SELF} = E(\nu)$. GELF: The GELF proposed by Calabria and Pulcini [20] can be written as
$$L(\nu, \hat{\nu}) \propto \left(\frac{\hat{\nu}}{\nu}\right)^{w} - w \log\left(\frac{\hat{\nu}}{\nu}\right) - 1$$
The Bayes estimator corresponding to the GELF is $\hat{\nu}_{GELF} = \left[E(\nu^{-w})\right]^{-1/w}$. The sign of the loss parameter $w$ signifies the direction of asymmetry, while its magnitude characterizes the intensity of asymmetry.
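Given a set of posterior draws, the SELF and GELF point estimates are simple sample functionals. A minimal Python sketch (the draws below are made-up numbers for illustration only):

```python
# Illustrative posterior draws (hypothetical, not output from the paper).
draws = [0.42, 0.55, 0.48, 0.61, 0.50, 0.47, 0.53, 0.58, 0.45, 0.51]

# SELF: the Bayes estimate is the posterior mean.
est_self = sum(draws) / len(draws)

# GELF with loss parameter w: [E(nu^{-w})]^{-1/w}, estimated from the draws.
def gelf_estimate(samples, w):
    return (sum(s ** (-w) for s in samples) / len(samples)) ** (-1.0 / w)

# By the power-mean inequality, w > 0 pulls the estimate below the posterior
# mean (overestimation penalized more) and w < 0 pulls it above.
print(est_self, gelf_estimate(draws, 1.5), gelf_estimate(draws, -1.5))
```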

Prior Distribution and Posterior Analysis

In Bayesian analysis, prior distributions play a central role by incorporating existing knowledge or assumptions about the parameters before observing the data. The choice of an appropriate prior is crucial, as it can influence both the posterior distribution and the resulting inferences. However, there is no universally accepted methodology for selecting “optimal” priors, particularly when dealing with newly developed distributions. For the GKME distribution, determining a joint conjugate prior is analytically challenging due to the presence of two unknown parameters, which complicates the derivation of closed-form posterior distributions. To address this, it is common in Bayesian practice to adopt flexible and computationally convenient families of priors. In this study, we consider independent gamma priors for both parameters. The gamma distribution is a natural choice because it is defined on the positive real line, aligns well with the support of the parameters, and allows straightforward computation within the MCMC framework. This choice strikes a balance between flexibility and tractability, enabling us to obtain reliable posterior estimates while keeping the computational burden manageable. The gamma densities for λ and β with hyperparameters ( b 1 , c 1 ) and ( b 2 , c 2 ) are
$$\pi_1(\lambda) \propto \lambda^{b_1 - 1} e^{-c_1 \lambda};\qquad \lambda > 0,\ b_1 > 0,\ c_1 > 0$$
and
$$\pi_2(\beta) \propto \beta^{b_2 - 1} e^{-c_2 \beta};\qquad \beta > 0,\ b_2 > 0,\ c_2 > 0$$
Sometimes we do not have enough prior information about the phenomenon; in such cases, non-informative priors are used, and here we take a uniform prior as the non-informative prior. Assuming independence, the joint prior density of the parameters $(\lambda, \beta)$ is given by
$$\pi(\lambda, \beta) \propto \lambda^{b_1 - 1}\, \beta^{b_2 - 1}\, e^{-(c_1\lambda + c_2\beta)}$$
The joint posterior density of ( λ , β ) based on the likelihood function will be
$$\pi^*(\lambda, \beta \mid \underline{x}) = K^{-1}\,\lambda^{b_1-1}\beta^{b_2-1} e^{-(c_1\lambda + c_2\beta)} \prod_{i=1}^{m}\left[\frac{\lambda}{\lambda-1}\left(\lambda^{-(1-e^{-\beta T_{i-1}})} - \lambda^{-(1-e^{-\beta T_i})}\right)\right]^{d_i}\left[\frac{\lambda}{\lambda-1}\,\lambda^{-(1-e^{-\beta T_i})}\right]^{r_i} \times \binom{n-\sum_{j=1}^{i}d_j-\sum_{j=1}^{i-1}r_j}{r_i}\, p_i^{r_i}(1-p_i)^{n-\sum_{j=1}^{i}d_j-\sum_{j=1}^{i}r_j}$$
where
$$K = \int_0^\infty\!\!\int_0^\infty \lambda^{b_1-1}\beta^{b_2-1} e^{-(c_1\lambda + c_2\beta)} \prod_{i=1}^{m}\left[\frac{\lambda}{\lambda-1}\left(\lambda^{-(1-e^{-\beta T_{i-1}})} - \lambda^{-(1-e^{-\beta T_i})}\right)\right]^{d_i}\left[\frac{\lambda}{\lambda-1}\,\lambda^{-(1-e^{-\beta T_i})}\right]^{r_i} \times \binom{n-\sum_{j=1}^{i}d_j-\sum_{j=1}^{i-1}r_j}{r_i}\, p_i^{r_i}(1-p_i)^{n-\sum_{j=1}^{i}d_j-\sum_{j=1}^{i}r_j}\, d\lambda\, d\beta$$
The Bayes estimate of any function ϕ ( λ , β ) under SELF and GELF can be written as
$$\hat{\phi}_{SL} = K^{-1}\int_0^\infty\!\!\int_0^\infty \phi(\lambda, \beta)\, \lambda^{b_1-1}\beta^{b_2-1} e^{-(c_1\lambda + c_2\beta)} \prod_{i=1}^{m}\left[\frac{\lambda}{\lambda-1}\left(\lambda^{-(1-e^{-\beta T_{i-1}})} - \lambda^{-(1-e^{-\beta T_i})}\right)\right]^{d_i}\left[\frac{\lambda}{\lambda-1}\,\lambda^{-(1-e^{-\beta T_i})}\right]^{r_i} \binom{n-\sum_{j=1}^{i}d_j-\sum_{j=1}^{i-1}r_j}{r_i}\, p_i^{r_i}(1-p_i)^{n-\sum_{j=1}^{i}d_j-\sum_{j=1}^{i}r_j}\, d\lambda\, d\beta$$
and
$$\hat{\phi}_{GL} = \left[K^{-1}\int_0^\infty\!\!\int_0^\infty \phi(\lambda, \beta)^{-w}\, \lambda^{b_1-1}\beta^{b_2-1} e^{-(c_1\lambda + c_2\beta)} \prod_{i=1}^{m}\left[\frac{\lambda}{\lambda-1}\left(\lambda^{-(1-e^{-\beta T_{i-1}})} - \lambda^{-(1-e^{-\beta T_i})}\right)\right]^{d_i}\left[\frac{\lambda}{\lambda-1}\,\lambda^{-(1-e^{-\beta T_i})}\right]^{r_i} \binom{n-\sum_{j=1}^{i}d_j-\sum_{j=1}^{i-1}r_j}{r_i}\, p_i^{r_i}(1-p_i)^{n-\sum_{j=1}^{i}d_j-\sum_{j=1}^{i}r_j}\, d\lambda\, d\beta\right]^{-1/w}$$
The Bayes estimates of the parameters $\lambda$ and $\beta$ are obtained by substituting $\lambda$ and $\beta$ in place of the general function $\phi(\lambda, \beta)$. The double integrals involved in (23) and (24) are analytically intractable. Thus, we compute the Bayes estimates of $\lambda$ and $\beta$ via the MCMC approach. MCMC methods serve as essential tools for Bayesian computation. Notably, the Metropolis-Hastings (M-H) algorithm and Gibbs sampling (Hastings [21]; Smith and Roberts [22]) are among the most commonly employed techniques for generating MCMC samples. For the implementation of the Gibbs sampling algorithm, it is necessary to derive the full conditional posterior densities of $\lambda$ and $\beta$, which are given by:
$$\pi^*(\lambda \mid \beta, \underline{x}) \propto \lambda^{b_1-1} e^{-c_1\lambda} \prod_{i=1}^{m}\left[\frac{\lambda}{\lambda-1}\left(\lambda^{-(1-e^{-\beta T_{i-1}})} - \lambda^{-(1-e^{-\beta T_i})}\right)\right]^{d_i}\left[\frac{\lambda}{\lambda-1}\,\lambda^{-(1-e^{-\beta T_i})}\right]^{r_i} \times \binom{n-\sum_{j=1}^{i}d_j-\sum_{j=1}^{i-1}r_j}{r_i}\, p_i^{r_i}(1-p_i)^{n-\sum_{j=1}^{i}d_j-\sum_{j=1}^{i}r_j}$$
$$\pi^*(\beta \mid \lambda, \underline{x}) \propto \beta^{b_2-1} e^{-c_2\beta} \prod_{i=1}^{m}\left[\frac{\lambda}{\lambda-1}\left(\lambda^{-(1-e^{-\beta T_{i-1}})} - \lambda^{-(1-e^{-\beta T_i})}\right)\right]^{d_i}\left[\frac{\lambda}{\lambda-1}\,\lambda^{-(1-e^{-\beta T_i})}\right]^{r_i} \times \binom{n-\sum_{j=1}^{i}d_j-\sum_{j=1}^{i-1}r_j}{r_i}\, p_i^{r_i}(1-p_i)^{n-\sum_{j=1}^{i}d_j-\sum_{j=1}^{i}r_j}$$
The detailed algorithm to generate posterior samples and to calculate Bayes estimates is given below in Algorithm 1:
Algorithm 1 MCMC algorithm for Bayesian estimation of $\lambda$ and $\beta$.
1: Initialize $\lambda_0 = \hat{\lambda}$ and $\beta_0 = \hat{\beta}$.
2: for $t = 1$ to $M$ do
3:    Generate $\lambda_t$ from $\pi^*(\lambda \mid \beta_{t-1}, \underline{x})$ using M-H:
      • Generate a candidate $\lambda^* \sim N(\lambda_{t-1}, 1)$ and $U_1 \sim U(0, 1)$.
      • Compute the acceptance probability
        $$\rho_1(\lambda_{t-1}, \lambda^*) = \min\left\{1,\ \frac{\pi^*(\lambda^* \mid \beta_{t-1}, \underline{x})}{\pi^*(\lambda_{t-1} \mid \beta_{t-1}, \underline{x})}\right\}.$$
      • If $U_1 \le \rho_1$, set $\lambda_t = \lambda^*$; else set $\lambda_t = \lambda_{t-1}$.
4:    Generate $\beta_t$ from $\pi^*(\beta \mid \lambda_t, \underline{x})$ using M-H:
      • Generate a candidate $\beta^* \sim N(\beta_{t-1}, 1)$ and $U_2 \sim U(0, 1)$.
      • Compute the acceptance probability
        $$\rho_2(\beta_{t-1}, \beta^*) = \min\left\{1,\ \frac{\pi^*(\beta^* \mid \lambda_t, \underline{x})}{\pi^*(\beta_{t-1} \mid \lambda_t, \underline{x})}\right\}.$$
      • If $U_2 \le \rho_2$, set $\beta_t = \beta^*$; else set $\beta_t = \beta_{t-1}$.
5: end for
6: Discard the first $M_0$ samples as burn-in.
7: Bayes estimates:
$$\hat{\lambda}_{SL} = \frac{1}{M - M_0}\sum_{t=M_0+1}^{M}\lambda_t, \qquad \hat{\beta}_{SL} = \frac{1}{M - M_0}\sum_{t=M_0+1}^{M}\beta_t$$
$$\hat{\lambda}_{GL} = \left[\frac{1}{M - M_0}\sum_{t=M_0+1}^{M}(\lambda_t)^{-w}\right]^{-1/w}, \qquad \hat{\beta}_{GL} = \left[\frac{1}{M - M_0}\sum_{t=M_0+1}^{M}(\beta_t)^{-w}\right]^{-1/w}$$
8: Credible intervals: order the post-burn-in samples as $(\lambda_{(1)}, \ldots, \lambda_{(M-M_0)})$ and $(\beta_{(1)}, \ldots, \beta_{(M-M_0)})$. Then the $100(1-\gamma)\%$ symmetric credible intervals are
$$\left(\lambda_{([(M-M_0)\gamma/2])},\ \lambda_{([(M-M_0)(1-\gamma/2)])}\right) \quad \text{and} \quad \left(\beta_{([(M-M_0)\gamma/2])},\ \beta_{([(M-M_0)(1-\gamma/2)])}\right)$$
The intervals of shortest length are taken as the HPD intervals.
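Algorithm 1 can be sketched in Python as follows. The interval data, gamma hyperparameters, proposal scale, and starting values are all hypothetical choices, and for numerical stability the acceptance ratio is evaluated on the log scale. The likelihood kernel uses the generic PITI form $\prod_i [F(T_i)-F(T_{i-1})]^{d_i}[1-F(T_i)]^{r_i}$:

```python
import math, random

def gkme_cdf(x, lam, beta):
    g = 1.0 - math.exp(-beta * x)
    if abs(lam - 1.0) < 1e-12:
        return g
    return (lam / (lam - 1.0)) * (1.0 - lam ** (-g))

# Hypothetical PITI data and gamma hyperparameters (illustrative values only).
T = [1.0, 2.0, 3.0, 4.0, 5.0]
d = [5, 7, 6, 4, 3]
r = [1, 1, 1, 0, 3]
b1, c1, b2, c2 = 2.0, 2.0, 2.0, 2.0

def log_post(lam, beta):
    """Log of the joint posterior kernel: gamma priors times PITI likelihood."""
    if lam <= 0 or beta <= 0:
        return -float("inf")
    lp = (b1 - 1) * math.log(lam) - c1 * lam + (b2 - 1) * math.log(beta) - c2 * beta
    prev = 0.0
    for Ti, di, ri in zip(T, d, r):
        pi = gkme_cdf(Ti, lam, beta) - gkme_cdf(prev, lam, beta)
        si = 1.0 - gkme_cdf(Ti, lam, beta)
        if pi <= 0 or si <= 0:
            return -float("inf")
        lp += di * math.log(pi) + ri * math.log(si)
        prev = Ti
    return lp

random.seed(1)
M, M0 = 5000, 1000
lam, beta = 1.5, 0.5            # starting values (e.g., the ML estimates)
lam_chain, beta_chain = [], []
for t in range(M):
    # M-H update for lambda given beta: N(., 1) random-walk proposal
    cand = random.gauss(lam, 1.0)
    if math.log(random.random()) < log_post(cand, beta) - log_post(lam, beta):
        lam = cand
    # M-H update for beta given the freshly updated lambda
    cand = random.gauss(beta, 1.0)
    if math.log(random.random()) < log_post(lam, cand) - log_post(lam, beta):
        beta = cand
    lam_chain.append(lam)
    beta_chain.append(beta)

post_lam = lam_chain[M0:]        # discard burn-in
w = 1.5
lam_self = sum(post_lam) / len(post_lam)                                   # SELF
lam_gelf = (sum(x ** (-w) for x in post_lam) / len(post_lam)) ** (-1 / w)  # GELF
s = sorted(post_lam)             # 95% symmetric credible interval for lambda
lo, hi = s[int(0.025 * len(s))], s[int(0.975 * len(s))]
print(lam_self, lam_gelf, (lo, hi))
```

The HPD interval would then be found by scanning all intervals containing $95\%$ of the ordered draws and keeping the shortest one.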

5. Simulation Study

This section presents a comprehensive Monte Carlo simulation designed to evaluate the performance of the proposed estimators under the PITI censoring framework. The assessment involves both point and interval estimation criteria, measured through the MSE of the point estimators and the average width (AW) and coverage probability (CP) of the confidence and credible intervals. PITI-censored samples from a given distribution are generated using Algorithm 2, as suggested by Balakrishnan and Cramer [23]. Data samples of size $n = 20, 30, 40$, and $50$ were generated from the GKME distribution with two different parameter combinations. To explore the effect of the censoring time $T_m$, five values $T_m = 3, 4, 5, 6, 7$ are considered, and for the number of censoring intervals $m$, three values are considered, namely $m = 5, 8, 10$. To study the behavior of the removal mechanism, four random removal schemes are considered. The first scheme assumes no withdrawal of units at any intermediate stage, i.e., the probability $p$ of removing units is zero at all stages before final termination. The second scheme assumes an equal, non-zero probability of removal at each intermediate stage; in particular, we take $p = 0.2$. The third scheme assigns a higher removal probability at the beginning, which then decreases as the process advances; this situation arises in fitness programs, where many participants register initially but drop out early if they lose interest, while those who persist beyond the early stages are more likely to complete the program. Conversely, in the fourth scheme, the removal probability is small in the initial stages and gradually increases in the later stages; this pattern is typical of clinical trials, where patients rarely drop out early because they are motivated or closely monitored at the beginning.
As the trial progresses, fatigue, side effects, hospital services, or long follow-up may cause higher dropout rates in the later stages. In particular, the removal schemes are taken as $R_1 = (0^{*(m-1)}, 1)$, $R_2 = (0.2^{*(m-1)}, 1)$, $R_3 = (0.2^{*[m/2]}, 0.1^{*[(m-1)/2]}, 1)$, and $R_4 = (0.1^{*[m/2]}, 0.2^{*[(m-1)/2]}, 1)$, where $0^{*4} = (0, 0, 0, 0)$ and $[\,\cdot\,]$ denotes the greatest integer function. Under each configuration, ML and Bayes estimates are computed alongside their MSEs. Confidence intervals at the 95% level are derived using the ACI method, and their corresponding AW and CP are also reported. Bayes estimates of the parameters $\lambda$ and $\beta$ are calculated under the informative prior, using the SELF and the GELF; the loss parameter in the GELF is taken as $w = 1.5$ and $-1.5$. For the informative prior (IP), we use independent gamma priors for $\lambda$ and $\beta$, with hyperparameters selected by the method of moments as outlined in Singh et al. [4] and Yadav et al. [24]. For the non-informative prior (NIP), we use a uniform prior. The MSEs of the Bayes estimators are calculated, and the AW and CP of the 95% highest posterior density (HPD) credible intervals under both prior settings are also reported. All analyses are conducted using R software (https://www.r-project.org/) along with the relevant packages required for estimation, simulation, and plotting.
Results from the simulation are summarized in Table 1, Table 2, Table 3 and Table 4. From these tables, the following conclusions can be drawn:
1.
Table 1 reports the average estimates and MSEs for various combinations of $(\lambda, \beta)$ with $m = 5$, $T_m = 5$, $n = 30$, and a fixed removal scheme. It is observed that the MSE of $\hat{\lambda}$ decreases as $\beta$ increases, suggesting that larger rate parameter values lead to more precise estimation of $\lambda$. Conversely, the MSE of $\hat{\lambda}$ increases with $\lambda$, indicating that variation in the shape parameter introduces more estimation variability. For $\hat{\beta}$, the MSE increases with both $\lambda$ and $\beta$, implying that higher parameter values make estimation of the rate parameter relatively more difficult.
2.
Bayes estimators under GELF for loss parameter w = + 1.5 exhibit the lowest MSE across all the parameter combinations. This indicates that Bayesian inference with asymmetric loss functions can effectively reduce estimation error.
3.
Table 2 reports the average estimates and MSEs for varying $n$ and removal schemes $R$, with $m = 5$, $T_m = 5$, $\lambda = 0.5$, and $\beta = 0.5$ fixed. For a given $n$, the MSEs of the estimators $\hat{\lambda}$ and $\hat{\beta}$ are smallest for removal scheme $R_1$, followed by $R_4$, $R_3$, and $R_2$, and the MSEs of all estimators decrease as the sample size increases.
4.
Table 3 reports the ACIs and HPD intervals with their AW and CP for varying $n$ and $R$, with $m = 5$ and $T_m = 5$ fixed. The AW of the HPD intervals is lower than that of the ACIs for both $\lambda$ and $\beta$, meaning the Bayesian intervals are narrower and hence more precise. The CP associated with the HPD intervals is also higher than that of the ACIs, showing that the HPD intervals not only are narrower but also maintain better coverage.
5.
The AW corresponding to removal scheme $R_1$ is lowest, followed by the AWs corresponding to schemes $R_4$, $R_3$, and $R_2$ in most cases. The CP is also highest for removal scheme $R_2$ in most cases.
6.
Table 4 reports the average estimates and their MSEs (in parentheses) for varying $m$ and $T_m$, with $\lambda = 0.5$, $\beta = 0.5$, and $n = 30$. The MSE decreases as the censoring time $T_m$ increases, because extending the study period allows more failures to be observed, leading to improved estimation. Conversely, the MSE increases with the number of intervals $m$, because more intervals introduce more censoring points and hence more incomplete information.
7.
The Bayes estimators have smaller MSEs than the classical estimators in all the considered cases.
Algorithm 2 Simulation of a Progressive Interval Type-I Censored Sample
1: Specify the values of n, m, (T_1, T_2, …, T_m) and the model parameters λ, β.
2: Set i ← 0, d_sum ← 0, r_sum ← 0.
3: Set i ← i + 1. If i = m + 1, exit the algorithm.
4: Generate d_i ∼ Bin(n − d_sum − r_sum, [F(T_i) − F(T_{i−1})] / [1 − F(T_{i−1})]).
5: Update d_sum ← d_sum + d_i.
6: if i < m then
7:     Generate r_i ∼ Bin(n − d_sum − r_sum, p_i).
8: else
9:     Set r_i ← n − d_sum − r_sum.
10: end if
11: Update r_sum ← r_sum + r_i and go to Step 3.
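As a concrete illustration, the steps of Algorithm 2 can be sketched in Python. The GKME CDF is not restated in this section, so an exponential CDF is used below purely as a placeholder; the function and variable names are illustrative:

```python
import numpy as np

def sim_piti_sample(n, T, p, cdf, rng):
    """Simulate (d_i, r_i) under progressive interval Type-I censoring.

    T   : inspection times (T_1, ..., T_m)
    p   : removal probabilities p_i for the first m-1 inspection times
    cdf : lifetime CDF F(t) of the assumed model
    """
    m = len(T)
    d, r = [], []
    d_sum = r_sum = 0
    F_prev = 0.0
    for i in range(m):
        at_risk = n - d_sum - r_sum
        F_i = cdf(T[i])
        # conditional probability of failing in (T_{i-1}, T_i] given survival
        q = (F_i - F_prev) / (1.0 - F_prev)
        d_i = rng.binomial(at_risk, q)
        d_sum += d_i
        if i < m - 1:
            r_i = rng.binomial(n - d_sum - r_sum, p[i])  # random removals
        else:
            r_i = n - d_sum - r_sum  # remove all survivors at T_m
        r_sum += r_i
        d.append(d_i)
        r.append(r_i)
        F_prev = F_i
    return np.array(d), np.array(r)

# Placeholder: exponential CDF standing in for the GKME CDF
rng = np.random.default_rng(42)
d, r = sim_piti_sample(n=30, T=[1, 2, 3, 4, 5], p=[0.1] * 4,
                       cdf=lambda t: 1.0 - np.exp(-0.5 * t), rng=rng)
```

By construction the d_i and r_i sum to n, since all units still on test are removed at the final inspection time T_m.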

6. Real Data Analysis

In this section, we analyze a real-life dataset concerning the survival times of patients diagnosed with plasma cell myeloma, originally reported by Carbone et al. [25]. The study investigated the association between clinical symptoms, the type of abnormal protein, and survival outcomes, as well as the response to therapy, for 112 individuals treated at the National Cancer Institute. The data are given in Table 5.
For the analysis, we assume that the survival times follow a GKME distribution. The ML estimates of the parameters are obtained using an iterative numerical routine. In the absence of prior knowledge about the dataset, a non-informative prior is used to obtain the Bayes estimates under SELF and GELF. In addition, 95% approximate CIs and HPD intervals corresponding to the ML and Bayes estimates are constructed. The resulting point and interval estimates of λ and β are reported in Table 6. The trace plots and posterior densities of λ and β, based on the generated MCMC samples, are displayed in Figure 3 and Figure 4 and illustrate their distributional behavior.
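The ML step can be reproduced by numerically maximizing the progressive interval Type-I censored likelihood L(λ, β) ∝ ∏_i [F(T_i) − F(T_{i−1})]^{d_i} [1 − F(T_i)]^{r_i}. In the sketch below a two-parameter Weibull CDF is used purely as a placeholder for the GKME CDF (which is not restated here), and SciPy's Nelder–Mead routine stands in for the paper's iterative procedure; all names and the toy data are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def weibull_cdf(t, lam, beta):
    # Placeholder lifetime CDF; the GKME CDF would be substituted here.
    return 1.0 - np.exp(-((lam * t) ** beta))

def neg_log_lik(theta, T, d, r, cdf=weibull_cdf):
    """Negative log-likelihood for progressive interval Type-I censored data:
    L(theta) is proportional to prod_i [F(T_i)-F(T_{i-1})]^{d_i} [1-F(T_i)]^{r_i}."""
    lam, beta = theta
    if lam <= 0 or beta <= 0:
        return np.inf
    F = cdf(np.asarray(T, dtype=float), lam, beta)
    F_prev = np.concatenate(([0.0], F[:-1]))
    cell = np.clip(F - F_prev, 1e-300, None)  # P(failure in i-th interval)
    surv = np.clip(1.0 - F, 1e-300, None)     # P(survival past T_i)
    return -(np.asarray(d) * np.log(cell) + np.asarray(r) * np.log(surv)).sum()

def fit_ml(T, d, r, x0=(0.5, 1.0)):
    res = minimize(neg_log_lik, x0, args=(T, d, r), method="Nelder-Mead")
    return res.x

# Toy interval-censored counts (illustrative, not the Table 5 data)
T = [0.5, 1.0, 1.5, 2.0, 2.5]
d = [20, 15, 10, 8, 5]
r = [2, 1, 1, 0, 8]
lam_hat, beta_hat = fit_ml(T, d, r)
```

The same objective can be handed to any other optimizer, and its negated Hessian at the maximum is what the asymptotic confidence intervals are built from.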

7. Conclusions

In this article, we develop comprehensive inferential procedures under both classical and Bayesian paradigms for the GKME distribution in the presence of PITI censoring. The work includes the derivation of point estimators as well as the construction of interval estimates for the unknown parameters, providing a complete framework for statistical inference in such censoring scenarios. Under the classical inference framework, ML estimators are computed through an iterative numerical procedure, and their corresponding ACIs are constructed at the 95% confidence level, relying on the asymptotic normality property of the MLEs. For the Bayesian analysis, the M-H algorithm is implemented to generate MCMC samples, which are then utilized to obtain Bayes estimates under both SELF and GELF. From these posterior samples, 95% HDIs are also derived to quantify parameter uncertainty. To evaluate the performance of the proposed estimators, an extensive Monte Carlo simulation study is conducted, and the findings indicate that the estimators perform satisfactorily across various scenarios. Finally, the practical applicability of the proposed methodology is demonstrated through the analysis of a real dataset on the survival times of patients with plasma cell myeloma within the PITI censoring framework.

Author Contributions

Conceptualization, E.V. and S.K.S.; methodology, E.V. and S.K.S.; software, E.V. and S.K.S.; validation, E.V. and S.K.S.; formal analysis, S.K.S. and M.M.H.; investigation, M.M.A. and M.M.H.; resources, M.M.A.; data curation, M.M.A. and M.M.H.; writing—original draft, E.V.; writing—review and editing, M.M.H.; visualization, M.M.H.; supervision, S.K.S.; funding acquisition, M.M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-DDRSP2502).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data sources are given in the paper with their related references.

Acknowledgments

The authors extend their appreciation to the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) for funding this work through Research Group: IMSIU-DDRSP2502.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Verma, E.; Singh, S.K.; Yadav, S. A new generalized class of Kavya–Manoharan distributions: Inferences and applications. Life Cycle Reliab. Saf. Eng. 2025, 14, 79–91.
  2. Sirvanci, M.; Yang, G. Estimation of the Weibull parameters under Type I censoring. J. Am. Stat. Assoc. 1984, 79, 183–187.
  3. Basu, S.; Singh, S.K.; Singh, U. Parameter estimation of inverse Lindley distribution for Type-I censored data. Comput. Stat. 2017, 32, 367–385.
  4. Singh, S.K.; Singh, U.; Sharma, V.K. Bayesian estimation and prediction for flexible Weibull model under Type-II censoring scheme. J. Probab. Stat. 2013, 2013, 146140.
  5. Finkelstein, D.M. A proportional hazards model for interval-censored failure time data. Biometrics 1986, 42, 845–854.
  6. Sun, J. The Statistical Analysis of Interval-Censored Failure Time Data; Springer: Berlin/Heidelberg, Germany, 2006.
  7. Guure, C.B.; Ibrahim, N.A.; Dwomoh, D.; Bosomprah, S. Bayesian statistical inference of the loglogistic model with interval-censored lifetime data. J. Stat. Comput. Simul. 2015, 85, 1567–1583.
  8. Sharma, V.K.; Singh, S.K.; Singh, U.; Ul-Farhat, K. Bayesian estimation on interval censored Lindley distribution using Lindley's approximation. Int. J. Syst. Assur. Eng. Manag. 2017, 8 (Suppl. S2), 799–810.
  9. Balakrishnan, N.; Aggarwala, R. Progressive Censoring: Theory, Methods, and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2000.
  10. Aggarwala, R. Progressive interval censoring: Some mathematical results with applications to inference. Commun. Stat. Theory Methods 2001, 30, 1921–1935.
  11. Chen, D.-G.; Lio, Y. Parameter estimations for generalized exponential distribution under progressive Type-I interval censoring. Comput. Stat. Data Anal. 2010, 54, 1581–1591.
  12. Tse, S.K.; Yang, C.; Yuen, H.-K. Statistical analysis of Weibull distributed lifetime data under Type II progressive censoring with binomial removals. J. Appl. Stat. 2000, 27, 1033–1043.
  13. Ashour, S.; Afify, W. Statistical analysis of exponentiated Weibull family under Type-I progressive interval censoring with random removals. J. Appl. Sci. Res. 2007, 3, 1851–1863.
  14. Kaushik, A.; Pandey, A.; Maurya, S.K.; Singh, U.; Singh, S.K. Estimations of the parameters of generalised exponential distribution under progressive interval Type-I censoring scheme with random removals. Austrian J. Stat. 2017, 46, 33–47.
  15. Kaushik, A.; Singh, U.; Singh, S.K. Bayesian inference for the parameters of Weibull distribution under progressive Type-I interval censored data with beta-binomial removals. Commun. Stat. Simul. Comput. 2017, 46, 3140–3158.
  16. Lodhi, C.; Tripathi, Y.M. Inference on a progressive Type I interval-censored truncated normal distribution. J. Appl. Stat. 2020, 47, 1402–1422.
  17. Alotaibi, R.; Rezk, H.; Dey, S.; Okasha, H. Bayesian estimation for Dagum distribution based on progressive Type I interval censoring. PLoS ONE 2021, 16, e0252556.
  18. Roy, S.; Pradhan, B.; Purakayastha, A. On inference and design under progressive Type-I interval censoring scheme for inverse Gaussian lifetime model. Int. J. Qual. Reliab. Manag. 2022, 39, 1937–1962.
  19. Hasan, R.; Al-Mosawi, R.R.; Qader, A.A. Inspection times and optimal censoring scheme for generalized inverted exponential distribution with progressive Type I interval censored data. Iraqi J. Sci. 2024, 65, 2114–2131.
  20. Calabria, R.; Pulcini, G. An engineering approach to Bayes estimation for the Weibull distribution. Microelectron. Reliab. 1994, 34, 789–802.
  21. Hastings, W.K. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 1970, 57, 97–109.
  22. Smith, A.F.; Roberts, G.O. Bayesian computation via the Gibbs sampler and related Markov chain Monte Carlo methods. J. R. Stat. Soc. Ser. B 1993, 55, 3–23.
  23. Balakrishnan, N.; Cramer, E. The Art of Progressive Censoring; Springer: Berlin/Heidelberg, Germany, 2014.
  24. Yadav, S.; Singh, S.K.; Kaushik, A. Parameter estimation of Burr Type-III distribution under generalized progressive hybrid censoring scheme. Jpn. J. Stat. Data Sci. 2024, 8, 93–144.
  25. Carbone, P.P.; Kellerhouse, L.E.; Gehan, E.A. Plasmacytic myeloma: A study of the relationship of survival to various clinical manifestations and anomalous protein type in 112 patients. Am. J. Med. 1967, 42, 937–948.
Figure 1. (i) PDF and (ii) hazard plot of GKME distribution.
Figure 2. Graphical representation of progressive interval type-I censoring scheme.
Figure 3. (i) Trace plot, (ii) density plot of λ for real dataset.
Figure 4. (i) Trace plot and (ii) density plot of β for real dataset.
Table 1. Average estimates and MSEs (in parentheses) for various combinations of (λ, β) for fixed m = 5, T_m = 5 and n = 30.

| λ | β | λ̂_ML | λ̂_SELF | λ̂_GELF (w = −1.5) | λ̂_GELF (w = +1.5) | β̂_ML | β̂_SELF | β̂_GELF (w = −1.5) | β̂_GELF (w = +1.5) |
|---|---|---|---|---|---|---|---|---|---|
| 0.50 | 0.50 | 0.7160 (0.9483) | 0.7077 (0.7437) | 0.6988 (0.7405) | 0.7089 (0.7359) | 0.6713 (0.1831) | 0.6378 (0.1704) | 0.6506 (0.1678) | 0.6666 (0.1641) |
| 0.50 | 0.75 | 0.7006 (0.9336) | 0.6562 (0.7247) | 0.6967 (0.7309) | 0.6252 (0.7275) | 0.8839 (0.1997) | 0.8092 (0.1864) | 1.8979 (0.1792) | 1.7666 (0.1792) |
| 0.50 | 1.00 | 0.6017 (0.9265) | 0.5807 (0.7184) | 0.571 (0.729) | 0.5344 (0.717) | 1.2998 (0.2226) | 1.2808 (0.2034) | 1.2711 (0.2087) | 1.2844 (0.209) |
| 0.50 | 1.50 | 0.6044 (0.922) | 0.52 (0.7109) | 0.5475 (0.7206) | 0.5531 (0.7124) | 1.5548 (0.2484) | 1.5545 (0.2321) | 1.5453 (0.2334) | 1.5593 (0.2319) |
| 0.50 | 2.50 | 0.603 (0.9166) | 0.5169 (0.7152) | 0.5144 (0.7121) | 0.5478 (0.7121) | 2.8344 (0.2824) | 2.8189 (0.2698) | 2.8096 (0.2695) | 2.8272 (0.2695) |
| 0.75 | 0.50 | 0.9741 (0.9714) | 0.9673 (0.8639) | 0.9658 (0.8647) | 0.9751 (0.8644) | 0.7582 (0.2205) | 0.7387 (0.1761) | 0.733 (0.173) | 0.7456 (0.1733) |
| 0.75 | 0.75 | 0.961 (0.9499) | 0.9627 (0.8419) | 0.9565 (0.8497) | 0.9664 (0.8455) | 1.0284 (0.2414) | 1.012 (0.1943) | 1.0105 (0.1948) | 1.0161 (0.1935) |
| 0.75 | 1.00 | 0.9514 (0.9355) | 0.957 (0.8281) | 0.9473 (0.8351) | 0.9561 (0.8303) | 1.2918 (0.2653) | 1.2741 (0.2166) | 1.2707 (0.2219) | 1.2794 (0.2198) |
| 0.75 | 1.50 | 0.9525 (0.9299) | 0.9565 (0.8191) | 0.946 (0.8214) | 0.9615 (0.816) | 1.8594 (0.2935) | 1.7553 (0.246) | 1.848 (0.2502) | 1.5618 (0.2469) |
| 0.75 | 2.50 | 0.951 (0.9245) | 0.9546 (0.813) | 0.9449 (0.8218) | 0.9609 (0.821) | 2.8285 (0.326) | 2.8201 (0.2754) | 2.8093 (0.2839) | 2.82 (0.2822) |
| 1.50 | 0.50 | 1.6186 (1.3109) | 1.693 (1.0013) | 1.608 (1.003) | 1.7154 (1.0013) | 0.7623 (0.2503) | 0.7555 (0.1916) | 0.7453 (0.1958) | 0.7545 (0.1983) |
| 1.50 | 0.75 | 1.4877 (1.275) | 1.6992 (0.971) | 1.6802 (0.9683) | 1.7295 (0.975) | 1.0305 (0.2758) | 1.0284 (0.2178) | 1.0169 (0.2159) | 1.0345 (0.2187) |
| 1.50 | 1.00 | 1.4706 (1.2522) | 1.6631 (0.9516) | 1.6531 (0.9529) | 1.7462 (0.9454) | 1.3148 (0.3077) | 1.3047 (0.2497) | 1.3011 (0.2567) | 1.3043 (0.2549) |
| 1.50 | 1.50 | 1.4756 (1.2436) | 1.6651 (0.9377) | 1.6555 (0.9384) | 1.7367 (0.9411) | 1.6566 (0.2896) | 1.7559 (0.2804) | 1.7726 (0.2815) | 1.6591 (0.2789) |
| 1.50 | 2.50 | 1.4706 (1.2334) | 1.6621 (0.9286) | 1.6613 (0.9325) | 1.7377 (0.9335) | 2.8202 (0.2753) | 2.812 (0.3191) | 2.8091 (0.3192) | 2.8178 (0.3199) |
| 2 | 0.50 | 2.2411 (2.1918) | 2.1313 (1.3874) | 2.2301 (1.3884) | 2.1402 (1.3881) | 0.5631 (0.282) | 0.5595 (0.2038) | 0.5559 (0.2527) | 0.5599 (0.2427) |
| 2 | 0.75 | 2.2222 (2.1254) | 2.1154 (1.3213) | 2.2085 (1.3155) | 2.113 (1.3208) | 0.8395 (0.3179) | 0.8361 (0.2622) | 0.8319 (0.2675) | 0.8324 (0.2673) |
| 2 | 1.00 | 2.2124 (2.0934) | 2.1051 (1.2842) | 2.0002 (1.295) | 2.1074 (1.2908) | 1.1005 (0.3538) | 1.0952 (0.2968) | 1.0897 (0.2937) | 1.0958 (0.2954) |
| 2 | 1.50 | 2.2096 (2.0656) | 2.2063 (1.2608) | 1.9941 (1.2649) | 2.1155 (1.2614) | 1.4642 (0.396) | 1.4828 (0.3421) | 1.5278 (0.3433) | 1.4707 (0.3402) |
| 2 | 2.50 | 2.0201 (2.0509) | 2.0187 (1.2422) | 2.009 (1.2504) | 2.0233 (1.2391) | 2.6357 (0.4462) | 2.6304 (0.3879) | 2.6286 (0.391) | 2.6329 (0.3862) |
| 2.50 | 0.50 | 2.6935 (4.4482) | 2.6863 (2.5288) | 2.4813 (2.5247) | 2.5917 (2.4389) | 0.5838 (0.3117) | 0.5819 (0.2765) | 0.5728 (0.2717) | 0.5857 (0.273) |
| 2.50 | 0.75 | 2.6876 (4.187) | 2.6256 (2.1803) | 2.5255 (2.1786) | 2.5306 (2.1867) | 0.851 (0.3495) | 0.8502 (0.2983) | 0.841 (0.3009) | 0.8571 (0.302) |
| 2.50 | 1.00 | 2.6769 (4.1387) | 2.5352 (2.1315) | 2.5342 (2.134) | 2.539 (2.1352) | 1.1106 (0.3975) | 1.1078 (0.3599) | 1.0993 (0.3632) | 1.1095 (0.3637) |
| 2.50 | 1.50 | 2.6384 (4.1061) | 2.5352 (2.1047) | 2.5254 (2.1032) | 2.5357 (2.0957) | 1.4781 (0.4545) | 1.4866 (0.42) | 1.4675 (0.4138) | 1.4481 (0.4135) |
| 2.50 | 2.50 | 2.6397 (4.0908) | 2.5353 (2.0859) | 2.532 (2.0912) | 2.5384 (2.0891) | 2.6422 (0.5204) | 2.631 (0.4898) | 2.6303 (0.4909) | 2.6391 (0.4887) |
Table 2. Average estimates and their MSEs (in parentheses) for varying n and removal schemes R_1–R_4, with m = 5, T_m = 5, λ = 0.5 and β = 0.5 fixed.

| n | R | λ̂_ML | λ̂_SELF | λ̂_GELF (w = −1.5) | λ̂_GELF (w = +1.5) | β̂_ML | β̂_SELF | β̂_GELF (w = −1.5) | β̂_GELF (w = +1.5) |
|---|---|---|---|---|---|---|---|---|---|
| 20 | R_1 | 0.5715 (0.6705) | 0.5542 (0.4135) | 0.5503 (0.5143) | 0.5241 (0.2602) | 0.5604 (0.2705) | 0.5505 (0.2135) | 0.5499 (0.2043) | 0.5249 (0.1902) |
| 20 | R_2 | 0.5782 (0.8600) | 0.5524 (0.5421) | 0.5526 (0.6747) | 0.5321 (0.3372) | 0.5683 (0.3092) | 0.5621 (0.2510) | 0.5654 (0.2435) | 0.5418 (0.2226) |
| 20 | R_3 | 0.5722 (0.8053) | 0.5479 (0.4960) | 0.5597 (0.6209) | 0.5306 (0.3277) | 0.5717 (0.3553) | 0.5479 (0.2960) | 0.5693 (0.2909) | 0.5407 (0.2476) |
| 20 | R_4 | 0.5749 (0.7392) | 0.5447 (0.4610) | 0.5543 (0.5635) | 0.5265 (0.3026) | 0.5638 (0.4100) | 0.5592 (0.3421) | 0.5597 (0.34478) | 0.5276 (0.2772) |
| 30 | R_1 | 0.5592 (0.4517) | 0.5268 (0.2793) | 0.5329 (0.3632) | 0.5261 (0.1926) | 0.5408 (0.2017) | 0.5267 (0.1593) | 0.5326 (0.1532) | 0.5164 (0.1626) |
| 30 | R_2 | 0.5626 (0.5893) | 0.5350 (0.3729) | 0.5438 (0.4470) | 0.5316 (0.2327) | 0.5511 (0.2384) | 0.5346 (0.2081) | 0.5426 (0.1872) | 0.5218 (0.1837) |
| 30 | R_3 | 0.5572 (0.5453) | 0.5323 (0.3304) | 0.5398 (0.4171) | 0.5398 (0.2229) | 0.5479 (0.2953) | 0.5319 (0.2304) | 0.5400 (0.2171) | 0.5204 (0.2029) |
| 30 | R_4 | 0.5538 (0.4884) | 0.5313 (0.3181) | 0.5372 (0.3872) | 0.5272 (0.2037) | 0.5448 (0.3393) | 0.5286 (0.2729) | 0.5369 (0.2570) | 0.5178 (0.2227) |
| 40 | R_1 | 0.5397 (0.3399) | 0.5238 (0.2251) | 0.5243 (0.2622) | 0.5129 (0.1452) | 0.5317 (0.1399) | 0.5196 (0.1251) | 0.5249 (0.1022) | 0.5126 (0.1452) |
| 40 | R_2 | 0.5399 (0.4467) | 0.5262 (0.2814) | 0.5318 (0.3395) | 0.5172 (0.1923) | 0.5399 (0.1830) | 0.5259 (0.1554) | 0.5319 (0.1423) | 0.5166 (0.1554) |
| 40 | R_3 | 0.5354 (0.4131) | 0.5234 (0.2591) | 0.5309 (0.3146) | 0.5157 (0.1798) | 0.5362 (0.2331) | 0.5232 (0.1791) | 0.5292 (0.1746) | 0.5157 (0.1698) |
| 40 | R_4 | 0.5323 (0.3830) | 0.5214 (0.2454) | 0.5275 (0.3023) | 0.5141 (0.1554) | 0.5336 (0.2767) | 0.5225 (0.2014) | 0.5283 (0.2095) | 0.5143 (0.1823) |
| 50 | R_1 | 0.5232 (0.2747) | 0.5264 (0.1824) | 0.5196 (0.2286) | 0.5208 (0.1213) | 0.5249 (0.0947) | 0.5359 (0.0924) | 0.5286 (0.0686) | 0.5261 (0.1313) |
| 50 | R_2 | 0.5321 (0.3560) | 0.5203 (0.2194) | 0.5259 (0.2789) | 0.513 (0.1479) | 0.5303 (0.1319) | 0.5254 (0.1104) | 0.5257 (0.0974) | 0.5329 (0.1418) |
| 50 | R_3 | 0.5378 (0.3291) | 0.5364 (0.2044) | 0.5247 (0.2617) | 0.5111 (0.1462) | 0.5287 (0.1791) | 0.5185 (0.1344) | 0.5243 (0.1217) | 0.5128 (0.1562) |
| 50 | R_4 | 0.5267 (0.3019) | 0.5374 (0.1904) | 0.5219 (0.2374) | 0.5107 (0.1318) | 0.526 (0.2160) | 0.5177 (0.1594) | 0.5223 (0.1489) | 0.512 (0.1679) |
Table 3. ACI and HDI with their AW (in parentheses) and CP for varying n and R, with m = 5 and T_m = 5 fixed, for λ = 0.5 and β = 0.5.

| n | R | λ: ACI | λ: HDI | β: ACI | β: HDI |
|---|---|---|---|---|---|
| 20 | R_1 | (0.158, 0.92903); AW 0.77103; CP 0.947 | (0.22748, 0.77724); AW 0.54976; CP 0.987 | (0.23006, 0.82585); AW 0.59579; CP 0.939 | (0.3652, 0.68335); AW 0.31815; CP 0.988 |
| 20 | R_2 | (0.15659, 0.93573); AW 0.77914; CP 0.951 | (0.22013, 0.77645); AW 0.55632; CP 0.99 | (0.22941, 0.83262); AW 0.60321; CP 0.957 | (0.37457, 0.69132); AW 0.31675; CP 0.992 |
| 20 | R_3 | (0.15776, 0.92973); AW 0.77197; CP 0.951 | (0.22685, 0.77757); AW 0.55072; CP 0.987 | (0.23213, 0.83059); AW 0.59846; CP 0.953 | (0.37152, 0.6867); AW 0.31518; CP 0.991 |
| 20 | R_4 | (0.15793, 0.92917); AW 0.77124; CP 0.947 | (0.22754, 0.77773); AW 0.55019; CP 0.989 | (0.22968, 0.82611); AW 0.59643; CP 0.947 | (0.36141, 0.6781); AW 0.31669; CP 0.99 |
| 30 | R_1 | (0.19023, 0.84791); AW 0.65768; CP 0.949 | (0.30951, 0.71002); AW 0.40051; CP 0.989 | (0.24322, 0.78554); AW 0.54232; CP 0.948 | (0.39303, 0.61598); AW 0.22295; CP 0.989 |
| 30 | R_2 | (0.19013, 0.85235); AW 0.66222; CP 0.95 | (0.3044, 0.70963); AW 0.40523; CP 0.99 | (0.23304, 0.77827); AW 0.54523; CP 0.955 | (0.39348, 0.62119); AW 0.22771; CP 0.991 |
| 30 | R_3 | (0.18998, 0.84813); AW 0.65815; CP 0.951 | (0.30898, 0.71); AW 0.40102; CP 0.99 | (0.24416, 0.78855); AW 0.54439; CP 0.954 | (0.39456, 0.61857); AW 0.22401; CP 0.99 |
| 30 | R_4 | (0.19037, 0.84807); AW 0.6577; CP 0.952 | (0.30976, 0.71048); AW 0.40072; CP 0.99 | (0.23133, 0.77657); AW 0.54524; CP 0.952 | (0.39861, 0.61969); AW 0.22108; CP 0.989 |
| 40 | R_1 | (0.2074, 0.81914); AW 0.61174; CP 0.947 | (0.33743, 0.66583); AW 0.3284; CP 0.987 | (0.25406, 0.76284); AW 0.50878; CP 0.942 | (0.41769, 0.60364); AW 0.18595; CP 0.987 |
| 40 | R_2 | (0.20738, 0.82272); AW 0.61534; CP 0.948 | (0.33263, 0.66467); AW 0.33204; CP 0.991 | (0.24954, 0.7655); AW 0.51596; CP 0.95 | (0.41346, 0.59651); AW 0.18305; CP 0.99 |
| 40 | R_3 | (0.20762, 0.82); AW 0.61238; CP 0.947 | (0.33649, 0.66511); AW 0.32862; CP 0.99 | (0.25336, 0.76672); AW 0.51336; CP 0.946 | (0.41524, 0.60369); AW 0.18845; CP 0.989 |
| 40 | R_4 | (0.20721, 0.81894); AW 0.61173; CP 0.947 | (0.33744, 0.66594); AW 0.3285; CP 0.99 | (0.25357, 0.76285); AW 0.50928; CP 0.95 | (0.41692, 0.60408); AW 0.18716; CP 0.99 |
| 50 | R_1 | (0.23963, 0.73802); AW 0.49839; CP 0.946 | (0.41946, 0.59861); AW 0.17915; CP 0.986 | (0.26722, 0.72253); AW 0.45531; CP 0.946 | (0.44552, 0.53627); AW 0.09075; CP 0.986 |
| 50 | R_2 | (0.24092, 0.73934); AW 0.49842; CP 0.955 | (0.4169, 0.59785); AW 0.18095; CP 0.99 | (0.25317, 0.71115); AW 0.45798; CP 0.952 | (0.43237, 0.52638); AW 0.09401; CP 0.986 |
| 50 | R_3 | (0.23984, 0.7384); AW 0.49856; CP 0.949 | (0.41862, 0.59754); AW 0.17892; CP 0.988 | (0.26539, 0.72468); AW 0.45929; CP 0.95 | (0.43828, 0.53556); AW 0.09728; CP 0.985 |
| 50 | R_4 | (0.23965, 0.73784); AW 0.49819; CP 0.946 | (0.41966, 0.59869); AW 0.17903; CP 0.97 | (0.25522, 0.71331); AW 0.45809; CP 0.95 | (0.45412, 0.54567); AW 0.09155; CP 0.987 |
Table 4. Average estimates and their MSEs (in parentheses) for varying m and T_m, with λ = 0.5, β = 0.5, n = 30 and removal scheme R_2.

| m | T_m | λ̂_ML | λ̂_SELF | λ̂_GELF (w = −1.5) | λ̂_GELF (w = +1.5) | β̂_ML | β̂_SELF | β̂_GELF (w = −1.5) | β̂_GELF (w = +1.5) |
|---|---|---|---|---|---|---|---|---|---|
| 5 | 3 | 0.6284 (0.9793) | 0.6122 (0.7747) | 0.6209 (0.7715) | 0.5783 (0.7669) | 0.5346 (0.1964) | 0.5204 (0.1858) | 0.5363 (0.1811) | 0.5191 (0.1774) |
| 5 | 4 | 0.6201 (0.9657) | 0.6005 (0.7611) | 0.6172 (0.7579) | 0.5674 (0.7533) | 0.5191 (0.1885) | 0.5123 (0.1753) | 0.5435 (0.1732) | 0.508 (0.1695) |
| 5 | 5 | 0.5958 (0.9483) | 0.5702 (0.7437) | 0.5967 (0.7405) | 0.5534 (0.7359) | 0.5174 (0.1831) | 0.507 (0.1704) | 0.5091 (0.1678) | 0.5021 (0.1641) |
| 5 | 6 | 0.5878 (0.9314) | 0.5743 (0.7268) | 0.582 (0.7236) | 0.5698 (0.719) | 0.5152 (0.1795) | 0.494 (0.1686) | 0.5148 (0.1642) | 0.4923 (0.1605) |
| 5 | 7 | 0.5727 (0.9185) | 0.5627 (0.7139) | 0.5836 (0.7107) | 0.5315 (0.7061) | 0.5108 (0.1671) | 0.5019 (0.1524) | 0.5126 (0.1518) | 0.5017 (0.1481) |
| 8 | 3 | 0.6377 (0.9888) | 0.5976 (0.7856) | 0.6129 (0.7843) | 0.5762 (0.7782) | 0.5453 (0.2172) | 0.5355 (0.2083) | 0.543 (0.2009) | 0.5316 (0.2009) |
| 8 | 4 | 0.608 (0.9618) | 0.5873 (0.7586) | 0.6044 (0.7573) | 0.5807 (0.7512) | 0.527 (0.1933) | 0.5212 (0.1844) | 0.5363 (0.1791) | 0.5189 (0.177) |
| 8 | 5 | 0.5789 (0.9497) | 0.5827 (0.7465) | 0.569 (0.7452) | 0.5301 (0.7391) | 0.5101 (0.1845) | 0.5045 (0.1756) | 0.5262 (0.1703) | 0.5037 (0.1682) |
| 8 | 6 | 0.5729 (0.9354) | 0.5825 (0.7322) | 0.5837 (0.7309) | 0.5608 (0.7248) | 0.5121 (0.1771) | 0.5077 (0.1682) | 0.5175 (0.1629) | 0.5066 (0.1608) |
| 8 | 7 | 0.5593 (0.9286) | 0.5584 (0.7254) | 0.5927 (0.7241) | 0.5732 (0.718) | 0.5078 (0.1688) | 0.4997 (0.1599) | 0.5006 (0.1546) | 0.4957 (0.1525) |
| 10 | 3 | 0.6336 (0.9895) | 0.6165 (0.7863) | 0.6402 (0.7857) | 0.6053 (0.7789) | 0.6177 (0.2112) | 0.5708 (0.2016) | 0.5519 (0.1963) | 0.5165 (0.1937) |
| 10 | 4 | 0.6109 (0.9739) | 0.5926 (0.7707) | 0.613 (0.7701) | 0.5551 (0.7633) | 0.5278 (0.1961) | 0.5172 (0.1865) | 0.5211 (0.1812) | 0.5159 (0.1786) |
| 10 | 5 | 0.6075 (0.9526) | 0.5961 (0.7494) | 0.6077 (0.7488) | 0.5899 (0.742) | 0.5168 (0.1868) | 0.496 (0.1772) | 0.5093 (0.1719) | 0.4946 (0.1693) |
| 10 | 6 | 0.6002 (0.9447) | 0.5904 (0.7415) | 0.5986 (0.7409) | 0.559 (0.7341) | 0.5163 (0.1772) | 0.4944 (0.1676) | 0.5186 (0.1623) | 0.4916 (0.1501) |
| 10 | 7 | 0.5854 (0.9371) | 0.5899 (0.7339) | 0.6063 (0.7333) | 0.5852 (0.7265) | 0.5141 (0.1628) | 0.5008 (0.1532) | 0.5193 (0.1479) | 0.4994 (0.1453) |
Table 5. Survival times for patients with plasma cell myeloma.

| Interval (Months) | Number of Failures (d) | Number of Withdrawals (r) |
|---|---|---|
| [0, 5.5) | 18 | 1 |
| [5.5, 10.5) | 16 | 1 |
| [10.5, 15.5) | 18 | 3 |
| [15.5, 20.5) | 10 | 0 |
| [20.5, 25.5) | 11 | 0 |
| [25.5, 30.5) | 8 | 1 |
| [30.5, 40.5) | 13 | 2 |
| [40.5, 50.5) | 4 | 3 |
| [50.5, 60.5) | 1 | 2 |
| [60.5, ∞) | 0 | 0 |
Table 6. ML estimates (standard error) and Bayes estimates of λ and β along with 95% approximate confidence/HPD intervals for the real dataset.

| Parameter | ML (SE) | SELF | GELF (w = −1.5) | GELF (w = +1.5) | 95% ACI | 95% HPD |
|---|---|---|---|---|---|---|
| λ̂ | 0.2151 (0.1330) | 0.3002 | 0.3348 | 0.1785 | (0.0705, 0.6562) | (0.0657, 0.3846) |
| β̂ | 0.0649 (0.0093) | 0.0634 | 0.0612 | 0.0631 | (0.0492, 0.0858) | (0.0436, 0.0705) |