Article

An Inferential Study of Discrete One-Parameter Linear Exponential Distribution Under Randomly Right-Censored Data

Department of Statistics, Faculty of Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(21), 3520; https://doi.org/10.3390/math13213520
Submission received: 26 August 2025 / Revised: 27 October 2025 / Accepted: 29 October 2025 / Published: 3 November 2025
(This article belongs to the Special Issue Mathematical Statistics and Nonparametric Inference)

Abstract

Count data play a critical role in many real-life applications across scientific fields. This study addresses classical and Bayesian estimation of the one-parameter discrete linear exponential distribution under randomly right-censored data. Maximum likelihood estimators, both point and interval, are derived for the unknown parameter. In addition, Bayesian estimators are obtained using informative and non-informative priors and are assessed under three distinct loss functions: the squared error, linear exponential, and generalized entropy loss functions. An algorithm for generating randomly right-censored data from the proposed model is also developed. To evaluate the efficiency of the estimators, extensive simulation studies are conducted; they reveal that the maximum likelihood approach and the Bayesian approach under the generalized entropy loss function with a positive weight consistently outperform the other methods across all sample sizes, achieving the lowest root mean squared errors. Finally, the discrete linear exponential distribution demonstrates strong applicability in modeling discrete count lifetime data in the physical and medical sciences, outperforming related alternative distributions.
MSC:
65H10; 65Z05; 60-08; 60E05; 62E10; 62F15; 62F25; 62N01; 62N05; 62P35; 65C05; 65K05

1. Introduction

Lifetime distributions play a crucial role in analyzing and modeling data across a wide range of applied fields, including economics, engineering, finance, and the medical and biological sciences. In practice, data collection is frequently constrained by time or budget limitations, often resulting in incomplete datasets, commonly referred to as censored data. Various censoring schemes are available in the literature to analyze such datasets, including left, right, type I, type II, random, and hybrid censoring. One of the most important censoring techniques in the literature is random censoring. Randomly right-censored data occur when the precise event time is unknown because a subject has not yet experienced the event of interest by the end of the study or drops out before the event occurs, with the decision to drop out not influenced by the event itself. The incomplete information then takes the form of a time interval, rather than a specific point, and the event’s occurrence is said to be censored on the right [1]. Randomly censored lifetime data arise in many applications, such as medical science, biology, and reliability studies.
The literature on censoring has been largely developed in a continuous setting. Classical contributions include the Kaplan–Meier estimator proposed by [2], kernel-based approaches for censored data introduced by [3], and prediction methods under random censoring by [4], with further developments addressing complex schemes such as twice-censored data by [5,6]. These works highlight the theoretical depth of continuous censoring methods. Many real-world datasets are inherently discrete, which limits the direct application of continuous methods. There remains a need to develop novel discrete distributions that can effectively model censored data and accommodate diverse types of lifetime discrete data. In this regard, several studies have made notable contributions. For instance, refs. [7,8,9,10,11,12,13] introduced multi-parameter discrete models, whereas refs. [14,15,16] proposed one-parameter models for discrete lifetime data. The challenge of random censoring in discrete settings has also been addressed, notably by [17,18,19,20], who developed methodological frameworks for practical implementation.
One of the recently proposed discrete lifetime models is the Discrete Linear Exponential (DLE) distribution [16], which provides the flexibility to fit over-dispersed, positively skewed data with an increasing failure rate. The probability mass function (PMF) of the DLE distribution is given by:
$$P(X = x;\,\phi) \;=\; \frac{e^{-\phi x}\,(1 + \phi x + \phi^{3}) \;-\; e^{-\phi (x+1)}\,\bigl(1 + \phi (x+1) + \phi^{3}\bigr)}{1 + \phi^{3}}\,;\qquad x = 0, 1, 2, \ldots,\ \phi > 0.$$
The corresponding CDF is given by:
$$F_X(x;\,\phi) \;=\; 1 \;-\; \frac{e^{-\phi (x+1)}}{1 + \phi^{3}}\,\bigl(1 + \phi (x+1) + \phi^{3}\bigr),\qquad x = 0, 1, 2, \ldots,\ \phi > 0.$$
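The following minimal R sketch implements the PMF, the CDF, and the survival function S(x) = P(X ≥ x) implied by the expressions above; it is provided only for illustration, and the function names (dle_surv, dle_pmf, dle_cdf) are ours rather than part of the original work.

```r
# Minimal sketch of the DLE survival function, PMF, and CDF (names are ours).
dle_surv <- function(x, phi) {
  # S(x) = P(X >= x) = exp(-phi * x) * (1 + phi * x + phi^3) / (1 + phi^3)
  exp(-phi * x) * (1 + phi * x + phi^3) / (1 + phi^3)
}

dle_pmf <- function(x, phi) {
  # P(X = x) = S(x) - S(x + 1)
  dle_surv(x, phi) - dle_surv(x + 1, phi)
}

dle_cdf <- function(x, phi) {
  # F(x) = 1 - S(x + 1)
  1 - dle_surv(x + 1, phi)
}

# Sanity check: the PMF should sum to (approximately) one.
sum(dle_pmf(0:5000, phi = 0.5))
```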
While the original work by [16] examined the fundamental properties and estimation of the DLE distribution under complete data, the current study extends this model to the case of randomly right-censored data, which frequently occurs in medical and reliability studies. New inferential procedures are developed for parameter estimation using both maximum likelihood (ML) and Bayesian methods under various loss functions. Furthermore, an algorithm for generating randomly right-censored samples from the DLE model is proposed, and extensive simulation experiments are conducted to evaluate the performance of the estimators. Finally, an analysis of real data is carried out to validate and illustrate the practical applicability of the proposed methods.

2. Maximum Likelihood Estimations

The ML method is one of the most widely used classical estimation techniques. In this section, both point and interval estimation of the unknown parameter under randomly right-censored data are considered.

2.1. Point Estimation

The ML estimate of the parameter ϕ is derived for the DLE distribution under a randomly right-censored sample. In this case, each observation in the random sample ( x i , d i ) , i = 1 , … , n , contributes to the likelihood function through:
$$L_i \;=\; \bigl[P(x_i)\bigr]^{d_i}\,\bigl[S(x_i)\bigr]^{1-d_i},$$
where x i denotes the observed lifetime, P ( x i ) is the probability mass function, S ( x i ) is the survival function and d i is a censoring indicator, taking the value d i = 1 if the lifetime is observed and d i = 0 if it is censored ( i = 1 , 2 , . . . , n ) , see [1]. Then, for the DLE distribution under random right-censoring, the likelihood function of ϕ is given by:
$$L(\underline{x},\phi,\underline{d}) \;=\; \prod_{i=1}^{n}\left[\frac{e^{-\phi x_i}}{1+\phi^{3}}\Bigl((1+\phi x_i+\phi^{3})-e^{-\phi}\bigl(1+\phi (x_i+1)+\phi^{3}\bigr)\Bigr)\right]^{d_i}\left[\frac{(1+\phi x_i+\phi^{3})\,e^{-\phi x_i}}{1+\phi^{3}}\right]^{1-d_i}.$$
The corresponding log-likelihood function is:
$$LL(\underline{x},\phi,\underline{d}) \;=\; \sum_{i=1}^{n}\Bigl[d_i\Bigl(-\phi x_i-\log(1+\phi^{3})+\log\Bigl((1+\phi x_i+\phi^{3})-e^{-\phi}\bigl(1+\phi (x_i+1)+\phi^{3}\bigr)\Bigr)\Bigr)+(1-d_i)\Bigl(-\phi x_i+\log(1+\phi x_i+\phi^{3})-\log(1+\phi^{3})\Bigr)\Bigr].$$
The nonlinear likelihood equation corresponding to the parameter ϕ is derived by differentiating Equation (4) with respect to ϕ , as shown below:
$$\frac{\partial LL}{\partial \phi} \;=\; \sum_{i=1}^{n}\left[-x_i-\frac{3\phi^{2}}{1+\phi^{3}}+d_i\,\frac{x_i+3\phi^{2}+e^{-\phi}\bigl(\phi (x_i+1)+\phi^{3}-x_i-3\phi^{2}\bigr)}{(1+\phi x_i+\phi^{3})-e^{-\phi}\bigl(1+\phi (x_i+1)+\phi^{3}\bigr)}+(1-d_i)\,\frac{x_i+3\phi^{2}}{1+\phi x_i+\phi^{3}}\right].$$
The detailed derivation of Equation (5) can be found in Appendix A. The ML estimator of the parameter ϕ can now be found by equating Equation (5) to zero. However, due to the complexity of this likelihood equation, deriving a closed-form solution for the ML estimator of ϕ is challenging. Consequently, numerical methods, such as the Newton–Raphson iteration technique, are employed to approximate the ML estimate of ϕ. An R program has been developed to solve the ML equation numerically.
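As a hedged illustration of such a numerical solution, the censored log-likelihood can also be maximized directly with a one-dimensional optimizer; the sketch below reuses dle_surv from the earlier code, and the search interval and function names are our assumptions rather than the authors' implementation.

```r
# Censored log-likelihood of the DLE model for data (x, d); d = 1 observed, d = 0 censored.
dle_loglik <- function(phi, x, d) {
  p <- dle_surv(x, phi) - dle_surv(x + 1, phi)        # PMF contribution
  sum(d * log(p) + (1 - d) * log(dle_surv(x, phi)))   # plus survival contribution
}

# ML estimate by one-dimensional maximization over an assumed interval for phi.
dle_mle <- function(x, d) {
  optimize(dle_loglik, interval = c(1e-6, 20), x = x, d = d, maximum = TRUE)$maximum
}
```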

2.2. Interval Estimation

The ML estimate of the unknown parameter ϕ does not have a closed-form solution; therefore, the exact sampling distribution of the ML estimator cannot be derived, and it is not feasible to construct an exact confidence interval for ϕ. Instead, an asymptotic confidence interval (ACI) for ϕ is developed based on the asymptotic distribution of the ML estimator. It is well known that the ML estimator $\hat{\phi}$ of ϕ is consistent and asymptotically normal, with $\sqrt{n}\,(\hat{\phi}-\phi) \rightarrow N\bigl(0,\, I^{-1}(\phi)\bigr)$, where $I(\phi)$ is the expected Fisher information defined as:
$$I(\phi) \;=\; -E\!\left(\frac{\partial^{2} LL}{\partial \phi^{2}}\right).$$
In many practical situations, it is difficult to obtain an explicit form of this expectation. Therefore, following [21], the observed Fisher information is employed instead. The observed Fisher information is given by:
$$J(\phi) \;=\; -\frac{\partial^{2} LL}{\partial \phi^{2}},$$
with ϕ replaced by its estimate ϕ ^ , where the second-order partial derivative of the log-likelihood (LL) function can be expressed as:
$$\frac{\partial^{2} LL}{\partial \phi^{2}} \;=\; \sum_{i=1}^{n}\left[-\frac{6\phi(1+\phi^{3})-9\phi^{4}}{(1+\phi^{3})^{2}} \;+\; d_i\,\frac{\Bigl(6\phi+e^{-\phi}\bigl(x_i+1+3\phi^{2}-6\phi\bigr)-e^{-\phi}\bigl(\phi (x_i+1)+\phi^{3}-x_i-3\phi^{2}\bigr)\Bigr)A_i-\Bigl(x_i+3\phi^{2}+e^{-\phi}\bigl(\phi (x_i+1)+\phi^{3}-x_i-3\phi^{2}\bigr)\Bigr)^{2}}{A_i^{2}} \;+\; (1-d_i)\,\frac{6\phi\,(1+\phi x_i+\phi^{3})-(x_i+3\phi^{2})^{2}}{(1+\phi x_i+\phi^{3})^{2}}\right],$$
where $A_i = (1+\phi x_i+\phi^{3})-e^{-\phi}\bigl(1+\phi (x_i+1)+\phi^{3}\bigr)$.
The ACI for the parameter ϕ is given by:
$$\hat{\phi} \;\pm\; Z_{\gamma/2}\,\sqrt{v(\hat{\phi})},$$
where $Z_{\gamma/2}$ is the upper $\gamma/2$ quantile of the standard normal distribution, $v(\hat{\phi}) = J^{-1}(\hat{\phi})$, and $J(\hat{\phi}) = -\left.\dfrac{\partial^{2} LL}{\partial \phi^{2}}\right|_{\phi=\hat{\phi}}$.
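A short sketch of how the ACI can be evaluated in practice is given below; it reuses dle_loglik from the previous sketch and approximates the observed information by a central finite difference, which is our assumption and not necessarily the authors' implementation.

```r
# Wald-type asymptotic confidence interval for phi based on the observed information.
dle_aci <- function(x, d, phi_hat, gamma = 0.05, h = 1e-4) {
  # Central-difference approximation of the second derivative of the log-likelihood.
  d2 <- (dle_loglik(phi_hat + h, x, d) - 2 * dle_loglik(phi_hat, x, d) +
           dle_loglik(phi_hat - h, x, d)) / h^2
  v <- -1 / d2                                    # v(phi_hat) = J(phi_hat)^(-1)
  phi_hat + c(-1, 1) * qnorm(1 - gamma / 2) * sqrt(v)
}
```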

3. Bayesian Estimation

In this section, Bayesian estimation procedures are developed for the parameter of the DLE distribution under randomly right-censored samples. The analysis is conducted using several loss functions, including the squared error loss function (SELF), the linear exponential (LINEX) loss function, and the general entropy loss function (GELF). Corresponding Bayesian credible intervals are also constructed. The estimators are derived under two cases of prior distributions: informative and non-informative priors.
Case I
Assume that the parameter ϕ follows a Gamma prior with shape parameter α and rate parameter 1. The Gamma prior is employed due to its flexibility and capacity to encapsulate a wide range of prior beliefs, making it a suitable choice for Bayesian analysis. The hyperparameter of the Gamma prior was selected so that the prior mean (shape/rate) equals the true parameter value; see [22,23,24]. The corresponding prior density function of ϕ is given by:
$$\pi_{1}(\phi;\alpha) \;=\; \frac{1}{\Gamma(\alpha)}\,\phi^{\alpha-1}e^{-\phi};\qquad \phi>0,\ \alpha>0,$$
where ϕ is a positive parameter. The posterior density function of ϕ given the data $\underline{x} = (x_1, x_2, \ldots, x_n)$ is expressed as follows:
$$f_{1}(\phi\,|\,\underline{x}) \;\propto\; L(\underline{x}\,|\,\phi)\,\pi_{1}(\phi),$$
$$f_{1}(\phi\,|\,\underline{x}) \;\propto\; \frac{\phi^{\alpha-1}e^{-\phi}}{\Gamma(\alpha)}\,\prod_{i=1}^{n}\left[\frac{e^{-\phi x_i}}{1+\phi^{3}}\Bigl((1+\phi x_i+\phi^{3})-e^{-\phi}\bigl(1+\phi (x_i+1)+\phi^{3}\bigr)\Bigr)\right]^{d_i}\left[\frac{(1+\phi x_i+\phi^{3})\,e^{-\phi x_i}}{1+\phi^{3}}\right]^{1-d_i}.$$
Case II
In the case of a non-informative prior, little or no prior information is available about the unknown parameter. For ϕ, an improper non-informative prior is adopted, with probability density function given by:
$$\pi_{2}(\phi) \;=\; \frac{1}{\phi};\qquad \phi>0.$$
This prior was preferred for its simplicity and computational stability. The posterior density function of ϕ can be expressed as follows:
$$f_{2}(\phi\,|\,\underline{x}) \;\propto\; \frac{1}{\phi}\,\prod_{i=1}^{n}\left[\frac{e^{-\phi x_i}}{1+\phi^{3}}\Bigl((1+\phi x_i+\phi^{3})-e^{-\phi}\bigl(1+\phi (x_i+1)+\phi^{3}\bigr)\Bigr)\right]^{d_i}\left[\frac{(1+\phi x_i+\phi^{3})\,e^{-\phi x_i}}{1+\phi^{3}}\right]^{1-d_i}.$$
For the two prior cases described above, the Bayes estimator of ϕ is derived under three different loss functions, as detailed in the following subsections.

3.1. Bayesian Estimator Under SELF

The SELF is one of the most commonly used symmetric loss functions and is defined as:
$$SE(\hat{\phi},\phi) \;=\; (\hat{\phi}-\phi)^{2}.$$
Under this criterion, the Bayesian estimator corresponds to the posterior mean and can be expressed, for both prior cases, as:
$$\hat{\phi}_{SELF} \;=\; \int_{0}^{\infty}\phi\,f(\phi\,|\,\underline{x})\,d\phi.$$
For the gamma prior, the estimator takes the following integral form:
$$\hat{\phi}_{1\,SELF} \;\propto\; \int_{0}^{\infty}\frac{\phi^{\alpha}\,e^{-\phi}}{\Gamma(\alpha)}\,L(\underline{x}\,|\,\phi)\,d\phi,$$
where $L(\underline{x}\,|\,\phi)$ denotes the likelihood function of the randomly right-censored sample given in Section 2.1.
For the non-informative prior, it is represented as:
$$\hat{\phi}_{2\,SELF} \;\propto\; \int_{0}^{\infty} L(\underline{x}\,|\,\phi)\,d\phi.$$

3.2. Bayesian Estimator Under LINEX Loss Function

The LINEX loss function proposed by [25] allows for asymmetric penalization of overestimation and underestimation and is defined as:
$$LIN(\hat{\phi},\phi) \;=\; e^{\,c(\hat{\phi}-\phi)} - c\,(\hat{\phi}-\phi) - 1,$$
where the shape parameter $c \neq 0$ determines both the direction and the degree of asymmetry. The Bayesian estimator of ϕ under this loss function is expressed as:
$$\hat{\phi}_{LIN} \;=\; -\frac{1}{c}\,\ln\!\left(\int_{0}^{\infty} e^{-c\phi}\,f(\phi\,|\,\underline{x})\,d\phi\right).$$
For the two prior assumptions, the estimators can be formulated as follows:
$$\hat{\phi}_{1\,LIN} \;\propto\; -\frac{1}{c}\,\ln\!\left(\int_{0}^{\infty}\frac{\phi^{\alpha-1}\,e^{-\phi(c+1)}}{\Gamma(\alpha)}\,L(\underline{x}\,|\,\phi)\,d\phi\right),$$
$$\hat{\phi}_{2\,LIN} \;\propto\; -\frac{1}{c}\,\ln\!\left(\int_{0}^{\infty}\frac{e^{-c\phi}}{\phi}\,L(\underline{x}\,|\,\phi)\,d\phi\right).$$

3.3. Bayesian Estimation Under GELF

The GELF introduced by [26] is an asymmetric loss function which can be given as:
$$GE(\hat{\phi},\phi) \;=\; \frac{q^{2}}{2}\,\bigl(\ln\hat{\phi}-\ln\phi\bigr)^{2},$$
where q controls the asymmetry. The corresponding Bayesian estimator is
$$\hat{\phi}_{GE} \;=\; \left(\int_{0}^{\infty}\phi^{-q}\,f(\phi\,|\,\underline{x})\,d\phi\right)^{-1/q}.$$
For the two prior structures, the estimators can be formulated as follows:
$$\hat{\phi}_{1\,GE} \;\propto\; \left(\int_{0}^{\infty}\frac{\phi^{\alpha-1-q}\,e^{-\phi}}{\Gamma(\alpha)}\,L(\underline{x}\,|\,\phi)\,d\phi\right)^{-1/q},$$
$$\hat{\phi}_{2\,GE} \;\propto\; \left(\int_{0}^{\infty}\phi^{-(q+1)}\,L(\underline{x}\,|\,\phi)\,d\phi\right)^{-1/q}.$$
The $100(1-\gamma)\%$ Bayesian credible interval for ϕ is constructed using the following formula:
$$P(LL < \phi < UL) \;=\; \int_{LL}^{UL} f(\phi\,|\,\underline{x})\,d\phi \;=\; 1-\gamma,\qquad 0<\gamma<1,$$
where $LL$ and $UL$ denote the lower and upper bounds of the credible interval, respectively, and $f(\phi\,|\,\underline{x})$ is the posterior density of ϕ.
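Once posterior draws of ϕ are available (for instance from the Metropolis–Hastings sampler sketched below, after its steps are listed), the SELF, LINEX, and GELF estimators and the equal-tailed credible interval reduce to simple posterior summaries. The R sketch below is illustrative only; the function and argument names are ours.

```r
# Bayes estimators and an equal-tailed credible interval from posterior draws of phi.
bayes_summaries <- function(phi_draws, c_loss = 1.5, q_loss = 1.5, gamma = 0.05) {
  list(
    self  = mean(phi_draws),                                  # SELF: posterior mean
    linex = -log(mean(exp(-c_loss * phi_draws))) / c_loss,    # LINEX estimator
    gelf  = mean(phi_draws^(-q_loss))^(-1 / q_loss),          # GELF estimator
    ci    = quantile(phi_draws, c(gamma / 2, 1 - gamma / 2))  # 100(1 - gamma)% interval
  )
}
```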
The Bayesian estimators of the parameter ϕ under the different loss functions do not have closed-form expressions, as shown in Equations (9) and (11). Therefore, Markov Chain Monte Carlo (MCMC) techniques are employed to approximate these estimators by generating samples from the posterior distributions. It is important to note that the full conditional posterior density functions of ϕ in Equations (9) and (11) do not correspond to any standard distribution, which motivates the use of the Metropolis–Hastings algorithm to generate MCMC samples. Graphs of these posterior distributions are included to highlight their similarity to the normal distribution (see Figure 1). This similarity supports the application of the Metropolis–Hastings method with normal proposal distributions. The main steps of the Metropolis–Hastings procedure are summarized as follows:
1. Set the initial value $\phi^{(0)} = \hat{\phi}$ (the ML estimate).
2. At iteration $i$, generate a candidate $\phi^{*}$ from a normal proposal distribution centred at the current estimate, $q(\phi) = N\bigl(\hat{\phi}, \widehat{Var}(\hat{\phi})\bigr)$.
3. Compute the acceptance probability $h = \min\bigl\{1, f(\phi^{*}\,|\,\underline{x})/f(\phi^{(i-1)}\,|\,\underline{x})\bigr\}$; accept $\phi^{(i)} = \phi^{*}$ with probability $h$, and otherwise set $\phi^{(i)} = \phi^{(i-1)}$.
4. Repeat steps 2–3 $M$ times to obtain the posterior samples $\phi^{(i)}$, $i = 1, \ldots, M$.
For more details, see [16].
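A compact R sketch of these steps is shown below. It is a random-walk style variant in which the normal proposal is centred at the current state (the steps above centre it at the ML estimate with its estimated variance); log_post is assumed to return the log of the unnormalized posterior (censored log-likelihood plus log prior), and all names are illustrative rather than the authors' code.

```r
# Metropolis-Hastings sampler for phi with a normal proposal (illustrative sketch).
mh_sampler <- function(log_post, phi0, sd_prop, M = 12000, burn = 2000) {
  draws   <- numeric(M)
  phi_cur <- phi0
  lp_cur  <- log_post(phi_cur)
  for (i in seq_len(M)) {
    phi_new <- rnorm(1, mean = phi_cur, sd = sd_prop)  # candidate value
    if (phi_new > 0) {                                 # phi must remain positive
      lp_new <- log_post(phi_new)
      if (log(runif(1)) < lp_new - lp_cur) {           # accept with probability h
        phi_cur <- phi_new
        lp_cur  <- lp_new
      }
    }
    draws[i] <- phi_cur
  }
  draws[-seq_len(burn)]                                # discard the burn-in draws
}
```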
Figure 1. Posterior density for two cases.

4. Simulation

Simulation studies offer a powerful tool for assessing the performance of statistical estimators under controlled conditions, particularly when analytical solutions are difficult to obtain or unavailable. In this study, a Monte Carlo simulation is conducted to assess the performance of the proposed estimation methods for the parameter of the DLE distribution under random right-censoring. The main objectives are:
* To evaluate the accuracy and precision of parameter estimates for the DLE distribution based on random right-censored samples.
* To compare the performance of the ML and Bayesian approaches under different loss functions.
* To investigate the impact of sample size and prior assumptions on the performance of the estimation methods.
A total of N = 1000 random right-censored samples are generated from the DLE distribution for various sample sizes n = 20, 100, 500, 1000. The simulations are conducted under four true parameter values: ϕ = 0.01, 0.1, 0.5, 1. For each generated dataset, parameter estimation is performed using both ML and Bayesian approaches. Bayesian estimation is implemented under both symmetric and asymmetric loss functions, including SELF, LINEX, and GELF, with weights c, q ∈ {−1.5, 1.5}. Two prior specifications are considered: a gamma prior (informative) and a uniform prior (non-informative). The random right-censored samples are generated using the following algorithm:
1. Fix the value of the parameter ϕ.
2. Generate $x_i$ from the DLE distribution, $i = 1, \ldots, n$.
3. Draw n pseudo-random censoring values $C_i$ from a uniform distribution, i.e., $C_i \sim U(0, x_i)$, $i = 1, \ldots, n$. This distribution controls the censoring mechanism.
4. Construct the observed data as follows:
$$(x_i, d_i) \;=\; \begin{cases} (x_i, 1), & \text{if } x_i \le C_i,\\ (C_i, 0), & \text{otherwise,} \end{cases}\qquad i = 1, \ldots, n.$$
Consequently, the pairs ( x 1 , d 1 ) , ( x 2 , d 2 ) , , ( x n , d n ) form the random right-censored dataset.
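The R sketch below mirrors these steps; it reuses dle_cdf from the earlier sketch to generate DLE lifetimes by CDF inversion, and the censoring generator is kept as an argument (the default follows step 3 as written) so that other censoring laws can be substituted. All names are illustrative.

```r
# Generate DLE lifetimes by inversion: the smallest k with F(k) >= u.
r_dle <- function(n, phi) {
  u <- runif(n)
  vapply(u, function(ui) {
    k <- 0
    while (dle_cdf(k, phi) < ui) k <- k + 1
    k
  }, numeric(1))
}

# Pair each lifetime with a censoring point and form the observed (time, d) data.
r_dle_censored <- function(n, phi,
                           rcens = function(x) runif(length(x), min = 0, max = x)) {
  x <- r_dle(n, phi)
  C <- rcens(x)                             # censoring values (step 3)
  data.frame(time = ifelse(x <= C, x, C),   # observed value (step 4)
             d    = as.integer(x <= C))     # d = 1 if the lifetime is observed
}
```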
For more details on this algorithm, see [27]. For each configuration, both the ML and Bayesian estimates are obtained, along with their associated asymptotic confidence intervals (ACIs) or credible intervals, computed at the 95% confidence level. For the Bayesian approach, posterior inference is carried out using the Metropolis–Hastings algorithm with a chain of 12,000 samples, discarding the first 2000 as burn-in. The effectiveness of the proposed point estimation methods is assessed using two key measures, the bias and the root mean squared error (RMSE), defined respectively as $bias(\hat{\phi}) = \hat{\phi} - \phi$ and $RMSE(\hat{\phi}) = \sqrt{Var(\hat{\phi}) + \bigl(bias(\hat{\phi})\bigr)^{2}}$. The performance of the interval estimation procedures is evaluated by considering the average lengths of the confidence intervals for ML estimates and the credible intervals for Bayesian estimates. All computational procedures were implemented in R 4.5.1.
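For one simulation configuration, these two summary measures can be computed from the N replicated estimates as in the short sketch below, where est is a hypothetical numeric vector of estimates and phi_true is the true parameter value.

```r
# Monte Carlo bias and RMSE over replicated estimates (illustrative helpers).
mc_bias <- function(est, phi_true) mean(est) - phi_true
mc_rmse <- function(est, phi_true) sqrt(var(est) + (mean(est) - phi_true)^2)
```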
Table 1 and Table 2 summarize the simulation outcomes, and Figure 2 provides a heat-map representation of RMSE values across different scenarios. Several important findings emerge:
  • As the sample size increases, the RMSE of both ML and Bayesian estimators declines, confirming their improved precision and asymptotically unbiased behavior.
  • The Bayesian estimators with informative priors generally outperform those with non-informative priors.
  • Bayesian estimates under asymmetric loss functions with a positive weight yield smaller RMSE values and shorter credible intervals compared to those with a negative weight.
  • Among the Bayesian methods, GELF with positive weight consistently yields the smallest RMSE and the shortest interval lengths.
  • The ML estimator and the Bayesian estimator under GELF with positive weight show superior and consistent performance across all sample sizes.
Figure 2 further confirms these findings. Overall, the results demonstrate that both ML and Bayesian estimation, when combined with informative priors and asymmetric loss functions with positive weights, substantially enhance estimation efficiency in the presence of censoring.
Figure 2 presents heat maps illustrating the RMSE results for two cases: Case 1 corresponds to the Gamma prior, and Case 2 corresponds to the Uniform prior. Here, “P” denotes a positive weight and “N” denotes a negative weight. In the heat maps, darker colors indicate higher RMSE values, while lighter colors indicate lower RMSE values.
Table 1. The ML and Bayesian estimates (SELF, LINEX, and GELF) of ϕ using Case I: Gamma prior under random right-censored samples. For each combination of ϕ and n, the methods are reported in the order ML, SELF, LINEX (c = −1.5), LINEX (c = 1.5), GELF (q = −1.5), GELF (q = 1.5), and each method is summarized by four values: Estimate, Bias, MSE, and interval Length.
0.0120 1.012 × 10 2
1.225 × 10 4
2.046 × 10 3
6.701 × 10 3
1.010 × 10 2
1.006 × 10 4
2.820 × 10 3
3.889 × 10 3
1.022 × 10 2
2.226 × 10 4
2.054 × 10 3
3.014 × 10 3
1.022 × 10 2
2.165 × 10 4
2.051 × 10 3
3.011 × 10 3
1.032 × 10 2
3.158 × 10 4
2.079 × 10 3
3.033 × 10 3
9.732 × 10 3
2.679 × 10 4
1.991 × 10 3
2.926 × 10 3
100 1.002 × 10 2
2.452 × 10 5
8.408 × 10 4
2.758 × 10 3
1.008 × 10 2
7.647 × 10 5
1.175 × 10 3
1.983 × 10 3
1.004 × 10 2
4.066 × 10 5
8.249 × 10 4
1.318 × 10 3
1.004 × 10 2
3.966 × 10 5
8.247 × 10 4
1.318 × 10 3
1.006 × 10 2
5.669 × 10 5
8.268 × 10 4
1.320 × 10 3
9.957 × 10 3
4.261 × 10 5
8.194 × 10 4
1.310 × 10 3
500 1.003 × 10 2
2.862 × 10 5
3.601 × 10 4
1.178 × 10 3
1.003 × 10 2
3.007 × 10 5
5.023 × 10 4
8.812 × 10 4
1.001 × 10 2
7.438 × 10 6
3.530 × 10 4
6.573 × 10 4
1.001 × 10 2
7.252 × 10 6
3.530 × 10 4
6.572 × 10 4
1.001 × 10 2
1.044 × 10 5
3.532 × 10 4
6.576 × 10 4
9.992 × 10 3
8.126 × 10 6
3.526 × 10 4
6.554 × 10 4
1000 1.002 × 10 2
1.610 × 10 5
2.522 × 10 4
8.258 × 10 4
1.002 × 10 2
1.752 × 10 5
3.586 × 10 4
6.781 × 10 4
1.002 × 10 2
2.177 × 10 5
2.381 × 10 4
4.442 × 10 4
1.002 × 10 2
2.168 × 10 5
2.381 × 10 4
4.442 × 10 4
1.002 × 10 2
2.325 × 10 5
2.382 × 10 4
4.442 × 10 4
1.001 × 10 2
1.413 × 10 5
2.373 × 10 4
4.438 × 10 4
0.120 1.028 × 10 1
2.773 × 10 3
2.116 × 10 2
6.883 × 10 2
1.002 × 10 1
1.921 × 10 4
2.735 × 10 2
4.171 × 10 2
1.016 × 10 1
1.643 × 10 3
1.992 × 10 2
3.215 × 10 2
1.011 × 10 1
1.050 × 10 3
1.967 × 10 2
3.210 × 10 2
1.023 × 10 1
2.287 × 10 3
2.001 × 10 2
3.225 × 10 2
9.658 × 10 2
3.424 × 10 3
1.941 × 10 2
3.128 × 10 2
100 1.013 × 10 1
1.311 × 10 3
8.391 × 10 3
2.720 × 10 2
1.003 × 10 1
3.223 × 10 4
1.169 × 10 2
1.886 × 10 2
1.005 × 10 1
4.679 × 10 4
8.441 × 10 3
1.498 × 10 2
1.004 × 10 1
3.682 × 10 4
8.420 × 10 3
1.494 × 10 2
1.006 × 10 1
5.824 × 10 4
8.452 × 10 3
1.499 × 10 2
9.959 × 10 2
4.057 × 10 4
8.373 × 10 3
1.486 × 10 2
500 1.006 × 10 1
6.139 × 10 4
3.492 × 10 3
1.128 × 10 2
1.004 × 10 1
3.964 × 10 4
5.207 × 10 3
9.628 × 10 3
1.005 × 10 1
4.759 × 10 4
3.680 × 10 3
6.157 × 10 3
1.005 × 10 1
4.574 × 10 4
3.676 × 10 3
6.157 × 10 3
1.005 × 10 1
4.974 × 10 4
3.683 × 10 3
6.155 × 10 3
1.003 × 10 1
3.127 × 10 4
3.657 × 10 3
6.154 × 10 3
1000 1.004 × 10 1
4.193 × 10 4
2.453 × 10 3
7.933 × 10 3
1.005 × 10 1
4.688 × 10 4
3.512 × 10 3
6.592 × 10 3
1.005 × 10 1
4.668 × 10 4
2.550 × 10 3
4.525 × 10 3
1.005 × 10 1
4.577 × 10 4
2.548 × 10 3
4.525 × 10 3
1.005 × 10 1
4.773 × 10 4
2.552 × 10 3
4.524 × 10 3
1.004 × 10 1
3.867 × 10 4
2.535 × 10 3
4.529 × 10 3
0.520 5.192 × 10 1
1.919 × 10 2
9.031 × 10 2
2.896 × 10 1
5.124 × 10 1
1.237 × 10 2
1.190 × 10 1
1.845 × 10 1
5.244 × 10 1
2.435 × 10 2
8.994 × 10 2
1.510 × 10 1
5.130 × 10 1
1.299 × 10 2
8.497 × 10 2
1.469 × 10 1
5.222 × 10 1
2.221 × 10 2
8.831 × 10 2
1.494 × 10 1
5.002 × 10 1
1.991 × 10 4
8.423 × 10 2
1.462 × 10 1
100 5.137 × 10 1
1.366 × 10 2
4.007 × 10 2
1.236 × 10 1
5.130 × 10 1
1.300 × 10 2
5.493 × 10 2
9.437 × 10 2
5.152 × 10 1
1.515 × 10 2
4.175 × 10 2
7.094 × 10 2
5.131 × 10 1
1.313 × 10 2
4.085 × 10 2
7.076 × 10 2
5.148 × 10 1
1.479 × 10 2
4.154 × 10 2
7.082 × 10 2
5.109 × 10 1
1.087 × 10 2
4.017 × 10 2
7.106 × 10 2
500 5.099 × 10 1
9.867 × 10 3
1.895 × 10 2
5.310 × 10 2
5.101 × 10 1
1.011 × 10 2
2.383 × 10 2
4.183 × 10 2
5.101 × 10 1
1.009 × 10 2
1.838 × 10 2
2.942 × 10 2
5.097 × 10 1
9.714 × 10 3
1.816 × 10 2
2.938 × 10 2
5.100 × 10 1
1.003 × 10 2
1.834 × 10 2
2.940 × 10 2
5.093 × 10 1
9.284 × 10 3
1.793 × 10 2
2.938 × 10 2
1000 5.084 × 10 1
8.435 × 10 3
1.412 × 10 2
3.716 × 10 2
5.095 × 10 1
9.451 × 10 3
1.857 × 10 2
2.736 × 10 2
5.094 × 10 1
9.441 × 10 3
1.472 × 10 2
2.049 × 10 2
5.093 × 10 1
9.255 × 10 3
1.460 × 10 2
2.048 × 10 2
5.094 × 10 1
9.409 × 10 3
1.470 × 10 2
2.049 × 10 2
5.090 × 10 1
9.043 × 10 3
1.446 × 10 2
2.049 × 10 2
120 9.672 × 10 1
3.280 × 10 2
6.719 × 10 2
1.924 × 10 1
1.099 × 10 0
9.897 × 10 2
2.983 × 10 1
3.598 × 10 1
1.126 × 10 0
1.257 × 10 1
2.433 × 10 1
2.850 × 10 1
1.076 × 10 0
7.650 × 10 2
1.961 × 10 1
2.655 × 10 1
1.107 × 10 0
1.068 × 10 1
2.223 × 10 1
2.766 × 10 1
1.066 × 10 0
6.556 × 10 2
1.939 × 10 1
2.656 × 10 1
100 9.891 × 10 1
1.092 × 10 2
2.684 × 10 2
8.046 × 10 2
1.045 × 10 0
4.486 × 10 2
1.079 × 10 1
1.658 × 10 1
1.050 × 10 0
4.984 × 10 2
8.492 × 10 2
1.259 × 10 1
1.043 × 10 0
4.267 × 10 2
7.991 × 10 2
1.236 × 10 1
1.047 × 10 0
4.737 × 10 2
8.308 × 10 2
1.250 × 10 1
1.041 × 10 0
4.058 × 10 2
7.884 × 10 2
1.235 × 10 1
500 9.982 × 10 1
1.835 × 10 3
6.339 × 10 3
1.991 × 10 2
1.030 × 10 0
3.016 × 10 2
5.127 × 10 2
7.090 × 10 2
1.032 × 10 0
3.199 × 10 2
4.317 × 10 2
5.031 × 10 2
1.031 × 10 0
3.067 × 10 2
4.214 × 10 2
5.013 × 10 2
1.032 × 10 0
3.154 × 10 2
4.281 × 10 2
5.024 × 10 2
1.030 × 10 0
3.027 × 10 2
4.184 × 10 2
5.011 × 10 2
1000 9.993 × 10 1
7.324 × 10 4
3.288 × 10 3
1.052 × 10 2
1.029 × 10 0
2.891 × 10 2
4.170 × 10 2
5.292 × 10 2
1.029 × 10 0
2.885 × 10 2
3.636 × 10 2
4.274 × 10 2
1.028 × 10 0
2.820 × 10 2
3.583 × 10 2
4.266 × 10 2
1.029 × 10 0
2.863 × 10 2
3.618 × 10 2
4.271 × 10 2
1.028 × 10 0
2.800 × 10 2
3.567 × 10 2
4.266 × 10 2
Table 2. The ML and Bayesian estimates (SELF, LINEX, and GELF) of ϕ using Case II: Uniform prior under random right-censored samples. For each combination of ϕ and n, the methods are reported in the order ML, SELF, LINEX (c = −1.5), LINEX (c = 1.5), GELF (q = −1.5), GELF (q = 1.5), and each method is summarized by four values: Estimate, Bias, MSE, and interval Length.
0.0120 1.012 × 10 2
1.225 × 10 4
2.046 × 10 3
6.701 × 10 3
1.050 × 10 2
4.991 × 10 4
2.958 × 10 3
4.162 × 10 3
1.059 × 10 2
5.877 × 10 4
2.174 × 10 3
3.048 × 10 3
1.058 × 10 2
5.814 × 10 4
2.170 × 10 3
3.045 × 10 3
1.543 × 10 2
5.432 × 10 3
6.420 × 10 3
4.402 × 10 3
1.449 × 10 2
4.492 × 10 3
5.483 × 10 3
4.099 × 10 3
100 1.002 × 10 2
2.452 × 10 5
8.408 × 10 4
2.758 × 10 3
1.011 × 10 2
1.077 × 10 4
1.168 × 10 3
1.971 × 10 3
1.009 × 10 2
9.397 × 10 5
8.327 × 10 4
1.324 × 10 3
1.009 × 10 2
9.296 × 10 5
8.325 × 10 4
1.323 × 10 3
1.262 × 10 2
2.617 × 10 3
2.890 × 10 3
1.981 × 10 3
1.249 × 10 2
2.485 × 10 3
2.763 × 10 3
1.941 × 10 3
500 1.003 × 10 2
2.862 × 10 5
3.601 × 10 4
1.178 × 10 3
1.003 × 10 2
3.071 × 10 5
4.945 × 10 4
8.663 × 10 4
1.001 × 10 2
9.989 × 10 6
3.529 × 10 4
6.471 × 10 4
1.001 × 10 2
9.803 × 10 6
3.529 × 10 4
6.471 × 10 4
1.174 × 10 2
1.743 × 10 3
1.824 × 10 3
9.129 × 10 4
1.172 × 10 2
1.720 × 10 3
1.801 × 10 3
9.089 × 10 4
1000 1.002 × 10 2
1.610 × 10 5
2.522 × 10 4
8.258 × 10 4
1.001 × 10 2
1.167 × 10 5
3.542 × 10 4
6.871 × 10 4
1.002 × 10 2
1.919 × 10 5
2.377 × 10 4
4.404 × 10 4
1.002 × 10 2
1.909 × 10 5
2.376 × 10 4
4.403 × 10 4
1.153 × 10 2
1.526 × 10 3
1.576 × 10 3
6.820 × 10 4
1.151 × 10 2
1.515 × 10 3
1.565 × 10 3
6.811 × 10 4
0.120 1.028 × 10 1
2.773 × 10 3
2.116 × 10 2
6.883 × 10 2
1.017 × 10 1
1.721 × 10 3
2.740 × 10 2
4.257 × 10 2
1.032 × 10 1
3.241 × 10 3
2.004 × 10 2
3.242 × 10 2
1.027 × 10 1
2.656 × 10 3
1.975 × 10 2
3.210 × 10 2
1.512 × 10 1
5.124 × 10 2
6.141 × 10 2
4.023 × 10 2
1.422 × 10 1
4.219 × 10 2
5.250 × 10 2
3.779 × 10 2
100 1.013 × 10 1
1.311 × 10 3
8.391 × 10 3
2.720 × 10 2
9.969 × 10 2
3.104 × 10 4
1.131 × 10 2
1.895 × 10 2
9.980 × 10 2
1.969 × 10 4
8.326 × 10 3
1.473 × 10 2
9.971 × 10 2
2.945 × 10 4
8.314 × 10 3
1.472 × 10 2
1.258 × 10 1
2.576 × 10 2
2.841 × 10 2
2.019 × 10 2
1.245 × 10 1
2.446 × 10 2
2.716 × 10 2
1.988 × 10 2
500 1.006 × 10 1
6.139 × 10 4
3.492 × 10 3
1.128 × 10 2
9.955 × 10 2
4.496 × 10 4
5.118 × 10 3
9.199 × 10 3
9.963 × 10 2
3.700 × 10 4
3.628 × 10 3
6.257 × 10 3
9.961 × 10 2
3.882 × 10 4
3.628 × 10 3
6.255 × 10 3
1.171 × 10 1
1.711 × 10 2
1.794 × 10 2
9.212 × 10 3
1.169 × 10 1
1.689 × 10 2
1.772 × 10 2
9.164 × 10 3
1000 1.004 × 10 1
4.193 × 10 4
2.453 × 10 3
7.933 × 10 3
9.968 × 10 2
3.166 × 10 4
3.441 × 10 3
6.321 × 10 3
9.965 × 10 2
3.511 × 10 4
2.499 × 10 3
4.436 × 10 3
9.964 × 10 2
3.601 × 10 4
2.500 × 10 3
4.435 × 10 3
1.151 × 10 1
1.506 × 10 2
1.556 × 10 2
6.916 × 10 3
1.149 × 10 1
1.495 × 10 2
1.545 × 10 2
6.898 × 10 3
0.520 5.192 × 10 1
1.919 × 10 2
9.031 × 10 2
2.896 × 10 1
4.832 × 10 1
1.685 × 10 2
1.113 × 10 1
1.828 × 10 1
4.960 × 10 1
4.031 × 10 3
8.116 × 10 2
1.432 × 10 1
4.860 × 10 1
1.395 × 10 2
8.021 × 10 2
1.398 × 10 1
7.054 × 10 1
2.054 × 10 1
2.426 × 10 1
1.794 × 10 1
6.741 × 10 1
1.741 × 10 1
2.122 × 10 1
1.728 × 10 1
100 5.137 × 10 1
1.366 × 10 2
4.007 × 10 2
1.236 × 10 1
4.893 × 10 1
1.067 × 10 2
5.204 × 10 2
8.678 × 10 2
4.906 × 10 1
9.354 × 10 3
3.804 × 10 2
6.759 × 10 2
4.888 × 10 1
1.119 × 10 2
3.835 × 10 2
6.702 × 10 2
6.096 × 10 1
1.096 × 10 1
1.211 × 10 1
8.879 × 10 2
6.049 × 10 1
1.049 × 10 1
1.166 × 10 1
8.887 × 10 2
500 5.099 × 10 1
9.867 × 10 3
1.895 × 10 2
5.310 × 10 2
4.903 × 10 1
9.722 × 10 3
2.308 × 10 2
4.298 × 10 2
4.906 × 10 1
9.375 × 10 3
1.762 × 10 2
2.947 × 10 2
4.903 × 10 1
9.727 × 10 3
1.780 × 10 2
2.944 × 10 2
5.752 × 10 1
7.517 × 10 2
7.838 × 10 2
4.105 × 10 2
5.743 × 10 1
7.432 × 10 2
7.756 × 10 2
4.102 × 10 2
1000 5.084 × 10 1
8.435 × 10 3
1.412 × 10 2
3.716 × 10 2
4.917 × 10 1
8.314 × 10 3
1.757 × 10 2
2.756 × 10 2
4.917 × 10 1
8.259 × 10 3
1.372 × 10 2
2.037 × 10 2
4.916 × 10 1
8.434 × 10 3
1.382 × 10 2
2.037 × 10 2
5.668 × 10 1
6.677 × 10 2
6.893 × 10 2
3.168 × 10 2
5.664 × 10 1
6.636 × 10 2
6.853 × 10 2
3.166 × 10 2
120 9.672 × 10 1
3.280 × 10 2
6.719 × 10 2
1.924 × 10 1
9.793 × 10 1
2.071 × 10 2
2.339 × 10 1
3.134 × 10 1
9.998 × 10 1
1.674 × 10 4
1.649 × 10 1
2.429 × 10 1
9.663 × 10 1
3.369 × 10 2
1.551 × 10 1
2.313 × 10 1
1.612 × 10 0
6.118 × 10 1
1.314 × 10 0
3.972 × 10 1
1.503 × 10 0
5.033 × 10 1
1.049 × 10 0
3.708 × 10 1
100 9.891 × 10 1
1.092 × 10 2
2.684 × 10 2
8.046 × 10 2
9.692 × 10 1
3.083 × 10 2
9.331 × 10 2
1.437 × 10 1
9.725 × 10 1
2.747 × 10 2
6.784 × 10 2
1.090 × 10 1
9.668 × 10 1
3.320 × 10 2
6.967 × 10 2
1.082 × 10 1
1.216 × 10 0
2.159 × 10 1
2.388 × 10 1
1.569 × 10 1
1.206 × 10 0
2.065 × 10 1
2.296 × 10 1
1.545 × 10 1
500 9.982 × 10 1
1.835 × 10 3
6.339 × 10 3
1.991 × 10 2
9.720 × 10 1
2.805 × 10 2
4.765 × 10 2
7.115 × 10 2
9.735 × 10 1
2.654 × 10 2
3.806 × 10 2
5.095 × 10 2
9.723 × 10 1
2.765 × 10 2
3.880 × 10 2
5.081 × 10 2
1.140 × 10 0
1.396 × 10 1
1.461 × 10 1
7.471 × 10 2
1.138 × 10 0
1.381 × 10 1
1.445 × 10 1
7.451 × 10 2
1000 9.993 × 10 1
7.324 × 10 4
3.288 × 10 3
1.052 × 10 2
9.758 × 10 1
2.416 × 10 2
3.686 × 10 2
5.007 × 10 2
9.757 × 10 1
2.430 × 10 2
3.150 × 10 2
3.900 × 10 2
9.751 × 10 1
2.485 × 10 2
3.192 × 10 2
3.897 × 10 2
1.123 × 10 0
1.229 × 10 1
1.277 × 10 1
6.342 × 10 2
1.122 × 10 0
1.222 × 10 1
1.270 × 10 1
6.331 × 10 2
Figure 2. Heat-map of RMSE under random right-censored samples for ML estimation and Bayesian estimations at different values for parameter ϕ ( ϕ 1 = 0.01 , ϕ 2 = 0.1 , ϕ 3 = 0.5 , ϕ 4 = 1 ) and different sample sizes ( n 1 = 20 , n 2 = 100 , n 3 = 500 , n 4 = 1000 ) .

5. Applications in Random Right-Censored Data

5.1. Real Data Analysis for Comparing Competing Discrete Models

In this subsection, three real datasets are analyzed to illustrate the applicability of the proposed distribution to censored data. Model performance is assessed using the maximized log-likelihood (reported as −LL) and several goodness-of-fit criteria, including the Kolmogorov–Smirnov (K–S) test with its corresponding p-value, as well as the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), to compare the DLE distribution against the competing one-parameter discrete distributions listed in Table 3.
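As an illustration, the information criteria reported in Tables 4–6 can be obtained directly from a model's maximized log-likelihood; the helper below is ours, and the example call uses the −LL value reported for the DLE fit to Dataset I (it reproduces the corresponding AIC and BIC up to rounding).

```r
# AIC and BIC from a maximized log-likelihood ll, k parameters, and sample size n.
ic_criteria <- function(ll, k, n) {
  c(negLL = -ll, AIC = 2 * k - 2 * ll, BIC = k * log(n) - 2 * ll)
}

ic_criteria(ll = -126.381, k = 1, n = 20)   # DLE, Dataset I: AIC ~ 254.76, BIC ~ 255.76
```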

5.1.1. Dataset I

The first dataset consists of the failure times of 20 epoxy insulation specimens tested at a voltage level of 57.5 kV. The failure times, in minutes, are 510, 1000*, 252, 408, 528, 690, 900*, 714, 348, 546, 174, 696, 294, 234, 288, 444, 390, 168, 558, and 288 (see [1]). Here, the censoring times are indicated with asterisks *. Table 4 presents the ML estimates along with their standard errors for the parameter of the DLE distribution and other competing models. Additionally, the corresponding goodness-of-fit criteria are provided.
From Table 4, the DLE distribution provided a satisfactory fit, with AIC = 254.761 and a non-significant K–S test (p = 0.784), indicating good agreement with the observed data. Although the Discrete Rayleigh distribution achieved slightly lower AIC and BIC values, the DLE distribution demonstrated comparable performance, confirming its suitability and flexibility for modeling the data. The results presented in Figure 3 further support the findings of Table 4.

5.1.2. Dataset II

The dataset below reports the remission times, in weeks, for a group of 30 leukemia patients who received similar treatment (see [1]): 1, 1, 2, 4, 4, 6, 6, 6, 7, 8, 9, 9, 10, 12, 13, 14, 18, 19, 24, 26, 29, 31*, 42, 45*, 50*, 57, 60, 71*, 85*, and 91. Here, the censoring times are indicated with asterisks *. Table 5 presents the ML estimates and their standard errors for the parameter of the DLE distribution, alongside estimates for other competing models. The corresponding goodness-of-fit criteria are also provided.
Regarding Table 5, the DLE distribution achieved an adequate fit with AIC = 237.817 and a K–S p-value = 0.042, indicating reasonable agreement with the observed data. Although the geometric distribution showed slightly lower AIC and BIC values, the DLE distribution provided comparable performance and better flexibility in capturing the data pattern, supporting its applicability as an alternative discrete distribution. Figure 4 further illustrates and supports the findings presented in Table 5.

5.1.3. Dataset III

The following data represent the survival times (in months) of individuals with Hodgkin’s disease who were treated with nitrogen mustards and had received extensive prior therapy (see [1]): 1, 2, 3, 4, 4, 6, 7, 9, 9, 14*, 16, 18*, 26*, 30*, 41*. Here, the censoring times are indicated with asterisks *. Table 6 presents the ML estimates along with their standard errors for the parameter of the DLE distribution, as well as for other competing models. The corresponding goodness-of-fit criteria are also provided.
As shown in Table 6, the DLE distribution achieved a good overall fit to the data, with AIC = 86.358 and a non-significant K–S test (p = 0.276). This result indicates that the DLE distribution adequately represents the observed distribution. Although the Geometric and DITL distributions exhibited marginally smaller AIC values, the DLE distribution provided a competitive and more flexible fit, supporting its effectiveness as a suitable alternative among the considered discrete models. The results presented in Figure 5 further support the findings of Table 6.

5.2. Real Data Analysis for Assessing Classical and Bayesian Estimation Techniques

In this subsection, the random right-censored datasets described in the previous subsections are analyzed to compare classical and Bayesian estimation methods. Model performance is assessed using the AIC, BIC, and the Kolmogorov–Smirnov (K–S) test along with its corresponding p-value, allowing a comprehensive evaluation of the estimation approaches.

5.2.1. Dataset I

The estimators obtained using classical and Bayesian methods, along with the AIC, BIC, and Kolmogorov–Smirnov (K–S) test with its p-value for Dataset I, are presented in Table 7.
Table 7 shows that the parameter estimate ϕ is consistent across all methods at 0.004. The AIC and BIC values are very close, with the lowest values obtained under the ML and N-GELF estimators, indicating comparable performance. The results of the K-S test confirm a good fit for all methods, with the P-GELF estimator for case II achieving the best fit (lowest statistic 0.143 and highest p-value 0.805). Overall, both classical and Bayesian approaches perform similarly, with P-GELF showing a slight advantage.

5.2.2. Dataset II

The classical and Bayesian estimators, together with AIC, BIC, KS statistics, and p-values for Dataset II, are summarized in Table 8.
Table 8 shows that the parameter estimate of ϕ for Dataset II is stable across methods. The AIC and BIC values are nearly identical for all estimators, with ML and N-GELF producing the lowest values (AIC = 237.817, BIC = 239.218), indicating similar model performance. The best fit within Dataset II is obtained under N-GELF for Case II.

5.2.3. Dataset III

For Dataset III, Table 9 presents the estimates obtained using both classical and Bayesian methods, along with the corresponding AIC, BIC, and KS test statistics, including their p-values.
According to Table 9, the parameter estimate ϕ for dataset III varies slightly between 0.113 and 0.127 across methods. The AIC and BIC values remain very close, with the lowest values obtained under ML and N-GELF (AIC = 86.358, BIC = 87.066), suggesting comparable performance. Among the Bayesian estimators, the N-GELF method in Case II achieved the best fit.
The Geweke diagnostic test [35] was applied to evaluate the convergence of the MCMC chains. This diagnostic compares the means of the initial and final segments of each Markov chain; convergence is achieved when the difference between these means is approximately zero, with Z-scores falling within ±1.96 at the 95% confidence level. As reported in Table 10, the calculated Z-scores for all three chains under the gamma prior are close to zero, indicating satisfactory convergence. Convergence was also assessed by trace plots, autocorrelation plots (ACF), and posterior density plots for the three datasets (Figure 6, Figure 7 and Figure 8). The ACF indicates that the posterior samples are almost independent. Trace plots confirm the chains behave stably across iterations. These diagnostics confirm appropriate convergence and a sufficient burn-in period.
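These diagnostics can be reproduced, for example, with the coda package in R; the sketch below assumes the post burn-in posterior draws are stored in a vector phi_draws.

```r
# MCMC convergence diagnostics with coda (assuming phi_draws holds the chain).
library(coda)
chain <- mcmc(phi_draws)
geweke.diag(chain)    # Z-scores within +/- 1.96 suggest convergence
traceplot(chain)      # stability of the chain across iterations
autocorr.plot(chain)  # near-zero autocorrelation indicates almost independent draws
```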

6. Conclusions

In this study, we extend the DLE distribution originally introduced by [16] to the case of randomly right-censored data. The focus is on developing inferential procedures and evaluating the distribution’s performance under incomplete lifetime observations. The model parameter is estimated using classical ML and Bayesian approaches under SELF, LINEX, and GELF loss functions, with both informative and non-informative priors. Since closed-form posterior distributions are not available, MCMC techniques are employed for sampling. Extensive simulation studies are performed to evaluate the performance of classical and Bayesian methods across various parameter values and sample sizes. The results indicate that both the ML estimator and the Bayesian estimator under GELF with a positive weight ( q = 1.5 ) consistently achieve the lowest RMSE. The practical utility of the extended DLE distribution is further demonstrated using real-world physical and medical lifetime datasets. Statistical measures, including AIC, BIC, and the Kolmogorov–Smirnov test, confirm the superior fit of the proposed distribution compared to other discrete alternatives. Overall, this work offers a significant extension of the DLE distribution to censored data scenarios, thereby enhancing its flexibility and applicability in reliability and medical analyses. Future work may extend this study by exploring other censoring techniques and incorporating regression models.

Author Contributions

Conceptualization, A.F.; Methodology, K.A.-H.; Software, H.B. and K.A.-H.; Formal analysis, K.A.-H. and A.F.; Investigation, K.A.-H.; Resources, K.A.-H.; Data curation, K.A.-H.; Writing—original draft, K.A.-H.; Writing—review & editing, H.B. and A.F.; Visualization, H.B.; Supervision, H.B. and A.F.; Project administration, A.F.; Funding acquisition, H.B. All authors have read and agreed to the published version of the manuscript.

Funding

This project was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under Grant No. IPP: 1616-247-2025. The authors acknowledge the DSR for the technical and financial support.

Data Availability Statement

The data presented in this study are openly available in [1].

Acknowledgments

The authors sincerely thank King Abdulaziz University for their kind support.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. The Derivative of the Log-Likelihood

The log-likelihood function is:
$$LL(\underline{x},\phi,\underline{d}) \;=\; \sum_{i=1}^{n}\Bigl[d_i\Bigl(-\phi x_i-\log(1+\phi^{3})+\log\Bigl((1+\phi x_i+\phi^{3})-e^{-\phi}\bigl(1+\phi (x_i+1)+\phi^{3}\bigr)\Bigr)\Bigr)+(1-d_i)\Bigl(-\phi x_i+\log(1+\phi x_i+\phi^{3})-\log(1+\phi^{3})\Bigr)\Bigr].$$
Differentiating with respect to ϕ gives:
$$\frac{\partial LL}{\partial \phi} \;=\; \sum_{i=1}^{n}\left[d_i\left(-x_i-\frac{3\phi^{2}}{1+\phi^{3}}+\frac{x_i+3\phi^{2}+e^{-\phi}\bigl(1+\phi (x_i+1)+\phi^{3}\bigr)-e^{-\phi}\bigl(x_i+1+3\phi^{2}\bigr)}{(1+\phi x_i+\phi^{3})-e^{-\phi}\bigl(1+\phi (x_i+1)+\phi^{3}\bigr)}\right)+(1-d_i)\left(-x_i+\frac{x_i+3\phi^{2}}{1+\phi x_i+\phi^{3}}-\frac{3\phi^{2}}{1+\phi^{3}}\right)\right].$$
Collecting the term $-x_i-\dfrac{3\phi^{2}}{1+\phi^{3}}$, which is common to the observed ($d_i = 1$) and censored ($d_i = 0$) parts, and simplifying $e^{-\phi}\bigl(1+\phi (x_i+1)+\phi^{3}\bigr)-e^{-\phi}\bigl(x_i+1+3\phi^{2}\bigr)=e^{-\phi}\bigl(\phi (x_i+1)+\phi^{3}-x_i-3\phi^{2}\bigr)$ yields Equation (5):
$$\frac{\partial LL}{\partial \phi} \;=\; \sum_{i=1}^{n}\left[-x_i-\frac{3\phi^{2}}{1+\phi^{3}}+d_i\,\frac{x_i+3\phi^{2}+e^{-\phi}\bigl(\phi (x_i+1)+\phi^{3}-x_i-3\phi^{2}\bigr)}{(1+\phi x_i+\phi^{3})-e^{-\phi}\bigl(1+\phi (x_i+1)+\phi^{3}\bigr)}+(1-d_i)\,\frac{x_i+3\phi^{2}}{1+\phi x_i+\phi^{3}}\right].$$

References

  1. Lawless, J.F. Statistical Models and Methods for Lifetime Data; John Wiley & Sons: Hoboken, NJ, USA, 2011. [Google Scholar]
  2. Kaplan, E.L.; Meier, P. Nonparametric estimation from incomplete observations. J. Am. Stat. Assoc. 1958, 53, 457–481. [Google Scholar] [CrossRef]
  3. Giné, E.; Guillou, A. On consistency of kernel density estimators for randomly censored data: Rates holding uniformly over adaptive intervals. In Proceedings of the Annales de l’IHP Probabilités et Statistiques, Paris, France, 18–20 June 2001; Volume 37, pp. 503–522. [Google Scholar]
  4. Kohler, M.; Máthé, K.; Pintér, M. Prediction from randomly right censored data. J. Multivar. Anal. 2002, 80, 73–100. [Google Scholar] [CrossRef]
  5. Khardani, S. Relative error prediction for twice censored data. Math. Methods Stat. 2019, 28, 291–306. [Google Scholar] [CrossRef]
  6. Khardani, S.; Lemdani, M.; Saïd, E.O. Some asymptotic properties for a smooth kernel estimator of the conditional mode under random censorship. J. Korean Stat. Soc. 2010, 39, 455–469. [Google Scholar] [CrossRef]
  7. El-Morshedy, M.; Eliwa, M.; Nagy, H. A new two-parameter exponentiated discrete Lindley distribution: Properties, estimation and applications. J. Appl. Stat. 2020, 47, 354–375. [Google Scholar] [CrossRef]
  8. Eliwa, M.S.; Altun, E.; El-Dawoody, M.; El-Morshedy, M. A new three-parameter discrete distribution with associated INAR (1) process and applications. IEEE Access 2020, 8, 91150–91162. [Google Scholar] [CrossRef]
  9. ul Haq, M.A.; Babar, A.; Hashmi, S.; Alghamdi, A.S.; Afify, A.Z. The discrete type-II half-logistic exponential distribution with applications to COVID-19 data. Pak. J. Stat. Oper. Res. 2021, 17, 921–932. [Google Scholar] [CrossRef]
  10. Eliwa, M.; Altun, E.; Alhussain, Z.A.; Ahmed, E.A.; Salah, M.M.; Ahmed, H.H.; El-Morshedy, M. A new one-parameter lifetime distribution and its regression model with applications. PLoS ONE 2021, 16, e0246969. [Google Scholar] [CrossRef] [PubMed]
  11. Alghamdi, A.S.; Ahsan-ul Haq, M.; Babar, A.; Aljohani, H.M.; Afify, A.Z.; Cell, Q.E. The discrete power-Ailamujia distribution: Properties, inference, and applications. AIMS Math. 2022, 7, 8344–8360. [Google Scholar] [CrossRef]
  12. Eldeeb, A.S.; Ahsan-ul Haq, M.; Eliwa, M.S.; Cell, Q.E. A discrete Ramos-Louzada distribution for asymmetric and over-dispersed data with leptokurtic-shaped: Properties and various estimation techniques with inference. AIMS Math. 2022, 7, 1726–1741. [Google Scholar] [CrossRef]
  13. Almetwally, E.M.; Abdo, D.A.; Hafez, E.; Jawa, T.M.; Sayed-Ahmed, N.; Almongy, H.M. The new discrete distribution with application to COVID-19 Data. Results Phys. 2022, 32, 104987. [Google Scholar] [CrossRef] [PubMed]
  14. Afify, A.Z.; Ahsan-ul Haq, M.; Aljohani, H.M.; Alghamdi, A.S.; Babar, A.; Gómez, H.W. A new one-parameter discrete exponential distribution: Properties, inference, and applications to COVID-19 data. J. King Saud Univ.-Sci. 2022, 34, 102199. [Google Scholar] [CrossRef]
  15. Shamlan, D.; Baaqeel, H.; Fayomi, A. A Discrete Odd Lindley Half-Logistic Distribution with Applications. J. Phys. Conf. Ser. 2024, 2701, 012034. [Google Scholar] [CrossRef]
  16. Al-Harbi, K.; Fayomi, A.; Baaqeel, H.; Alsuraihi, A. A Novel Discrete Linear-Exponential Distribution for Modeling Physical and Medical Data. Symmetry 2024, 16, 1123. [Google Scholar] [CrossRef]
  17. Krishna, H.; Goel, N. Maximum likelihood and Bayes estimation in randomly censored geometric distribution. J. Probab. Stat. 2017, 2017. [Google Scholar] [CrossRef]
  18. Achcar, J.A.; Martinez, E.Z.; de Freitas, B.C.L.; de Oliveira Peres, M.V. Classical and Bayesian inference approaches for the exponentiated discrete Weibull model with censored data and a cure fraction. Pak. J. Stat. Oper. Res. 2021, 17, 467–481. [Google Scholar] [CrossRef]
  19. Pandey, A.; Singh, R.P.; Tyagi, A. An Inferential Study of Discrete Burr-Hatke Exponential Distribution under Complete and Censored Data. Reliab. Theory Appl. 2022, 17, 109–122. [Google Scholar]
  20. Tyagi, A.; Singh, B.; Agiwal, V.; Nayal, A.S. Analysing Random Censored Data from Discrete Teissier Model. Reliab. Theory Appl. 2023, 18, 403–411. [Google Scholar]
  21. Cohen, A.C. Maximum likelihood estimation in the Weibull distribution based on complete and on censored samples. Technometrics 1965, 7, 579–588. [Google Scholar] [CrossRef]
  22. Basu, S.; Singh, S.K.; Singh, U. Estimation of inverse Lindley distribution using product of spacings function for hybrid censored data. Methodol. Comput. Appl. Probab. 2019, 21, 1377–1394. [Google Scholar] [CrossRef]
  23. Kurdi, T.; Nassar, M.; Alam, F.M.A. Bayesian Estimation Using Product of Spacing for Modified Kies Exponential Progressively Censored Data. Axioms 2023, 12, 917. [Google Scholar] [CrossRef]
  24. Alkhairy, I. Classical and Bayesian inference for the discrete Poisson Ramos-Louzada distribution with application to COVID-19 data. Math. Biosci. Eng. 2023, 20, 14061–14080. [Google Scholar] [CrossRef]
  25. Varian, H.R. A Bayesian approach to real estate assessment. In Studies in Bayesian Econometrics and Statistics in Honor of Leonard J. Savage; Fienberg, S.E., Zellner, A., Eds.; North-Holland Publishing Company: Amsterdam, The Netherlands, 1975; pp. 195–208. [Google Scholar]
  26. Calabria, R.; Pulcini, G. An engineering approach to Bayes estimation for the Weibull distribution. Microelectron. Reliab. 1994, 34, 789–802. [Google Scholar] [CrossRef]
  27. Ramos, P.L.; Guzman, D.C.; Mota, A.L.; Rodrigues, F.A.; Louzada, F. Sampling with censored data: A practical guide. arXiv 2020, arXiv:2011.08417. [Google Scholar] [CrossRef]
  28. Roy, D. Discrete rayleigh distribution. IEEE Trans. Reliab. 2004, 53, 255–260. [Google Scholar] [CrossRef]
  29. Poisson, S.D. Recherches sur la Probabilité des Jugements en Matière Criminelle et en Matière Civile: Précédées des Règles GénéRALES du Calcul des Probabilités; Bachelier: Oslo, Norway, 1837; pp. 206–207. [Google Scholar]
  30. Krishna, H.; Pundir, P.S. Discrete Burr and discrete Pareto distributions. Stat. Methodol. 2009, 6, 177–188. [Google Scholar] [CrossRef]
  31. El-Morshedy, M.; Eliwa, M.S.; Altun, E. Discrete Burr-Hatke distribution with properties, estimation methods and regression model. IEEE Access 2020, 8, 74359–74370. [Google Scholar] [CrossRef]
  32. Eldeeb, A.S.; Ahsan-Ul-Haq, M.; Babar, A. A discrete analog of inverted Topp-Leone distribution: Properties, estimation and applications. Int. J. Anal. Appl. 2021, 19, 695–708. [Google Scholar]
  33. de Laplace, P.S. Théorie Analytique des Probabilités; Courcier: Paris, France, 1820; Volume 7. [Google Scholar]
  34. de Montmort, P.R.; Bernoulli, J.; Bernoulli, N. Essai d’Analyse sur les Jeux de Hazards; Seconde Édition Revue & augmentée de Plusieurs Lettres; Chez Claude Jombert: Paris, France, 1714. [Google Scholar]
  35. Geweke, J. Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments. Bayesian Stat. 1992, 4, 169–193. [Google Scholar]
Figure 3. Fitted CDFs (red) compared with empirical CDFs (black) for Dataset I.
Figure 4. Fitted CDFs (red) compared with empirical CDFs (black) for Dataset II.
Figure 5. Fitted CDFs (red) compared with empirical CDFs (black) for Dataset III.
Figure 6. MCMC diagnostic plots for Dataset I: trace, ACF, and histogram of sampled values.
Figure 7. MCMC diagnostic plots for Dataset II: trace, ACF, and histogram of sampled values.
Figure 8. MCMC diagnostic plots for Dataset III: trace, ACF, and histogram of sampled values.
Table 3. Competing models for the DLE distribution.

| Model | Abbreviation | Author(s) |
|---|---|---|
| Discrete Rayleigh | DR | [28] |
| Poisson | Pois | [29] |
| Discrete Pareto | DP | [30] |
| Discrete Burr–Hatke | DBH | [31] |
| Discrete Inverted Topp–Leone | DITL | [32] |
| Geometric | GEOM | [33] |
| Negative Binomial | Nbinom | [34] |
Table 4. ML Estimates, log-likelihood (-LL) and associated goodness-of-fit criteria for Dataset I.

| Model | ML (S.E.) | -LL | AIC | BIC | K-S | p-Value |
|---|---|---|---|---|---|---|
| DLE | 0.0039 (0.0006) | 126.381 | 254.761 | 255.757 | 0.146 | 0.784 |
| DR | 0.999 (0.000) | 125.856 | 253.712 | 254.708 | 0.109 | 0.971 |
| Pois | 471.699 (4.857) | −1159.51 | 2321.01 | 2322.00 | 0.499 | 0.000 |
| DITL | 0.168 (0.072) | 157.164 | 316.329 | 317.325 | 0.527 | 0.000 |
| DP | 0.149 (0.035) | 159.328 | 320.657 | 321.653 | 0.535 | 0.000 |
| DBH | 6.4 × 10⁻⁹ (0.0005) | 227.841 | 457.681 | 458.677 | 0.994 | 0.000 |
| Geom | 0.0024 (0.0006) | 126.887 | 255.774 | 256.769 | 0.339 | 0.019 |
| Nbinom | 0.040 (0.0019) | 148.331 | 298.662 | 299.658 | 0.319 | 0.034 |
Table 5. ML Estimates, log-likelihood (-LL) and associated goodness-of-fit criteria for Dataset II.

| Model | ML (S.E.) | -LL | AIC | BIC | K-S | p-Value |
|---|---|---|---|---|---|---|
| DLE | 0.069 (0.009) | 102.85 | 237.817 | 239.219 | 0.254 | 0.042 |
| DR | 0.999 (0.000) | 130.114 | 262.229 | 263.629 | 0.506 | 0.000 |
| Pois | 25.675 (0.932) | 412.885 | 827.769 | 829.171 | 0.524 | 0.000 |
| DITL | 0.383 (0.129) | 115.205 | 232.409 | 233.810 | 0.265 | 0.029 |
| DP | 0.296 (0.059) | 119.915 | 241.831 | 243.232 | 0.312 | 0.006 |
| DBH | 6.47 × 10⁻⁹ (0.007) | 148.893 | 299.785 | 301.186 | 0.733 | 0.000 |
| Geom | 0.052 (0.009) | 99.826 | 201.653 | 203.054 | 0.164 | 0.396 |
| Nbinom | 0.537 (0.012) | 260.541 | 523.083 | 524.484 | 0.497 | 0.000 |
Table 6. ML Estimates, log-likelihood (-LL) and associated goodness-of-fit criteria for Dataset III.

| Model | ML (S.E.) | -LL | AIC | BIC | K-S | p-Value |
|---|---|---|---|---|---|---|
| DLE | 0.121 (0.025) | 42.179 | 86.358 | 87.066 | 0.257 | 0.276 |
| DR | 0.998 (0.000) | 47.142 | 96.284 | 96.993 | 0.696 | 0.000 |
| Pois | 13.375 (0.971) | 98.929 | 199.859 | 200.567 | 0.457 | 0.004 |
| DITL | 0.392 (0.174) | 39.249 | 80.499 | 81.207 | 0.299 | 0.136 |
| DP | 0.287 (0.091) | 41.257 | 84.514 | 85.222 | 0.339 | 0.062 |
| DBH | 6.47 × 10⁻⁹ (0.036) | 53.621 | 109.243 | 109.951 | 0.683 | 0.000 |
| Geom | 0.147 (0.039) | 29.041 | 60.082 | 60.789 | 0.349 | 0.051 |
| Nbinom | 0.520 (0.025) | 67.114 | 136.228 | 136.936 | 0.395 | 0.019 |
Table 7. Classical and Bayesian estimators and corresponding goodness-of-fit criteria for Dataset I.

| Case | Method | ϕ̂ | AIC | BIC | K-S | p-Value |
|---|---|---|---|---|---|---|
| — | ML | 0.004 | 254.761 | 255.757 | 0.146 | 0.784 |
| Case I | SELF | 0.004 | 254.764 | 255.759 | 0.144 | 0.798 |
| Case I | P-LINEX | 0.004 | 254.764 | 255.759 | 0.144 | 0.798 |
| Case I | N-LINEX | 0.004 | 254.764 | 255.759 | 0.144 | 0.798 |
| Case I | P-GELF | 0.004 | 254.832 | 255.827 | 0.144 | 0.799 |
| Case I | N-GELF | 0.004 | 254.761 | 255.757 | 0.146 | 0.788 |
| Case II | SELF | 0.004 | 254.777 | 255.772 | 0.151 | 0.750 |
| Case II | P-LINEX | 0.004 | 254.776 | 255.772 | 0.151 | 0.751 |
| Case II | N-LINEX | 0.004 | 254.777 | 255.772 | 0.151 | 0.750 |
| Case II | P-GELF | 0.004 | 254.767 | 255.763 | 0.143 | 0.805 |
| Case II | N-GELF | 0.004 | 254.788 | 255.784 | 0.153 | 0.739 |
Table 8. Classical and Bayesian estimators and corresponding goodness-of-fit criteria for Dataset II.

| Case | Method | ϕ̂ | AIC | BIC | K-S | p-Value |
|---|---|---|---|---|---|---|
| — | ML | 0.069 | 237.817 | 239.219 | 0.254 | 0.042 |
| Case I | SELF | 0.069 | 237.819 | 239.220 | 0.255 | 0.040 |
| Case I | P-LINEX | 0.069 | 237.819 | 239.221 | 0.255 | 0.040 |
| Case I | N-LINEX | 0.069 | 237.818 | 239.291 | 0.255 | 0.040 |
| Case I | P-GELF | 0.068 | 237.862 | 239.264 | 0.263 | 0.031 |
| Case I | N-GELF | 0.069 | 237.817 | 239.218 | 0.254 | 0.042 |
| Case II | SELF | 0.071 | 237.826 | 239.227 | 0.250 | 0.047 |
| Case II | P-LINEX | 0.071 | 237.825 | 239.226 | 0.250 | 0.046 |
| Case II | N-LINEX | 0.071 | 237.827 | 239.229 | 0.250 | 0.047 |
| Case II | P-GELF | 0.069 | 237.823 | 239.224 | 0.256 | 0.039 |
| Case II | N-GELF | 0.071 | 237.833 | 239.234 | 0.249 | 0.049 |
Table 9. Classical and Bayesian estimators and corresponding goodness-of-fit criteria for Dataset III.

| Case | Method | ϕ̂ | AIC | BIC | K-S | p-Value |
|---|---|---|---|---|---|---|
| — | ML | 0.121 | 86.358 | 87.066 | 0.257 | 0.276 |
| Case I | SELF | 0.120 | 86.361 | 87.069 | 0.262 | 0.256 |
| Case I | P-LINEX | 0.119 | 86.363 | 87.071 | 0.263 | 0.249 |
| Case I | N-LINEX | 0.121 | 86.359 | 87.067 | 0.260 | 0.263 |
| Case I | P-GELF | 0.113 | 86.462 | 87.170 | 0.286 | 0.172 |
| Case I | N-GELF | 0.121 | 86.358 | 87.066 | 0.257 | 0.275 |
| Case II | SELF | 0.125 | 86.382 | 87.090 | 0.243 | 0.340 |
| Case II | P-LINEX | 0.125 | 86.376 | 87.084 | 0.244 | 0.332 |
| Case II | N-LINEX | 0.126 | 86.388 | 87.096 | 0.241 | 0.349 |
| Case II | P-GELF | 0.119 | 86.369 | 87.077 | 0.266 | 0.237 |
| Case II | N-GELF | 0.127 | 86.399 | 87.107 | 0.238 | 0.363 |
Table 10. Geweke diagnostic Z-scores of MCMC result for datasets.

| Dataset | I | II | III |
|---|---|---|---|
| Z-score | −0.7998 | −0.7516 | −0.847 |