Article

Analyzing Competing Risks with Progressively Type-II Censored Data in Dagum Distributions

Raghd Badwan and Reza Pakyari
Statistics Program, Department of Mathematics and Statistics, College of Arts and Sciences, Qatar University, Doha 2713, Qatar
* Author to whom correspondence should be addressed.
Axioms 2025, 14(7), 508; https://doi.org/10.3390/axioms14070508
Submission received: 30 May 2025 / Revised: 25 June 2025 / Accepted: 27 June 2025 / Published: 30 June 2025

Abstract

Competing risk models are essential in survival analysis for studying systems with multiple mutually exclusive failure events. This study investigates the application of competing risk models in the presence of progressively Type-II censored data for the Dagum distribution, a flexible distribution suited for modeling data with heavy tails and varying skewness and kurtosis. The methodology includes maximum likelihood estimation of the unknown parameters, with a focus on the special case of a common shape parameter, which allows for a closed-form expression of the relative risks. A hypothesis test is developed to assess the validity of this assumption, and both asymptotic and bootstrap confidence intervals are constructed. The performance of the proposed methods is evaluated through Monte Carlo simulations, and their applicability is demonstrated with a real-world example.

1. Introduction

Competing risk models are valuable tools in survival analysis that address scenarios where individuals or systems face multiple mutually exclusive failure events. In such models, the event of interest is influenced by competing events that can prevent or modify the occurrence of the primary event. Competing risk models enable a more comprehensive understanding of the probabilities and cumulative incidences associated with different failure types. They provide insights into the relative risks and dynamics of competing events over time. These models have diverse applications across various fields, including healthcare, epidemiology, actuarial science, and engineering. By accounting for competing risks, researchers and practitioners can make informed decisions, develop appropriate risk management strategies, and gain a deeper understanding of complex systems’ failure patterns.
For example, in healthcare, competing risk models can be applied to study patient outcomes where different causes (e.g., death from different diseases) might prevent the observation of the event of primary interest, such as disease recurrence (see, e.g., [1]). In epidemiology, these models allow for the analysis of disease outcomes where various risks, like co-occurring conditions, compete (see, e.g., [2]). Actuarial science uses them to predict life insurance risks and policy lapses (see, e.g., [3]), while engineering applies them to assess system reliability where different failure modes compete for priority (see, e.g., [4]).
Censoring is a widely recognized and effective approach for cost reduction in experimental settings. In the field of reliability and survival analysis, numerous censoring strategies have been developed and employed by researchers. The two primary censoring plans, namely Type-I and Type-II, allow for the control of the overall duration of the experiment and the number of observed failures, respectively.
Progressive Type-II censoring, first introduced by [5,6], is a censoring scheme that has gained considerable attention in the field of life testing and reliability analysis. It is designed to enhance the efficiency of experiments whose objective is to study the lifetimes or failure times of a set of items. In this censoring plan, a predetermined number of failures, denoted as m, is specified before the experiment begins. The censoring scheme, represented by $R = (R_1, R_2, \ldots, R_m)$, is devised such that the sum of the individual censoring values, $R_1 + R_2 + \cdots + R_m$, equals the difference between the total number of items, n, and the specified number of failures, m, i.e., $n - m$. Throughout the course of the experiment, items are progressively removed in stages, starting with the removal of $R_1$ items after the first failure occurs. This sequential removal process continues until the mth failure, at which point the remaining $R_m$ items are also removed from the experiment. The progressive Type-II censoring scheme holds promise for optimizing the efficiency of experimental designs and has been extensively explored by researchers aiming to extract meaningful insights from reliability and survival analysis. For an excellent overview and in-depth discussions of progressive censoring, interested readers are encouraged to refer to the monograph by [7] (see also [8,9]).
Several authors have studied the competing risk models for common lifetime distributions based on progressive Type-II censored data. Ref. [10] studied the exponential model. Refs. [11,12] considered the Weibull model. Refs. [13,14,15,16] focused on the competing risk data for the Lomax, Kumaraswamy, Burr Type-XII and generalized Rayleigh distributions.
Ref. [17] analyzed the two-parameter log-normal model for hybrid Type-II progressive censored data. Ref. [18] introduced a competing risk exponential model utilizing a generalized Type-I hybrid censoring method. The Weibull and the generalized exponential competing risk models under adaptive Type-II progressive censored data were studied by [16] and [19], respectively. Ref. [20] studied statistical inference for a class of exponential distributions with Type-I progressively interval-censored competing risk data. Ref. [21] considered generalized progressive hybrid censoring for Burr XII competing distributions.
Bayesian inference for competing risk models has been explored by [12,22] for the Weibull model, and by [23] for the Gompertz model (see also [24,25,26,27], among others).
The case of dependent competing risks was studied by [28] for Marshall–Olkin bivariate Kumaraswamy distribution, and [29] for the Burr-XII model (see also [30] among others).
The Dagum distribution, alternatively referred to as the Burr Type-III distribution, was initially obtained by [31] as the third solution to the Burr differential equation, which characterizes the various types of Burr distributions. It was later proposed by [32] as a model for personal income and is now widely applied in diverse fields such as finance, economics, and hydrology. It is characterized by its flexibility in modeling a wide range of data, particularly data with heavy tails and skewness, and it offers a flexible framework for analyzing data with varying shapes and tail behaviors, making it a valuable tool for researchers and practitioners seeking to understand and model complex phenomena.
The probability density function (pdf) of the two-parameter Dagum distribution, denoted as D ( α , β ) , is given by the following expression:
\[
f(\alpha, \beta; x) = \alpha \beta\, x^{-(\beta+1)} \left(1 + x^{-\beta}\right)^{-(\alpha+1)}, \qquad x > 0,
\]
where α > 0 and β > 0 are both shape parameters. The cumulative distribution function can be expressed in a simple form as follows:
\[
F(\alpha, \beta; x) = \left(1 + x^{-\beta}\right)^{-\alpha}.
\]
The Dagum distribution is commonly referred to as the inverse Burr distribution. This connection arises from the fact that if a random variable X follows the Dagum distribution, its reciprocal $1/X$ follows the Burr Type-XII distribution, or simply the Burr distribution. For a comprehensive examination of the Dagum distribution, interested readers are encouraged to refer to the extensive study conducted by [33].
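Since the pdf, cdf, and quantile function are all available in closed form, Dagum variates can be drawn by inverting the cdf. The following minimal R sketch illustrates this; the helper names ddagum, pdagum, qdagum, and rdagum are our own and not code from the paper.

```r
# Dagum D(alpha, beta) building blocks: density, CDF, quantile, and sampler.
# Inverting F(x) = (1 + x^-beta)^-alpha = u gives x = (u^(-1/alpha) - 1)^(-1/beta).
ddagum <- function(x, alpha, beta) {
  alpha * beta * x^(-(beta + 1)) * (1 + x^(-beta))^(-(alpha + 1))
}
pdagum <- function(x, alpha, beta) (1 + x^(-beta))^(-alpha)
qdagum <- function(u, alpha, beta) (u^(-1 / alpha) - 1)^(-1 / beta)
rdagum <- function(n, alpha, beta) qdagum(runif(n), alpha, beta)

# Quick check: empirical CDF at x = 1 versus the closed form (1 + 1)^(-alpha).
set.seed(1)
mean(rdagum(1e5, alpha = 0.5, beta = 2.5) <= 1)   # close to 2^(-0.5) = 0.707
```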
Despite the extensive literature on competing risk models under common lifetime distributions, such as exponential and Weibull, there is limited research on the application of the Dagum distribution with progressively Type-II censored data. This work addresses this gap by developing an inferential framework for the Dagum distribution under competing risks and various censoring schemes. Additionally, distributions with finite support such as Kies-style models (see [34]) could be explored as alternatives to the Dagum model for bounded failure times (see also [35]).
Moreover, while several heavy-tailed distributions (e.g., Pareto, Weibull, log-normal) have been employed in reliability analysis, the Dagum distribution offers unique advantages that make it especially attractive for modeling component lifetimes under complex censoring schemes. First, unlike the one-parameter Pareto model, whose mean is infinite when the shape parameter does not exceed unity, the Dagum distribution admits finite moments even in pronounced heavy-tail regimes, ensuring well-defined reliability measures [36]. Second, whereas the Weibull distribution enforces a monotonic hazard and the log-normal produces a strictly increasing-then-decreasing failure rate, the Dagum hazard function can assume bathtub, unimodal, or monotonic shapes, thereby capturing a wider range of empirical failure-mechanism behaviors observed in engineering applications [37]. Third, all key functions of the Dagum model (pdf, cdf, and hazard) have closed-form expressions, which greatly facilitates both classical and Bayesian inference under various types of censoring schemes. These combined features (flexible tail behavior, versatile hazard shapes, and analytic tractability) underscore the Dagum distribution's suitability for competing risks systems, as explored in the present work.
This paper is structured as follows. Section 2 introduces the competing risk models and their structure for the Dagum distribution. Section 3 discusses the maximum likelihood estimation of the unknown Dagum parameters, considering both the general case of arbitrary distribution parameters and the special case of a common shape parameter, β . Additionally, a likelihood ratio test procedure for testing the validity of the common shape parameter β is presented. Section 4 is devoted to the asymptotic confidence intervals based on the Fisher information matrix, along with bootstrap confidence intervals. In Section 5, we evaluate the performance of our inference procedure through extensive Monte Carlo simulations. Finally, we illustrate the proposed methods with a real data example in Section 6.

2. Competing Risk Model Description

Suppose there are n units involved in an experiment and there are K known causes of failure. To implement progressive Type-II censoring, the experiment is conducted until a predetermined number of failures, m, is observed.
When the first failure is observed, $R_1$ of the surviving units are randomly chosen and removed from the experiment. This process continues at each subsequent failure, and when the mth failure is observed the remaining $R_m = n - m - R_1 - \cdots - R_{m-1}$ surviving units are also removed, so that $n - m$ units in total are withdrawn by censoring.
The observed data can be represented by the pairs $(X_{1:m:n}, \delta_1), (X_{2:m:n}, \delta_2), \ldots, (X_{m:m:n}, \delta_m)$, where $X_{i:m:n}$ denotes the ith observed failure time and $\delta_i \in \{1, 2, \ldots, K\}$ indicates the failure cause of that observation, for $i = 1, \ldots, m$; the underlying unit lifetimes are assumed to be independent and identically distributed (i.i.d.).
Furthermore, we introduce the indicator variable $I(\delta_i = k)$ as
\[
I(\delta_i = k) =
\begin{cases}
1 & \text{if } \delta_i = k, \\
0 & \text{otherwise},
\end{cases}
\]
for $k = 1, 2, \ldots, K$. Then, $m_k = \sum_{i=1}^{m} I(\delta_i = k)$ represents the count of units that failed due to cause k.
For ease of notation, we will denote the progressively Type-II censored sample $(X_{1:m:n}, \delta_1), (X_{2:m:n}, \delta_2), \ldots, (X_{m:m:n}, \delta_m)$ by $(X_1, \delta_1), (X_2, \delta_2), \ldots, (X_m, \delta_m)$.
Let $X_{kj}$ represent the latent failure time of the jth unit under the kth failure mode. The actual failure time of the jth unit is then the minimum of these latent times, i.e., $X_j = \min\{X_{1j}, X_{2j}, \ldots, X_{Kj}\}$, for $j = 1, \ldots, m$.
Additionally, to avoid identifiability issues in the underlying model, it is often assumed that the failure modes are independent. This means that the latent failure times $X_{kj}$ are independent and identically distributed over $j = 1, 2, \ldots, m$. Furthermore, we assume each $X_{kj}$ follows a Dagum distribution with parameters $\alpha_k$ and $\beta_k$.
In this paper, we specifically focus on the case where $K = 2$, meaning there are two failure modes considered. Hence, for each observation j, $X_j = \min\{X_{1j}, X_{2j}\}$, where $X_{1j}$ and $X_{2j}$ represent the latent failure times under the first and second failure modes, respectively. Moreover, we study the special case of $\beta_1 = \beta_2 = \beta$ in more detail. This will enable us to derive a simple formula for the relative risk $p = P(X_{1j} \le X_{2j})$ through the following remark, which was also derived by [38,39].
Remark 1.
Let $X_{kj}$, for $j = 1, 2, \ldots, m$ and $k = 1, 2$, follow $D(\alpha_k, \beta)$, and let $m_1$ and $m_2$ be the number of failures due to failure causes 1 and 2, respectively. Then, $m_1 \sim b(m, p)$ and $m_2 \sim b(m, 1 - p)$, where
\[
p = P(X_{1j} \le X_{2j}) = \frac{\alpha_2}{\alpha_1 + \alpha_2}
\]
is the relative risk due to failure cause 1.
Proof. 
The result follows by noting that $m_1$ is the sum of the independent Bernoulli random variables $I(\delta_i = 1)$, each with success probability p given by the following:
\[
\begin{aligned}
p = P(X_{1j} \le X_{2j}) &= \int_0^{\infty} F_{X_1}(x; \alpha_1, \beta)\, f_{X_2}(x; \alpha_2, \beta)\, dx \\
&= \int_0^{\infty} \alpha_2 \beta\, x^{-(\beta+1)} \left(1 + x^{-\beta}\right)^{-(\alpha_2+1)} \left(1 + x^{-\beta}\right)^{-\alpha_1} dx \\
&= \frac{\alpha_2}{\alpha_1 + \alpha_2} \int_0^{\infty} (\alpha_1 + \alpha_2)\, \beta\, x^{-(\beta+1)} \left(1 + x^{-\beta}\right)^{-(\alpha_1+\alpha_2+1)} dx \\
&= \frac{\alpha_2}{\alpha_1 + \alpha_2}.
\end{aligned}
\]
The last integral is equal to one, as the integrand is the pdf of D ( α 1 + α 2 , β ) . □
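Remark 1 can also be checked empirically by simulating the two latent Dagum lifetimes with a common $\beta$ and comparing the proportion of cause-1 failures with $\alpha_2/(\alpha_1 + \alpha_2)$. The R sketch below is our own illustration (not the authors' code) and uses the inverse-cdf sampling relation from the Introduction.

```r
# Monte Carlo check of Remark 1: proportion of failures due to cause 1
# versus the closed-form relative risk alpha2 / (alpha1 + alpha2).
set.seed(123)
alpha1 <- 0.5; alpha2 <- 0.8; beta <- 2.5
m  <- 1e5
x1 <- (runif(m)^(-1 / alpha1) - 1)^(-1 / beta)   # latent D(alpha1, beta) lifetimes
x2 <- (runif(m)^(-1 / alpha2) - 1)^(-1 / beta)   # latent D(alpha2, beta) lifetimes
mean(x1 <= x2)               # empirical P(X1 <= X2)
alpha2 / (alpha1 + alpha2)   # theoretical value, 0.8 / 1.3 = 0.6154
```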
Figure 1 shows the plot of the pdfs of the Dagum distribution for α 1 = 0.5 , α 2 = 0.8 and β = 2.5 under different causes of failure.
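For later use, the censoring mechanism described in this section can be simulated directly: draw the two latent Dagum lifetimes for all n units, observe the smallest remaining minimum at each stage, record its cause, and withdraw the scheduled number of surviving units at random. The sketch below is our own (the function name gen_prog_dagum is ours); Section 5 mentions the equivalent spacings-based algorithm of [41].

```r
# Direct simulation of a progressively Type-II censored competing-risks sample
# with two independent Dagum failure modes (a sketch; not the authors' code).
rdagum <- function(n, alpha, beta) (runif(n)^(-1 / alpha) - 1)^(-1 / beta)

gen_prog_dagum <- function(n, R, alpha1, beta1, alpha2, beta2) {
  m <- length(R)
  stopifnot(n == m + sum(R))
  t1 <- rdagum(n, alpha1, beta1)            # latent failure times, cause 1
  t2 <- rdagum(n, alpha2, beta2)            # latent failure times, cause 2
  tmin  <- pmin(t1, t2)
  cause <- ifelse(t1 <= t2, 1L, 2L)
  alive <- seq_len(n)                       # units still on test
  x <- numeric(m); delta <- integer(m)
  for (i in seq_len(m)) {
    j <- alive[which.min(tmin[alive])]      # next observed failure
    x[i] <- tmin[j]; delta[i] <- cause[j]
    alive <- alive[alive != j]
    if (R[i] > 0) {                         # withdraw R_i surviving units at random
      drop  <- alive[sample.int(length(alive), R[i])]
      alive <- setdiff(alive, drop)
    }
  }
  list(x = x, delta = delta, R = R)
}

# Example: n = 30, m = 15, early censoring scheme (15, 0, ..., 0).
set.seed(2)
dat <- gen_prog_dagum(30, c(15, rep(0, 14)),
                      alpha1 = 1.5, beta1 = 0.5, alpha2 = 1.2, beta2 = 0.7)
```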

3. Maximum Likelihood Estimation

In this section, we address the construction of the maximum likelihood estimators (MLEs) and the confidence intervals for the unknown parameters of the Dagum distribution using the progressively Type-II competing risks censored data.
We consider two cases: the general case of arbitrary distribution parameters $\alpha_1, \alpha_2, \beta_1$, and $\beta_2$, and the special case of $\beta_1 = \beta_2 = \beta$. The latter case enables us to express the relative failure risk due to cause 1 in the closed form of Remark 1. We also discuss the likelihood ratio test for testing the validity of the special case of $\beta_1 = \beta_2 = \beta$.

3.1. Estimation of Parameters Under General Case

For a given competing risk sample $(x_1, \delta_1), (x_2, \delta_2), \ldots, (x_m, \delta_m)$ from the $D(\alpha_k, \beta_k)$ distributions, where $\delta_i \in \{1, 2\}$, $i = 1, 2, \ldots, m$, and $k = 1, 2$, the likelihood function of $\alpha_1, \alpha_2, \beta_1$, and $\beta_2$ can be expressed as follows (see, e.g., Kundu et al. [10]):
\[
\begin{aligned}
L(\alpha_k, \beta_k \mid x) = C \prod_{k=1}^{2} (\alpha_k \beta_k)^{m_k} & \prod_{i=1}^{m} \prod_{k=1}^{2} x_i^{-(\beta_k+1) I(\delta_i = k)} \times \prod_{i=1}^{m} \prod_{k=1}^{2} \left(1 + x_i^{-\beta_k}\right)^{-(\alpha_k+1) I(\delta_i = k)} \\
& \times \prod_{i=1}^{m} \prod_{k=1}^{2} \left\{1 - \left(1 + x_i^{-\beta_k}\right)^{-\alpha_k}\right\}^{I(\delta_i = 3-k) + r_i},
\end{aligned}
\]
where $C = n (n - R_1 - 1) \cdots (n - R_1 - R_2 - \cdots - R_{m-1} - m + 1)$ and $r_i$ denotes the number of units removed at the ith failure time.
The log-likelihood function ignoring the additive constant term is given by the following:
\[
\begin{aligned}
l = l(\alpha_k, \beta_k) = \log L(\alpha_k, \beta_k \mid X) = {} & \sum_{k=1}^{2} m_k \left\{\log(\alpha_k) + \log(\beta_k)\right\} - \sum_{k=1}^{2} (\beta_k + 1) \sum_{i=1}^{m} I(\delta_i = k) \log(x_i) \\
& - \sum_{k=1}^{2} (\alpha_k + 1) \sum_{i=1}^{m} I(\delta_i = k) \log\!\left(1 + x_i^{-\beta_k}\right) \\
& + \sum_{i=1}^{m} \sum_{k=1}^{2} \left\{I(\delta_i = 3-k) + r_i\right\} \log\!\left(1 - \left(1 + x_i^{-\beta_k}\right)^{-\alpha_k}\right).
\end{aligned}
\]
Upon differentiating Equation (2) with respect to α k and β k , for k = 1 , 2 we get
\[
\frac{\partial}{\partial \alpha_k} l(\alpha_k, \beta_k) = \frac{m_k}{\alpha_k} - \sum_{i=1}^{m} I(\delta_i = k) \log(B_{ki}) + \sum_{i=1}^{m} \left\{I(\delta_i = 3-k) + r_i\right\} \frac{B_{ki}^{-\alpha_k} \log(B_{ki})}{1 - B_{ki}^{-\alpha_k}},
\]
and
\[
\frac{\partial}{\partial \beta_k} l(\alpha_k, \beta_k) = \frac{m_k}{\beta_k} - \sum_{i=1}^{m} I(\delta_i = k) \log(x_i) + \sum_{i=1}^{m} I(\delta_i = k) \frac{(\alpha_k + 1)\, x_i^{-\beta_k} \log(x_i)}{B_{ki}} - \sum_{i=1}^{m} \left\{I(\delta_i = 3-k) + r_i\right\} \frac{\alpha_k B_{ki}^{-(\alpha_k+1)} x_i^{-\beta_k} \log(x_i)}{1 - B_{ki}^{-\alpha_k}},
\]
where $B_{ki} = 1 + x_i^{-\beta_k}$.
The maximum likelihood estimators of the parameters α k and β k can be found by the simultaneous solution of the system of nonlinear equations α k l ( α k , β k ) = 0 and β k l ( α k , β k ) = 0 for k = 1 , 2 . However, since there is no closed-form solution for these equations, a numerical method is needed to find the MLEs. In our simulation study and data analysis, in Section 5 and Section 6, respectively, we used the Barzilai-Borwein (BB) method for this purpose (see [40]).
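As an alternative to solving the score equations with the BB package, the log-likelihood above can be maximized directly with base R's optim; working on the log scale keeps all four parameters positive. The sketch below is our own illustration (function names are ours) and assumes a sample (x, delta, R) such as the dat object generated in Section 2.

```r
# Negative log-likelihood of the general model, additive constant ignored.
negloglik_general <- function(logpar, x, delta, R) {
  a1 <- exp(logpar[1]); a2 <- exp(logpar[2])    # alpha_1, alpha_2
  b1 <- exp(logpar[3]); b2 <- exp(logpar[4])    # beta_1,  beta_2
  d1 <- as.numeric(delta == 1); d2 <- as.numeric(delta == 2)
  B1 <- 1 + x^(-b1); B2 <- 1 + x^(-b2)
  ll <- sum(d1) * (log(a1) + log(b1)) + sum(d2) * (log(a2) + log(b2)) -
    (b1 + 1) * sum(d1 * log(x)) - (b2 + 1) * sum(d2 * log(x)) -
    (a1 + 1) * sum(d1 * log(B1)) - (a2 + 1) * sum(d2 * log(B2)) +
    sum((d2 + R) * log(1 - B1^(-a1))) +   # cause-1 survival at cause-2 failures and removals
    sum((d1 + R) * log(1 - B2^(-a2)))     # cause-2 survival at cause-1 failures and removals
  -ll
}

fit <- optim(par = log(c(1, 1, 1, 1)), fn = negloglik_general,
             x = dat$x, delta = dat$delta, R = dat$R,
             method = "BFGS", hessian = TRUE)
mle <- exp(fit$par)    # (alpha1, alpha2, beta1, beta2) estimates
```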

3.2. Estimation of Parameters Under Special Case

If the assumption of a common shape parameter β 1 = β 2 = β is valid, then the joint likelihood function can be expressed as follows (see [9]):
\[
\begin{aligned}
L(\alpha_1, \alpha_2, \beta \mid x) = C\, \alpha_1^{m_1} \alpha_2^{m_2} \beta^{m} & \prod_{i=1}^{m} x_i^{-(\beta+1)} \times \prod_{i=1}^{m} \prod_{k=1}^{2} \left(1 + x_i^{-\beta}\right)^{-(\alpha_k+1) I(\delta_i = k)} \\
& \times \prod_{i=1}^{m} \prod_{k=1}^{2} \left\{1 - \left(1 + x_i^{-\beta}\right)^{-\alpha_k}\right\}^{I(\delta_i = 3-k) + r_i},
\end{aligned}
\]
where $C = n (n - R_1 - 1) \cdots (n - R_1 - R_2 - \cdots - R_{m-1} - m + 1)$.
We define the log-likelihood function as follows:
\[
\begin{aligned}
l_s = l(\alpha_1, \alpha_2, \beta) = \log L(\alpha_1, \alpha_2, \beta \mid X) = {} & m \log(\beta) + m_1 \log(\alpha_1) + m_2 \log(\alpha_2) - (\beta + 1) \sum_{i=1}^{m} \log(x_i) \\
& - \sum_{i=1}^{m} \sum_{k=1}^{2} (\alpha_k + 1)\, I(\delta_i = k) \log\!\left(1 + x_i^{-\beta}\right) \\
& + \sum_{i=1}^{m} \sum_{k=1}^{2} \left(I(\delta_i = 3-k) + r_i\right) \log\!\left(1 - \left(1 + x_i^{-\beta}\right)^{-\alpha_k}\right).
\end{aligned}
\]
Hence, we derive the log-likelihood derivatives with respect to α 1 , α 2 and β as follows:
\[
\frac{\partial}{\partial \alpha_k} l(\alpha_1, \alpha_2, \beta) = \frac{m_k}{\alpha_k} - \sum_{i=1}^{m} I(\delta_i = k) \log(B_i) + \sum_{i=1}^{m} \left\{I(\delta_i = 3-k) + r_i\right\} \frac{B_i^{-\alpha_k} \log(B_i)}{1 - B_i^{-\alpha_k}},
\]
and
\[
\frac{\partial}{\partial \beta} l(\alpha_1, \alpha_2, \beta) = \frac{m}{\beta} - \sum_{i=1}^{m} \log(x_i) + \sum_{i=1}^{m} \sum_{k=1}^{2} (\alpha_k + 1)\, I(\delta_i = k) \frac{x_i^{-\beta} \log(x_i)}{B_i} - \sum_{i=1}^{m} \sum_{k=1}^{2} \left\{I(\delta_i = 3-k) + r_i\right\} \frac{\alpha_k B_i^{-(\alpha_k+1)} x_i^{-\beta} \log(x_i)}{1 - B_i^{-\alpha_k}},
\]
for $k = 1, 2$, where $B_i = 1 + x_i^{-\beta}$.
Note that under standard regularity conditions, the MLEs are consistent and asymptotically normal. To assess finite-sample properties such as bias and mean squared error, we performed extensive Monte Carlo simulations, as presented in Section 5.

3.3. Likelihood Ratio Test

To validate the assumption of a common shape parameter β , we need to perform the following hypothesis test using the likelihood ratio test procedure:
\[
H_0: \beta_1 = \beta_2 = \beta \quad \text{vs.} \quad H_a: \beta_1 \neq \beta_2.
\]
The likelihood ratio statistic can be written as
\[
W = \frac{L(x \mid \tilde{\alpha}_1, \tilde{\alpha}_2, \tilde{\beta})}{L(x \mid \hat{\alpha}_1, \hat{\alpha}_2, \hat{\beta}_1, \hat{\beta}_2)},
\]
where $\tilde{\alpha}_1, \tilde{\alpha}_2$, and $\tilde{\beta}$ are the maximum likelihood estimators under the special case and $\hat{\alpha}_1, \hat{\alpha}_2, \hat{\beta}_1$, and $\hat{\beta}_2$ are the maximum likelihood estimators under the general case. We note that, for large sample sizes, $T = -2 \log W$ follows the chi-square distribution with one degree of freedom. Therefore, the null hypothesis of a common $\beta$ will be rejected at significance level $\alpha$ if
\[
T > \chi^2_{1;\alpha}.
\]
In Section 6, we will apply the aforementioned testing procedure to verify the validity of the assumption of a common β in the special case.
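With the two fitted models in hand, the test statistic is simply twice the gap between the maximized log-likelihoods. A minimal sketch, reusing negloglik_general and dat from the earlier sketches (the special-case fit just ties the two shape parameters together):

```r
# Special case: beta_1 = beta_2, obtained by reusing the general negative
# log-likelihood with the third log-parameter duplicated.
negloglik_special <- function(logpar, x, delta, R)
  negloglik_general(logpar[c(1, 2, 3, 3)], x, delta, R)

fit_g <- optim(log(c(1, 1, 1, 1)), negloglik_general, x = dat$x,
               delta = dat$delta, R = dat$R, method = "BFGS")
fit_s <- optim(log(c(1, 1, 1)), negloglik_special, x = dat$x,
               delta = dat$delta, R = dat$R, method = "BFGS")

T_stat  <- 2 * (fit_s$value - fit_g$value)   # = -2 log W (values are negative log-likelihoods)
p_value <- pchisq(T_stat, df = 1, lower.tail = FALSE)
```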

4. Asymptotic Confidence Intervals

In this section, we will study the asymptotic confidence intervals of the unknown parameters by inverting the Fisher information matrix, which is given by the following:
\[
I_G = -\begin{pmatrix}
\dfrac{\partial^2 l_g}{\partial \alpha_1^2} & \dfrac{\partial^2 l_g}{\partial \alpha_1 \partial \alpha_2} & \dfrac{\partial^2 l_g}{\partial \alpha_1 \partial \beta_1} & \dfrac{\partial^2 l_g}{\partial \alpha_1 \partial \beta_2} \\
\dfrac{\partial^2 l_g}{\partial \alpha_2 \partial \alpha_1} & \dfrac{\partial^2 l_g}{\partial \alpha_2^2} & \dfrac{\partial^2 l_g}{\partial \alpha_2 \partial \beta_1} & \dfrac{\partial^2 l_g}{\partial \alpha_2 \partial \beta_2} \\
\dfrac{\partial^2 l_g}{\partial \beta_1 \partial \alpha_1} & \dfrac{\partial^2 l_g}{\partial \beta_1 \partial \alpha_2} & \dfrac{\partial^2 l_g}{\partial \beta_1^2} & \dfrac{\partial^2 l_g}{\partial \beta_1 \partial \beta_2} \\
\dfrac{\partial^2 l_g}{\partial \beta_2 \partial \alpha_1} & \dfrac{\partial^2 l_g}{\partial \beta_2 \partial \alpha_2} & \dfrac{\partial^2 l_g}{\partial \beta_2 \partial \beta_1} & \dfrac{\partial^2 l_g}{\partial \beta_2^2}
\end{pmatrix}
\]
for the general case and by the following:
\[
I_S = -\begin{pmatrix}
\dfrac{\partial^2 l_s}{\partial \alpha_1^2} & \dfrac{\partial^2 l_s}{\partial \alpha_1 \partial \alpha_2} & \dfrac{\partial^2 l_s}{\partial \alpha_1 \partial \beta} \\
\dfrac{\partial^2 l_s}{\partial \alpha_2 \partial \alpha_1} & \dfrac{\partial^2 l_s}{\partial \alpha_2^2} & \dfrac{\partial^2 l_s}{\partial \alpha_2 \partial \beta} \\
\dfrac{\partial^2 l_s}{\partial \beta \partial \alpha_1} & \dfrac{\partial^2 l_s}{\partial \beta \partial \alpha_2} & \dfrac{\partial^2 l_s}{\partial \beta^2}
\end{pmatrix}
\]
for the special case.
The $100(1 - \gamma)\%$ confidence interval for $\alpha_1$ is given by
\[
\hat{\alpha}_1 \pm z_{\gamma/2} \sqrt{\mathrm{var}(\hat{\alpha}_1)},
\]
where $z_{\gamma/2}$ is the critical value from the standard normal distribution and $\mathrm{var}(\hat{\alpha}_1)$ is the first diagonal element of the inverted Fisher information matrix. The asymptotic confidence intervals for the other parameters are obtained in a similar manner.
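In practice the observed information can be taken directly from the numerical optimizer. The sketch below (our own, continuing the earlier optim fit) inverts the Hessian of the negative log-likelihood on the log scale and back-transforms by the delta method; this is one convenient variant of the interval above rather than a line-by-line implementation of it.

```r
# Asymptotic CIs from the observed information returned by optim().
fit <- optim(log(c(1, 1, 1, 1)), negloglik_general, x = dat$x,
             delta = dat$delta, R = dat$R, method = "BFGS", hessian = TRUE)
vcov_log <- solve(fit$hessian)            # inverse observed information (log scale)
mle      <- exp(fit$par)
se       <- mle * sqrt(diag(vcov_log))    # delta method for the original scale
gam      <- 0.05                          # 95% intervals
ci <- cbind(estimate = mle,
            lower = mle - qnorm(1 - gam / 2) * se,
            upper = mle + qnorm(1 - gam / 2) * se)
rownames(ci) <- c("alpha1", "alpha2", "beta1", "beta2")
ci
```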
In the next two subsections, we will present the second derivative of the log-likelihood function with respect to the parameters for the general and special cases, respectively.

4.1. General Case

The second derivatives of the log-likelihood function in the general case are given by the following:
\[
\frac{\partial^2}{\partial \alpha_k^2} l(\alpha_1, \alpha_2, \beta_1, \beta_2) = -\frac{m_k}{\alpha_k^2} - \sum_{i=1}^{m} \left\{I(\delta_i = 3-k) + r_i\right\} \frac{B_{ki}^{-\alpha_k} \left\{\log(B_{ki})\right\}^2}{\left(1 - B_{ki}^{-\alpha_k}\right)^2},
\]
\[
\begin{aligned}
\frac{\partial^2}{\partial \beta_k^2} l(\alpha_1, \alpha_2, \beta_1, \beta_2) = {} & -\frac{m_k}{\beta_k^2} - \sum_{i=1}^{m} (\alpha_k + 1)\, I(\delta_i = k) \frac{x_i^{-\beta_k} \log^2(x_i)}{B_{ki}^2} \\
& - \sum_{i=1}^{m} \left\{I(\delta_i = 3-k) + r_i\right\} \frac{\alpha_k x_i^{-\beta_k} \log^2(x_i)\, B_{ki}^{-(\alpha_k+1)} \left[\left\{(\alpha_k + 1) B_{ki}^{-1} x_i^{-\beta_k} - 1\right\}\left\{1 - B_{ki}^{-\alpha_k}\right\} + \alpha_k B_{ki}^{-(\alpha_k+1)} x_i^{-\beta_k}\right]}{\left(1 - B_{ki}^{-\alpha_k}\right)^2},
\end{aligned}
\]
and
\[
\frac{\partial^2 l}{\partial \alpha_k \partial \beta_k} = \sum_{i=1}^{m} I(\delta_i = k) \frac{x_i^{-\beta_k} \log(x_i)}{B_{ki}} + \sum_{i=1}^{m} \left\{I(\delta_i = 3-k) + r_i\right\} \frac{x_i^{-\beta_k} \log(x_i)\, B_{ki}^{-(\alpha_k+1)} \left(\alpha_k \log B_{ki} + B_{ki}^{-\alpha_k} - 1\right)}{\left(1 - B_{ki}^{-\alpha_k}\right)^2},
\]
for $k = 1, 2$, where $B_{ki} = 1 + x_i^{-\beta_k}$.
Note that $\dfrac{\partial^2 l}{\partial \alpha_1 \partial \alpha_2} = \dfrac{\partial^2 l}{\partial \alpha_1 \partial \beta_2} = \dfrac{\partial^2 l}{\partial \alpha_2 \partial \alpha_1} = \dfrac{\partial^2 l}{\partial \alpha_2 \partial \beta_1} = \dfrac{\partial^2 l}{\partial \beta_1 \partial \alpha_2} = \dfrac{\partial^2 l}{\partial \beta_1 \partial \beta_2} = \dfrac{\partial^2 l}{\partial \beta_2 \partial \alpha_1} = \dfrac{\partial^2 l}{\partial \beta_2 \partial \beta_1} = 0$.

4.2. Special Case

The second derivatives of the log-likelihood function in the special case are given by the following:
\[
\frac{\partial^2}{\partial \alpha_k^2} l(\alpha_1, \alpha_2, \beta) = -\frac{m_k}{\alpha_k^2} - \sum_{i=1}^{m} \left\{I(\delta_i = 3-k) + r_i\right\} \frac{B_i^{-\alpha_k} \left\{\log(B_i)\right\}^2}{\left(1 - B_i^{-\alpha_k}\right)^2},
\]
for $k = 1, 2$.
\[
\begin{aligned}
\frac{\partial^2}{\partial \beta^2} l(\alpha_1, \alpha_2, \beta) = {} & -\frac{m}{\beta^2} + \sum_{i=1}^{m} \sum_{k=1}^{2} (\alpha_k + 1)\, I(\delta_i = k) \frac{x_i^{-\beta} \log^2(x_i) \left(x_i^{-\beta} - B_i\right)}{B_i^2} \\
& - \sum_{i=1}^{m} \sum_{k=1}^{2} \left\{I(\delta_i = 3-k) + r_i\right\} \frac{\alpha_k x_i^{-\beta} \log^2(x_i)\, B_i^{-(\alpha_k+1)} \left[\left\{(\alpha_k + 1) B_i^{-1} x_i^{-\beta} - 1\right\}\left\{1 - B_i^{-\alpha_k}\right\} + \alpha_k B_i^{-(\alpha_k+1)} x_i^{-\beta}\right]}{\left(1 - B_i^{-\alpha_k}\right)^2},
\end{aligned}
\]
and
\[
\frac{\partial^2 l_s}{\partial \alpha_k \partial \beta} = \sum_{i=1}^{m} I(\delta_i = k) \frac{x_i^{-\beta} \log(x_i)}{B_i} - \sum_{i=1}^{m} \left\{I(\delta_i = 3-k) + r_i\right\} \frac{x_i^{-\beta} \log(x_i)\, B_i^{-(\alpha_k+1)} \left[(1 - \alpha_k \log B_i)\left(1 - B_i^{-\alpha_k}\right) - \alpha_k B_i^{-\alpha_k} \log B_i\right]}{\left(1 - B_i^{-\alpha_k}\right)^2},
\]
for $k = 1, 2$.
Note that $\dfrac{\partial^2 l_s}{\partial \alpha_1 \partial \alpha_2} = \dfrac{\partial^2 l_s}{\partial \alpha_2 \partial \alpha_1} = 0$.
In the next two sections, we will compare the asymptotic confidence intervals with the corresponding bootstrap confidence intervals in terms of the lengths and coverage probabilities. We will see that, generally, asymptotic confidence intervals perform better, particularly for larger sample sizes.
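For the bootstrap intervals, one plausible implementation (the text does not pin down the resampling scheme, so treat this as an assumption) is a parametric percentile bootstrap: regenerate progressively censored competing-risks samples from the fitted Dagum model under the same scheme R, refit, and take empirical quantiles. The sketch reuses gen_prog_dagum and negloglik_general from the earlier sketches.

```r
# Parametric percentile bootstrap CIs (one plausible variant, not necessarily
# the exact scheme used in the paper).
boot_ci <- function(mle, R, n, B = 1000, gam = 0.05) {
  est <- matrix(NA_real_, nrow = B, ncol = 4)
  for (b in seq_len(B)) {
    db <- gen_prog_dagum(n, R, alpha1 = mle[1], beta1 = mle[3],
                         alpha2 = mle[2], beta2 = mle[4])
    fb <- optim(log(mle), negloglik_general, x = db$x, delta = db$delta,
                R = db$R, method = "BFGS")
    est[b, ] <- exp(fb$par)
  }
  colnames(est) <- c("alpha1", "alpha2", "beta1", "beta2")
  apply(est, 2, quantile, probs = c(gam / 2, 1 - gam / 2))
}
# Example: boot_ci(mle, R = dat$R, n = 30, B = 1000)
```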

5. Simulation Study

In this section, we evaluate the performance of our proposed methods using extensive Monte Carlo simulations. We used R's pseudo-random number generator with 10,000 Monte Carlo iterations and set the number of bootstrap iterations to 1000.
We studied both the general case of arbitrary parameter values and the special case of common β . Without loss of generality, we considered α 1 = 1.5 , α 2 = 1.2 , β 1 = 0.5 and β 2 = 0.7 for the general case and α 1 = 1.5 , α 2 = 1.2 and β = 0.6 for the special case.
We considered different combinations of the sample size $n \in \{30, 50\}$, the number of failures $m \in \{15, 20, 25\}$, and the censoring scheme R. Table 1 shows the nine censoring schemes considered in this simulation study. For example, $(15, 0^{*14})$ means $(15, 0, 0, \ldots, 0)$, i.e., 15 units are removed at the first failure and none thereafter. Note that schemes 1, 4, and 7 are early censoring, whilst schemes 2, 5, and 8 correspond to conventional Type-II censoring.
For each censoring scheme, we generated a progressively Type-II competing risks censored sample from the Dagum distribution with the assumed parameter values using the algorithm of [41]. We then obtained the maximum likelihood estimates of the parameters using the methods proposed in Section 3, employing the Barzilai–Borwein spectral method, implemented in the ‘BB’ R package, to solve the nonlinear score equations. This method offered enhanced performance and consistent convergence in our case, which ultimately provided more accurate and reliable maximum likelihood estimates. The BB algorithm uses an adaptive, derivative-free step size that implicitly captures curvature information, yielding super-linear convergence for a wide class of smooth functions without requiring explicit Hessian evaluations. In our pilot Monte Carlo experiments, the BB method reduced the total CPU time compared to Newton–Raphson, while maintaining comparable accuracy.
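For reference, a compact R sketch of a sample generator is given below. It follows the uniform-spacings algorithm of Balakrishnan and Sandhu [41] for the progressively censored order statistics of the minimum lifetime and then assigns each failure to cause 1 with probability $f_1 S_2 / (f_1 S_2 + f_2 S_1)$ evaluated at the observed time; the function name and the numerical inversion of the minimum's cdf are our own choices, not taken from the paper.

```r
# Progressively Type-II censored competing-risks sample via the
# Balakrishnan-Sandhu uniform-spacings algorithm [41] (a sketch).
gen_prog_dagum_bs <- function(R, alpha1, beta1, alpha2, beta2) {
  m <- length(R)
  # Steps 1-3: progressively censored order statistics from Uniform(0,1).
  W <- runif(m)
  V <- W^(1 / (seq_len(m) + cumsum(rev(R))))   # exponent i + R_m + ... + R_{m-i+1}
  U <- 1 - cumprod(rev(V))                     # U[1] < ... < U[m]
  # Step 4: invert the CDF of the minimum, F_min(x) = 1 - S1(x) S2(x).
  S    <- function(x, a, b) 1 - (1 + x^(-b))^(-a)
  f    <- function(x, a, b) a * b * x^(-(b + 1)) * (1 + x^(-b))^(-(a + 1))
  Fmin <- function(x) 1 - S(x, alpha1, beta1) * S(x, alpha2, beta2)
  x <- vapply(U, function(u)
    uniroot(function(t) Fmin(t) - u, interval = c(1e-8, 1e8),
            extendInt = "upX")$root, numeric(1))
  # Assign causes given the observed failure times.
  p1 <- f(x, alpha1, beta1) * S(x, alpha2, beta2) /
        (f(x, alpha1, beta1) * S(x, alpha2, beta2) +
         f(x, alpha2, beta2) * S(x, alpha1, beta1))
  delta <- ifelse(runif(m) < p1, 1L, 2L)
  list(x = x, delta = delta, R = R)
}
```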
We assess the accuracy of the maximum likelihood estimates (MLEs) by comparing their absolute bias (AB) and mean squared error (MSE); the estimator with the lowest mean squared error is deemed the best. When evaluating the different interval estimates, we consider the empirical coverage probabilities (CPs).
Table 2 and Table 3 show the maximum likelihood estimates, the mean squared errors, the absolute biases, and the empirical coverage probabilities (approximate and bootstrap) of the parameter estimates for the general and special cases, respectively. For ease of comparison, the corresponding graphs are depicted in Figure 2. We observe that the estimation results are generally satisfactory in terms of MSE and AB for all censoring schemes. However, censoring schemes 3 and 9, which feature a uniform and smooth removal of items, somewhat outperformed the other schemes in terms of both MSE and AB. Moreover, the early censoring schemes (1, 4, and 7) performed better than the other schemes with respect to the MSE for the same values of n and m. Also, note that the relative risk due to cause 1 is $p = \alpha_2 / (\alpha_1 + \alpha_2) = 0.44$, meaning that relatively more failures are due to cause 2 than to cause 1, resulting in a better estimate of the parameter $\alpha_2$, which is very clear in Figure 2.
Moreover, both confidence interval methods produce coverage probabilities above the nominal levels; however, the approximate confidence intervals performed better than the bootstrap ones.
Though this study focuses on the MLE approach, future work may compare its performance against Bayesian estimators or EM-type algorithms, particularly in the presence of model misspecification or small sample sizes.

6. Numerical Example

In this section, we study two numerical examples to illustrate our proposed inferential procedures for the Dagum distribution.
Example 1 (Pneumonia data)
The data concern the impact of hospital-acquired infections in intensive care and comprise a random subsample of 747 patients from the SIR 3 (Spread of Nosocomial Infections and Resistant Pathogens) cohort study conducted at Charité University Hospital in Berlin, Germany (see [42]). The data are also available in the “mvna” R package.
The dataset includes details on pneumonia status at admission, duration of stay in the intensive care unit, and the ICU outcome, which is either hospital death or discharge alive. The competing endpoints are discharge from the unit and death within the unit.
The objective of the study is to examine the impact of pneumonia present at admission on mortality in the unit. Given that pneumonia is a severe illness, it is anticipated that more patients with pneumonia will die compared to those without. Thus, death is the primary event of interest, with discharge serving as the competing event.
Among the 97 patients admitted with pneumonia, 8 were censored before the end of their ICU stay. Therefore, we considered n = 89 patients with pneumonia present on admission, of which 68 were discharged from hospital (failure cause 1) and 21 were dead (failure cause 2).
We checked the suitability of the Dagum distribution for the competing risks, failure cause 1, and failure cause 2 data by performing well-known goodness-of-fit tests. Table 4 provides the estimates of the parameters as well as the values of the Cramér–von Mises goodness-of-fit statistic ($\omega^2$) and the Kolmogorov–Smirnov statistic (D), with their corresponding p-values (in brackets), for the competing risks, failure cause 1, and failure cause 2 data. Note that $\lambda$ denotes the scale parameter of the general Dagum distribution. All the p-values are sufficiently large to conclude that the Dagum distribution is suitable for modeling the failure time data. Figure 3 depicts the plots of the empirical and fitted Dagum CDFs for the competing risks, cause 1, and cause 2 data for the pneumonia example. Additionally, it includes the P-P plots comparing observed cumulative probabilities with expected cumulative probabilities.
We then generated progressively Type-II censored data with effective sample size $m = 50$ and the late censoring scheme $R = (0^{*49}, 39)$ from the complete data of $n = 89$ observations. There were $m_1 = 40$ observations due to failure cause 1 (discharge) and $m_2 = 10$ observations due to failure cause 2 (death).
The test statistic for testing the null hypothesis $H_0: \beta_1 = \beta_2 = \beta$ is $T = 21.98$, with a corresponding p-value of 0.00, which is highly significant. Hence, we used the general model to estimate the unknown parameters. Table 5 shows the maximum likelihood estimates, the asymptotic confidence intervals, and the bootstrap confidence intervals of the unknown parameters $(\alpha_1, \beta_1)$ and $(\alpha_2, \beta_2)$ under different censoring schemes for failure causes 1 and 2, respectively. It can be seen that, in most cases, the asymptotic confidence intervals are shorter than the bootstrap confidence intervals, indicating superior performance in terms of length.
Example 2 (Leukemia data)
The dataset pertains to 177 acute leukemia patients who underwent stem cell transplantation. The competing risks are the incidence of relapse and death due to transplant-related complications. After excluding 46 censored cases, the dataset includes 131 patients, with 56 cases of relapse and 75 cases of competing events.
This dataset was analyzed by [43] to fit a competing risk regression model incorporating covariates such as age, gender, disease phase, and type of transplant. The data are available in the “casebase” R package.
The parameter estimates and the goodness-of-fit results are shown in Table 6. As observed, the null hypothesis that the data for failure cause 1, failure cause 2, and the competing risks follow a Dagum distribution is not rejected in any of the three cases by either test statistic.
Figure 4 depicts the plots of the empirical CDF, fitted Dagum CDF and P-P plot for competing risks, cause 1 and cause 2 for the leukemia data.
Progressively Type-II censored data were generated with an effective sample size of $m = 40$ using the early censoring scheme $R = (91, 0^{*39})$ applied to the complete dataset of size $n = 131$. The resulting data included $m_1 = 18$ observations attributed to failure cause 1 (relapse) and $m_2 = 22$ observations due to the competing event. Note that the sample size $m = 40$ is below the conventional threshold for large-sample inference; therefore, we recommend relying on bootstrap-based intervals to ensure accurate uncertainty quantification.
We tested the null hypothesis of equality of the two shape parameters $\beta_1$ and $\beta_2$. The test statistic was found to be $T = 13.32$ with a p-value equal to 0.003, indicating a high level of significance. Hence, we considered the general case for estimating the parameters. Table 7 presents the maximum likelihood estimates and the asymptotic and bootstrap confidence intervals of the unknown parameters under various censoring schemes.
As shown in Table 7, the asymptotic confidence intervals are generally shorter than the bootstrap confidence intervals for estimating $\alpha_1$ and $\alpha_2$, except under the conventional Type-II censoring scheme, where the bootstrap confidence intervals are shorter. For $\beta_1$, the bootstrap confidence intervals are consistently shorter than the asymptotic ones across all schemes, whereas for $\beta_2$ the asymptotic confidence intervals are shorter under the conventional Type-II censoring scheme and the third (smooth) censoring scheme.

7. Concluding Remarks

This paper proposes statistical inference methods to estimate the parameters of the Dagum distribution under progressively Type-II censored data with independent competing risks, including both point and interval estimation. Monte Carlo simulations demonstrate that the proposed methods perform relatively well. The interval estimation obtained from the asymptotic approach generally outperforms the bootstrap method. Different censoring schemes affect both point and interval estimation, with early censoring schemes yielding the best results; the conventional Type-II censoring scheme also performs relatively well. The special case of a common shape parameter $\beta$ yields the closed form for the reliability parameter $R = P(X_{1j} < X_{2j})$ given in Remark 1. For the general setting of different values of $\beta_1$ and $\beta_2$, the reliability parameter R is given by the following:
\[
R = P(X_{1j} < X_{2j}) = \int_0^{\infty} F_{X_1}(x; \alpha_1, \beta_1)\, f_{X_2}(x; \alpha_2, \beta_2)\, dx = \alpha_2 \beta_2 \int_0^{\infty} \left(1 + x^{-\beta_1}\right)^{-\alpha_1} x^{-(\beta_2+1)} \left(1 + x^{-\beta_2}\right)^{-(\alpha_2+1)} dx.
\]
It is evident that a numerical method is required to evaluate this integral. While this paper focuses on the case $K = 2$, extending our methods to systems with more than two failure modes substantially increases the dimensionality of the likelihood functions, and therefore finding the maximum likelihood estimators becomes more challenging.
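A one-dimensional quadrature is enough for this integral; the R sketch below (ours, with illustrative parameter values) uses base R's integrate and checks the common-$\beta$ case against the closed form of Remark 1.

```r
# Reliability R = P(X1 < X2) for arbitrary Dagum parameters via quadrature.
R_general <- function(alpha1, beta1, alpha2, beta2) {
  integrand <- function(x)
    (1 + x^(-beta1))^(-alpha1) *                                            # F_{X1}(x)
      alpha2 * beta2 * x^(-(beta2 + 1)) * (1 + x^(-beta2))^(-(alpha2 + 1))  # f_{X2}(x)
  integrate(integrand, lower = 0, upper = Inf)$value
}
R_general(1.5, 0.5, 1.2, 0.7)    # general case with different shape parameters
R_general(1.5, 2.5, 1.2, 2.5)    # common beta: equals 1.2 / (1.5 + 1.2) = 0.444
```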
For future work, one could consider dependent failure models by fitting a bivariate distribution, such as the bivariate exponential model or bivariate Dagum distribution (see [44]). Additionally, other censoring schemes, such as hybrid censoring, can be explored.
Moreover, future studies could include the incorporation of covariates using regression models, the development of Bayesian estimation procedures, or the modeling of dependent competing risks via bivariate Dagum or copula-based methods. To enhance reproducibility, the R code used for simulations and data analysis is available upon request.

Author Contributions

Conceptualization, R.P.; methodology, R.P.; software, R.B. and R.P.; original draft preparation, R.B. and R.P.; Supervision, R.P.; writing—review and editing, R.P. and R.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

All data supporting the findings of this study are included within the article.

Acknowledgments

We sincerely thank the anonymous reviewers for their valuable comments and suggestions, which have helped improve the paper. Open Access funding was provided by the Qatar National Library.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Haller, B.; Schmidt, G.; Ulm, K. Applying competing risks regression models: An overview. Lifetime Data Anal. 2013, 19, 33–58. [Google Scholar] [CrossRef]
  2. Lau, B.; Cole, S.R.; Gange, S.J. Competing Risk Regression Models for Epidemiologic Data. Am. J. Epidemiol. 2009, 170, 244–256. [Google Scholar] [CrossRef] [PubMed]
  3. Lee, H.; Ha, H.; Lee, T. Decrement rates and a numerical method under competing risks. Comp. Stat. Data Anal. 2021, 156, 107–125. [Google Scholar] [CrossRef]
  4. Ma, Z.; Krings, A.W. Competing Risks Analysis of Reliability, Survivability, and Prognostics and Health Management (PHM). In Proceedings of the IEEE Aerospace Conference, Big Sky, MT, USA, 1–8 March 2008; pp. 1–21. [Google Scholar]
  5. Herd, R.G. Estimation of Parameters of a Population from a Multi-Censored Sample. Ph.D. Thesis, Iowa State College, Ames, IA, USA, 1956. [Google Scholar]
  6. Cohen, A.C. Progressively censored samples in life testing. Technometrics 1963, 5, 327–329. [Google Scholar] [CrossRef]
  7. Balakrishnan, N.; Aggrawala, R. Progressive Censoring: Theory, Methods, and Applications; Birkhäuser: Boston, MA, USA, 2000. [Google Scholar]
  8. Balakrishnan, N. Progressive censoring methodology: An appraisal (with discussions). TEST 2007, 16, 211–296. [Google Scholar] [CrossRef]
  9. Balakrishnan, N.; Cramer, E. The Art of Progressive Censoring; Statistics for Industry and Technology; Birkhäuser: Boston, MA, USA, 2014. [Google Scholar]
  10. Kundu, D.; Kannan, N.; Balakrishnan, N. Analysis of progressively censored competing risks data. Handb. Stat. 2003, 23, 331–348. [Google Scholar]
  11. Pareek, B.; Kundu, D.; Kumar, S. On progressively censored competing risks data for Weibull distributions. Comput. Stat. Data Anal. 2009, 53, 4083–4094. [Google Scholar] [CrossRef]
  12. Kundu, D.; Pradhan, B. Bayesian analysis of progressively censored competing risks data. Sankhya B 2011, 73, 276–296. [Google Scholar] [CrossRef]
  13. Cramer, E.; Schmiedt, A.B. Progressively type-II censored competing risks data from Lomax distributions. Comput. Stat. Data Anal. 2011, 55, 1285–1303. [Google Scholar] [CrossRef]
  14. Wang, L. Inference of progressively censored competing risks data from Kumaraswamy distributions. J. Comput. Appl. Math. 2018, 343, 719–736. [Google Scholar] [CrossRef]
  15. Qin, X.; Gui, W. Statistical inference of Burr-XII distribution under progressive Type-II censored competing risks data with binomial removals. J. Comput. Appl. Math. 2020, 378, 1–15. [Google Scholar] [CrossRef]
  16. Ren, J.; Gui, W. Statistical analysis of adaptive type-II progressively censored competing risks from Weibull models. Appl. Math. Model. 2021, 98, 323–342. [Google Scholar] [CrossRef]
  17. Hemmati, F.; Khorram, E. Statistical Analysis of the Log-Normal Distribution under Type-II Progressive Hybrid Censoring Schemes. Commun. Stat. Simul. Comput. 2013, 42, 52–75. [Google Scholar] [CrossRef]
  18. Mao, S.; Shi, Y.M.; Sun, Y.D. Exact inference for competing risks model with generalized Type I hybrid censored exponential data. J. Stat. Comput. Simul. 2014, 84, 2506–2521. [Google Scholar] [CrossRef]
  19. Ashour, S.K.; Nassar, M.M.A. Analysis of Generalized Exponential Distribution Under Adaptive Type-II Progressive Hybrid Censored Competing Risks Data. Int. J. Adv. Stat. Prob. 2014, 2, 108–113. [Google Scholar] [CrossRef]
  20. Ahmadi, K.; Yousefzadeh, F.; Rezaei, M. Analysis of progressively Type-I interval censored competing risks data for a class of an exponential distribution. J. Stat. Comput. Simul. 2016, 86, 3629–3652. [Google Scholar] [CrossRef]
  21. Chandra, P.; Mahto, A.K.; Tripathi, Y.M. Inference for a competing risks model with Burr XII distributions under generalized progressive hybrid censoring. Braz. J. Probab. Stat. 2023, 37, 566–595. [Google Scholar] [CrossRef]
  22. Chacko, M.; Mohan, R. Bayesian analysis of Weibull distribution based on progressive type-II censored competing risks data with binomial removals. Comput. Stat. 2019, 34, 233–252. [Google Scholar] [CrossRef]
  23. Wu, M.; Shi, T. Bayes estimation and expected termination time for the competing risks model from Gompertz distribution under progressively hybrid censoring with binomial removals. J. Comput. Appl. Math. 2016, 300, 420–431. [Google Scholar] [CrossRef]
  24. Krishna, H.; Goel, N. Classical and Bayesian Inference in Two Parameter Exponential Distribution with Randomly Censored Data. Comput. Stat. 2018, 33, 249–275. [Google Scholar] [CrossRef]
  25. Abd El-Raheem, A.M.; Hosny, M.; Abu-Mousa, M.H. On Progressive Censored Competing Risks Data: Real Data Application and Simulation study. Mathematics 2021, 9, 1805. [Google Scholar] [CrossRef]
  26. Hassan, A.; Mousa, R.; Abu-Moussa, M. Analysis of Progressive Type-II Competing Risks Data, with Applications. Lobachevskii J. Math. 2022, 43, 2479–2492. [Google Scholar] [CrossRef]
  27. Dutta, S.; Kayal, S. Bayesian and non-Bayesian inference of Weibull lifetime model based on partially observed competing risks data under unified hybrid censoring scheme. Qual. Reliab. Eng. Int. 2022, 38, 3867–3891. [Google Scholar] [CrossRef]
  28. Dutta, S.; Lio, Y.; Kayal, S. Parametric inferences using dependent competing risks data with partially observed failure causes from MOBK distribution under unified hybrid censoring. J. Stat. Comput. Simul. 2023, 94, 376–399. [Google Scholar] [CrossRef]
  29. Tian, Y.; Gui, W. Statistical inference of dependent competing risks from Marshall–Olkin bivariate Burr-XII distribution under complex censoring. Commun. Stat. Simul. Comput. 2022, 53, 2988–3012. [Google Scholar] [CrossRef]
  30. Dutta, S.; Kayal, S. Inference of a competing risks model with partially observed failure causes under improved adaptive type-II progressive censoring. Proc. Inst. Mech. Eng. Part O J. Risk Reliab. 2023, 37, 765–780. [Google Scholar] [CrossRef]
  31. Burr, I.W. Cumulative Frequency Functions. Ann. Math. Stat. 1942, 13, 215–232. [Google Scholar] [CrossRef]
  32. Dagum, C. A new model of personal income distribution: Specification and estimation. Econ. Appl. 1977, 30, 413–437. [Google Scholar] [CrossRef]
  33. Kleiber, C.; Kotz, S. Statistical Size Distributions in Economics and Actuarial Sciences; Wiley: Hoboken, NJ, USA, 2003. [Google Scholar]
  34. Zaevski, T.S.; Kyurkchiev, N. On some mixtures of the Kies distribution. Hacettepe J. Math. Stat. 2024, 53, 1453–1483. [Google Scholar] [CrossRef]
  35. Zaevski, T.S.; Kyurkchiev, N. On some composite Kies families: Distributional properties and saturation in Hausdorff sense. Mod. Stochastics Theory Appl. 2023, 10, 287–312. [Google Scholar] [CrossRef]
  36. Fedotenkov, I. A review of more than one hundred Pareto-tail index estimators. Statistica 2020, 80, 245–299. [Google Scholar]
  37. Cheng, T.; Peng, X.; Choiruddin, A.; He, X.; Chen, K. Environmental extreme risk modeling via sub-sampling block maxima. arXiv 2025, arXiv:2506.14556. [Google Scholar]
  38. Mokhlis, N.A. Reliability of a stress-strength model with Burr type III distributions. Commun. Stat Theory Methods 2005, 34, 1643–1657. [Google Scholar] [CrossRef]
  39. Domma, F.; Giordano, S. A copula-based approach to account for dependence in stress-strength models. Stat. Pap. 2013, 54, 807–826. [Google Scholar] [CrossRef]
  40. Barzilai, J.; Borwein, J.M. Two-Point Step Size Gradient Methods. IMA J. Num. Anal. 1988, 8, 141–148. [Google Scholar] [CrossRef]
  41. Balakrishnan, N.; Sandhu, R.A. A simple simulational algorithm for generating progressive Type-II censored samples. Amer. Statist. 1995, 49, 229–230. [Google Scholar] [CrossRef]
  42. Wolkewitz, M.; Vonberg, R.P.; Grundmann, H.; Beyersmann, J.; Gastmeier, P.; Bärwolff, S.; Geffers, C.; Behnke, M.; Rüden, H.; Schumacher, M. Risk factors for the development of nosocomial pneumonia and mortality on intensive care units: Application of competing risks models. Crit. Care 2008, 12, 1–9. [Google Scholar] [CrossRef]
  43. Scrucca, L.; Santucci, A.; Aversa, F. Regression modeling of competing risk using R: An in depth guide for clinicians. Bone Marrow Transp. 2010, 45, 1388–1395. [Google Scholar] [CrossRef]
  44. Shih, J.H.; Emura, T. Likelihood-based inference for bivariate latent failure time models with competing risks under the generalized FGM copula. Comput. Stat. 2018, 33, 1293–1323. [Google Scholar] [CrossRef]
Figure 1. The probability density function for α 1 = 0.5 , α 2 = 0.8 and β = 2.5 under different failure causes.
Figure 2. MSE and AB of the MLEs of parameters in the general and special cases for various censoring schemes.
Figure 3. The empirical and fitted CDF, PDF and P-P plots for competing risks and failure causes 1 and 2 for pneumonia data.
Figure 4. The empirical and fitted CDF, PDF and P-P plots for competing risks and failure causes 1 and 2 for leukemia data.
Table 1. Censoring schemes.
Scheme No | n | m | Censoring Scheme
1 | 30 | 15 | (15, 0*14)
2 | 30 | 15 | (0*14, 15)
3 | 30 | 15 | (1*15)
4 | 30 | 25 | (5, 0*24)
5 | 30 | 25 | (0*24, 5)
6 | 30 | 25 | (0*10, 1*5, 0*10)
7 | 50 | 20 | (30, 0*19)
8 | 50 | 20 | (0*19, 30)
9 | 50 | 20 | (3, 0, 3, 0, …, 3, 0)
Table 2. Maximum likelihood estimates, MSE, AB and empirical coverage probabilities for the estimate of parameters in the general case.
Scheme No | Parameter | ML Estimate | MSE | AB | CP (Approx.) | CP (Boot.)
1 | α1 | 1.5471 | 0.0077 | 0.0471 | 0.9990 | 0.9992
1 | α2 | 1.2344 | 0.0070 | 0.0344 | 0.9999 | 0.9968
1 | β1 | 0.5866 | 0.0189 | 0.0866 | 0.8542 | 0.7569
1 | β2 | 0.7017 | 0.0507 | 0.0017 | 0.9998 | 0.7218
2 | α1 | 1.8846 | 0.0386 | 0.1363 | 0.9816 | 0.9278
2 | α2 | 1.7031 | 0.0403 | 0.3284 | 0.9666 | 0.9791
2 | β1 | 0.6531 | 0.0258 | 0.1350 | 0.8611 | 0.6990
2 | β2 | 1.2948 | 0.1525 | 0.2618 | 0.9809 | 0.5711
3 | α1 | 1.5180 | 0.0087 | 0.0180 | 0.9995 | 0.9821
3 | α2 | 1.2754 | 0.0082 | 0.0754 | 0.9988 | 0.9951
3 | β1 | 0.5513 | 0.0164 | 0.0513 | 0.9160 | 0.7401
3 | β2 | 0.8148 | 0.0499 | 0.1148 | 0.9997 | 0.6707
4 | α1 | 1.5473 | 0.0067 | 0.0473 | 0.9688 | 0.9990
4 | α2 | 1.2291 | 0.0069 | 0.0291 | 0.9990 | 0.9958
4 | β1 | 0.5978 | 0.0169 | 0.0978 | 0.7920 | 0.7347
4 | β2 | 0.6886 | 0.0372 | 0.0114 | 0.9994 | 0.6989
5 | α1 | 1.5416 | 0.0069 | 0.0416 | 0.9810 | 0.9987
5 | α2 | 1.2316 | 0.0090 | 0.0316 | 0.9924 | 0.9796
5 | β1 | 0.5935 | 0.0179 | 0.0937 | 0.8228 | 0.7292
5 | β2 | 0.7007 | 0.0497 | 0.0007 | 0.9927 | 0.7271
6 | α1 | 1.5414 | 0.0067 | 0.0414 | 0.9825 | 0.9992
6 | α2 | 1.2306 | 0.0072 | 0.0306 | 0.9976 | 0.9974
6 | β1 | 0.5900 | 0.0167 | 0.0900 | 0.8187 | 0.7386
6 | β2 | 0.7072 | 0.0428 | 0.0033 | 0.9978 | 0.6847
7 | α1 | 1.5479 | 0.0070 | 0.0479 | 0.9887 | 0.9986
7 | α2 | 1.2338 | 0.0057 | 0.0338 | 1.000 | 0.9990
7 | β1 | 0.5898 | 0.0159 | 0.0989 | 0.7711 | 0.7253
7 | β2 | 0.6967 | 0.0358 | 0.0033 | 0.9999 | 0.7097
8 | α1 | 2.1848 | 0.4488 | 0.2702 | 0.8920 | 0.8086
8 | α2 | 2.1642 | 0.5328 | 0.3790 | 0.8007 | 0.9256
8 | β1 | 0.7217 | 0.0366 | 0.1288 | 0.7599 | 0.6873
8 | β2 | 1.7731 | 0.7799 | 0.3870 | 0.8848 | 0.4170
9 | α1 | 1.4640 | 0.0339 | 0.0360 | 0.9993 | 0.9460
9 | α2 | 1.2881 | 0.0351 | 0.0881 | 0.9995 | 0.9969
9 | β1 | 0.5348 | 0.0211 | 0.0348 | 0.8385 | 0.7211
9 | β2 | 0.8460 | 0.1159 | 0.1460 | 1.000 | 0.6167
Table 3. Maximum likelihood estimates, MSE, AB and empirical coverage probabilities for the estimate of parameters in the special case.
Scheme No | Parameter | ML Estimate | MSE | AB | CP (Approx.) | CP (Boot.)
1 | α1 | 1.5764 | 0.0092 | 0.0764 | 0.9996 | 1.0000
1 | α2 | 1.2309 | 0.0044 | 0.0309 | 1.0000 | 0.9999
1 | β | 0.5145 | 0.0150 | 0.0855 | 1.0000 | 0.8162
2 | α1 | 1.5579 | 0.0079 | 0.0579 | 1.0000 | 0.9999
2 | α2 | 1.2374 | 0.0056 | 0.0374 | 1.0000 | 0.9999
2 | β | 0.5915 | 0.0138 | 0.0085 | 1.0000 | 0.9610
3 | α1 | 1.5604 | 0.0071 | 0.0604 | 1.0000 | 1.0000
3 | α2 | 1.2315 | 0.0050 | 0.0315 | 1.0000 | 0.9998
3 | β | 0.5756 | 0.0110 | 0.0244 | 1.0000 | 0.9638
4 | α1 | 1.5919 | 0.0129 | 0.0919 | 0.9987 | 1.000
4 | α2 | 1.2324 | 0.0045 | 0.0324 | 1.000 | 1.000
4 | β | 0.4806 | 0.0190 | 0.1194 | 0.9905 | 0.5933
5 | α1 | 1.5982 | 0.0152 | 0.0982 | 1.0000 | 1.0000
5 | α2 | 1.2342 | 0.0053 | 0.0342 | 1.0000 | 1.0000
5 | β | 0.4876 | 0.0182 | 0.1124 | 0.9996 | 0.6423
6 | α1 | 1.5858 | 0.0115 | 0.0858 | 0.9997 | 1.0000
6 | α2 | 1.2301 | 0.0047 | 0.0301 | 1.0000 | 1.0000
6 | β | 0.4971 | 0.0159 | 0.1029 | 1.0000 | 0.7075
7 | α1 | 1.5827 | 0.0099 | 0.0827 | 1.0000 | 0.9999
7 | α2 | 1.2241 | 0.0038 | 0.0241 | 1.0000 | 0.9996
7 | β | 0.5063 | 0.0141 | 0.0937 | 1.0000 | 0.7265
8 | α1 | 1.5500 | 0.0050 | 0.0500 | 1.0000 | 0.9999
8 | α2 | 1.2511 | 0.0059 | 0.0511 | 1.0000 | 0.9996
8 | β | 0.6348 | 0.0102 | 0.0348 | 1.0000 | 0.9938
9 | α1 | 1.5586 | 0.0062 | 0.0586 | 1.0000 | 1.0000
9 | α2 | 1.2341 | 0.0047 | 0.0341 | 1.0000 | 0.9999
9 | β | 0.5991 | 0.0071 | 0.0009 | 1.0000 | 0.9837
Table 4. The MLEs of the parameters, the gof test statistics and the corresponding p-values (in brackets) for the pneumonia data when testing for the Dagum distribution.
Failure Cause | $\hat{\alpha}$ | $\hat{\beta}$ | $\hat{\lambda}$ | $\omega^2$ (p-Value) | D (p-Value)
Competing risk | 0.65 | 2.76 | 29.54 | 0.0515 (0.8680) | 0.0810 (0.6026)
Cause 1 | 0.43 | 3.71 | 33.68 | 0.0388 (0.9455) | 0.0737 (0.8536)
Cause 2 | 0.98 | 1.91 | 27.01 | 0.0318 (0.9728) | 0.1044 (0.9762)
Table 5. The point and interval estimates of the parameters for the pneumonia data under different censoring schemes.
Parameter | Scheme | MLE | Asymptotic CI | Bootstrap CI
$\hat{\alpha}_1$ | R = (0*49, 39) | 0.4431 | (0.3480, 0.5383) | (0.0649, 0.8064)
$\hat{\alpha}_1$ | R = (39, 0*49) | 0.6174 | (0.4889, 0.7460) | (0.1915, 0.9705)
$\hat{\alpha}_1$ | R = (1*39, 0*11) | 0.7237 | (0.5622, 0.8852) | (0.5659, 1.1051)
$\hat{\alpha}_2$ | R = (0*49, 39) | 1.2127 | (0.9171, 1.5083) | (0.9203, 1.8105)
$\hat{\alpha}_2$ | R = (39, 0*49) | 1.1567 | (0.9515, 1.3618) | (0.8555, 1.5977)
$\hat{\alpha}_2$ | R = (1*39, 0*11) | 0.9763 | (0.7611, 1.1915) | (0.6298, 1.2774)
$\hat{\beta}_1$ | R = (0*49, 39) | 3.6010 | (3.4133, 3.8067) | (3.5409, 3.9979)
$\hat{\beta}_1$ | R = (39, 0*49) | 3.6437 | (2.9231, 4.3643) | (3.5010, 4.2016)
$\hat{\beta}_1$ | R = (1*39, 0*11) | 3.9262 | (3.6501, 4.2023) | (3.8772, 4.4468)
$\hat{\beta}_2$ | R = (0*49, 39) | 2.1934 | (1.8898, 2.4970) | (1.9473, 2.5732)
$\hat{\beta}_2$ | R = (39, 0*49) | 2.0785 | (1.8786, 2.7357) | (1.4084, 2.8559)
$\hat{\beta}_2$ | R = (1*39, 0*11) | 1.6731 | (1.2871, 2.0591) | (1.2533, 2.1004)
Table 6. The MLEs of the parameters, the gof test statistics and the corresponding p-values (in brackets) for the leukemia data when testing for the Dagum distribution.
Failure Cause | $\hat{\alpha}$ | $\hat{\beta}$ | $\hat{\lambda}$ | $\omega^2$ (p-Value) | D (p-Value)
Competing risk | 0.55 | 2.55 | 6.57 | 0.0363 (0.9518) | 0.0434 (0.9657)
Cause 1 | 6.65 | 1.44 | 0.94 | 0.1119 (0.5312) | 0.1161 (0.4369)
Cause 2 | 0.31 | 3.62 | 8.14 | 0.0353 (0.9568) | 0.0589 (0.9571)
Table 7. The point and interval estimates of the parameters for the leukemia data under different censoring schemes.
Parameter | Scheme | MLE | Asymptotic CI | Bootstrap CI
$\hat{\alpha}_1$ | R = (0*39, 91) | 0.7151 | (0.5527, 0.8776) | (0.7705, 0.9721)
$\hat{\alpha}_1$ | R = (91, 0*39) | 0.6406 | (0.5539, 0.7272) | (0.5467, 0.8498)
$\hat{\alpha}_1$ | R = (3*30, 1, 0*9) | 0.9020 | (0.7144, 1.0896) | (0.7444, 1.3072)
$\hat{\alpha}_2$ | R = (0*39, 91) | 0.8632 | (0.6911, 1.0352) | (0.7383, 0.9778)
$\hat{\alpha}_2$ | R = (91, 0*39) | 1.4165 | (1.2937, 1.5393) | (1.1491, 1.8382)
$\hat{\alpha}_2$ | R = (3*30, 1, 0*9) | 0.5019 | (0.4129, 0.5907) | (0.0809, 0.8638)
$\hat{\beta}_1$ | R = (0*39, 91) | 3.9742 | (3.6741, 4.0743) | (3.9771, 4.2053)
$\hat{\beta}_1$ | R = (91, 0*39) | 3.7955 | (3.2697, 4.3213) | (3.9015, 4.1460)
$\hat{\beta}_1$ | R = (3*30, 1, 0*9) | 3.9700 | (3.4949, 4.4451) | (4.0222, 4.5507)
$\hat{\beta}_2$ | R = (0*39, 91) | 1.8320 | (1.7128, 1.9513) | (1.7341, 2.0731)
$\hat{\beta}_2$ | R = (91, 0*39) | 1.4426 | (0.7436, 2.1417) | (1.0814, 1.9950)
$\hat{\beta}_2$ | R = (3*30, 1, 0*9) | 1.9609 | (1.7351, 2.1867) | (1.6369, 2.2743)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
