Article

An Exponentiated Inverse Exponential Distribution: Properties and Applications

by Aroosa Mushtaq 1, Tassaddaq Hussain 2, Mohammad Shakil 3,*, Mohammad Ahsanullah 4 and Bhuiyan Mohammad Golam Kibria 5

1 Department of Mathematics, Kings College, Mirpur 10250, Pakistan
2 Department of Statistics, Mirpur University of Science and Technology, Mirpur 10250, Pakistan
3 Department of Mathematics, Miami Dade College, Hialeah, FL 33012, USA
4 Department of Management Sciences, Rider University, Lawrenceville, NJ 08648, USA
5 Department of Mathematics & Statistics, Florida International University, Miami, FL 33199, USA
* Author to whom correspondence should be addressed.
Axioms 2025, 14(10), 753; https://doi.org/10.3390/axioms14100753
Submission received: 9 July 2025 / Revised: 21 September 2025 / Accepted: 26 September 2025 / Published: 3 October 2025
(This article belongs to the Special Issue Probability, Statistics and Estimations, 2nd Edition)

Abstract

This paper introduces the Exponentiated Inverse Exponential Distribution (EIED), a novel probability model developed within the power inverse exponential distribution framework. A distinctive feature of the EIED is its highly flexible hazard rate function, which can exhibit increasing, decreasing, and reverse bathtub (upside-down bathtub) shapes, making it suitable for modeling diverse lifetime phenomena in reliability engineering, survival analysis, and risk assessment. We derive comprehensive statistical properties of the distribution, including the reliability and hazard functions, moments, characteristic and quantile functions, moment generating function, mean deviations, Lorenz and Bonferroni curves, and various entropy measures. The identifiability of the model parameters is rigorously established, and maximum likelihood estimation is employed for parameter inference. Through extensive simulation studies, we demonstrate the robustness of the estimation procedure across different parameter configurations. The practical utility of the EIED is validated through applications to real-world datasets, where it shows superior performance compared to existing distributions. The proposed model offers enhanced flexibility for modeling complex lifetime data with varying hazard patterns, particularly in scenarios involving early failure periods, wear-in phases, and wear-out behaviors.

1. Introduction

The field of statistics provides a vast and diverse toolkit of distributions for modeling real-world random events in disciplines ranging from medical science and engineering to economics and finance. A common and fruitful research practice involves the development of novel lifetime distributions by extending well-established baseline models. A pivotal contribution in this area came from [1,2,3]; in particular, Ref. [2] introduced a powerful generative technique. This method defines the cumulative distribution function (CDF) of a new distribution, G_1(y), through a specific relationship with the CDF G(y) of a baseline model:
G_1(y) = [G(y)]^α.
In this formulation, the parameter α > 0 serves as a new shape parameter; consequently, the new distribution simplifies directly to the original baseline model when α = 1 . Spurred by this foundational work, numerous researchers have expanded the repertoire of available distributions. For instance, Ref. [4] constructed four exponentiated-type distributions that generalize the standard gamma, Weibull, Gumbel, and Fréchet models, providing a comprehensive analysis of their mathematical properties. Similarly, Nadarajah [5] proposed an exponentiated standard Gumbel distribution, anticipating that its application in climate modeling would be as broad as that of its standard predecessor. Ref. [6] introduces a new family of distributions with applications to flood data, Hassan et al. [7] extends exponentiated inverse distributions with engineering applications, and Korkmaz et al. [8] combines exponentiation with Poisson-generated distributions for lifetime modeling. Many other such generalizations are well-documented in the literature.
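The exponentiation technique above is easy to make concrete in code. The following is a minimal Python sketch (the helper name `exponentiate` and the standard-exponential baseline are our own illustrative choices, not from the paper):

```python
import math

def exponentiate(G, alpha):
    """Return the exponentiated CDF G1(y) = [G(y)]**alpha for a baseline CDF G."""
    return lambda y: G(y) ** alpha

# Baseline: standard exponential CDF G(y) = 1 - exp(-y).
G = lambda y: 1.0 - math.exp(-y)

# A new one-extra-parameter lifetime model with shape alpha = 2.5.
G1 = exponentiate(G, 2.5)

# alpha = 1 recovers the baseline model exactly, as stated in the text.
print(G1(1.0), exponentiate(G, 1.0)(1.0), G(1.0))
```

Since G maps into [0, 1] and α > 0, the returned function is again a valid, monotone CDF.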
Parallel to these efforts, Elbatal et al. [9] established an alternative methodology. Their model defines a new CDF through a quadratic function of a baseline CDF:
G_1(x) = (1 + λ)G(x) − λ[G(x)]², |λ| ≤ 1.
A defining characteristic of this approach is its reversion to the original baseline distribution whenever the parameter λ = 0. In a separate development, Kumar et al. [10] devised a distinct approach that employs trigonometric functions. Their technique defines a new distribution using the CDF F(x) = sin((π/2)G(x)) and the corresponding PDF f(x) = (π/2)g(x)cos((π/2)G(x)), where G(x) and g(x) are the baseline CDF and PDF, respectively. The ongoing need to model complex tail behavior has recently driven the creation of many new transformations by well-known researchers [11,12,13,14,15]. Among these, the DUS transformation proposed by Kumar et al. [11] offers another method for generalization, defined by the CDF G_1(x) = (e^{G(x)} − 1)/(e − 1) and the PDF g_1(x) = (1/(e − 1))g(x)e^{G(x)}.
Given that statistical distributions are essential for modeling lifetime data in reliability engineering and survival analysis, there is a persistent need for more flexible models. Many existing distributions cannot adequately capture complex hazard rate patterns, such as bathtub curves or mixed failure behaviors. To address these limitations, we introduce the Exponentiated Inverse Exponential Distribution (EIED), a versatile model within the power-transformed family. Our construction uses the inverse exponential distribution as a baseline, defined for x ≥ 0, θ > 0 by the CDF G(x) = e^{−θ/x} and the PDF g(x) = (θ/x²)e^{−θ/x}.
The rest of the paper is structured as follows: Section 2 derives the EIED and its properties, along with a comprehensive comparison of competing inverted models; Section 3 deals with characterization; Section 4 details our estimation and simulation study; Section 5 provides the material and methods; Section 6 applies the EIED to engineering and global flood datasets; Section 7 ends with our conclusions.

2. Derivation, Discussion, and Statistical Properties

The proposed model was constructed using a link function D(·): [0, ∞) → [0, 1], which is required to be differentiable, monotonically non-decreasing, and to satisfy the boundary conditions
D(x) → 0 as x → 0, D(x) → 1 as x → ∞.
For this study, we adopted the specific form
D(u) = (e^u − 1)^α/(e − 1)^α, α > 0,
where u is a positive, monotone transformation of the baseline variable. In the case of the proposed Exponentiated Inverse Exponential Distribution (EIED), we chose
u(x) = e^{−θ/x}, x > 0, θ > 0.
Substituting u ( x ) into D ( · ) , the cumulative distribution function (CDF) of the EIED is obtained as
F(x) = D(u(x)) = (e^{e^{−θ/x}} − 1)^α/(e − 1)^α, x > 0.
It is straightforward to verify that F(0⁺) = 0 and lim_{x→∞} F(x) = 1, while F(x) is strictly increasing and differentiable for x > 0. Therefore, F(x) defines a valid distribution function, with its corresponding probability density function (PDF) given by
f(x) = (α/(e − 1)^α)(e^{e^{−θ/x}} − 1)^{α−1} e^{e^{−θ/x}} e^{−θ/x} (θ/x²), x > 0. (1)
Here, α is the shape parameter, and θ is the scale parameter. Notably, when α = 1 , Equation (1) simplifies to the DUS inverse exponential distribution introduced by Kumar et al. [11], resulting in a model that is both mathematically tractable and computationally feasible. The associated hazard function, which describes the instantaneous failure rate, is expressed as
h(x) = [α(e^{e^{−θ/x}} − 1)^{α−1} e^{e^{−θ/x}} e^{−θ/x} (θ/x²)] / [(e − 1)^α − (e^{e^{−θ/x}} − 1)^α].
A key advantage of EIED is its highly flexible hazard rate function, which is capable of exhibiting increasing, decreasing, and reverse bathtub shapes, making it exceptionally well-suited for modeling complex lifetime data. Figure 1 illustrates the PDF and HRFs of EIED for various parameter settings. On the left side, the PDFs demonstrate the effect of the shape parameter α with a fixed scale θ = 1 . For α = 0.5 , the density is high (near zero) and decreases rapidly, indicating a higher probability of small values. As α increases to 1 and 3, the peak becomes less sharp, and the distribution spreads out more, showing that larger α values produce flatter and more dispersed densities.
On the right side, the HRFs reveal how the hazard rate varies with different α values. For α = 0.5 , the hazard rate decreases over x, indicating a decreasing failure risk over time. For higher α values, the hazard functions decrease more gradually or tend to stabilize, reflecting different failure or event risk behaviors depending on the parameters.
Overall, these plots illustrate how the parameters α and θ influence the shape and hazard characteristics of EIED, emphasizing its flexibility in modeling various failure and event rate scenarios.
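For reference, the CDF, PDF, and hazard rate above translate directly into code. The following is a minimal Python sketch (function names are our own):

```python
import math

def eied_cdf(x, alpha, theta):
    """EIED CDF: F(x) = [(e^{e^{-theta/x}} - 1) / (e - 1)]^alpha."""
    return ((math.exp(math.exp(-theta / x)) - 1) / (math.e - 1)) ** alpha

def eied_pdf(x, alpha, theta):
    """EIED PDF obtained by differentiating F(x) with respect to x."""
    u = math.exp(-theta / x)            # baseline inverse exponential CDF e^{-theta/x}
    return (alpha / (math.e - 1) ** alpha
            * (math.exp(u) - 1) ** (alpha - 1)
            * math.exp(u) * u * theta / x ** 2)

def eied_hazard(x, alpha, theta):
    """Instantaneous failure rate h(x) = f(x) / (1 - F(x))."""
    return eied_pdf(x, alpha, theta) / (1.0 - eied_cdf(x, alpha, theta))
```

A quick finite-difference comparison of `eied_pdf` against `eied_cdf` confirms that the density is the derivative of the distribution function.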

2.1. Shapes

In this section, we discuss the shapes of the PDF and hazard function of the EIED distribution. By solving the equation
∂ln f(x)/∂x = (α − 1)e^{−θ/x}(θ/x²) + e^{−θ/x}(θ/x²) + (θ/x²) − 2/x = 0.
we can obtain the mode of the distribution. Moreover, it is also observed that (ln f(x))′ → ∞ as x → 0⁺ and (ln f(x))′ → 0 as x → ∞, which imply that d/dx ln f(x) monotonically decreases from ∞ to 0 for all combinations of the parameter values. Therefore, the EIED distribution has a unique mode at some x₀ with 0 < x₀ < ∞. However, if α < 1 (α > 1) and θ → 0 (θ → ∞), then d/dx ln f(x) < 0 (> 0) for all x, so f is monotonically decreasing (increasing) for all x.
dh(x)/dx = (α − 1)e^{−θ/x}(θ/x²) + e^{−θ/x}(θ/x²) + (θ/x²) − 2/x + αe^{−θ/x}(θ/x²) = 0. (3)
To identify the modes of the hazard function (e.g., a maximum, minimum, or inflection point), one must solve Equation (3). The nature of these modes is subsequently revealed by evaluating the second derivative.
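The mode equation above rarely admits a closed-form root, so in practice the mode is located numerically. A small sketch, using a ternary search on the log-density (valid under the unimodality argued above; the search bracket [1e-3, 50] is our own assumption):

```python
import math

def eied_logpdf(x, alpha, theta):
    """ln f(x) for the EIED density."""
    u = math.exp(-theta / x)
    return (math.log(alpha) - alpha * math.log(math.e - 1)
            + (alpha - 1) * math.log(math.exp(u) - 1)
            + u - theta / x + math.log(theta) - 2.0 * math.log(x))

def eied_mode(alpha, theta, lo=1e-3, hi=50.0, iters=300):
    """Ternary search for the maximizer of ln f (requires unimodality)."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if eied_logpdf(m1, alpha, theta) < eied_logpdf(m2, alpha, theta):
            lo = m1          # maximum lies to the right of m1
        else:
            hi = m2          # maximum lies to the left of m2
    return 0.5 * (lo + hi)
```

For monotone cases (small α with small θ) the search simply returns a boundary of the bracket, mirroring the monotone-density discussion above.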

2.2. A Comparison of Inverse Distributions

Inverse PDFs are significant because they provide greater modeling flexibility, especially for extreme events, heavy tails, and non-monotonic hazard structures, which classical PDFs often fail to capture. They also have deep theoretical roots in stochastic processes and Bayesian analysis, making them both practically useful and mathematically rich. In this regard, we compared the Inverted Exponential (IE), Inverse Rayleigh (IR), Modified Inverse Rayleigh (MIR), Modified Inverse Weibull (MIW), and New Generalized Inverse Rayleigh (NGIR) distributions with the Exponentiated Inverse Exponential Distribution (EIED).
  • The IE distribution is characterized by a rapid rise from the origin to a single, sharp peak, followed by a gradual, polynomial decay. The term λ/x² dominates for larger x, dictating the rate of decay. The scale parameter λ controls the location of the peak; a larger λ shifts the peak to the right.
  • The IR distribution has a very sharp peak close to the origin, even more pronounced than the IE. Its decay is governed by (2σ²/x³)e^{−σ²/x²}. The cubic term x^{−3} and the exponential term e^{−σ²/x²} cause it to vanish much faster than the IE as x increases.
  • MIR is the exponentiated version of the IR distribution, with α adding flexibility to the shape of the IR. When α = 1 , it reduces to the standard IR. For α > 1 , the peak shifts rightward, flattening and widening, which models data less concentrated near zero than the IR. For α < 1 , the peak’s height and sharpness increase, making the distribution more concentrated near the origin.
  • MIW offers high flexibility due to its additional shape parameter β, which influences the core Weibull shape independent of α. The polynomial decay rate is determined by x^{−(β+1)}, while the exponential term e^{−(γ/x)^β} can significantly alter the tail behavior depending on β. The parameter α provides further flexibility on top of this base shape.
  • The key innovation of NGIR is its additive structure in the exponent: α/x + λ/x². This form allows it to blend characteristics of the IE and IR families seamlessly. The parameter β acts as an exponentiation parameter, adding flexibility around this new baseline. The distribution can mimic IE (by setting λ = 0), MIR (by setting α = 0), or any hybrid thereof, resulting in a highly versatile range of shapes.
  • EIED is a flexible extension of IE. The new shape parameter α “stretches” the baseline IE distribution. When α = 1 , it reduces to the standard IE. For α > 1 , the PDF shifts to the right, with the peak becoming lower and wider, allocating more probability mass to a larger x. Conversely, when α < 1 , the PDF becomes more concentrated near zero, producing a higher, sharper peak.

2.2.1. Common Characteristics of Inverse Distributions

The CDFs of inverted competing models are listed in Table A8, and the asymptotic structure is portrayed in Table A2, showing that all of the competing models exhibit polynomial decay. This is a fundamental characteristic of inverse distributions. They are constructed by taking the reciprocal of a random variable with support on ( 0 , ) , which naturally leads to polynomially decaying tails.
  • Tail behavior of inverse distributions: However, not all inverse distributions have tails of equal heaviness. The “weight” of the tail is determined by the rate of polynomial decay: distributions such as IE, EIED, and NGIR exhibit the heaviest tails, as their survival functions decay at the slowest rate, with S(x) ∼ constant/x. Lighter yet still heavy tails are observed in the IR and MIR distributions, where the survival function decays as S(x) ∼ constant/x². Consequently, an extreme value is less likely to be observed from an IR distribution than from an IE distribution with a comparable scale.
  • Tail weight hierarchy: The MIW distribution is unique in allowing the tail weight to be tuned via its shape parameter β. Depending on β, its tail can be heavier than IR (β < 2), similar to IR (β = 2), or lighter (β > 2) while still belonging to the heavy-tailed family. For example, when β = 3, the survival function behaves as S(x) ∼ constant/x³, making its tail lighter than that of IR but still heavier than any exponential-tailed distribution. So, the hierarchy of tail weight among these heavy-tailed models generally follows IE, EIED, NGIR > MIW with β < 2 > IR, MIR > MIW with β > 2.
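The claimed ∼1/x decay of the EIED survival function can be checked numerically. Expanding F(x) for large x suggests x·S(x) → αeθ/(e − 1); this limiting constant is our own calculation, offered as a plausibility check rather than a result quoted from the paper:

```python
import math

def eied_sf(x, alpha, theta):
    """Survival function S(x) = 1 - F(x) of the EIED."""
    return 1.0 - ((math.exp(math.exp(-theta / x)) - 1) / (math.e - 1)) ** alpha

alpha, theta = 1.5, 2.0
limit = alpha * math.e * theta / (math.e - 1)   # conjectured lim x * S(x)
for x in (1e2, 1e3, 1e4):
    print(x, x * eied_sf(x, alpha, theta))       # should approach `limit`
```

The product x·S(x) stabilizing at a positive constant is exactly the polynomial-decay signature discussed in the bullet list above.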

2.2.2. Comparison and Conclusions

From the above discussion, we can conclude that all models are capable of representing the unimodal (upside-down bathtub) failure rate, which is common in reliability engineering for items that have a higher risk of failure during an initial “burn-in” phase, followed by a period of lower and decreasing risk. The key difference lies in their flexibility to model the diverse manifestations of this unimodal shape found in real-world data. The MIW and NGIR models, with three parameters, offer the greatest capability to fit complex hazard rate curves (see Table A4). Moreover, the asymptotic behavior, as portrayed in Table A3, of all competing models exhibits heavy-tail behavior with polynomial decay. Specifically, IE, EIED, and NGIR have the heaviest tails, characterized by a decay rate of approximately ∼ 1 / x . IR and MIR possess lighter but still heavy tails, with a decay rate of around ∼ 1 / x 2 . The MIW distribution features tunable tail behavior, controlled by the parameter β . Regarding hazard shape, all HRFs display unimodal (upside-down bathtub) shapes. Among these, MIW and NGIR offer the highest flexibility for modeling complex data patterns.

2.3. Identifiability of EIED

The parameters ( α , θ ) of the model are identifiable by analyzing the behavior of the CDF at its boundaries and leveraging the distribution’s functional structure. This approach involves the following steps: (i) rewriting the CDF to explore how it depends on the parameters; (ii) investigating the tail behavior as x 0 + and x to establish constraints on the parameters; (iii) utilizing the functional form to demonstrate that distinct parameter values produce different distributions; and (iv) understanding that exponentiating an identifiable base distribution preserves its identifiability.
Definition 1.
A parametric model {f(x; θ) : θ ∈ Θ} is identifiable if
f(x; θ₁) = f(x; θ₂) for all x ⟹ θ₁ = θ₂.
Theorem 1.
The parameters α and θ in the EIED distribution are identifiable; that is, if
f ( x ; α 1 , θ 1 ) = f ( x ; α 2 , θ 2 ) for all x > 0 ,
then necessarily
α 1 = α 2 and θ 1 = θ 2 .
Proof. 
Suppose there exist two parameter pairs, (α, θ) and (α′, θ′), such that their corresponding densities are equal for all x > 0: f(x; α, θ) = f(x; α′, θ′). Given the cumulative distribution function (CDF):
F(x) = [(e^{e^{−θ/x}} − 1)/(e − 1)]^α,
the equality of distributions implies
[(e^{e^{−θ/x}} − 1)/(e − 1)]^α = [(e^{e^{−θ′/x}} − 1)/(e − 1)]^{α′} for all x > 0.
Taking the natural logarithm:
α ln[(e^{e^{−θ/x}} − 1)/(e − 1)] = α′ ln[(e^{e^{−θ′/x}} − 1)/(e − 1)].
This relation must hold for all x > 0. Now, as x → ∞, note that e^{−θ/x} → 1, so e^{−θ/x} − 1 → 0. More precisely, for large x, e^{−θ/x} ≈ 1 − θ/x, so e^{−θ/x} − 1 ≈ −θ/x. Thus,
[(e^{e^{−θ/x}} − 1)/(e − 1)]^α ≈ [(e^{1−θ/x} − 1)/(e − 1)]^α = [1 − eθ/((e − 1)x) + O(x^{−2})]^α ≈ 1 − αeθ/((e − 1)x).
This shows that the upper tail is governed by the product αθ. Now, as x → 0⁺, e^{−θ/x} → 0; thus, e^{e^{−θ/x}} − 1 ≈ e^{−θ/x}, and consequently, F(x) ≈ (e − 1)^{−α} e^{−αθ/x} → 0. Define t := 1/x. As x → 0⁺, we have t → ∞. Re-expressing the distribution gives F(t) = [(e^{e^{−θt}} − 1)/(e − 1)]^α. As t → ∞, e^{−θt} → 0, so e^{e^{−θt}} − 1 ≈ e^{−θt}, and the tail decay rate is dominated by the exponential term e^{−αθt}. The tail behavior indicates that for the distributions to be the same, the decay rates must match, implying αθ = α′θ′. Similarly, analyzing the behavior as t → 0⁺ (corresponding to x → ∞), the key parameter influencing the tail is, again, the product αθ. To verify the uniqueness of the parameters, suppose α′ = cα and θ′ = θ/c. Then, α′θ′ = cα × θ/c = αθ, which is consistent with the tail behavior. However, the full distribution depends on the entire functional form, not solely on the tail. To distinguish the parameters completely, note that the function
F(x; α, θ) = [(e^{e^{−θ/x}} − 1)/(e − 1)]^α
is strictly monotone in each of the parameters α and θ. For example, choosing α′ = 2α and θ′ = θ/2 yields
F(x; 2α, θ/2) = [(e^{e^{−θ/(2x)}} − 1)/(e − 1)]^{2α},
which is not equal to the original F ( x ; α , θ ) unless the parameters coincide, because the exponential functions differ. Since different parameter pairs produce different distribution functions, the parameters α and θ are identifiable; that is,
f(x; α₁, θ₁) = f(x; α₂, θ₂) ⟹ α₁ = α₂, θ₁ = θ₂. □
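The argument can be illustrated numerically: the pairs (α, θ) = (1, 1) and (2α, θ/2) = (2, 0.5) share the same product αθ, yet their CDFs differ at interior points. A small sketch (the evaluation point x = 0.7 is our own choice):

```python
import math

def eied_cdf(x, alpha, theta):
    """EIED CDF: F(x) = [(e^{e^{-theta/x}} - 1) / (e - 1)]^alpha."""
    return ((math.exp(math.exp(-theta / x)) - 1) / (math.e - 1)) ** alpha

# Both pairs have alpha * theta = 1, but the CDFs disagree at x = 0.7:
x = 0.7
print(eied_cdf(x, 1.0, 1.0))   # F(x; alpha, theta)
print(eied_cdf(x, 2.0, 0.5))   # F(x; 2*alpha, theta/2)
```

Matching tail rates alone therefore do not force equality of the full distributions, which is exactly why the proof must appeal to the entire functional form.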

2.4. Quantile Function

If X is distributed according to EIED ( α , θ ) with the PDF given in Equation (1), then its quantile function, Q p , is implicitly defined by F ( Q p ) = p for 0 < p < 1 . This Q p value is precisely where the cumulative distribution function equals p, effectively partitioning the distribution such that the area to its left is p, as detailed below.
Q_p = −θ / ln[ln(1 + (e − 1)p^{1/α})]. (4)
The simulation of EIED random variables can be performed using Equation (4). This same equation also allows for the calculation of the EIED's median (Q_{0.5}, obtained by setting p = 0.5), skewness, and kurtosis,
sk = (Q_{3/4} − 2Q_{1/2} + Q_{1/4})/(Q_{3/4} − Q_{1/4}), ku = (Q_{7/8} − Q_{5/8} + Q_{3/8} − Q_{1/8})/(Q_{6/8} − Q_{2/8}).
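The closed-form quantile function makes inverse-transform simulation and the quantile-based shape measures immediate. A minimal Python sketch (function names are our own):

```python
import math
import random

def eied_quantile(p, alpha, theta):
    """Q_p solving F(Q_p) = p, from the closed form above."""
    return -theta / math.log(math.log(1.0 + (math.e - 1.0) * p ** (1.0 / alpha)))

def eied_sample(n, alpha, theta, seed=1):
    """Inverse-transform sampling: feed uniform draws through Q_p."""
    rng = random.Random(seed)
    return [eied_quantile(rng.random(), alpha, theta) for _ in range(n)]

def quantile_skewness(alpha, theta):
    """Quartile-based (Galton-type) skewness from the formula above."""
    Q = lambda p: eied_quantile(p, alpha, theta)
    return (Q(0.75) - 2 * Q(0.5) + Q(0.25)) / (Q(0.75) - Q(0.25))

def quantile_kurtosis(alpha, theta):
    """Octile-based (Moors-type) kurtosis from the formula above."""
    Q = lambda p: eied_quantile(p, alpha, theta)
    return (Q(7/8) - Q(5/8) + Q(3/8) - Q(1/8)) / (Q(6/8) - Q(2/8))
```

A round trip through the CDF, F(Q_p) = p, provides a quick correctness check of the quantile expression.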
Figure 2 presents plots of competing models along with their respective HRF distributions, which are unimodal, positively skewed, and exhibit decreasing or upside-down bathtub shapes.
Figure 3 reveals how skewness changes in relation to kurtosis and the parameter α . As α increases, skewness generally decreases, indicating that the distribution becomes more symmetric; kurtosis increases slightly, suggesting the distribution becomes more peaked or heavy-tailed. The shape indicates nonlinear relationships among the three quantities. Similarly, by increasing α , kurtosis increases, showing heavier tails or a sharper peak in the distribution; skewness decreases, suggesting a move toward symmetry. The MIR and MIW families exhibit intermediate behavior depending on their parameter values. The NGIR and EIED models display more complex quantile behavior due to the effect of exponentiation. A useful diagnostic is the ratio Q ( 0.95 ) / Q ( 0.5 ) , which serves as an indicator of tail heaviness. Larger ratios imply that more probability mass lies in the extreme tail region.
The quantile analysis, as portrayed in Table A5 and Table A6, highlights several important behaviors across distributions. The IE distribution shows the fastest quantile growth, indicating the heaviest tail. In contrast, the IR distribution demonstrates more moderate growth, suggesting a lighter tail. Figure 4 reveals distinct tail behaviors across the considered models. The EIED distribution exhibits the steepest quantile growth, confirming its status as the heaviest-tailed model. The IE distribution also shows rapid quantile growth, though less extreme than EIED, indicating a heavy tail. In contrast, the IR distribution demonstrates much slower and smoother growth, highlighting its lighter tail properties. The MIR and MIW families display intermediate behavior, lying between IE and IR, with their exact position depending on parameter choices. The NGIR distribution presents a more complex and non-monotonic shape due to its exponentiation structure, suggesting irregular quantile behavior. On the log–log scale, these patterns are reinforced: EIED and IE dominate the upper tails, IR remains the lightest-tailed, and MIR/MIW provide moderate tail weight. The diagnostic ratio Q ( 0.95 ) / Q ( 0.5 ) further quantifies these differences, with higher values observed for EIED and IE, confirming their heavier tails, while IR yields a much smaller ratio, consistent with lighter tail mass.
Table A7 presents diverse statistical characteristics across 30 observations, with parameters indicating moderate to high variability in means, dispersion, skewness, and kurtosis. Most distributions are positively skewed and exhibit heavy tails, reflecting varied shapes and tail behaviors among the samples.
Figure 4 shows that the compared models differ in skewness and tail behavior. Some models have heavier tails and greater variability, especially at lower probabilities, while others exhibit lighter tails. The log-scale plot highlights differences in the lower quantiles, aiding in assessing model characteristics for risk or extreme event analysis.

2.5. Mean Deviation

To measure the spread of a population from its origin, one can consider the deviation from the median or the deviation from the mean. The cumulative absolute spread of a random variable X within a population, relative to both its mean and its median, is commonly described as MD_μ = E|X − μ| = ∫₀^∞ |x − μ| f(x) dx and MD_μ̃ = E|X − μ̃| = ∫₀^∞ |x − μ̃| f(x) dx, simplifying to MD_μ = 2μF(μ) − 2∫₀^μ x f(x) dx and MD_μ̃ = μ − 2∫₀^{μ̃} x f(x) dx, respectively, where μ denotes the mean, and μ̃ denotes the median. Then, the mean deviation from the mean of the EIED(α, θ) distribution is represented by
E|x − μ| = MD_μ = 2μF(μ) − 2∫₀^μ x f(x) dx,
where
∫₀^μ x f(x) dx = (αθ/(e − 1)^α) ∫₀^{e^{−θ/μ}} (−ln u)^{−1} e^u (e^u − 1)^{α−1} du = (αθ/(e − 1)^α) G(1, α − 1, u, 1; 0, e^{−θ/μ}).
Therefore, we have
MD_μ = 2μF(μ) − 2(αθ/(e − 1)^α) G(1, α − 1, u, 1; 0, e^{−θ/μ}).
The mean deviation about the median of the EIED α , θ distribution is represented by
E|x − μ̃| = MD_μ̃ = μ − 2(αθ/(e − 1)^α) G(1, α − 1, u, 1; 0, e^{−θ/μ̃}).
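The change of variables u = e^{−θ/x} behind these G(·) expressions can be verified numerically. Below, both sides of the identity for ∫₀^μ x f(x) dx are computed with Simpson's rule (the parameter values and the helper `simpson` are our own illustrative choices):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

alpha, theta, mu = 2.0, 1.0, 1.2      # illustrative values only

def x_side(x):
    """x * f(x) in the original variable."""
    u = math.exp(-theta / x)
    f = (alpha / (math.e - 1) ** alpha * (math.exp(u) - 1) ** (alpha - 1)
         * math.exp(u) * u * theta / x ** 2)
    return x * f

def u_side(u):
    """Integrand after substituting u = exp(-theta/x)."""
    return (alpha * theta / (math.e - 1) ** alpha
            * (math.exp(u) - 1) ** (alpha - 1) * math.exp(u) / (-math.log(u)))

lhs = simpson(x_side, 1e-6, mu)
rhs = simpson(u_side, 1e-12, math.exp(-theta / mu))
print(lhs, rhs)
```

The two numbers agree to high precision, confirming the substitution used throughout this section.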

2.6. Information Generating Function

The information-generating function of the EIED(α, θ) distribution is defined as I(s) = E(f(X)^{s−1}) = ∫₀^∞ (f(x))^s dx and is expressed as
I(s) = ∫₀^∞ [(α/(e − 1)^α)(e^{e^{−θ/x}} − 1)^{α−1} e^{e^{−θ/x}} e^{−θ/x} (θ/x²)]^s dx.
Let u = e^{−θ/x}, so that x = −θ/ln u. Then, dx = θ/(u(ln u)²) du. For the limits, as x → 0⁺, we have u = e^{−θ/x} → 0⁺, and as x → ∞, u → 1. Express each component:
x^{−2s} = (−ln u/θ)^{2s} = θ^{−2s}(−ln u)^{2s}, e^{e^{−θ/x}} = e^u, e^{−θ/x} = u, e^{s e^{−θ/x}} = e^{su}, e^{−sθ/x} = u^s.
Now, substitute all terms into the integral to get
I(s) = (αθ/(e − 1)^α)^s ∫₀¹ θ^{−2s}(−ln u)^{2s}(e^u − 1)^{s(α−1)} e^{su} u^s · θ/(u(ln u)²) du,
which, after simplification, becomes
I(s) = (α^s θ^{1−s}/(e − 1)^{αs}) ∫₀¹ (−ln u)^{2s−2}(e^u − 1)^{s(α−1)} e^{su} u^{s−1} du,
since (ln u)² = (−ln u)². Therefore, writing G(2 − 2s, sα − s, su, s; 0, 1) = ∫₀¹ (−ln u)^{2s−2}(e^u − 1)^{s(α−1)} e^{su} u^{s−1} du, we have
I(s) = (α^s θ^{1−s}/(e − 1)^{αs}) G(2 − 2s, sα − s, su, s; 0, 1), where u = e^{−θ/x}, x > 0.

2.7. Conditional Moments

In flood risk analysis, conditional tools, like partial moments, the mean residual life (MRL) function, and mean inactivity time (MIT), help assess extreme events, damages, and recovery times. They support infrastructure planning, insurance, and resource allocation, revealing that a small percentage of floods cause most losses. Consequently, to facilitate this, the rth partial moment of the variable X, denoted as m_r(t) for any real r > 0, is defined as
m_r(t) = ∫₀^t x^r f(x; α, θ) dx.
Let u = e^{−θ/x}, so that x = −θ/ln u, dx = θ/(u(ln u)²) du, and e^{e^{−θ/x}} = e^u, e^{−θ/x} = u, x^r = (−θ/ln u)^r = θ^r(−ln u)^{−r}. As x → 0⁺, u → 0; at x = t, u = e^{−θ/t}. Since
m_r(t) = ∫_{x=0}^{x=t} x^r f(x) dx = ∫_{u=0}^{u=e^{−θ/t}} θ^r(−ln u)^{−r} · (α/(e − 1)^α)(e^u − 1)^{α−1} e^u u · ((ln u)²/θ) · (θ/(u(ln u)²)) du,
upon simplification, we get
m_r(t) = (αθ^r/(e − 1)^α) ∫₀^{e^{−θ/t}} (−ln u)^{−r}(e^u − 1)^{α−1} e^u u^{1−1} du,
m_r(t) = (αθ^r/(e − 1)^α) G(r, α − 1, u, 1; 0, e^{−θ/t}),
where G(r, α − 1, u, 1; 0, e^{−θ/t}) = ∫₀^{e^{−θ/t}} (−ln u)^{−r}(e^u − 1)^{α−1} e^u u^{1−1} du. For random variables encountered in lifetime data analysis, determining their conditional moments is a matter of interest. If X follows an EIED(α, θ) distribution, its conditional moments can be expressed as
E[(X − t)^r | X > t] = (1/(1 − F(t))) ∫_t^∞ (x − t)^r f(x) dx.
For any non-negative integer r, we can expand (x − t)^r using the binomial theorem: (x − t)^r = Σ_{k=0}^{r} C(r, k) x^k (−t)^{r−k}. Next, using the substitution u = e^{−θ/x} (so x = −θ/ln u and dx = θ/(u(ln u)²) du), the term ∫_t^∞ x^k f(x) dx becomes
∫_t^∞ x^k f(x) dx = (αθ^{k+1}/(e − 1)^α) ∫₀^{e^{−θ/t}} (−ln u)^{−(k+2)}(e^u − 1)^{α−1} e^u u^{0−1} du.
Let G(k + 2, α − 1, u, 0; 0, e^{−θ/t}) = ∫₀^{e^{−θ/t}} (−ln u)^{−(k+2)}(e^u − 1)^{α−1} e^u u^{0−1} du. Hence, the conditional moment can be written as
E[(X − t)^r | X > t] = (αθ/((e − 1)^α(1 − F(t)))) Σ_{k=0}^{r} C(r, k)(−t)^{r−k} G(k + 2, α − 1, u, 0; 0, e^{−θ/t}).
Table A8 suggests that the expected residual life of the system is generally negative, indicating shorter remaining lifetimes. Variance and standard deviation are low, implying predictions are fairly reliable. Negative skewness indicates a higher chance of shorter remaining lives, while kurtosis varies, reflecting differences in the likelihood of extreme outcomes. Overall, these moments help assess the reliability, predictability, and risk of rare failures in the system.

2.8. Lorenz Curves

A Lorenz curve, generally expressed as L(F), visually depicts the distribution of income or wealth. Here, F signifies the cumulative percentage of the population (horizontal axis), while L represents the cumulative percentage of income or total wealth (vertical axis). Given a probability density function f(x) and its cumulative distribution function F(x), the Lorenz curve L is mathematically given by L(p) = (1/μ)∫₀^x t f(t) dt. Alternatively, let q = F(x); then, x = F^{−1}(p), and L(p) = (1/μ)∫₀^p F^{−1}(u) du. Let F(x) = u. Then, [(e^{e^{−θ/x}} − 1)/(e − 1)]^α = u, so (e^{e^{−θ/x}} − 1)/(e − 1) = u^{1/α}. Multiplying out, e^{e^{−θ/x}} = 1 + (e − 1)u^{1/α}. Taking logarithms, e^{−θ/x} = ln[1 + (e − 1)u^{1/α}]. Taking logarithms again, −θ/x = ln ln[1 + (e − 1)u^{1/α}], so x = −θ/ln ln[1 + (e − 1)u^{1/α}]. As μ = (αθ/(e − 1)^α) ∫₀¹ (−ln u)^{−1}(e^u − 1)^{α−1} e^u du, we write G(1, α − 1, u, 1; 0, 1) = ∫₀¹ (−ln u)^{−1}(e^u − 1)^{α−1} e^u u^{1−1} du. Now, put e^{−θ/x} = u and incorporate Equation (1) into the integral I(p) = ∫₀^{F^{−1}(p)} t f(t) dt to get
∫₀^q x f(x) dx = (αθ/(e − 1)^α) ∫₀^{e^{−θ/q}} (−ln u)^{−1}(e^u − 1)^{α−1} e^u du,
where q = F^{−1}(p) = −θ/ln ln[1 + (e − 1)p^{1/α}], 0 < p < 1. Since e^{e^{−θ/q}} = 1 + (e − 1)p^{1/α}, define c = ln[1 + (e − 1)p^{1/α}]. Then,
∫₀^q x f(x) dx = (αθ/(e − 1)^α) ∫₀^c (−ln u)^{−1}(e^u − 1)^{α−1} e^u du,
and after substituting G(1, α − 1, u, 1; 0, c) = ∫₀^c (−ln u)^{−1}(e^u − 1)^{α−1} e^u u^{1−1} du in the above equation, we get
∫₀^q x f(x) dx = (αθ/(e − 1)^α) G(1, α − 1, u, 1; 0, c).
The Lorenz curve for the EIED ( α , θ ) distribution is, therefore, represented by
L(p) = G(1, α − 1, u, 1; 0, c) / G(1, α − 1, u, 1; 0, 1).
Similarly, the Bonferroni curve is
B(p) = (1/(pμ)) ∫₀^q x f(x) dx = G(1, α − 1, u, 1; 0, c) / (p G(1, α − 1, u, 1; 0, 1)).
Figure 5 shows that EIED does not represent a perfectly equal distribution. Bonferroni curves emphasize the average income share of the poorest p, and Lorenz curves show the cumulative distribution of income. Lower curves indicate greater inequality. The figure demonstrates how changing EIED parameters affects the level of inequality: some parameter sets result in slightly more equal distributions, while others lead to less equal distributions.

2.9. Entropy

The term entropy, as defined by Shannon, serves as a measure of unpredictability. To assess this inherent disorder, particularly for an EIED random variable, researchers often utilize three prominent entropy measures: Shannon entropy, Rényi entropy, and cumulative residual entropy. Therefore, the Shannon entropy of the EIED(α, θ) distribution is represented by
E(x) = −ln[α/(e − 1)^α] + (α − 1) Σ_{k=1}^{∞} Σ_{m=0}^{∞} ((−k)^m/(m!k)) E(e^{−mθ/X}) − αE(e^{−θ/X}) + θE(X^{−1}) − ln θ + 2E(ln X).
As E(e^{−θ/X}) can be expressed as
E(e^{−θ/X}) = (α/(e − 1)^α) ∫₀¹ (−ln u)^{1−1}(e^u − 1)^{α−1} e^u u^{2−1} du = (α/(e − 1)^α) G(0, α − 1, u, 2; 0, 1)
and
E(X^{−1}) = (α/(θ(e − 1)^α)) ∫₀¹ (−ln u)^{2−1}(e^u − 1)^{α−1} e^u u^{1−1} du = (α/(θ(e − 1)^α)) G(−1, α − 1, u, 1; 0, 1).
So, by incorporating the above expected values, the Shannon entropy of the EIED(α, θ) distribution is represented by
E(x) = −ln[α/(e − 1)^α] + (α − 1) Σ_{k=1}^{∞} Σ_{m=0}^{∞} ((−k)^m/(m!k)) (α/(e − 1)^α) G(0, α − 1, u, m + 1; 0, 1) − (α²/(e − 1)^α) G(0, α − 1, u, 2; 0, 1) + (α/(e − 1)^α) G(−1, α − 1, u, 1; 0, 1) − ln θ + 2E(ln X).

3. Characterization

A characterization helps distinguish a distribution from others using specific features like moments or mathematical properties. Unique properties serve as its identifier. This improves understanding, assists in choosing models, and supports statistical inference with clear, rigorous criteria. Moreover, for proving identifiability, we can utilize the characterizing properties of the distribution, such as unique moments (for example, mean and variance), unique hazard rates or survival functions, unique quantile functions, or specific functional forms like particular Laplace, Mellin, and Fourier transformations. If a distribution can be uniquely characterized by a property that depends on the parameters, then the parameters are identifiable. The following theorems help us to uniquely determine the distribution via Fourier, Laplace, and Mellin transformations.
Theorem 2.
Let X be an absolutely continuous random variable that follows EIED$(\alpha,\theta)$ with a PDF as defined in Equation (1); it is uniquely determined if its Fourier transformation is expressed as
$\hat f(\omega) = \int_0^1 \alpha u^{\alpha-1}\exp\!\left(\frac{i\omega\theta}{\ln\ln\left(1+u(e-1)\right)}\right)du,$
where $\alpha>0$, $\theta>0$, and $\omega\in\mathbb{R}$.
Proof. 
Necessity: The Fourier transform of $f(x)$ is defined as $\hat f(\omega) = \int_0^\infty f(x)\,e^{-i\omega x}\,dx$. Substituting Equation (1) into this definition gives
$\hat f(\omega) = \int_0^\infty e^{-i\omega x}\,\frac{\alpha}{(e-1)^{\alpha}}\left(e^{e^{-\theta/x}}-1\right)^{\alpha-1}e^{e^{-\theta/x}}\,e^{-\theta/x}\,\frac{\theta}{x^{2}}\,dx,$
where $\alpha>0$ and $\theta>0$, with the PDF supported on $(0,\infty)$. Define the random variable $U = \frac{e^{e^{-\theta/X}}-1}{e-1}$; this transformation induces a Beta$(\alpha,1)$ distribution on u, with PDF $g(u) = \alpha u^{\alpha-1}$. As $x\to 0^{+}$, $e^{-\theta/x}\to 0$ and $e^{e^{-\theta/x}}\to 1$; hence $u\to 0$. As $x\to\infty$, $e^{-\theta/x}\to 1$ and $e^{e^{-\theta/x}}\to e$; hence $u\to 1$. Thus, u ranges over $(0,1)$ as x ranges over $(0,\infty)$. Since $u = \frac{e^{e^{-\theta/x}}-1}{e-1}$, we have $x = -\theta/\ln\ln\left(1+u(e-1)\right)$. Because $f(x)\,dx = \alpha u^{\alpha-1}\,du$ and $x = x(u)$, we obtain
$\hat f(\omega) = \int_0^\infty f(x)\,e^{-i\omega x}\,dx = \int_0^1 \alpha u^{\alpha-1}e^{-i\omega x(u)}\,du,$
where
$x(u) = -\,\frac{\theta}{\ln\ln\left(1+u(e-1)\right)}.$
Substituting $x(u)$,
$-\,i\omega x(u) = \frac{i\omega\theta}{\ln\left(\ln\left(1+u(e-1)\right)\right)}.$
Then, the Fourier transform of $f(x)$ is expressed as
$\hat f(\omega) = \int_0^1 \alpha u^{\alpha-1}\exp\!\left(\frac{i\omega\theta}{\ln\ln\left(1+u(e-1)\right)}\right)du = M(\alpha,\theta,u;0,1).$
Sufficiency: Suppose Equation (2) holds; then, the inverse Fourier transform, defined as
$f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\hat f(\omega)\,e^{i\omega x}\,d\omega,$
recovers the original density $f(x)$ for $x>0$, with $f(x)=0$ for $x\le 0$. Upon substituting $\hat f(\omega)$ into Equation (2), we get
$f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\left[\int_0^1 \alpha u^{\alpha-1}\exp\!\left(\frac{i\omega\theta}{\ln\ln\left(1+u(e-1)\right)}\right)du\right]e^{i\omega x}\,d\omega.$
By Fubini's theorem, the order of integration can be interchanged to obtain
$f(x) = \frac{1}{2\pi}\int_0^1 \alpha u^{\alpha-1}\left[\int_{-\infty}^{\infty}\exp\!\left(i\omega\!\left(\frac{\theta}{\ln\ln\left(1+u(e-1)\right)}+x\right)\right)d\omega\right]du.$
Now, let
$a = \frac{\theta}{\ln\ln\left(1+u(e-1)\right)} + x.$
Then,
$\int_{-\infty}^{\infty} e^{i\omega a}\,d\omega = 2\pi\,\delta(a),$
so the inner integral yields
$2\pi\,\delta\!\left(\frac{\theta}{\ln\ln\left(1+u(e-1)\right)} + x\right).$
Putting this back,
$f(x) = \int_0^1 \alpha u^{\alpha-1}\,\delta\!\left(\frac{\theta}{\ln\ln\left(1+u(e-1)\right)} + x\right)du.$
The delta function is non-zero only at the root of
$\frac{\theta}{\ln\ln\left(1+u(e-1)\right)} + x = 0.$
Solving,
$u_0 = \frac{e^{e^{-\theta/x}}-1}{e-1},\qquad 0<u_0<1 \;\text{for}\; x>0.$
Apply the delta function formula. Using
$\delta(g(u)) = \sum_{u_i:\,g(u_i)=0}\frac{\delta(u-u_i)}{|g'(u_i)|},$
we compute, with
$g(u) = \frac{\theta}{\ln\ln\left(1+u(e-1)\right)} + x,$
at $u_0$,
$|g'(u_0)| = \frac{x^{2}(e-1)}{\theta\,e^{e^{-\theta/x}}\,e^{-\theta/x}}.$
Evaluating the integral,
$f(x) = \alpha u_0^{\alpha-1}\cdot\frac{1}{|g'(u_0)|},\qquad x>0.$
Substituting $u_0$:
$f(x) = \frac{\alpha}{(e-1)^{\alpha}}\left(e^{e^{-\theta/x}}-1\right)^{\alpha-1}\cdot e^{e^{-\theta/x}}\,e^{-\theta/x}\,\frac{\theta}{x^{2}}.$
Thus, the inverse Fourier transform correctly recovers the original density:
$f(x) = \frac{\alpha}{(e-1)^{\alpha}}\left(e^{e^{-\theta/x}}-1\right)^{\alpha-1}e^{e^{-\theta/x}}\,e^{-\theta/x}\,\frac{\theta}{x^{2}},\qquad x>0.$
This completes the proof. □
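The key step of the proof, that $U = (e^{e^{-\theta/X}}-1)/(e-1)$ follows a Beta$(\alpha,1)$ law, can be verified by simulation. The sketch below (ours; function names are illustrative) draws from EIED by inversion and compares the empirical distribution of U with the Beta$(\alpha,1)$ CDF $u^{\alpha}$.

```python
import math
import random

def eied_sample(n, a, th, seed=7):
    """Sample X ~ EIED(a, th) via the inverse CDF Q(q) = -th / ln ln(1 + q^{1/a}(e-1))."""
    rng = random.Random(seed)
    return [-th / math.log(math.log(1.0 + rng.random() ** (1.0 / a) * (math.e - 1)))
            for _ in range(n)]

def to_u(x, th):
    """The transformation used in the proof: U = (e^{e^{-th/x}} - 1)/(e - 1)."""
    return math.expm1(math.exp(-th / x)) / (math.e - 1.0)

def ks_against_beta_a1(us, a):
    """KS distance between the empirical CDF of the u-values and the Beta(a, 1) CDF u^a."""
    us = sorted(us)
    n = len(us)
    d_plus = max((i + 1) / n - u ** a for i, u in enumerate(us))
    d_minus = max(u ** a - i / n for i, u in enumerate(us))
    return max(d_plus, d_minus)
```

A small KS distance (relative to the usual $1.36/\sqrt{n}$ yardstick) is consistent with the Beta$(\alpha,1)$ claim.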

Laplace and Mellin Transformation

Theorem 3.
Let X be an absolutely continuous random variable with a PDF as defined in Equation (1); then, it is uniquely determined if its Laplace transformation, for $\Re(t)>1$, can be written as
$\mathcal{L}\{f(x)\}(t) = \frac{\alpha}{(e-1)^{\alpha}}\int_0^1 e^{\,t\theta/\ln u}\,(e^{u}-1)^{\alpha-1}e^{u}\,du.$
Proof. 
Necessity: Since the Laplace transform of a function $f(x)$ is defined as $\mathcal{L}[f(x)](t) = \int_0^\infty e^{-tx}f(x)\,dx$, incorporating Equation (1) gives
$\mathcal{L}\{f(x)\}(t) = \int_0^\infty e^{-tx}\,\frac{\alpha}{(e-1)^{\alpha}}\left(e^{e^{-\theta/x}}-1\right)^{\alpha-1}e^{e^{-\theta/x}}\,e^{-\theta/x}\,\frac{\theta}{x^{2}}\,dx$
$= \frac{\alpha\theta}{(e-1)^{\alpha}}\int_0^\infty e^{-tx}\,\frac{1}{x^{2}}\left(e^{e^{-\theta/x}}-1\right)^{\alpha-1}e^{e^{-\theta/x}}\,e^{-\theta/x}\,dx.$
Let $u = e^{-\theta/x}$, so that $x = -\theta/\ln u$ and $\frac{du}{dx} = e^{-\theta/x}\cdot\frac{\theta}{x^{2}} = u\cdot\frac{\theta}{x^{2}}$, i.e., $dx = \frac{x^{2}}{\theta u}\,du$. As $x\to 0^{+}$, $u\to 0^{+}$, and as $x\to\infty$, $u\to 1$. Consequently, $e^{-tx} = e^{-t(-\theta/\ln u)} = e^{\,t\theta/\ln u}$, $\frac{1}{x^{2}}\,dx = \frac{1}{\theta u}\,du$, and $\left(e^{e^{-\theta/x}}-1\right)^{\alpha-1}e^{e^{-\theta/x}}\,e^{-\theta/x} = (e^{u}-1)^{\alpha-1}e^{u}\,u$. Incorporating these results, we get
$\mathcal{L}\{f(x)\}(t) = \frac{\alpha\theta}{(e-1)^{\alpha}}\int_0^1 e^{\,t\theta/\ln u}\cdot\frac{1}{\theta u}\cdot(e^{u}-1)^{\alpha-1}e^{u}\,u\,du,$
which simplifies to Equation (7). Now, expanding $e^{\,t\theta/\ln u}$ as a series, i.e., $e^{\,t\theta/\ln u} = \sum_{r=0}^{\infty}\frac{(t\theta)^{r}}{r!\,(\ln u)^{r}}$, leads to expressible integrals:
$\mathcal{L}\{f(x)\}(t) = \frac{\alpha}{(e-1)^{\alpha}}\sum_{r=0}^{\infty}\frac{(t\theta)^{r}}{r!}\int_0^1 (\ln u)^{-r}\,(e^{u}-1)^{\alpha-1}e^{u}\,du.$
Since $(\ln u)^{-r} = (-1)^{r}(-\ln u)^{-r}$ on $(0,1)$, the rth moment appears as the coefficient of $\frac{(-t)^{r}}{r!}$, i.e.,
$\mathcal{L}\{f(x)\}(t) = \sum_{r=0}^{\infty}\frac{(-t)^{r}}{r!}\,\theta^{r}\,\frac{\alpha}{(e-1)^{\alpha}}\,G(r,\alpha,u,1;0,1),$
where $G(r,\alpha,u,1;0,1) = \int_0^1 (-\ln u)^{-r}\,(e^{u}-1)^{\alpha-1}e^{u}\,u^{1-1}\,du$. Thus,
$\mu_r' = \theta^{r}\,\frac{\alpha}{(e-1)^{\alpha}}\,G(r,\alpha,u,1;0,1).$
Sufficiency: Suppose Equation (7) holds; then $\mathcal{L}^{-1}\{\mathcal{L}\{f(x)\}(t)\}(x)$ should, by definition, yield the original PDF $f(x)$. The inverse Laplace transform is given by the Bromwich integral (see [16])
$\mathcal{L}^{-1}\{\mathcal{L}\{f(x)\}(t)\}(x) = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} e^{tx}\,\mathcal{L}\{f(x)\}(t)\,dt,$
where c is a real number greater than the real parts of all singularities of $\mathcal{L}\{f(x)\}(t)$. Substituting $\mathcal{L}\{f(x)\}(t)$,
$f(x) = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} e^{tx}\,\frac{\alpha}{(e-1)^{\alpha}}\int_0^1 e^{\,t\theta/\ln u}\,(e^{u}-1)^{\alpha-1}e^{u}\,du\,dt.$
Interchanging the order of integration (justified by Fubini's theorem under appropriate conditions; see [17]),
$f(x) = \frac{\alpha}{(e-1)^{\alpha}}\int_0^1\left[\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} e^{tx}\,e^{\,t\theta/\ln u}\,dt\right](e^{u}-1)^{\alpha-1}e^{u}\,du.$
Let the inner integral be
$I = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} e^{\,t\left(x+\theta/\ln u\right)}\,dt.$
This is a standard inverse Laplace transform, i.e., $\mathcal{L}^{-1}\{1\}(x) = \delta(x)$; see [18]. More generally, for a shift in the exponent, $\mathcal{L}^{-1}\{e^{at}\}(x) = \delta(x+a)$. Here, $a = \theta/\ln u$, so $I = \delta\!\left(x+\frac{\theta}{\ln u}\right)$; see [19]. Substituting this into Equation (8) gives
$f(x) = \frac{\alpha}{(e-1)^{\alpha}}\int_0^1 \delta\!\left(x+\frac{\theta}{\ln u}\right)(e^{u}-1)^{\alpha-1}e^{u}\,du.$
The delta function $\delta(g(u))$ is non-zero when $g(u)=0$. Let
$g(u) = x + \frac{\theta}{\ln u} = 0 \;\Rightarrow\; \frac{\theta}{\ln u} = -x \;\Rightarrow\; \ln u = -\frac{\theta}{x} \;\Rightarrow\; u = e^{-\theta/x},$
which is the unique root in $u\in(0,1)$ for $x>0$, $\theta>0$. Now, use the identity $\int \delta(g(u))\,h(u)\,du = \sum_i \frac{h(u_i)}{|g'(u_i)|}$, where the $u_i$ are the roots of $g(u)=0$; here there is one root, $u_0 = e^{-\theta/x}$. Computing $g'(u)$ from $g(u) = x + \theta(\ln u)^{-1}$,
$g'(u) = \theta\cdot(-1)\cdot(\ln u)^{-2}\cdot\frac{1}{u} = -\frac{\theta}{u(\ln u)^{2}}.$
Evaluating $|g'(u_0)|$ at $u_0 = e^{-\theta/x}$, where $\ln u_0 = -\theta/x$,
$|g'(u_0)| = \frac{\theta}{u_0(\ln u_0)^{2}} = \frac{\theta}{e^{-\theta/x}\left(\frac{\theta}{x}\right)^{2}} = \frac{x^{2}}{\theta\,e^{-\theta/x}}.$
Using the delta function identity (see [19]), we get
$\int_0^1 \delta\!\left(x+\frac{\theta}{\ln u}\right)h(u)\,du = \frac{h(u_0)}{|g'(u_0)|} = \left(e^{u_0}-1\right)^{\alpha-1}e^{u_0}\,\frac{\theta\,e^{-\theta/x}}{x^{2}},$
where $h(u) = (e^{u}-1)^{\alpha-1}e^{u}$. Substituting $u_0 = e^{-\theta/x}$, so that $e^{u_0} = e^{e^{-\theta/x}}$ and $h(u_0) = \left(e^{e^{-\theta/x}}-1\right)^{\alpha-1}e^{e^{-\theta/x}}$ (see [19]), we obtain
$f(x) = \frac{\alpha}{(e-1)^{\alpha}}\left(e^{e^{-\theta/x}}-1\right)^{\alpha-1}e^{e^{-\theta/x}}\cdot\frac{\theta}{x^{2}}\,e^{-\theta/x}.$
This completes the proof. □
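Equation (7) can also be checked numerically: the sketch below compares a direct quadrature of $\int_0^\infty e^{-tx}f(x)\,dx$ with the unit-interval form $\frac{\alpha}{(e-1)^{\alpha}}\int_0^1 e^{\,t\theta/\ln u}(e^u-1)^{\alpha-1}e^u\,du$, assuming the PDF in Equation (1). The midpoint quadrature scheme and function names are our own illustrative choices.

```python
import math

def eied_pdf(x, a, th):
    """EIED(a, th) density from Equation (1)."""
    v = math.exp(-th / x)
    return (a / (math.e - 1) ** a) * math.expm1(v) ** (a - 1) \
        * math.exp(v) * v * th / x ** 2

def laplace_lhs(t, a, th, n=20000):
    """Direct quadrature of the Laplace transform, mapping (0, inf) to (0, 1) via x = q/(1-q)."""
    s = 0.0
    for i in range(n):
        q = (i + 0.5) / n
        x = q / (1.0 - q)
        s += math.exp(-t * x) * eied_pdf(x, a, th) / (1.0 - q) ** 2
    return s / n

def laplace_rhs(t, a, th, n=20000):
    """Midpoint rule for a/(e-1)^a * ∫_0^1 e^{t*th/ln u} (e^u - 1)^(a-1) e^u du."""
    c = a / (math.e - 1) ** a
    s = 0.0
    for i in range(n):
        u = (i + 0.5) / n
        s += math.exp(t * th / math.log(u)) * (math.exp(u) - 1) ** (a - 1) * math.exp(u)
    return c * s / n
```

At $t=0$ both sides reduce to 1, which gives a quick sanity check of the normalizing constant.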
Theorem 4.
Let X be an absolutely continuous random variable with a PDF as defined in Equation (1); then, it is uniquely determined if its Mellin transformation, for $\Re(s)>1$, can be written as
$M[f(x)](s) = \frac{\alpha\,e^{i\pi(s-3)}\,\theta^{s-1}}{(e-1)^{\alpha}}\int_0^1 (\ln u)^{-(s-1)}\left(e^{u}-1\right)^{\alpha-1}e^{u}\,du.$
Proof. 
Necessity: Since the Mellin transform of a function $f(x)$ is defined as $M[f(x)](s) = \int_0^\infty x^{s-1}f(x)\,dx$, incorporating Equation (1) gives
$M[f(x)](s) = \int_0^\infty x^{s-1}\,\frac{\alpha}{(e-1)^{\alpha}}\left(e^{e^{-\theta/x}}-1\right)^{\alpha-1}e^{e^{-\theta/x}}\,e^{-\theta/x}\,\frac{\theta}{x^{2}}\,dx = \frac{\alpha\theta}{(e-1)^{\alpha}}\int_0^\infty x^{s-3}\left(e^{e^{-\theta/x}}-1\right)^{\alpha-1}e^{e^{-\theta/x}}\,e^{-\theta/x}\,dx.$
Let $u = e^{-\theta/x}$; then $x = -\theta/\ln u$, $dx = \frac{\theta}{u(\ln u)^{2}}\,du$, $e^{-\theta/x} = u$, and $e^{e^{-\theta/x}} = e^{u}$. When $x\to 0^{+}$, we have $u\to 0^{+}$, and when $x\to\infty$, we have $u\to 1$. So, we have
$M[f(x)](s) = \frac{\alpha\theta^{2}(-\theta)^{s-3}}{(e-1)^{\alpha}}\int_0^1 (\ln u)^{-(s-1)}(e^{u}-1)^{\alpha-1}e^{u}\,du.$
Now, upon expressing $(-\theta)^{s-3} = e^{(s-3)\ln(-\theta)} = e^{(s-3)(\ln|\theta|+i\pi)} = \theta^{s-3}e^{i\pi(s-3)}$, we get
$M[f(x)](s) = \frac{\alpha\,e^{i\pi(s-3)}\,\theta^{s-1}}{(e-1)^{\alpha}}\int_0^1 (\ln u)^{-(s-1)}(e^{u}-1)^{\alpha-1}e^{u}\,du.$
Sufficiency: Suppose Equation (9) holds; then, it can be rewritten as
$M[f(x)](s) = \frac{\alpha\,e^{i\pi(s-3)}\,\theta^{s-1}}{(e-1)^{\alpha}}\int_0^1 (\ln u)^{-(s-1)}(e^{u}-1)^{\alpha-1}e^{u}\,du = \frac{\alpha\,e^{-3i\pi}\,e^{i\pi s}\,\theta^{s-1}}{(e-1)^{\alpha}}\int_0^1 (\ln u)^{-(s-1)}(e^{u}-1)^{\alpha-1}e^{u}\,du.$
Since $e^{-3i\pi} = -1$ (see [20]), we have
$M[f(x)](s) = -\,\frac{\alpha\,e^{i\pi s}\,\theta^{s-1}}{(e-1)^{\alpha}}\int_0^1 (\ln u)^{-(s-1)}(e^{u}-1)^{\alpha-1}e^{u}\,du.$
Now, take the integral
$I(s) = \int_0^1 (\ln u)^{-(s-1)}(e^{u}-1)^{\alpha-1}e^{u}\,du$
and make the substitution $v = -\ln u$, so that $u = e^{-v}$ and $du = -e^{-v}\,dv$. When $u\to 0$, $v\to\infty$; when $u=1$, $v=0$. Thus,
$I(s) = \int_0^\infty (-v)^{-(s-1)}\left(e^{e^{-v}}-1\right)^{\alpha-1}e^{e^{-v}}\,e^{-v}\,dv,$
which, up to a constant factor, is a Mellin-type transform of the function $g(v) = \left(e^{e^{-v}}-1\right)^{\alpha-1}e^{e^{-v}}\,e^{-v}$, so that $I(s) = M[g(v)](s)$ in the sense used here. The inverse Mellin transformation can then be written as
$f(x) = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} x^{-s}\,M[f(x)](s)\,ds \;\propto\; \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} x^{-s}\,e^{i\pi s}\,\theta^{s-1}\,M[g(v)](s)\,ds.$
The inverse Mellin transform of a product corresponds to a Mellin convolution of the inverse transforms (see [21,22]), i.e., H ( s ) = e i π s θ s 1 , the inverse Mellin transform of which is (up to constants and interpretations involving delta functions) expressed as
M 1 [ H ( s ) ] ( x ) some distribution involving δ x θ e i π y .
Similarly, the inverse Mellin of M [ g ( v ) ] ( s ) is g ( v ) . Thus, the inverse Mellin transform yields Equation (1), confirming the consistency of the Mellin inversion process. This completes the proof. □

4. Parameter Estimation Methods

In order to find estimates, various methods exist; among these, maximum likelihood estimation (MLE) is efficient and widely used but can be computationally demanding for complex models. Bayesian Estimation Method (BEM) incorporates prior knowledge and provides full uncertainty quantification, though it requires intensive computation. L-moments offer a robust and simple alternative, especially suited for heavy-tailed or outlier-prone data, but may be less efficient in large samples. The choice of method depends on data characteristics, model complexity, and computational resources.

4.1. Maximum Likelihood Estimation

Let $X_1, X_2, \ldots, X_n$ be a random sample from the EIED distribution with parameter vector $\Theta = (\alpha,\theta)$ and observed values $x_1, x_2, \ldots, x_n$; then, the log-likelihood function can be expressed as
$\ln L = \ln\frac{\alpha^{n}}{(e-1)^{n\alpha}} + \sum_{i=1}^{n}\ln\left[\left(e^{e^{-\theta/x_i}}-1\right)^{\alpha-1}e^{e^{-\theta/x_i}}\,e^{-\theta/x_i}\,\frac{\theta}{x_i^{2}}\right].$
Taking the partial derivatives with respect to θ and α and setting them equal to zero, we get
$\frac{\partial \ln L}{\partial\theta} = -(\alpha-1)\sum_{i=1}^{n}\frac{e^{e^{-\theta/x_i}}\,e^{-\theta/x_i}}{x_i\left(e^{e^{-\theta/x_i}}-1\right)} - \sum_{i=1}^{n}\frac{e^{-\theta/x_i}}{x_i} - \sum_{i=1}^{n}\frac{1}{x_i} + \frac{n}{\theta} = 0,$
$\frac{\partial \ln L}{\partial\alpha} = \frac{n}{\alpha} - n\ln(e-1) + \sum_{i=1}^{n}\ln\left(e^{e^{-\theta/x_i}}-1\right) = 0.$
After simplification, solving the second equation for α gives
$\hat\alpha(\theta) = \frac{n}{\,n\ln(e-1) - \sum_{i=1}^{n}\ln\left(e^{e^{-\theta/x_i}}-1\right)\,},$
which, substituted into the first equation, leaves a single nonlinear equation in θ.
The nature of these nonlinear equations means a closed-form solution is not possible. Instead, numerical approaches, such as the bivariate Newton–Raphson method (available in various computational packages, like MATHEMATICA and MATLAB), can be employed for their solution. Given the absence of a closed-form solution for the nonlinear system, an information matrix of size 2 × 2 becomes essential for performing hypothesis tests and interval estimations on the distribution parameters. The second derivatives that constitute the elements of this matrix are provided above.
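As a minimal illustration of the numerical route described above, the sketch below maximizes the EIED log-likelihood by iterative grid refinement rather than Newton–Raphson (a deliberate simplification); the function names, search ranges, and tuning constants are our own illustrative choices, not the authors' implementation.

```python
import math
import random

def eied_quantile(q, a, th):
    """Inverse CDF of EIED(a, th), used here only to generate test data."""
    return -th / math.log(math.log(1.0 + q ** (1.0 / a) * (math.e - 1)))

def eied_sample(n, a, th, seed=3):
    rng = random.Random(seed)
    return [eied_quantile(rng.random(), a, th) for _ in range(n)]

def neg_log_lik(params, data):
    """Negative log-likelihood of an EIED(a, th) sample."""
    a, th = params
    if a <= 0 or th <= 0:
        return float("inf")
    nll = -len(data) * (math.log(a) - a * math.log(math.e - 1))
    for x in data:
        v = math.exp(-th / x)
        ev = math.expm1(v)            # e^{e^{-th/x}} - 1, stable for tiny v
        if ev <= 0.0:
            return float("inf")
        nll -= ((a - 1) * math.log(ev) + v
                - th / x + math.log(th) - 2 * math.log(x))
    return nll

def fit_eied(data, a_rng=(0.05, 5.0), th_rng=(0.05, 5.0), iters=5, grid=20):
    """Crude MLE by iterative grid refinement (a stand-in for Newton-Raphson)."""
    (a_lo, a_hi), (t_lo, t_hi) = a_rng, th_rng
    best = None
    for _ in range(iters):
        for i in range(grid):
            for j in range(grid):
                a = a_lo + (a_hi - a_lo) * i / (grid - 1)
                th = t_lo + (t_hi - t_lo) * j / (grid - 1)
                val = neg_log_lik((a, th), data)
                if best is None or val < best[0]:
                    best = (val, a, th)
        _, a, th = best
        da, dt = 2 * (a_hi - a_lo) / grid, 2 * (t_hi - t_lo) / grid
        a_lo, a_hi = max(1e-6, a - da), a + da
        t_lo, t_hi = max(1e-6, th - dt), th + dt
    return best[1], best[2]
```

In practice one would refine the grid solution with a gradient-based step, but the sketch already recovers parameters roughly on simulated data.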

4.2. Simulation Study

We conducted simulation studies to assess and compare the performance of the different estimation procedures discussed in the preceding section. Random variables X from the EIED$(\alpha,\theta)$ distribution were generated using the Mathematica 8.0 computational package. The quality of these estimators is assessed primarily through bias and mean squared error (MSE).
It is often impractical to achieve perfectly unbiased estimators, leading to the frequent use of biased estimators; estimators with lower bias are generally preferred. A biased estimator may be used for several reasons: an unbiased estimator might not exist without further assumptions or could be too complex to compute; an estimator might be median-unbiased but not mean-unbiased; a biased estimator might minimize a loss function (such as MSE) more effectively than its unbiased counterparts (as with shrinkage estimators); or, in some scenarios, strict unbiasedness is an overly stringent condition, making the only unbiased estimators unhelpful. In contrast, MSE is a measure of estimator quality; it is always non-negative, and lower values signify better performance. For this purpose, we considered four parameter combinations: Set I: $\alpha=0.03$, $\theta=0.45$; Set II: $\alpha=0.55$, $\theta=1.30$; Set III: $\alpha=1.05$, $\theta=0.53$; and Set IV: $\alpha=1.98$, $\theta=2.32$. We generated 10,000 replications for each of the sample sizes $n = 15, 25, 35, 80, 150, 250$. The MLEs were then obtained, and their average biases and mean squared errors were calculated as
  • Average bias of the simulated estimates:
    $\frac{1}{10{,}000}\sum_{j=1}^{10{,}000}\left(\hat{\Theta}_{j}-\Theta\right),$
    where $\hat{\Theta} = (\hat{\alpha},\hat{\theta})$ denotes the MLE of EIED$(\alpha,\theta)$.
  • Average mean squared error (MSE) of the simulated estimates:
    $\frac{1}{10{,}000}\sum_{j=1}^{10{,}000}\left(\hat{\Theta}_{j}-\Theta\right)^{2}.$
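A reduced version of this simulation design can be sketched as follows: each replication draws a sample by inversion, estimates $(\alpha,\theta)$ by maximizing the profile likelihood in θ (with α profiled out from its score equation), and accumulates the average bias and MSE defined above. Replication counts and the θ search grid are deliberately small here for illustration, and all names are ours.

```python
import math
import random

E1 = math.e - 1.0

def eied_quantile(q, a, th):
    return -th / math.log(math.log(1.0 + q ** (1.0 / a) * E1))

def alpha_profile(th, data):
    """alpha-hat(theta) obtained by setting the alpha score equation to zero."""
    n = len(data)
    s = sum(math.log(math.expm1(math.exp(-th / x))) for x in data)
    return n / (n * math.log(E1) - s)

def profile_nll(th, data):
    """Negative profile log-likelihood at (alpha-hat(theta), theta)."""
    a = alpha_profile(th, data)
    ll = len(data) * (math.log(a) - a * math.log(E1))
    for x in data:
        v = math.exp(-th / x)
        ll += (a - 1) * math.log(math.expm1(v)) + v - th / x \
            + math.log(th) - 2 * math.log(x)
    return -ll

def fit(data, grid=None):
    grid = grid or [0.05 * k for k in range(1, 121)]   # theta in (0, 6]
    th = min(grid, key=lambda t: profile_nll(t, data))
    return alpha_profile(th, data), th

def bias_mse(a0, th0, n=40, reps=100, seed=11):
    """Average bias and MSE of the simulated estimates, as defined above."""
    rng = random.Random(seed)
    ba = bt = ma = mt = 0.0
    for _ in range(reps):
        data = [eied_quantile(rng.random(), a0, th0) for _ in range(n)]
        a, th = fit(data)
        ba += a - a0
        bt += th - th0
        ma += (a - a0) ** 2
        mt += (th - th0) ** 2
    return ba / reps, bt / reps, ma / reps, mt / reps
```

Note that, by construction, the reported MSE always dominates the squared bias (MSE = variance + bias²).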
Moreover, bias and MSE plots are also generated, portrayed in Figure 6 and Figure 7. They show that increasing the sample size reduces the α bias across all parameter sets, with Set I consistently exhibiting the lowest bias, while the θ bias remains low and stable across increasing sample sizes, showing little sensitivity to sample size or parameter set.
Similarly, Figure 7 shows that, for all parameter sets, the MSE of α decreases as the sample size increases, indicating improved estimation accuracy with more data. Set IV consistently has the highest alpha MSE at all sample sizes, indicating less accurate estimates. However, Set I has the lowest MSE across all sample sizes, showing better estimation precision. The decline in MSE is more pronounced at smaller sample sizes and tapers off as the sample size grows, suggesting diminishing returns with very large samples. However, the θ MSE remains relatively low and stable across all sample sizes for all parameter sets. Differences among sets are minimal, with Set I showing a slightly lower MSE. The MSE for θ estimates is much smaller compared to alpha, indicating high estimation accuracy or low variability in theta estimates.

4.3. Discussion About Simulation Study

The simulation comparison shows that LM consistently outperforms MLE and BEM by providing the lowest bias and MSE across all sample sizes. MLE's estimates are unstable for small samples but improve with larger n. BEM offers stability at small n but suffers from a higher MSE and persistent bias, especially for certain parameters. Overall, LM is the most reliable and accurate method, making it the preferred choice in this study.
Notably, θ estimates demonstrate greater stability but with less pronounced improvement as sample sizes increase, especially in Set IV, where biases and MSEs remain consistently high. The results suggest that MLE performs adequately only for very small α values (Set I) with large samples, while struggling with larger parameter values (Sets III–IV). This pattern points to potential model identifiability issues and highlights the need for alternative estimation approaches, such as Bayesian methods with informative priors or regularized estimation techniques, particularly when dealing with small-to-moderate sample sizes or larger parameter values. The findings underscore the importance of careful method selection when working with this distribution, especially in data-scarce scenarios.
Based on the simulation tables (MLE, BEM, and LM), the comparative analysis reveals that LM generally outperforms the other methods. MLE exhibits very large biases for small sample sizes, particularly in Sets II–IV, and maintains unstable bias even with larger samples. Its MSE is also extremely high for small n, decreasing with larger samples but remaining comparatively large. BEM shows moderate, more consistent bias but tends to have a higher MSE than LM, especially in the larger sets. Conversely, LM consistently demonstrates the smallest bias and MSE across nearly all sample sizes, with stability improving as n increases. Overall, LM provides the most reliable and accurate estimates, making it the preferred method among the three.
Figure 8 and Table A19 illustrate the bias of parameters α and θ as a function of sample size across four different parameter sets originating from BEM. For α , the bias generally decreases with increasing sample size for Sets I and III, indicating improved estimation accuracy, while Set II exhibits a slight positive bias that diminishes at larger samples. Set IV shows relatively stable bias across sample sizes. In contrast, the bias of θ demonstrates diverse patterns: it increases with sample size in Set I, remains stable and low in Set II, and shows a slight overestimation in Set III, whereas Set IV maintains near-zero bias. These observations highlight that the estimation accuracy of both parameters is influenced by the parameter set and sample size, emphasizing the importance of sufficient data for reliable inference and the variability in estimation performance depending on underlying parameter configurations.
Figure 9 and Table A19 present the mean squared error (MSE) of the parameters α and θ across varying sample sizes for four different parameter sets, as shown in the left and right panels, respectively. For α , the MSE generally decreases with increasing sample size for Sets I and III, indicating improved estimation accuracy as more data becomes available. Conversely, Set II exhibits fluctuating MSE values, with no clear decreasing trend, suggesting variability in estimation precision. Set IV maintains a relatively stable MSE across sample sizes. Regarding θ , the MSE tends to increase with sample size in Set I, implying potential challenges in estimation consistency for this set. In contrast, Sets II and IV show relatively stable or decreasing MSEs, indicating more reliable estimates as the sample size grows. Set III demonstrates an increasing trend in MSE, highlighting possible estimation difficulties. Overall, the figure underscores the impact of sample size and underlying parameter configurations on the accuracy of parameter estimation in the Bayesian model.

4.4. Bayes Estimation Method (BEM)

Let $X \sim \mathrm{EIED}(\alpha,\theta)$ with a PDF as defined in Equation (1). Let the prior densities of α and θ be $\alpha \sim \mathrm{Gamma}(0.5, 1)$ with PDF $\pi(\alpha) = \frac{1^{0.5}}{\Gamma(0.5)}\,\alpha^{0.5-1}e^{-1\cdot\alpha}$ and $\theta \sim \mathrm{Gamma}(1, 0.5)$ with PDF $\pi(\theta) = \frac{0.5^{1}}{\Gamma(1)}\,\theta^{1-1}e^{-0.5\theta} = 0.5\,e^{-0.5\theta}$, respectively. Suppose we have a sample $x_1, x_2, \ldots, x_n$. The likelihood is
$L(\alpha,\theta\mid \mathbf{x}) = \prod_{i=1}^{n}\frac{\alpha}{(e-1)^{\alpha}}\left(e^{e^{-\theta/x_i}}-1\right)^{\alpha-1}e^{e^{-\theta/x_i}}\,e^{-\theta/x_i}\,\frac{\theta}{x_i^{2}},$
and it can be written as
$L(\alpha,\theta\mid \mathbf{x}) = \frac{\alpha^{n}}{(e-1)^{n\alpha}}\prod_{i=1}^{n}\left(e^{e^{-\theta/x_i}}-1\right)^{\alpha-1}\prod_{i=1}^{n}e^{e^{-\theta/x_i}}\,e^{-\theta/x_i}\,\frac{\theta}{x_i^{2}}.$
The posterior distribution is proportional to the likelihood times the priors, $\pi(\alpha,\theta\mid\mathbf{x}) \propto L(\alpha,\theta\mid\mathbf{x})\cdot\pi(\alpha)\cdot\pi(\theta)$; upon substituting the priors, we get
$\pi(\alpha,\theta\mid\mathbf{x}) \propto \frac{\alpha^{n}}{(e-1)^{n\alpha}}\prod_{i=1}^{n}\left(e^{e^{-\theta/x_i}}-1\right)^{\alpha-1}\prod_{i=1}^{n}e^{e^{-\theta/x_i}}\,e^{-\theta/x_i}\,\frac{\theta}{x_i^{2}}\cdot\alpha^{0.5-1}e^{-\alpha}\cdot 0.5\,e^{-0.5\theta}.$
After simplification, we get
$\pi(\alpha,\theta\mid\mathbf{x}) \propto \frac{\alpha^{\,n-0.5}}{(e-1)^{n\alpha}}\prod_{i=1}^{n}\left(e^{e^{-\theta/x_i}}-1\right)^{\alpha-1}e^{-\alpha}\cdot\theta^{n}\prod_{i=1}^{n}\frac{1}{x_i^{2}}\prod_{i=1}^{n}e^{e^{-\theta/x_i}}\,e^{-\theta/x_i}\cdot e^{-0.5\theta}.$
The Bayes estimators under squared error loss are α ^ Bayes = E [ α x ] , θ ^ Bayes = E [ θ x ] . Exact analytical expressions are complicated due to the complex form of the likelihood; therefore, Markov Chain Monte Carlo (MCMC) methods, such as Gibbs sampling or Metropolis-Hastings, are used to generate samples from the posterior and compute these expectations. Therefore, for practical computation, we take the following steps: (1). Define the log-posterior to facilitate numerical stability:
$\log\pi(\alpha,\theta\mid\mathbf{x}) = (n-0.5)\log\alpha - n\alpha\log(e-1) - \alpha + \sum_{i=1}^{n}\left[(\alpha-1)\log\left(e^{e^{-\theta/x_i}}-1\right) + e^{-\theta/x_i} - \frac{\theta}{x_i} - 2\log x_i\right] + n\log\theta - 0.5\,\theta + \text{const}.$
(2). Use numerical optimization to find the Maximum a Posteriori (MAP) estimates or implement MCMC algorithms to sample from the posterior.
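Step (2) can be sketched with a random-walk Metropolis sampler on the log-posterior above; the proposal scale, chain length, starting point, and function names are illustrative choices, not the authors' implementation.

```python
import math
import random

def eied_quantile(q, a, th):
    """Inverse CDF, used here only to generate a test sample."""
    return -th / math.log(math.log(1.0 + q ** (1.0 / a) * (math.e - 1)))

def log_post(a, th, data):
    """Log-posterior (up to a constant) under the Gamma(0.5, 1) and Gamma(1, 0.5) priors."""
    if a <= 0 or th <= 0:
        return -math.inf
    n = len(data)
    lp = (n - 0.5) * math.log(a) - n * a * math.log(math.e - 1) - a
    lp += n * math.log(th) - 0.5 * th
    for x in data:
        v = math.exp(-th / x)
        lp += (a - 1) * math.log(math.expm1(v)) + v - th / x - 2 * math.log(x)
    return lp

def metropolis(data, steps=3000, scale=0.15, seed=5):
    """Random-walk Metropolis; returns posterior means after burn-in."""
    rng = random.Random(seed)
    a, th = 1.0, 1.0                      # illustrative starting point
    lp = log_post(a, th, data)
    chain = []
    for _ in range(steps):
        a2, t2 = a + rng.gauss(0, scale), th + rng.gauss(0, scale)
        lp2 = log_post(a2, t2, data)
        if lp2 > lp or rng.random() < math.exp(lp2 - lp):
            a, th, lp = a2, t2, lp2
        chain.append((a, th))
    kept = chain[steps // 2:]
    return (sum(p[0] for p in kept) / len(kept),
            sum(p[1] for p in kept) / len(kept))
```

With weak priors and a moderate sample, the posterior means should land near the data-generating values.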

4.5. L-Moments Method

L-moments are linear combinations of probability weighted moments (PWMs) that provide a robust alternative to conventional moments. They are particularly useful for parameter estimation in distributions where traditional moments may not exist or are unstable: they are less sensitive to outliers than conventional moments, they can exist when higher conventional moments diverge, and they have good small-sample properties. Here, $\lambda_1$ measures location, $\lambda_2$ measures scale, and $\lambda_3/\lambda_2$ measures skewness. For a random variable X with cumulative distribution function $F(x)$, the L-moments are defined as
$\lambda_1 = E[X],\qquad \lambda_2 = \tfrac{1}{2}E\!\left[X_{(2:2)} - X_{(1:2)}\right],$
$\lambda_3 = \tfrac{1}{3}E\!\left[X_{(3:3)} - 2X_{(2:3)} + X_{(1:3)}\right],\qquad \lambda_4 = \tfrac{1}{4}E\!\left[X_{(4:4)} - 3X_{(3:4)} + 3X_{(2:4)} - X_{(1:4)}\right],$
where X ( k : n ) denotes the k -th order statistic from a sample of size n.

4.5.1. Probability Weighted Moments (PWMs)

The probability weighted moments of order $(r,s,t)$ for a random variable X with cumulative distribution function $F(x)$ are defined as
$\beta_{r,s,t} = E\!\left[X^{r}\,F(X)^{s}\left(1-F(X)\right)^{t}\right].$
For the special case used in L-moments, we consider
$\beta_r = E\!\left[X\,F(X)^{r}\right] = \int_0^\infty x\,f(x)\,F(x)^{r}\,dx,$
where f ( x ) is the probability density function.
Definition 2.
The L-moments λ k are linear combinations of probability weighted moments, expressed as
$\lambda_1 = \beta_0,\qquad \lambda_2 = 2\beta_1 - \beta_0,\qquad \lambda_3 = 6\beta_2 - 6\beta_1 + \beta_0,\qquad \lambda_4 = 20\beta_3 - 30\beta_2 + 12\beta_1 - \beta_0.$
In general, the k-th L-moment is given by
$\lambda_k = \sum_{j=0}^{k-1}(-1)^{k-1-j}\binom{k-1}{j}\binom{k-1+j}{j}\,\beta_j.$
For an ordered sample x ( 1 ) x ( 2 ) x ( n ) , the unbiased estimators of PWMs are
$b_r = \frac{1}{n}\sum_{i=1}^{n}\frac{\binom{i-1}{r}}{\binom{n-1}{r}}\,x_{(i)}.$
The sample L-moments are then
$l_1 = b_0,\qquad l_2 = 2b_1 - b_0,\qquad l_3 = 6b_2 - 6b_1 + b_0,\qquad l_4 = 20b_3 - 30b_2 + 12b_1 - b_0.$
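The sample PWMs and L-moments above translate directly into code. The sketch below follows the $F(x)$-based convention $b_r = \frac{1}{n}\sum_i \binom{i-1}{r}/\binom{n-1}{r}\,x_{(i)}$, which is the one consistent with $l_2 = 2b_1 - b_0$; function names are ours.

```python
import math

def sample_pwms(xs, rmax=3):
    """Unbiased PWM estimates b_r for r = 0..rmax (F-based convention)."""
    xs = sorted(xs)
    n = len(xs)
    # enumerate gives 0-based index i, which equals the 1-based order index minus 1
    return [sum(math.comb(i, r) * x for i, x in enumerate(xs)) / (n * math.comb(n - 1, r))
            for r in range(rmax + 1)]

def sample_lmoments(xs):
    """First four sample L-moments from the PWMs."""
    b0, b1, b2, b3 = sample_pwms(xs, 3)
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3, l4
```

For a symmetric sample, $l_3$ and $l_4$ computed this way vanish, which makes a convenient sanity check.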
Now, the theoretical PWMs of EIED can be expressed as
$\beta_r(\alpha,\theta) = \int_0^\infty x\left[\frac{e^{e^{-\theta/x}}-1}{e-1}\right]^{\alpha r}\frac{\alpha}{(e-1)^{\alpha}}\left(e^{e^{-\theta/x}}-1\right)^{\alpha-1}e^{e^{-\theta/x}}\,e^{-\theta/x}\,\frac{\theta}{x^{2}}\,dx,$
which, upon simplification, becomes
$\beta_r(\alpha,\theta) = \frac{\alpha\theta}{(e-1)^{\alpha(r+1)}}\int_0^\infty \frac{1}{x}\left(e^{e^{-\theta/x}}-1\right)^{\alpha(r+1)-1}e^{e^{-\theta/x}}\,e^{-\theta/x}\,dx.$
The parameters ( α , θ ) are estimated by solving
l 1 = λ 1 ( α , θ ) , l 2 = λ 2 ( α , θ ) , l 3 = λ 3 ( α , θ ) ,
which is equivalent to minimizing the objective function:
$J(\alpha,\theta) = \left[l_1 - \lambda_1(\alpha,\theta)\right]^{2} + \left[l_2 - \lambda_2(\alpha,\theta)\right]^{2} + \left[l_3 - \lambda_3(\alpha,\theta)\right]^{2}.$
The plots in Figure 10 indicate that the biases for both the α and θ parameters tend to decrease or stabilize as the sample size increases. Set I consistently shows the lowest bias, while Set III and Set IV exhibit higher bias levels that remain relatively constant with larger samples. Overall, increasing the sample size improves the estimation accuracy by reducing bias across the different datasets. Similarly, the MSE plots, as portrayed in Figure 11, show that the MSE for α and θ generally decreases with increasing sample size, indicating improved estimation accuracy. Different datasets exhibit varying patterns, with some benefiting more from larger samples, while others maintain higher MSE levels regardless of sample size. Overall, larger samples tend to lead to more precise parameter estimates.

4.5.2. Numerical Implementation Strategy

Due to the complexity of the integrals, a numerical approach is employed, comprising the following steps: (i) Precompute a grid of parameter values ( α i , θ j ) and compute the corresponding theoretical L-moments
Λ i j = ( λ 1 ( α i , θ j ) , λ 2 ( α i , θ j ) , λ 3 ( α i , θ j ) )
using numerical integration or Monte Carlo simulation. (ii) For a sample with L-moments l = ( l 1 , l 2 , l 3 ) , find the parameters that minimize
$d(\alpha,\theta) = \sum_{k=1}^{3} w_k\left(\frac{l_k - \lambda_k(\alpha,\theta)}{\sigma_k}\right)^{2},$
where w k are weights and σ k are scale factors. (iii) Solve the minimization problem:
( α ^ , θ ^ ) = arg min ( α , θ ) Θ d ( α , θ ) .
Under regularity conditions, the L-moment estimators are consistent: $\lim_{n\to\infty}(\hat\alpha_n, \hat\theta_n) = (\alpha_0, \theta_0)$ almost surely. The estimators are asymptotically normal,
$\sqrt{n}\begin{pmatrix}\hat\alpha_n - \alpha_0\\ \hat\theta_n - \theta_0\end{pmatrix} \xrightarrow{d} N\!\left(0,\, I^{-1}\right),$
where I is the Fisher information matrix.

5. Materials and Methods

In the statistical literature, numerous distributions are available for life testing applications. In this study, we utilize real lifetime data to demonstrate that EIED can serve as a suitable model for analyzing both lifetime and flood-related data. For comparison, we evaluated the performance of EIED against other distributions, including the IE, IR, MIR, MIW, and NGIR. The PDFs for these distributions can be found in Khan and King (2018). Model comparisons were performed using several statistical criteria: AIC, BIC, AICc, HQIC, the KS, AD, and CM statistics, and the associated p-values. Detailed definitions and descriptions of these methods follow.
One popular method for comparing two or more models is log-likelihood, ( Θ ^ ) = l n ( L ( Θ ^ ) ) ; a larger log-likelihood value suggests a better fit of a model to data. Additionally, information criteria are effective for comparing statistical models that may differ in their number of parameters. AIC, HQIC, AICc, and CAIC are frequently used information criteria for selecting the most appropriate model among various options.
$\mathrm{AIC} = 2k - 2\ell(\hat\Theta), \qquad \mathrm{AICc} = \mathrm{AIC} + \frac{2k(k+1)}{n-k-1},$
$\mathrm{HQIC} = -2\ell(\hat\Theta) + 2k\ln(\ln n), \qquad \mathrm{CAIC} = -2\ell(\hat\Theta) + k\left(\ln n + 1\right).$
In this context, k signifies the number of parameters, n refers to the number of observations, and Θ ^ indicates the estimated parameters.
The efficacy of the proposed model is also assessed through the Kolmogorov–Smirnov (KS), Anderson–Darling ($A_0$), and Cramér–von Mises ($W_0$) statistics, whose mathematical expressions are presented below.
$\mathrm{KS} = \max_{1\le i\le m}\left\{\frac{i}{m}-z_i,\; z_i-\frac{i-1}{m}\right\},$
$A_0 = \left(\frac{2.25}{m^{2}}+\frac{0.75}{m}+1\right)\left[-m-\frac{1}{m}\sum_{i=1}^{m}(2i-1)\ln\left(z_i\left(1-z_{m-i+1}\right)\right)\right],$
$W_0 = \sum_{i=1}^{m}\left[z_i-\frac{2i-1}{2m}\right]^{2}+\frac{1}{12m},$
To define the variables: k is the number of parameters, n is the number of observations, and m is the number of classes; $z_i = F(x_{(i)})$ denotes the cumulative probability associated with the ith-ordered observation $x_{(i)}$. Lastly, $o_i$ and $e_i$ refer to the observed and expected frequencies for the ith class.
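These criteria and statistics are straightforward to compute once the fitted CDF values $z_i$ and the maximized log-likelihood are available. The sketch below is illustrative (names are ours); the CAIC uses the consistent-AIC form $-2\ell + k(\ln n + 1)$, which reproduces the tabled CAIC values.

```python
import math

def info_criteria(loglik, k, n):
    """AIC, AICc, HQIC, and CAIC from the maximized log-likelihood."""
    aic = 2 * k - 2 * loglik
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)
    hqic = -2 * loglik + 2 * k * math.log(math.log(n))
    caic = -2 * loglik + k * (math.log(n) + 1)
    return aic, aicc, hqic, caic

def gof_stats(z):
    """KS, Anderson-Darling (A0), and Cramer-von Mises (W0) from ordered z_i = F(x_(i))."""
    z = sorted(z)
    m = len(z)
    ks = max(max((i + 1) / m - zi, zi - i / m) for i, zi in enumerate(z))
    a2 = -m - sum((2 * (i + 1) - 1) * math.log(z[i] * (1 - z[m - 1 - i]))
                  for i in range(m)) / m
    a0 = a2 * (1 + 0.75 / m + 2.25 / m ** 2)
    w0 = sum((zi - (2 * (i + 1) - 1) / (2 * m)) ** 2 for i, zi in enumerate(z)) + 1 / (12 * m)
    return ks, a0, w0
```

For perfectly evenly spaced $z_i$, KS reduces to $1/(2m)$ and $W_0$ to its floor $1/(12m)$, a useful sanity check of the implementation.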

Real Data Illustration

In this section, the usefulness of the EIED distribution is illustrated by using two real datasets: one for the investigation of failure times and the other for flood discharge rates. These datasets are described as follows.
From Figure 12, it is clear that the data points of Set I mostly follow the straight line, especially in the middle, indicating that the data is approximately normally distributed. The deviations at the extremes (tails) suggest some discrepancy from perfect normality, possibly indicating heavier tails or skewness. However, the data points of Set II deviate more from the straight line, especially at the upper tail, suggesting that this dataset may not follow a normal distribution closely. The presence of points far from the line at the upper end indicates potential outliers or a distribution with heavier tails than the normal distribution.

6. The Datasets and Their Properties

The first dataset was obtained from [23]. The data represent the failure times of the air-conditioning system of an aircraft: 23, 261, 87, 7, 120, 14, 62, 47, 225, 71, 246, 21, 42, 20, 5, 12, 120, 11, 3, 14, 71, 11, 14, 11, 16, 90, 1, 16, 52, 95. The second dataset, reported by [1], investigates the flood discharge of the Floyd River located at James, Iowa, USA. The Floyd River flood discharge rates in $ft^3/s$ for the years 1935–1973 are as follows: 1935–1944: 1460, 4050, 3570, 2060, 1300, 1390, 1720, 6280, 1360, 7440; 1945–1954: 5320, 1400, 3240, 2710, 4520, 4840, 8320, 13,900, 71,500, 6250; 1955–1964: 2260, 318, 1330, 970, 1920, 15,100, 2870, 20,600, 3810, 726; 1965–1973: 7500, 7170, 2000, 829, 17,300, 4740, 13,400, 2940, 5660.
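Basic descriptive statistics for these two datasets (of the kind tabulated in Table A9) can be reproduced with the sketch below; the moment-based skewness and kurtosis definitions are the standard ones, and the helper names are ours.

```python
import math

# Set I: aircraft air-conditioning failure times (n = 30)
aircraft = [23, 261, 87, 7, 120, 14, 62, 47, 225, 71, 246, 21, 42, 20, 5, 12,
            120, 11, 3, 14, 71, 11, 14, 11, 16, 90, 1, 16, 52, 95]
# Set II: Floyd River flood discharges, 1935-1973 (n = 39)
floyd = [1460, 4050, 3570, 2060, 1300, 1390, 1720, 6280, 1360, 7440,
         5320, 1400, 3240, 2710, 4520, 4840, 8320, 13900, 71500, 6250,
         2260, 318, 1330, 970, 1920, 15100, 2870, 20600, 3810, 726,
         7500, 7170, 2000, 829, 17300, 4740, 13400, 2940, 5660]

def describe(xs):
    """Mean, standard deviation, moment skewness m3/m2^{3/2}, and kurtosis m4/m2^2."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return mean, math.sqrt(m2), m3 / m2 ** 1.5, m4 / m2 ** 2
```

Both datasets come out positively skewed, with Set II markedly more extreme, in line with the discussion of Table A9 below.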
Table A9 shows that both datasets are positively skewed and leptokurtic, which suggests they are not normally distributed. Set II is more extreme in both skewness and kurtosis, indicating heavier tails and more extreme values. Set II also shows a larger gap between the mean and median, supporting the idea of a strong influence of outliers. The skewness/kurtosis ratio is higher for Set I, suggesting a more balanced deviation from normality than Set II. Similarly, Figure 13 shows that Set I has a fairly normal-like distribution with mild skewness and a moderate spread, whereas Set II is highly skewed, with extreme values pulling the mean upward, which is clearly visualized through the violin's shape. The violin plot confirms the table's descriptive statistics, especially regarding skewness, kurtosis, and dispersion.
Kernel Density Estimation (KDE) is used to compare data distributions visually. It helps to obtain a smooth estimate of the data distribution and identify the number of modes (peaks) in the data. In this regard, Figure 14 shows that both datasets are right-skewed, with most data points concentrated around lower values. Set I has a peak near zero and gradually decreases, indicating fewer high-value outliers. Set II also peaks near zero but shows more significant outliers at very high values. Overall, both datasets suggest the presence of predominantly low values with some large outliers, especially in Set II.

Data Analysis and Interpretation

Here, we are going to present and analyse the above-stated datasets. All results in this section were compiled on the basis of MLEs obtained from different distributions. In this regard, Table A10 shows that the proposed model has the lowest A 0 (1.9946), W 0 (0.3365), and KS (0.2467), along with the highest p-value (0.1214 > 0.05). This suggests it is the most suitable model for Set I. The IR distribution performs poorly, with a very high A 0 (38.3314), W 0 (3.7244), and KS (0.6442), and a p-value of 0.0000, rejecting the null hypothesis. However, IE and MIR have moderate fits but are rejected at the 5% significance level (p-values < 0.05). Moreover, MIW and NGIR are also rejected (p-values < 0.05), with MIW showing the second-worst KS statistic.
The goodness of fit test results, as portrayed in Table A11, indicate that the EIED and MIW distributions provide a reasonably good fit to the data, as evidenced by their relatively high KS p-values of 0.1933 and 0.3897, respectively. In contrast, the IR and NGIR distributions show very low p-values (0.0000), indicating poor fits to the data. Overall, based on the KS test results, the EIED and MIW distributions are the most appropriate models among those considered. Table A12 shows EIED's dominance in goodness of fit and in BIC, AICc, HQIC, and CAIC (though not in AIC) when estimating with both methods. Moreover, Figure 15 also supports the results displayed in Table A10 and Table A11, respectively.
The two-parameter EIED model demonstrates remarkable consistency and robustness. It performs exceptionally well under both MLE and LM estimation methods and is consistently the most parsimonious option according to BIC. Therefore, unless there is a specific need to extract the absolute maximum likelihood from the data using the LM method (opting for MIW), EIED stands out as the most reliable and interpretable model for Set I. Table A13 summarizes various model selection criteria—AIC, BIC, AICc, HQIC, and CAIC—for different distributions fitted to the data with n = 30. Among the models, the EIED distribution consistently exhibits the lowest values across all criteria, with an AIC of 316.803, BIC of 319.605, AICc of 317.232, HQIC of 317.694, and CAIC of 321.605. These results suggest that the EIED distribution provides the best balance between model fit and complexity for the data. The MIW distribution also performs well, with comparable criteria values, indicating it is a strong alternative. Conversely, the IR and NGIR distributions show higher scores across all criteria, implying a poorer fit relative to the other models. Overall, based on these model selection metrics, the EIED distribution can be regarded as the most appropriate model among those considered.
The comparison between the TTT plots as portrayed in Figure 16 obtained using the maximum likelihood estimation (MLE) method (left) and the L-moments (LM) method (right) reveals notable differences in the stability and fit of the models to the empirical data. In the MLE-based TTT plot, the fitted models (such as IE, IR, MIR, MIW, NGIR, and the exponential reference) tend to closely follow the empirical curve, especially in the central region, indicating higher precision and accuracy under the assumption of a well-behaved data distribution. However, the MLE estimates can be sensitive to outliers and data contamination, which may affect the stability of the fit.
Conversely, the LM-based TTT plot demonstrates that the fitted models (including IE, IR, MIR, MIW, NGIR, and EIED) exhibit more stable behavior across the entire range of i / n , particularly in the tails, due to the robustness of the L-moments against outliers and small sample variability. In the case of LM, all distributions show convex TTT curves, indicating decreasing hazard rates. EIED and NGIR show TTT curves closest to the original data, confirming their better fit. The IE distribution shows the most pronounced convex shape, while IR shows the least.
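The scaled TTT transform underlying these plots maps each order statistic x_(i) to the point (i/n, T_i), where T_i = [Σ_{j≤i} x_(j) + (n − i)·x_(i)] / Σ_j x_(j); a convex curve of these points signals a decreasing hazard rate and a concave curve an increasing one. A minimal sketch (the function name is ours):

```python
def scaled_ttt(data):
    """Scaled total-time-on-test transform: returns points (i/n, T_i) with
    T_i = (sum of the i smallest observations + (n - i) * x_(i)) / (total sum)."""
    x = sorted(data)
    n = len(x)
    total = sum(x)
    pts = []
    csum = 0.0
    for i in range(1, n + 1):
        csum += x[i - 1]
        pts.append((i / n, (csum + (n - i) * x[i - 1]) / total))
    return pts

pts = scaled_ttt([3.0, 1.0, 2.0, 6.0])  # last point is always (1.0, 1.0)
```

Plotting these points against the diagonal gives the empirical curve that the fitted models in Figure 16 are compared with.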
However, MLE generally provides more precise and accurate parameter estimates when the model assumptions are met, and the LM method ensures greater stability and reliability, especially in the presence of data irregularities. The choice of parameter estimation method, thus, influences the interpretation of the distributional characteristics depicted in the TTT plots, with LM offering robustness and MLE offering efficiency. The analysis of Set II, as portrayed in Table A14, indicates that the EIED and MIW distributions provide the best fit to the data, evidenced by their low KS values and high p-values, suggesting good model adequacy. In contrast, distributions such as IR and NGIR exhibit very high KS values and negligible p-values, indicating poor fit. Overall, the results favor the EIED and MIW models for this dataset. As shown in Table A15, the analysis indicates that the EIED distribution provides the best fit to Set II when parameters are estimated via LM, as evidenced by its lowest A₀ and W₀ values and moderate to high p-value (0.9011) for the KS statistic, suggesting strong compatibility with the data. The IE distribution also fits well, with a moderate KS value and a high p-value (0.9085). In contrast, distributions such as IR and NGIR have very high KS values and negligible p-values, indicating poor fit. The MIR and MIW distributions show acceptable fits with high p-values (0.8182 and 0.9092, respectively). Overall, the results favor EIED and MIW as the most suitable models for this dataset.
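The KS and Cramér–von Mises (W₀) statistics quoted here are computed from the fitted CDF evaluated at the order statistics; a generic sketch (function names ours) that works with any candidate CDF:

```python
def ks_statistic(data, cdf):
    """Kolmogorov-Smirnov distance between the empirical CDF and a fitted CDF."""
    x = sorted(data)
    n = len(x)
    d = 0.0
    for i, xi in enumerate(x, start=1):
        f = cdf(xi)
        d = max(d, i / n - f, f - (i - 1) / n)
    return d

def cvm_statistic(data, cdf):
    """Cramer-von Mises statistic: sum of (F(x_(i)) - (2i-1)/(2n))^2 plus 1/(12n)."""
    x = sorted(data)
    n = len(x)
    s = sum((cdf(xi) - (2 * i - 1) / (2 * n)) ** 2 for i, xi in enumerate(x, start=1))
    return s + 1.0 / (12 * n)
```

For example, with data = [0.1, 0.3, 0.5, 0.7, 0.9] and the uniform CDF F(v) = v, ks_statistic returns 0.1 and cvm_statistic returns its minimum value 1/(12·5).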
The histograms of Set II, presented in Figure 17, display the empirical distribution of the data alongside fitted theoretical models. The left panel shows the histogram with the fitted distributions overlaid, including EIED, IE, IR, MIR, MIW, and NGIR, each represented by a different color. The right panel presents the L-moments-based histogram fitted with the same distributions. Visually, the EIED distribution appears to closely follow the shape of the data, capturing the skewness and tail behavior effectively, which is consistent with the goodness of fit measures, indicating that it is the best-fitting model. The IE distribution also provides a reasonable fit, though some deviations are noticeable in the tails. The other models, particularly IR and NGIR, do not align well with the data, especially in the tails, suggesting that they are less suitable for modeling this dataset. The consistency between the histograms and the statistical fit results reinforces that the EIED distribution offers the most accurate representation of Set II among the candidate models. In a nutshell, the analysis compares two datasets (Set I and Set II) using histograms and fitted probability distributions (EIED, IE, IR, MIR, MIW, and NGIR) to evaluate their goodness of fit. Set I exhibits a right-skewed distribution with a peak near zero and a long tail, while Set II shows a sparse, near-zero density across a wide range, suggesting heavy-tailed or extreme-valued data. For Set I, the EIED distribution performs best, with the lowest AIC (316.803), BIC (319.605), and Kolmogorov–Smirnov (KS) statistic (0.2467), closely followed by NGIR. In Table A16, under MLE, the two-parameter models—EIED, MIR, and MIW—perform similarly with nearly identical log-likelihoods, but EIED slightly outperforms the others with the lowest AIC (759.469) and BIC (762.797).
Both methods corroborate the fact that simpler models like IR and NGIR are inadequate and that the more complex MIW does not offer sufficient improvement to justify its complexity. Overall, the results in Table A17 based on the LM method favor the simple IE model, while MLE suggests that the EIED model is the best option due to its slightly superior fit and lower penalized criteria. Despite some divergence, both methods agree on discarding models like IR and NGIR. The robustness of MLE, which successfully estimated all models, contrasts with LM’s failures for certain distributions. In conclusion, for Set II, the choice of the optimal model depends on the estimation technique: LM favors the parsimonious IE distribution, whereas MLE supports the slightly more flexible EIED distribution as the most appropriate model based on fit and complexity considerations.
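The LM fits discussed above rest on matching sample L-moments to their model counterparts; the first two sample L-moments can be obtained from probability-weighted moments. A minimal sketch (the function name is ours); note that l2 equals half the Gini mean difference, which is what makes L-moments robust to outliers:

```python
def sample_l_moments(data):
    """First two sample L-moments via probability-weighted moments:
    b0 = mean, b1 = (1/n) * sum_{i} ((i-1)/(n-1)) * x_(i), l1 = b0, l2 = 2*b1 - b0."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum((i - 1) / (n - 1) * x[i - 1] for i in range(1, n + 1)) / n
    return b0, 2.0 * b1 - b0

l1, l2 = sample_l_moments([1.0, 2.0, 3.0])  # l1 = 2, l2 = 2/3
```

An LM fit then solves for the parameter values whose theoretical L-moments equal (l1, l2), which for a two-parameter model such as EIED gives a two-equation system.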
Figure 18 compares the return period analyses for Set II using the MLE and LM methods. Both plots show that models like EIED and MIW closely match the empirical data, especially at higher values, indicating their effectiveness in modeling extreme events. The MLE plot demonstrates that these models accurately capture the tail behavior, while the LM plot emphasizes the dataset’s heavy-tail nature. Conversely, models such as IR and NGIR deviate at higher return periods, suggesting poorer performance in estimating rare events. Overall, the figures highlight that EIED and MIW provide the best fit for predicting return periods, with the choice of method influencing the interpretation of tail behavior and model suitability.
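A return period pairs a magnitude x with T(x) = 1/(1 − F(x)); empirical return periods for the observed maxima are commonly assigned with the Weibull plotting position (one conventional choice — the source does not state which plotting position Figure 18 uses, so this is an assumption). A sketch (function names ours):

```python
def empirical_return_periods(data):
    """Sorted observations paired with Weibull plotting-position return periods
    T_i = (n + 1) / (n + 1 - i)."""
    x = sorted(data)
    n = len(x)
    return [(xi, (n + 1) / (n + 1 - i)) for i, xi in enumerate(x, start=1)]

def model_return_period(x, cdf):
    """Model-based return period 1 / (1 - F(x))."""
    return 1.0 / (1.0 - cdf(x))
```

Under this convention, the largest of 39 annual maxima (Set II) is assigned an empirical return period of 40 years, and a fitted model is judged by how closely its curve 1/(1 − F(x)) tracks these points in the upper tail.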

7. Conclusions

In this paper, we introduce EIED, a novel, two-parameter extension of the traditional inverse exponential distribution. Our motivation stems from the prevalent use of the inverse exponential distribution in reliability analysis and the recognition that generalizing this distribution can enhance flexibility in modeling lifetime data. We have derived explicit formulas for key statistical measures, including moments, the moment generating function, Lorenz and Bonferroni curves, and entropy. Furthermore, we compared EIED with related inverse distributions using PDFs and hazard rate functions. To characterize the distribution, we employed Mellin, Laplace, and Fourier transforms, addressing the important issue of identifiability not only through the cumulative distribution function (CDF), but also via these characterization methods.
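As a worked illustration of the model's tractability, assume the EIED CDF has the form F(x; α, θ) = [(exp(exp(−θ/x)) − 1)/(e − 1)]^α for x > 0 (our reading of the entry in Table A1); the CDF then inverts in closed form, which also yields inverse-transform sampling:

```python
import math

E = math.e

def eied_cdf(x, alpha, theta):
    """Assumed EIED CDF: F(x) = ((exp(exp(-theta/x)) - 1) / (e - 1)) ** alpha, x > 0."""
    return ((math.exp(math.exp(-theta / x)) - 1.0) / (E - 1.0)) ** alpha

def eied_quantile(u, alpha, theta):
    """Closed-form inverse of the assumed CDF, for 0 < u < 1."""
    inner = math.log(u ** (1.0 / alpha) * (E - 1.0) + 1.0)  # equals exp(-theta/x), in (0, 1)
    return -theta / math.log(inner)

# Round trip at the Set I LM estimates (theta = 7.1984, alpha = 1.2268): F(Q(u)) = u
x50 = eied_quantile(0.5, 1.2268, 7.1984)
```

Feeding uniform random numbers through eied_quantile generates EIED samples under the assumed form, which is the standard route for simulation studies such as the one summarized below.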
For parameter estimation, we utilized three approaches: maximum likelihood estimation (MLE), the Bayesian Estimation Method (BEM), and the L-Moments Method (LMM). A comprehensive simulation study was conducted using 10,000 samples across various sample sizes (n = 15, 25, 35, 80, 150, and 250), where we evaluated the estimators using mean squared error (MSE) and bias to compare estimated parameters against true values, with the results summarized in tables. The findings suggest that BEM provides stable estimates, particularly when an appropriate prior distribution is chosen.
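The bias and MSE columns of Tables A18–A20 are the usual Monte Carlo summaries, bias = mean(θ̂) − θ and MSE = mean((θ̂ − θ)²). A generic sketch (helper names ours; the exponential-rate MLE below is only a stand-in estimator to keep the example self-contained, not the EIED estimator used in the paper):

```python
import random
import statistics

def bias_and_mse(true_value, estimates):
    """Monte Carlo bias and mean squared error of a list of estimates."""
    bias = statistics.fmean(estimates) - true_value
    mse = statistics.fmean((e - true_value) ** 2 for e in estimates)
    return bias, mse

# Stand-in experiment: MLE of an exponential rate over 1000 replications of n = 50.
rng = random.Random(1)
true_rate = 2.0
estimates = []
for _ in range(1000):
    sample = [rng.expovariate(true_rate) for _ in range(50)]
    estimates.append(1.0 / statistics.fmean(sample))  # exponential-rate MLE
bias, mse = bias_and_mse(true_rate, estimates)
```

Since MSE = variance + bias², MSE can never fall below the squared bias, which is a quick sanity check on any simulated row of such a table.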
Additionally, we applied our model to two real datasets—one medical and one related to flood data—demonstrating that EIED offers a better fit compared to competing models. We believe this distribution holds promise for broader applications in statistical modeling and analysis.

Author Contributions

Conceptualization, T.H. and A.M.; methodology, T.H.; software, T.H. and A.M.; validation, T.H., M.S. and M.A.; formal analysis, T.H.; investigation, T.H.; resources, B.M.G.K.; data curation, T.H.; writing—original draft preparation, A.M. and T.H.; writing—review and editing, T.H.; visualization, M.S.; supervision, M.A.; project administration, B.M.G.K.; funding acquisition, None. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original data presented in the study are openly available in Akinsete, Famoye and Lee (2008) [1] and Khan and King (2016) [23].

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Cumulative distribution functions (CDFs) of inverse models.
Distribution | Parameters | CDF, F(x)
Inverted Exponential (IE) | λ > 0 | F(x; λ) = exp(−λ/x) · I_(0,∞)(x)
Inverse Rayleigh (IR) | σ > 0 | F(x; σ) = exp(−(σ/x)²) · I_(0,∞)(x)
Modified Inverse Rayleigh (MIR) | α > 0, β > 0 | F(x; α, β) = exp(−α/x − β/x²) · I_(0,∞)(x)
Modified Inverse Weibull (MIW) | α > 0, β > 0, γ > 0 | F(x; α, β, γ) = exp(−α/x − γ/x^β) · I_(0,∞)(x)
New Generalized Inverse Rayleigh (NGIR) | α > 0, β > 0, λ > 0 | F(x; α, β, λ) = {1 − [1 − exp(−α/x − λ/x²)]^β} · I_(0,∞)(x)
Exponentiated Inverse Exponential (EIED) | α > 0, θ > 0 | F(x; α, θ) = [(exp(exp(−θ/x)) − 1)/(e − 1)]^α · I_(0,∞)(x)
Table A2. Asymptotic tail behavior of inverse models.
Distribution | Survival Tail as x → ∞ | Tail Classification
Inverted Exponential (IE) | ∼ λ/x | Heavy tail (polynomial decay: x⁻¹)
Inverse Rayleigh (IR) | ∼ σ²/x² | Heavy tail (polynomial decay: x⁻²)
Exponentiated Inverse Exponential (EIED) | ∼ αθ/x | Heavy tail (polynomial decay: x⁻¹)
Modified Inverse Rayleigh (MIR) | ∼ αβ²/x² | Heavy tail (polynomial decay: x⁻²)
Modified Inverse Weibull (MIW) | ∼ α²γβ/x^β | Heavy tail for β < ∞ (polynomial decay: x⁻β)
New Generalized Inverse Rayleigh (NGIR) | ∼ βα/x | Heavy tail (polynomial decay: x⁻¹)
Table A3. Flexibility of the HRF of inverse models.
Distribution | HRF Shape | Flexibility | Key Feature
IE/IR | Fixed Unimodal | Low | Baseline models with no shape flexibility.
EIED/MIR | Flexible Unimodal | Medium | Shape parameter α modulates the peak of the baseline (IE/IR) HRF.
MIW | Highly Flexible Unimodal | Very High | Shape-driven flexibility. Parameter β controls the core HRF form (sharpness), and α provides additional modulation.
NGIR | Structurally Flexible Unimodal | Highest | Additive-structure flexibility. Can mimic or blend the HRFs of the IE and IR families, offering a unique class of shapes.
Table A4. Asymptotic tail behavior of the HRF of inverse models.
Distribution | Params | Tail Behavior | Hazard Shape | Flexibility | Key Feature
IE | λ | Heaviest (∼1/x) | Unimodal | Low | Baseline heavy-tail model
IR | σ | Heavy (∼1/x²) | Unimodal | Low | Faster decay than IE
EIED | α, θ | Heaviest (∼1/x) | Flexible Unimodal | Medium | Generalized IE with shape parameter α
MIR | α, β | Heavy (∼1/x²) | Flexible Unimodal | Medium | Generalized IR with shape parameters α, β
MIW | α, β, γ | Tunable (∼1/x^β) | Highly Flexible | Very High | β controls decay rate
NGIR | α, β, λ | Heaviest (∼1/x) | Structurally Flexible | Highest | Additive IE + IR structure
Table A5. Quantile comparison at key percentiles.
Percentile | IE | IR | MIR | MIW | NGIR | EIED
0.010 | 0.326 | 0.466 | 0.523 | 0.523 | 18.218 | 0.543
0.050 | 0.501 | 0.578 | 0.667 | 0.667 | 6.711 | 0.890
0.250 | 1.082 | 0.849 | 1.049 | 1.049 | 2.523 | 2.093
0.500 | 2.164 | 1.201 | 1.615 | 1.615 | 1.572 | 4.365
0.750 | 5.214 | 1.864 | 2.926 | 2.926 | 1.089 | 10.793
0.950 | 29.244 | 4.415 | 11.450 | 11.450 | 0.724 | 61.475
0.990 | 149.249 | 9.975 | 51.675 | 51.675 | 0.576 | 314.602
Table A6. Analytical quantile behavior.
Distribution | Q(0.5) | Q(0.95)/Q(0.5) | Tail Behavior
IE | 2.164 | 13.513 | Extremely Heavy
IR | 1.201 | 3.676 | Very Heavy
MIR | 1.615 | 7.091 | Extremely Heavy
MIW | 1.615 | 7.091 | Extremely Heavy
NGIR | 1.572 | 0.461 | Light
EIED | 4.365 | 14.083 | Extremely Heavy
Table A7. Summary statistics from EIED.
# | α | θ | Mean | Variance | SD | Skewness | Excess Kurtosis
1 | 2.3232 | 0.8968 | 7.12097 | 77.5462 | 8.8060 | 2.3160 | 5.6638
2 | 2.5728 | 1.6614 | 10.2649 | 113.9059 | 10.6727 | 1.5707 | 2.0641
3 | 2.4041 | 1.1842 | 8.4460 | 92.4471 | 9.6149 | 1.9727 | 3.8239
4 | 2.3173 | 1.3527 | 8.8619 | 97.3116 | 9.8647 | 1.8743 | 3.3534
5 | 2.1355 | 0.5282 | 4.8845 | 52.4677 | 7.2435 | 3.1107 | 11.1252
6 | 2.4688 | 1.4265 | 9.3922 | 103.3717 | 10.1672 | 1.7567 | 2.8264
7 | 2.1564 | 1.4181 | 8.7384 | 96.0523 | 9.8006 | 1.9011 | 3.4768
8 | 2.8377 | 1.4254 | 10.0401 | 110.8446 | 10.5283 | 1.6201 | 2.2608
9 | 2.9455 | 1.9156 | 11.4945 | 130.2045 | 11.4107 | 1.3198 | 1.1711
10 | 2.0752 | 1.5227 | 8.8744 | 97.7259 | 9.8856 | 1.8687 | 3.3244
11 | 2.6876 | 1.0393 | 8.3812 | 91.4885 | 9.5650 | 1.9908 | 3.9155
12 | 2.2933 | 1.1555 | 8.1266 | 88.9077 | 9.4291 | 2.0493 | 4.2064
13 | 2.3521 | 1.5464 | 9.5261 | 105.0885 | 10.2513 | 1.7263 | 2.6943
14 | 2.8884 | 0.5903 | 6.3574 | 68.7010 | 8.2886 | 2.5535 | 7.1228
15 | 1.6066 | 1.5002 | 7.6546 | 84.3299 | 9.1831 | 2.1596 | 4.7773
16 | 1.6307 | 1.5060 | 7.7362 | 85.2261 | 9.2318 | 2.1387 | 4.6655
17 | 1.5303 | 0.8156 | 5.1759 | 56.1959 | 7.4964 | 2.9712 | 10.0381
18 | 2.7489 | 0.6934 | 6.7854 | 73.5397 | 8.5755 | 2.4184 | 6.2757
19 | 2.6672 | 0.9731 | 8.0618 | 87.8859 | 9.3747 | 2.0690 | 4.3108
20 | 2.8050 | 1.0456 | 8.6024 | 93.9250 | 9.6915 | 1.9391 | 3.6637
21 | 2.9679 | 1.3553 | 10.0241 | 110.5388 | 10.5137 | 1.6244 | 2.2789
22 | 2.6987 | 1.1579 | 8.8765 | 97.1469 | 9.8563 | 1.8745 | 3.3575
23 | 2.1922 | 1.9826 | 10.2993 | 114.8250 | 10.7156 | 1.5599 | 2.0176
24 | 2.6708 | 0.6531 | 6.4360 | 69.6740 | 8.3471 | 2.5262 | 6.9471
25 | 1.6774 | 0.8133 | 5.4869 | 59.6109 | 7.7208 | 2.8480 | 9.1289
26 | 2.4599 | 0.7420 | 6.6020 | 71.6408 | 8.4641 | 2.4718 | 6.6029
27 | 1.7150 | 1.4797 | 7.8850 | 86.8047 | 9.3169 | 2.1020 | 4.4721
28 | 2.9170 | 0.8800 | 8.0267 | 87.3372 | 9.3454 | 2.0798 | 4.3681
29 | 2.2828 | 1.1995 | 8.2669 | 90.5135 | 9.5139 | 2.0146 | 4.0310
30 | 2.1220 | 0.8666 | 6.6122 | 71.9780 | 8.4840 | 2.4646 | 6.5557
Table A8. Conditional mean, variance, standard deviation, skewness, and kurtosis.
# | α | θ | t | Mean | Variance | SD | Skewness | Ex. Kurtosis
1 | 2.0488 | 0.8968 | 0.7385 | −0.16295 | 0.01497 | 0.12234 | −0.78352 | −0.06462
2 | 2.2152 | 1.6614 | 0.6656 | −0.10367 | 0.00685 | 0.08274 | −0.99566 | 0.57050
3 | 2.1028 | 1.1842 | 1.4845 | −0.36290 | 0.07099 | 0.26643 | −0.71456 | −0.23260
4 | 2.0449 | 1.3527 | 0.7073 | −0.13361 | 0.01063 | 0.10311 | −0.87699 | 0.19719
5 | 1.9237 | 0.5282 | 0.7949 | −0.21531 | 0.02375 | 0.15411 | −0.63506 | −0.41110
6 | 2.1459 | 1.4265 | 1.0531 | −0.21620 | 0.02713 | 0.16472 | −0.83250 | 0.06758
7 | 1.9376 | 1.4181 | 1.7315 | −0.44379 | 0.10385 | 0.32226 | −0.67813 | −0.31952
8 | 2.3918 | 1.4254 | 0.6457 | −0.09898 | 0.00630 | 0.07936 | −1.01106 | 0.61958
9 | 2.4637 | 1.9156 | 1.7569 | −0.34908 | 0.07192 | 0.26818 | −0.86220 | 0.15293
10 | 1.8834 | 1.5227 | 0.6441 | −0.12108 | 0.00869 | 0.09321 | −0.87079 | 0.18357
11 | 2.2917 | 1.0393 | 1.9647 | −0.50930 | 0.13511 | 0.36757 | −0.66571 | −0.33647
12 | 2.0289 | 1.1555 | 1.2030 | −0.28638 | 0.04477 | 0.21159 | −0.73267 | −0.19194
13 | 2.0680 | 1.5464 | 1.9651 | −0.48745 | 0.12720 | 0.35665 | −0.70351 | −0.25886
14 | 2.4256 | 0.5903 | 1.4073 | −0.37666 | 0.07233 | 0.26895 | −0.63494 | −0.39891
15 | 1.5710 | 1.5002 | 1.6089 | −0.46407 | 0.10717 | 0.32737 | −0.58176 | −0.53226
16 | 1.5871 | 1.5060 | 0.5588 | −0.12006 | 0.00802 | 0.08955 | −0.76792 | −0.09166
17 | 1.5202 | 0.8156 | 0.9242 | −0.27594 | 0.03724 | 0.19298 | −0.55456 | −0.58597
18 | 2.3326 | 0.6934 | 0.6803 | −0.14392 | 0.01191 | 0.10914 | −0.81883 | 0.03191
19 | 2.2782 | 0.9731 | 0.9442 | −0.20249 | 0.02344 | 0.15310 | −0.80879 | 0.00440
20 | 2.3700 | 1.0456 | 0.6781 | −0.12232 | 0.00913 | 0.09556 | −0.91844 | 0.31856
21 | 2.4786 | 1.3553 | 0.9770 | −0.17693 | 0.01912 | 0.13827 | −0.91964 | 0.32205
22 | 2.2992 | 1.1579 | 1.1214 | −0.23874 | 0.03269 | 0.18079 | −0.81403 | 0.01861
23 | 1.9615 | 1.9826 | 0.5962 | −0.09157 | 0.00533 | 0.07300 | −0.99474 | 0.57382
24 | 2.2805 | 0.6531 | 1.5387 | −0.42347 | 0.09020 | 0.30034 | −0.61351 | −0.44488
25 | 1.6183 | 0.8133 | 1.3499 | −0.41401 | 0.08238 | 0.28701 | −0.53478 | −0.61380
26 | 2.1399 | 0.7420 | 0.8981 | −0.21493 | 0.02515 | 0.15859 | −0.73046 | −0.19448
27 | 1.6434 | 1.4797 | 1.2849 | −0.34236 | 0.06053 | 0.24602 | −0.63957 | −0.41152
28 | 2.4447 | 0.8799 | 0.6409 | −0.11784 | 0.00843 | 0.09183 | −0.90965 | 0.29164
29 | 2.0218 | 1.1995 | 1.3639 | −0.33343 | 0.05995 | 0.24485 | −0.71380 | −0.23654
30 | 1.9147 | 0.8666 | 1.8939 | −0.55936 | 0.15219 | 0.39011 | −0.55968 | −0.55629
Table A9. Descriptive summary of Set I and Set II.
Data | Sample Size | Mean | Median | Standard Deviation | Skewness | Kurtosis | Skewness/Kurtosis
I | 30 | 59.6220 | — | 71.8848 | 1.6936 | 4.9667 | 0.3409
II | 39 | 6771.1 | 3570.0 | 11,695.7 | 4.5580 | 25.4436 | 0.1791
Table A10. Set I: Goodness of fit measures based on MLEs.
Distribution | θ̂ | α̂ | β̂ | A₀ | W₀ | KS | p-Value
EIED | 7.1983 | 1.2268 | — | 1.9946 | 0.3365 | 0.2467 | 0.1214
IE | 11.1798 | — | — | 3.1823 | 0.5352 | 0.2880 | 0.0440
IR | 24.2793 | — | — | 8.3314 | 3.7244 | 0.6442 | 0.0000
MIR | −2.2858 | 8.7051 | — | 5.2181 | 0.9516 | 0.3355 | 0.0112
MIW | 3.2770 | 5.2043 | 12.2292 | 11.3970 | 1.9531 | 0.4491 | 0.00018
NGIR | −4.6530 | 11.4821 | 0.7741 | 4.2349 | 0.8514 | 0.3482 | 0.0075
Table A11. Set I: Goodness of fit measures based on LMs.
Distribution | θ̂ | α̂ | β̂ | A₀ | W₀ | KS | p-Value
EIED | 7.1984 | 1.2268 | — | 1.6443 | 0.2764 | 0.1917 | 0.1933
IE | 11.1799 | — | — | 2.8222 | 0.4862 | 0.2330 | 0.0648
IR | 24.2794 | — | — | 4.2636 | 5.0044 | 0.6442 | 0.0000
MIR | 0.0000 | 11.1799 | — | 2.8222 | 0.4861 | 0.2330 | 0.0648
MIW | 6.9712 | 0.0000 | 0.7240 | 0.7857 | 0.1060 | 0.1594 | 0.3890
NGIR | 0.0000 | 8.1239 | 0.6493 | 56.5120 | 8.8751 | 0.9998 | 0.0000
Table A12. Set I: Information criterion summary using MLE.
Distribution | −ℓ̂ | AIC | AICC | BIC | HQIC | CAIC
EIED | 156.401 | 316.803 | 317.247 | 319.605 | 317.699 | 321.605
IE | 159.062 | 320.124 | 320.267 | 321.525 | 320.572 | 322.525
IR | 215.745 | 433.491 | 433.634 | 434.892 | 433.939 | 435.892
MIR | 158.767 | 321.534 | 321.979 | 324.337 | 322.431 | 326.337
MIW | 157.082 | 320.160 | 321.083 | 324.364 | 321.505 | 327.364
NGIR | 155.367 | 316.734 | 317.657 | 320.938 | 318.079 | 323.938
Table A13. Set I: Information criterion summary using LM.
Distribution | −ℓ̂ | AIC | AICC | BIC | HQIC | CAIC
EIED | 156.4013 | 316.803 | 317.232 | 319.605 | 317.694 | 321.605
IE | 159.0620 | 320.124 | 320.267 | 321.525 | 320.570 | 322.525
IR | 215.7455 | 433.491 | 433.634 | 434.892 | 433.937 | 435.892
MIR | 159.0620 | 322.124 | 322.553 | 324.927 | 323.016 | 326.927
MIW | 155.5140 | 317.029 | 317.952 | 321.232 | 318.374 | 324.232
NGIR | 157.1929 | 320.386 | 321.243 | 324.589 | 321.724 | 327.589
Table A14. Set II: Goodness of fit measures based on MLEs.
Distribution | θ̂ | α̂ | β̂ | A₀ | W₀ | KS | p-Value
EIED | 295.445 | 4.8414 | — | 0.3969 | 0.0490 | 0.0866 | 0.9318
IE | 2166.26 | — | — | 0.4018 | 0.0496 | 0.0867 | 0.9313
IR | 1.9686 × 10⁶ | — | — | 15.7237 | 2.1976 | 0.3802 | 0.00002
MIR | −204,920.00 | 2391.75 | — | 0.4074 | 0.0494 | 0.0937 | 0.8827
MIW | 2266.27 | 133.637 | 1.0151 | 0.4039 | 0.0501 | 0.0867 | 0.9317
NGIR | 10.9061 | 2400.88 | 1.1718 | 0.3976 | 0.0579 | 0.0870 | 0.9316
Table A15. Set II: Goodness of fit measures based on LMs.
Distribution | θ̂ | α̂ | β̂ | A₀ | W₀ | KS | p-Value
EIED | 535.8348 | 2.7982 | — | 0.3954 | 0.0494 | 0.0874 | 0.9011
IE | 2165.8754 | — | — | 0.3993 | 0.0497 | 0.0864 | 0.9085
IR | 1.0 × 10⁷ | — | — | 30.6670 | 4.3344 | 0.4881 | 0.0000
MIR | 1.0 × 10⁷ | 2.0 × 10³ | — | 0.7755 | 0.0516 | 0.0974 | 0.8182
MIW | 1.0 × 10⁶ | 2166.5067 | 57.9821 | 0.3955 | 0.0496 | 0.0863 | 0.9092
NGIR | 1.0 × 10⁷ | 2.0 × 10³ | — | 1102.4034 | 13.1862 | 1.0000 | 0.0000
Table A16. Information criterion summary for Set II using MLE.
Distribution | −ℓ̂ | AIC | AICC | BIC | HQIC | CAIC
EIED | 377.735 | 759.469 | 759.803 | 762.797 | 760.663 | 764.797
IE | 380.110 | 762.221 | 762.329 | 763.884 | 762.818 | 764.884
IR | 404.562 | 811.124 | 811.232 | 812.787 | 811.721 | 813.787
MIR | 377.815 | 759.630 | 759.963 | 762.957 | 760.824 | 764.957
MIW | 377.987 | 759.974 | 760.307 | 763.301 | 761.167 | 765.301
NGIR | 377.743 | 761.485 | 762.171 | 766.476 | 763.276 | 769.476
Table A17. Information criterion summary for Set II using LM.
Distribution | −ℓ̂ | AIC | AICC | BIC | HQIC | CAIC
EIED | 377.2174 | 758.435 | 758.768 | 761.762 | 759.629 | 763.762
IE | 377.9945 | 757.989 | 758.097 | 759.653 | 758.586 | 760.653
IR | 411.7900 | 825.580 | 825.688 | 827.244 | 826.177 | 828.244
MIR | NaN | NaN | NaN | NaN | NaN | NaN
MIW | 376.2611 | 758.522 | 759.208 | 763.513 | 760.313 | 766.513
NGIR | NaN | NaN | NaN | NaN | NaN | NaN
  • Set I:
    Parameter estimates: (α̂, θ̂) = (4.8414, 295.445)
    Variance–covariance matrix of the MLEs (rows and columns ordered as α̂, θ̂):
      C(α̂, θ̂) = ( 0.6010    38.9917
                    38.9917   2450 )
    Standard errors of the MLEs: (0.7752, 49.4974)
    Variance–covariance matrix of the LM estimates:
      C(α̂, θ̂) = ( 0.6010    38.9917
                    38.9917   2450 )
    Standard errors of the LM estimates: (0.7752, 49.4974)
  • Set II:
    Parameter estimates: (α̂, θ̂) = (2.7981, 535.8349)
    Variance–covariance matrix of the MLEs (rows and columns ordered as α̂, θ̂):
      C(α̂, θ̂) = ( 0.0502   0.3214
                    0.3214   1.9802 )
    Standard errors of the MLEs: (0.2239, 1.4072)
    Variance–covariance matrix of the LM estimates:
      C(α̂, θ̂) = ( 0.2008    41.2373
                    41.2373   8400.1 )
    Standard errors of the LM estimates: (0.4480, 91.6521)
  • Simulation results by MLE.
  • Simulation results by BEM.
  • Simulation results by LMM.
Table A18. Parameter estimation results (MLE).
Set | n | True α | True θ | Bias(α) | Bias(θ) | MSE(α) | MSE(θ)
I | 15 | 0.03 | 0.45 | 115.59 | −0.02 | 68,073.18 | 0.26
I | 25 | 0.03 | 0.45 | 62.45 | 0.02 | 35,067.67 | 0.20
I | 35 | 0.03 | 0.45 | 33.61 | 0.06 | 17,363.00 | 0.16
I | 80 | 0.03 | 0.45 | 2.00 | 0.16 | 552.57 | 0.16
I | 150 | 0.03 | 0.45 | 0.77 | 0.10 | 55.53 | 0.13
I | 250 | 0.03 | 0.45 | 0.46 | 0.04 | 0.31 | 0.10
II | 15 | 0.55 | 1.30 | 320.98 | −0.84 | 239,154.92 | 1.18
II | 25 | 0.55 | 1.30 | 237.20 | −0.85 | 163,812.72 | 1.07
II | 35 | 0.55 | 1.30 | 170.13 | −0.82 | 111,226.98 | 0.93
II | 80 | 0.55 | 1.30 | 56.87 | −0.69 | 33,165.21 | 1.09
II | 150 | 0.55 | 1.30 | 10.83 | −0.69 | 5584.57 | 0.61
II | 250 | 0.55 | 1.30 | 3.00 | −0.74 | 1449.74 | 0.66
III | 15 | 1.05 | 0.53 | 369.18 | −0.12 | 265,312.35 | 0.56
III | 25 | 1.05 | 0.53 | 329.36 | −0.23 | 218,134.41 | 0.39
III | 35 | 1.05 | 0.53 | 289.84 | −0.27 | 185,121.38 | 0.31
III | 80 | 1.05 | 0.53 | 236.11 | −0.39 | 132,010.61 | 0.21
III | 150 | 1.05 | 0.53 | 158.07 | −0.42 | 75,627.76 | 0.20
III | 250 | 1.05 | 0.53 | 117.91 | −0.44 | 53,535.76 | 0.20
IV | 15 | 1.98 | 2.32 | 499.83 | −1.96 | 430,620.09 | 4.21
IV | 25 | 1.98 | 2.32 | 377.62 | −1.95 | 313,489.56 | 4.07
IV | 35 | 1.98 | 2.32 | 341.30 | −1.98 | 258,241.51 | 4.14
IV | 80 | 1.98 | 2.32 | 231.77 | −2.02 | 162,926.81 | 4.20
IV | 150 | 1.98 | 2.32 | 128.90 | −2.03 | 65,642.77 | 4.22
IV | 250 | 1.98 | 2.32 | 84.34 | −2.04 | 33,810.68 | 4.21
Table A19. Parameter estimation results (BEM).
Set | n | True α | True θ | Bias(α) | Bias(θ) | MSE(α) | MSE(θ)
I | 15 | 0.03 | 0.45 | 1.1969 | 0.4502 | 1.8025 | 0.9255
I | 25 | 0.03 | 0.45 | 1.0497 | 0.4325 | 1.5528 | 0.7441
I | 35 | 0.03 | 0.45 | 0.9068 | 0.6738 | 1.0132 | 1.4492
I | 80 | 0.03 | 0.45 | 0.7671 | 0.9862 | 0.7603 | 3.6325
I | 150 | 0.03 | 0.45 | 0.2938 | 2.8512 | 0.1318 | 11.7355
I | 250 | 0.03 | 0.45 | 0.5132 | 2.9340 | 0.4552 | 15.9645
II | 15 | 0.55 | 1.3 | 0.1578 | −0.3474 | 0.1708 | 0.5802
II | 25 | 0.55 | 1.3 | 0.3885 | −0.5949 | 0.5360 | 0.5757
II | 35 | 0.55 | 1.3 | 0.3138 | −0.6141 | 0.1852 | 0.5276
II | 80 | 0.55 | 1.3 | 0.5867 | −0.8226 | 0.5190 | 0.7763
II | 150 | 0.55 | 1.3 | 0.5488 | −0.9023 | 0.3871 | 0.8278
II | 250 | 0.55 | 1.3 | 0.5981 | −0.9479 | 0.3914 | 0.9044
III | 15 | 1.05 | 0.53 | −0.8145 | 3.1208 | 0.8799 | 15.2454
III | 25 | 1.05 | 0.53 | −1.0032 | 4.2477 | 1.0064 | 18.5637
III | 35 | 1.05 | 0.53 | −1.0036 | 4.7701 | 1.0072 | 23.5752
III | 80 | 1.05 | 0.53 | −1.0061 | 5.6222 | 1.0122 | 32.2936
III | 150 | 1.05 | 0.53 | −1.0052 | 5.5591 | 1.0105 | 31.0864
III | 250 | 1.05 | 0.53 | −1.0065 | 6.0023 | 1.0131 | 36.3424
IV | 15 | 1.98 | 2.32 | −0.5571 | −1.7028 | 0.4847 | 2.9378
IV | 25 | 1.98 | 2.32 | −0.3428 | −1.7972 | 0.4904 | 3.2823
IV | 35 | 1.98 | 2.32 | −0.1122 | −1.8860 | 0.2734 | 3.5768
IV | 80 | 1.98 | 2.32 | −0.3685 | −1.8503 | 0.2516 | 3.4424
IV | 150 | 1.98 | 2.32 | 0.1532 | −1.9426 | 0.5937 | 3.8018
IV | 250 | 1.98 | 2.32 | 0.2057 | −2.9981 | 0.2618 | 4.0030
Table A20. Parameter estimation results (LMM).
Set | n | True α | True θ | Bias(α) | Bias(θ) | MSE(α) | MSE(θ)
I | 15 | 0.03 | 0.45 | 0.8279 | 0.0545 | 0.8226 | 0.0190
I | 25 | 0.03 | 0.45 | 0.8888 | 0.0458 | 0.9128 | 0.0126
I | 35 | 0.03 | 0.45 | 0.9120 | 0.0488 | 0.9221 | 0.0093
I | 80 | 0.03 | 0.45 | 0.9419 | 0.0488 | 0.9369 | 0.0056
I | 150 | 0.03 | 0.45 | 0.9477 | 0.0508 | 0.9264 | 0.0044
I | 250 | 0.03 | 0.45 | 0.9584 | 0.0518 | 0.9325 | 0.0037
II | 15 | 0.55 | 1.3 | −0.0320 | −0.5876 | 0.04995 | 0.3663
II | 25 | 0.55 | 1.3 | 0.0193 | −0.5864 | 0.0436 | 0.3562
II | 35 | 0.55 | 1.3 | 0.0256 | −0.5933 | 0.0363 | 0.3604
II | 80 | 0.55 | 1.3 | 0.0479 | −0.5906 | 0.0176 | 0.3523
II | 150 | 0.55 | 1.3 | 0.0567 | −0.5902 | 0.0125 | 0.3505
II | 250 | 0.55 | 1.3 | 0.0541 | −0.5909 | 0.0083 | 0.3504
III | 15 | 1.05 | 0.53 | −0.3904 | 0.6607 | 0.2503 | 0.5107
III | 25 | 1.05 | 0.53 | −0.3642 | 0.6527 | 0.1893 | 0.4693
III | 35 | 1.05 | 0.53 | −0.3366 | 0.6568 | 0.1617 | 0.4631
III | 80 | 1.05 | 0.53 | −0.3286 | 0.6538 | 0.1315 | 0.4403
III | 150 | 1.05 | 0.53 | −0.3066 | 0.6533 | 0.1083 | 0.4332
III | 250 | 1.05 | 0.53 | −0.3089 | 0.6530 | 0.1039 | 0.4309
IV | 15 | 1.98 | 2.32 | −1.8156 | 0.3549 | 3.3027 | 0.2071
IV | 25 | 1.98 | 2.32 | −1.8097 | 0.3518 | 3.2788 | 0.1712
IV | 35 | 1.98 | 2.32 | −1.8028 | 0.3515 | 3.2535 | 0.1590
IV | 80 | 1.98 | 2.32 | −1.8001 | 0.3547 | 3.2418 | 0.1404
IV | 150 | 1.98 | 2.32 | −1.7992 | 0.3524 | 3.2378 | 0.1329
IV | 250 | 1.98 | 2.32 | −1.7978 | 0.3472 | 3.2324 | 0.1257

References

  1. Akinsete, A.; Famoye, F.; Lee, C. The beta-Pareto distribution. Statistics 2008, 42, 547–563.
  2. Gupta, R.C.; Gupta, R.D.; Gupta, P.L. Modeling failure time data by Lehman alternatives. Commun. Stat.-Theory Methods 1998, 27, 887–904.
  3. Nassar, M.; Alzaatreh, A.; Mead, M.; Abo-Kasem, O. Alpha Power Weibull Distribution: Properties and Applications. Commun. Stat.-Theory Methods 2017, 46, 10236–10252.
  4. Nadarajah, S.; Kotz, S. The Exponentiated Type Distributions. Acta Appl. Math. 2006, 92, 97–111.
  5. Nadarajah, S. The exponentiated Gumbel distribution with climate application. Environmetrics 2006, 17, 13–23.
  6. Al-Babtain, A.A.; Elbatal, I.; Chesneau, C.; Jamal, F. The Topp-Leone Odd Burr XII-G Family of Distributions: Theory, Characterization, and Applications to Industry and Flood Data. Ann. Data Sci. 2023, 10, 723–747.
  7. Hassan, A.S.; Nassr, S.G.; Elsehamy, S.A. A New Generalization of the Exponentiated Inverted Weibull Distribution with Applications to Engineering and Medical Data. J. Appl. Stat. 2022, 49, 4099–4120.
  8. Korkmaz, M.Ç.; Yousof, H.M.; Hamedani, G.G. The Exponentiated Burr XII-Poisson Distribution with Applications to Lifetime Data. Commun. Stat.-Simul. Comput. 2022, 51, 6552–6571.
  9. Elbatal, I.; Hiba, M.Z. Exponentiated Generalized Inverse Weibull Distribution. Appl. Math. Sci. 2014, 8, 3997–4012.
  10. Kumar, D.; Singh, U.; Singh, S.K. A New Distribution Using Sine Function—Its Application to Bladder Cancer Patients Data. J. Stat. Appl. Probab. 2015, 4, 417.
  11. Kumar, D.; Singh, U.; Singh, U. Lifetime Distribution: Derived from some Minimum Guarantee Distribution. Sohag J. Math. 2017, 4, 7–11.
  12. Chesneau, C.; Bakouch, H. A New Cumulative Distribution Function Based on m Existing Ones; HAL Preprint, 2017.
  13. Chesneau, C.; Bakouch, H.S.; Hussain, T. A New Class of Probability Distributions via Cosine and Sine Functions with Applications. Commun. Stat.-Simul. Comput. 2018, 1–14.
  14. Kumar, D.; Singh, U.; Singh, S.K.; Mukherjee, S. The New Probability Distribution: An Aspect to a Life Time Distribution. Math. Sci. Lett. 2017, 6, 35–42.
  15. Nasir, A.; Yousof, H.M.; Jamal, F.; Korkmaz, M.Ç. The exponentiated Burr XII power series distribution: Properties and applications. Stats 2018, 2, 15–31.
  16. Schiff, J.L. The Laplace Transform: Theory and Applications; Springer: New York, NY, USA, 1999.
  17. Folland, G.B. Real Analysis: Modern Techniques and Their Applications; John Wiley & Sons: Hoboken, NJ, USA, 1999.
  18. Erdelyi, A.; Magnus, W.; Oberhettinger, F.; Tricomi, F. Tables of Integral Transforms; McGraw-Hill: New York, NY, USA, 1954; Volume 1.
  19. Kanwal, R.P. Generalized Functions: Theory and Technique, 2nd ed.; Springer Science + Business Media: New York, NY, USA, 2004.
  20. Ablowitz, M.J.; Fokas, A.S. Complex Variables: Introduction and Applications, 2nd ed.; Cambridge University Press: Cambridge, UK, 2003.
  21. Brychkov, Y.A.; Prudnikov, A.P. Integral transforms of generalized functions. J. Sov. Math. 1986, 34, 1630–1655.
  22. Debnath, L.; Bhatta, D. Integral Transforms and Their Applications; Chapman and Hall/CRC: Boca Raton, FL, USA, 2016.
  23. Khan, M.S.; King, R. New generalized inverse Weibull distribution for lifetime modeling. Commun. Stat. Appl. Methods 2016, 23, 147–161.
Figure 1. Graphs of EIED PDF and HRF.
Figure 2. Comparison of PDFs and HRFs of competing models.
Figure 3. Skewness and kurtosis of EIED.
Figure 4. Quantile function comparison of competing models.
Figure 5. Bonferroni and Lorenz curves of EIED.
Figure 6. MLEs’ bias of α and θ.
Figure 7. MLEs’ MSE of α and θ.
Figure 8. Bias of α and θ plots of BEM.
Figure 9. MSE of α and θ plots of BEM.
Figure 10. Bias of α and θ plot of LMM.
Figure 11. MSEs of α and θ plot of LMM.
Figure 12. QQ plot of Sets I and II.
Figure 13. Violin plot for Sets I and II.
Figure 14. KDE of Sets I and II.
Figure 15. Histogram of Set I.
Figure 16. TTT plots of Set I.
Figure 17. Histogram of Set II.
Figure 18. Return period plots of Set II.