Entropy
  • Article
  • Open Access

5 December 2025

Entropy-Based Evidence Functions for Testing Dilation Order via Cumulative Entropies

Department of Quantitative Analysis, College of Business Administration, King Saud University, Riyadh 11362, Saudi Arabia
This article belongs to the Special Issue Entropy, Statistical Evidence, and Scientific Inference: Evidence Functions in Theory and Applications, 2nd Edition

Abstract

This paper introduces novel non-parametric entropy-based evidence functions and associated test statistics for assessing the dilation order of probability distributions constructed from cumulative residual entropy and cumulative entropy. The proposed evidence functions are explicitly tuned to questions about distributional variability and stochastic ordering, rather than global model fit, and are developed within a rigorous evidential framework. Their asymptotic distributions are established, providing a solid foundation for large-sample inference. Beyond their theoretical appeal, these procedures act as effective entropy-driven tools for quantifying statistical evidence, offering a compelling non-parametric alternative to traditional approaches, such as Kullback–Leibler discrepancies. Comprehensive Monte Carlo simulations highlight their robustness and consistently high power across a wide range of distributional scenarios, including heavy-tailed models, where conventional methods often perform poorly. A real-data example further illustrates their practical utility, showing how cumulative entropies can provide sharper statistical evidence and clarify stochastic comparisons in applied settings. Altogether, these results advance the theoretical foundation of evidential statistics and open avenues for applying cumulative entropies to broader classes of stochastic inference problems.

1. Introduction

Modern statistical inference increasingly emphasizes the quantification of statistical evidence through evidence functions, which measure the support that observed data provide for competing models or for specific structural features of the underlying generating process. Classical information-theoretic criteria, such as Kullback–Leibler divergence and model selection scores (AIC, BIC), are powerful for assessing overall fit, but many applications in reliability, finance, and insurance require evidence about distributional variability, stochastic comparisons, and ordered structural properties, not only global fit.
Stochastic orders provide a rigorous language for such comparisons. They rank distributions by characteristics such as location, variability, and risk, and are widely used in reliability, actuarial science, economics, and finance. Among these orders, the dilation order is central because it ranks random variables by dispersion, thereby offering a principled evidential framework for assessing relative variability. This is particularly valuable when analyzing heavy-tailed or complex data, where traditional dispersion measures such as variance can be insensitive or misleading.
The dilation order therefore plays a crucial role in applications where understanding differences in spread, rather than central tendency, is essential. For example, in survival analysis, finance, and insurance, practitioners often seek evidence regarding whether one population exhibits greater dispersion than another, rather than whether their means differ. Developing non-parametric tools tailored to this question is both theoretically important and practically necessary, especially when model assumptions cannot be guaranteed. Recall that a random variable Y is more dispersed in the dilation order than a random variable X (denoted by X ≤_dil Y) if
E[φ(X − E(X))] ≤ E[φ(Y − E(Y))],  (1)
for every convex function φ for which the expectations exist [1]. This formalizes that Y displays greater dispersion than X. It is immediate that X ≤_dil Y implies Var(X) ≤ Var(Y), but variance induces a total order and is therefore less informative than the partial order provided by dilation. For background and related variability orders, see [2,3,4]. In parallel, measuring uncertainty via entropy functions has become an active area in statistics and information theory. Shannon’s [5] differential entropy for a nonnegative random variable X with density (pdf) f and distribution function (cdf) F is defined by:
H(X) = −∫_0^∞ f(x) log f(x) dx,  (2)
where “log(·)” denotes the natural logarithm. The quantity H(X) is location-free, since X and X + b share the same differential entropy for any constant b; in particular, negative values of b are allowed whenever min(X + b) > 0. However, expression (2) has well-known limitations as a continuous analog of discrete entropy, which has motivated the development of alternative measures, including weighted entropy and residual/past entropy variants [6,7]. The cumulative residual entropy (CRE) is defined by:
CRE(X) = −∫_0^∞ S(x) log S(x) dx = ∫_0^∞ S(x) Λ(x) dx,  (3)
where S ( x ) = P ( X > x ) denotes the survival function, and
Λ(x) = −log S(x) = ∫_0^x [f(u)/S(u)] du,  x > 0,  (4)
denotes the cumulative hazard function [8]. The CRE effectively characterizes information dispersion and has numerous applications in reliability and aging analysis [9,10,11,12,13]. From an evidential perspective, CRE functions not only as a measure of uncertainty but also as a tool for drawing statistical evidence about variability and aging behavior.
The cumulative entropy (CE), an alternative probability-based measure associated with inactivity time, is obtained by replacing the pdf with the cdf in Shannon’s definition
CE(X) = −∫_0^∞ F(x) log F(x) dx = ∫_0^∞ F(x) T(x) dx,  (5)
where
T(x) = −log F(x) = ∫_x^∞ [f(u)/F(u)] du,  x > 0,  (6)
denotes the cumulative reversed hazard function [14]. Because the logarithmic argument is a probability, both CRE(X) and CE(X) are nonnegative; by contrast, H(X) may be negative for continuous variables. Moreover, CE(X) = 0 if and only if X is degenerate, underscoring its role as a measure of uncertainty. Properties of CE and its dynamic version for past lifetimes establish useful connections to characterizations and stochastic orderings, reinforcing its value as a functional tool for comparing distributions. Several recent developments on cumulative entropies and their applications are presented in [15,16,17,18,19,20].
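These closed-form entropies can be checked numerically for simple models. The sketch below (the exponential rate, truncation point, and midpoint rule are illustrative assumptions) integrates the CRE and CE definitions above for an Exp(λ) variable, for which CRE(X) = 1/λ and CE(X) = (π²/6 − 1)/λ are known:

```python
# Numerical check of the CRE and CE definitions on Exp(lam); the rate lam,
# the truncation point, and the midpoint rule are illustrative assumptions.
import math

def cre_numeric(surv, upper, steps=200_000):
    """CRE(X) = -integral of S(x) log S(x) dx, via the midpoint rule."""
    h = upper / steps
    total = 0.0
    for i in range(steps):
        s = surv((i + 0.5) * h)
        if 0.0 < s < 1.0:
            total += -s * math.log(s) * h
    return total

def ce_numeric(cdf, upper, steps=200_000):
    """CE(X) = -integral of F(x) log F(x) dx, via the midpoint rule."""
    h = upper / steps
    total = 0.0
    for i in range(steps):
        f = cdf((i + 0.5) * h)
        if 0.0 < f < 1.0:
            total += -f * math.log(f) * h
    return total

lam = 2.0
cre = cre_numeric(lambda x: math.exp(-lam * x), upper=40.0)
ce = ce_numeric(lambda x: 1.0 - math.exp(-lam * x), upper=40.0)
print(round(cre, 4))   # close to 1/lam = 0.5
print(round(ce, 4))    # close to (pi^2/6 - 1)/lam ≈ 0.3225
```

Both printed values agree with the closed forms to several decimals, a convenient smoke test before moving to empirical estimators.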
Within the evidential framework, an evidence function is designed to estimate a clearly defined target that represents the scientific contrast of interest. The quality of an evidence function is evaluated through desiderata such as consistency, interpretability, and its explicit treatment of uncertainty. Recent work has clarified the distinction between statistical evidence and Neyman–Pearson (NP) testing and has formalized the link between an evidence function and its associated target in applied analysis [21,22,23,24,25].
While evidential statistics treats statistical evidence as a continuous and independent measure of support for one hypothesis over another, clearly separating evidence from belief and decision, most applications to date have relied on parametric models (e.g., Cahusac [26], Dennis et al. [25], and Powell et al. [27]). In contrast, our approach provides the first fully non-parametric construction of an evidence function, thereby extending the evidential framework beyond parametric assumptions. Traditional tests often fail to quantify the relative support for competing scientific claims; for example, a non-significant p-value does not imply equality of variances but merely indicates insufficient evidence against the null hypothesis. In contrast, evidential functions such as the entropy-based estimators proposed in this work are continuous, portable, and accumulable, providing estimates of well-defined inferential targets. Importantly, even under model misspecification, evidential methods preserve interpretable error properties, whereas classical tests can become increasingly misleading as data size grows (see Taper et al. [21] for further discussion).
This paper develops entropy-based evidence functions for the dilation order. We define two quantities, D̂_1(n, m) (CRE-based) and D̂_2(n, m) (CE-based), as evidence estimators of targets D_1(X, Y) and D_2(X, Y) that quantify the degree of stochastic variability between two distributions. Our primary framing is evidential: D̂_1(n, m) and D̂_2(n, m) are constructed and analyzed as evidence functions; NP-style tests are presented only as optional, secondary adaptations for decision-making. This reframing yields two concrete extensions: (i) model-to-process feature evidence, comparing models to generating processes with respect to dispersion/variability; and (ii) process-to-process evidence, comparing distinct generating processes by their distributional features using CRE and CE.
By framing D̂_1(n, m) and D̂_2(n, m) as evidence functions for the dilation order, this study aligns with the modern evidential paradigm. The objective is not to reject or accept a null hypothesis of equal dispersion, but to quantify the degree to which the observed data support one distribution as more dispersed than another. This approach is robust to heavy-tailed behavior, invariant to location shifts, and firmly grounded in information-theoretic principles. Moreover, the evidential perspective naturally accommodates comparisons between models and empirical reality, or between two observed processes, without requiring either model to be strictly true. This property represents a critical advantage in fields such as reliability, finance, and insurance, where all models are inherently approximations.
Methodologically, we derive large-sample distributions for the proposed functionals, provide practical nonparametric estimators, and evaluate performance via Monte Carlo experiments spanning light- and heavy-tailed families. A real-data application (survival times) illustrates interpretability and robustness. Compared with existing dilation-order procedures [4,28,29], the proposed approach is straightforward to implement, computationally efficient, and often competitive in power, particularly under heavy tails.
The remainder of this paper is organized as follows. Section 2 presents the theoretical foundations of CRE and CE under the dilation order. Section 3 introduces the proposed evidential functionals and investigates their asymptotic properties. Furthermore, it evaluates the accuracy of the proposed measures using a Monte Carlo simulation and demonstrates the methodology through a real-data application. Finally, Section 4 summarizes the main findings and outlines potential directions for future research.
Throughout this paper, we consider the random variables to be absolutely continuous, and we assume that all integrals and expectations exist whenever they are mentioned in the text.

2. Preserving the Dilation Order via Cumulative Entropies

This section investigates how CRE and CE can serve as evidence functions for assessing the dilation order. A key advantage of CRE lies in its connection to the mean residual life (MRL) function, m(x) = E(X − x | X > x). It has been shown that CRE(X) = E[m(X)], a relationship that underscores the relevance of CRE in reliability theory, where the MRL function is widely used to describe system aging [10]. Similarly, CE is closely related to the mean inactivity time (MIT) function, m̃(x) = E(x − X | X ≤ x); in particular, CE(X) = E[m̃(X)] [14]. These connections highlight the role of CRE and CE not only as measures of uncertainty but also as practical adequacy measures for quantifying variability. We now turn to the relationship between the dilation order of two random variables and the ordering of their CRE and CE. Recall that for a random variable X with cdf F, the quantile function is defined by F^{-1}(u) = inf{x : F(x) ≥ u} for u ∈ (0,1).
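The identities CRE(X) = E[m(X)] and CE(X) = E[m̃(X)] can be illustrated by simulation. A minimal sketch on U(0,1), a distribution chosen purely for illustration, where m(x) = (1 − x)/2, m̃(x) = x/2, and both cumulative entropies equal 1/4 exactly:

```python
# Monte Carlo check of CRE(X) = E[m(X)] and CE(X) = E[m~(X)] on U(0,1);
# the sample size and seed are illustrative assumptions.
import random

random.seed(1)
n = 200_000
xs = [random.random() for _ in range(n)]

mean_mrl = sum((1.0 - x) / 2.0 for x in xs) / n   # E[m(X)], with m(x) = (1-x)/2
mean_mit = sum(x / 2.0 for x in xs) / n           # E[m~(X)], with m~(x) = x/2

# For U(0,1): CRE = -∫ (1-x) log(1-x) dx = 1/4 and CE = -∫ x log x dx = 1/4
print(round(mean_mrl, 3), round(mean_mit, 3))     # both ≈ 0.25
```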
Theorem 1.
Let  X  and  Y  be two absolutely continuous nonnegative random variables with respective finite means  E ( X )  and  E ( Y )  and with pdfs  f  and  g  and cdfs  F  and  G , respectively. Then,
(i) 
if X ≤_dil Y, then CRE(X) ≤ CRE(Y);
(ii) 
if X ≤_dil Y, then CE(X) ≤ CE(Y).
Proof. 
(i) Given that E(X | X > x) = (1/S(x)) ∫_x^∞ u f(u) du and m(x) = E(X | X > x) − x, along with the relation CRE(X) = E[m(X)] and Equation (4), we can rewrite CRE(X) as:
CRE(X) = ∫_0^∞ E(X | X > x) f(x) dx − E(X) = ∫_0^∞ [(1/S(x)) ∫_x^∞ u f(u) du] f(x) dx − E(X) = ∫_0^∞ u f(u) [∫_0^u f(x)/S(x) dx] du − E(X) = −∫_0^∞ u f(u) log S(u) du − E(X) = −∫_0^1 F^{-1}(u) log(1 − u) du − E(X),  (7)
where the final equality follows from the substitution u = F(x). An analogous identity holds for CRE(Y). Recall from Theorem 3.A.8 of [2] that X ≤_dil Y implies
(1/(1 − p)) ∫_p^1 [F^{-1}(u) − G^{-1}(u)] du ≤ ∫_0^1 [F^{-1}(u) − G^{-1}(u)] du,  for all p ∈ [0,1].
Since E(X) = ∫_0^1 F^{-1}(u) du and E(Y) = ∫_0^1 G^{-1}(u) du, integrating both sides of the inequality over [0,1] and applying Fubini’s theorem yields
E(X) − E(Y) ≥ ∫_0^1 (1/(1 − p)) ∫_p^1 [F^{-1}(u) − G^{-1}(u)] du dp = ∫_0^1 [F^{-1}(u) − G^{-1}(u)] [∫_0^u (1/(1 − p)) dp] du = −∫_0^1 [F^{-1}(u) − G^{-1}(u)] log(1 − u) du = ∫_0^1 G^{-1}(u) log(1 − u) du − ∫_0^1 F^{-1}(u) log(1 − u) du.
This implies
−∫_0^1 F^{-1}(u) log(1 − u) du − E(X) ≤ −∫_0^1 G^{-1}(u) log(1 − u) du − E(Y),
or equivalently, CRE(X) ≤ CRE(Y), by recalling relation (7).
(ii) Using E(X | X ≤ x) = (1/F(x)) ∫_0^x u f(u) du, m̃(x) = x − E(X | X ≤ x), and Equation (6), we can rewrite CE(X) = E[m̃(X)], as in Part (i), as follows:
CE(X) = ∫_0^∞ u f(u) log F(u) du + E(X) = ∫_0^1 F^{-1}(u) log(u) du + E(X),  (8)
where the last equality follows from the substitution u = F(x). The same identity holds for CE(Y). Recall from [2] that X ≤_dil Y yields
∫_0^1 [F^{-1}(u) − G^{-1}(u)] du ≤ (1/p) ∫_0^p [F^{-1}(u) − G^{-1}(u)] du,  for all p ∈ [0,1].
Integrating both sides of the preceding relation over [0,1] and applying Fubini’s theorem yields
E(X) − E(Y) ≤ ∫_0^1 (1/p) ∫_0^p [F^{-1}(u) − G^{-1}(u)] du dp = ∫_0^1 [F^{-1}(u) − G^{-1}(u)] [∫_u^1 (1/p) dp] du = −∫_0^1 [F^{-1}(u) − G^{-1}(u)] log(u) du = ∫_0^1 G^{-1}(u) log(u) du − ∫_0^1 F^{-1}(u) log(u) du.
This implies that
∫_0^1 F^{-1}(u) log(u) du + E(X) ≤ ∫_0^1 G^{-1}(u) log(u) du + E(Y),
or equivalently, CE(X) ≤ CE(Y), by (8). This completes the proof. □
Before presenting the next theorem, we define the absolute lower Lorenz curve, used in economics to compare income distributions, as
A_X(p) = ∫_0^p [F^{-1}(t) − E(X)] dt,
for all 0 ≤ p ≤ 1; see, e.g., [4]. When X is a degenerate random variable, A_X(p) coincides with the horizontal axis. A_X(p) decreases for 0 ≤ p ≤ F(E(X)) and increases for F(E(X)) ≤ p ≤ 1, with A_X(0) = A_X(1) = 0. Furthermore, A_X(p) is a convex function of p, implying A_X(p) ≤ 0 for all p ∈ [0,1]. Note that A_X(p) can also be written as A_X(p) = ∫_0^p F_{X*}^{-1}(t) dt, where X* = X − E[X], so its convexity in p reflects the convexity of functionals of the demeaned variable X*, consistent with the definition of the dilation order given in (1). Moreover, we also define the absolute upper Lorenz curve as follows:
Ā_X(p) = ∫_p^1 [F^{-1}(t) − E(X)] dt,  for all 0 ≤ p ≤ 1.
It follows that
Ā_X(p) + A_X(p) = 0,  for all 0 ≤ p ≤ 1.
From Theorem 3.A.8 of [2], we have X ≤_dil Y if and only if A_X(p) ≥ A_Y(p), or equivalently Ā_X(p) ≤ Ā_Y(p), for all 0 ≤ p ≤ 1. Let us check this with an example.
Example 1.
Let X and Y be random variables following exponential distributions with cdfs F(x) = 1 − e^{−λ_X x}, x > 0, λ_X > 0, and G(x) = 1 − e^{−λ_Y x}, x > 0, λ_Y > 0, where λ_X > λ_Y. It is easy to see that
A_X(p) = (1/λ_X)(1 − p) log(1 − p),  and  A_Y(p) = (1/λ_Y)(1 − p) log(1 − p),
for 0 < p < 1. Since λ_X > λ_Y and (1 − p) log(1 − p) is negative for 0 < p < 1, we conclude that
(1/λ_X)(1 − p) log(1 − p) ≥ (1/λ_Y)(1 − p) log(1 − p),
which means that A_X(p) ≥ A_Y(p) for all 0 < p < 1, and hence X ≤_dil Y.
Remark 1.
We recall that the results given in Example 1 are consistent with the fact that, for exponential random variables, X ≤_disp Y whenever λ_X > λ_Y (see [2] for the definition of the dispersive order ≤_disp). So, by Theorem 3.B.16 in [2], it follows that X ≤_dil Y.
The CRE and CE measures can be expressed in terms of the functions Ā_X(p) and A_X(p), respectively. Recalling (7), and applying integration by parts with u = log(1 − p) and dv = −[F^{-1}(p) − E(X)] dp, and thus du = −(1/(1 − p)) dp and v = Ā_X(p), we obtain an alternative expression for the CRE in terms of Ā_X(p) as follows:
CRE(X) = −∫_0^1 [F^{-1}(p) − E(X)] log(1 − p) dp = ∫_0^1 (1/(1 − p)) Ā_X(p) dp.  (9)
Similarly, recalling (8), and applying integration by parts with u = log(p) and dv = [F^{-1}(p) − E(X)] dp, and thus du = (1/p) dp and v = A_X(p), we have an alternative expression in terms of A_X(p) for the CE as follows:
CE(X) = ∫_0^1 [F^{-1}(p) − E(X)] log(p) dp = −∫_0^1 (1/p) A_X(p) dp.  (10)
The following counterexample demonstrates that the converse of Theorem 1 is not necessarily true; that is, CRE(X) ≤ CRE(Y) (or CE(X) ≤ CE(Y)) does not imply X ≤_dil Y.
Example 2.
Let X and Y be random variables with cdfs F(x) = 2x − x² and G(x) = x², for 0 < x < 1. It is straightforward to verify that CE(X) = 0.187 and CE(Y) = 0.222, which implies CE(X) < CE(Y). Moreover, one can obtain
A_X(p) = (2/3)[p − 1 + (1 − p)^{3/2}],  and  A_Y(p) = (2/3)[p^{3/2} − p],
for 0 < p < 1. Figure 1 displays the plots of A_X(p) and A_Y(p) over the interval 0 < p < 1. The figure reveals that A_X(p) > A_Y(p) for p ∈ (0, 0.5) and A_X(p) < A_Y(p) for p ∈ (0.5, 1), so the condition A_X(p) ≥ A_Y(p) fails on (0.5, 1), leading to the conclusion that X ≰_dil Y.
Figure 1. The plot of A_X(p) and A_Y(p) for all 0 < p < 1.
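Example 2 can be reproduced numerically. The sketch below (integration settings are assumptions) recovers the reported CE values and the crossing of the two absolute Lorenz curves at p = 0.5:

```python
# Numerical companion to Example 2, using the closed forms quoted in the
# text; integration settings are illustrative assumptions.
import math

def ce(cdf, steps=200_000):
    """CE = -integral of F log F over (0,1), via the midpoint rule."""
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        f = cdf((i + 0.5) * h)
        if 0.0 < f < 1.0:
            total += -f * math.log(f) * h
    return total

ce_x = ce(lambda x: 2 * x - x * x)   # F(x) = 2x - x^2 on (0,1)
ce_y = ce(lambda x: x * x)           # G(x) = x^2 on (0,1)

A_X = lambda p: (2.0 / 3.0) * (p - 1.0 + (1.0 - p) ** 1.5)
A_Y = lambda p: (2.0 / 3.0) * (p ** 1.5 - p)

print(round(ce_x, 3), round(ce_y, 3))                 # 0.187 and 0.222
print(A_X(0.25) > A_Y(0.25), A_X(0.75) < A_Y(0.75))   # True True: the curves cross
```

Since the Lorenz curves cross, neither dominates on all of (0,1), so the dilation order fails even though the CE values are ordered.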
The expressions presented in Equations (9) and (10) are now utilized to derive several important results. The following theorem emphasizes a key implication: if two random variables are ordered by dilation and possess the same CE, then they are identical in distribution or differ only by a location shift.
Theorem 2.
Under the conditions of Theorem 1, if X ≤_dil Y and CE(X) = CE(Y), then X and Y have the same distribution up to a location parameter.
Proof. 
Based on the assumption CE(X) = CE(Y) and recalling (10), it is equivalent to
∫_0^1 (1/p) [A_X(p) − A_Y(p)] dp = 0.  (11)
Furthermore, by Theorem 2.1 of [24], X ≤_dil Y implies
A_X(p) ≥ A_Y(p),  0 ≤ p ≤ 1.  (12)
From (11) and (12), A_X(p) = A_Y(p) almost everywhere on [0,1]. We claim that A_X(p) = A_Y(p) for all p ∈ [0,1]. Otherwise, since both Lorenz curves are continuous, there exists an interval (a, b) ⊆ [0,1] such that A_X(p) > A_Y(p) for all p ∈ (a, b). Then,
∫_0^1 (1/p) [A_X(p) − A_Y(p)] dp ≥ ∫_a^b (1/p) [A_X(p) − A_Y(p)] dp > 0,
contradicting (11). Therefore,
∫_p^1 [F^{-1}(t) − E(X)] dt = ∫_p^1 [G^{-1}(t) − E(Y)] dt,  for all p ∈ [0,1].  (13)
Differentiating (13) with respect to p yields
F^{-1}(p) = G^{-1}(p) + E(X) − E(Y),  for all p ∈ (0,1),
implying that X and Y have the same distribution up to a location parameter. □
For random variables ordered by dilation, equal CRE values imply identical distributions up to a location shift, as proven in the next theorem.
Theorem 3.
Under the conditions of Theorem 1, if X ≤_dil Y and CRE(X) = CRE(Y), then X and Y have the same distribution up to a location parameter.
Proof. 
Based on the assumption CRE(X) = CRE(Y) and recalling (9), it is equivalent to
∫_0^1 (1/(1 − p)) [Ā_X(p) − Ā_Y(p)] dp = 0.
Since Ā_X(p) = −A_X(p) for all 0 ≤ p ≤ 1, Theorem 2.1 of [24] implies that X ≤_dil Y leads to
Ā_X(p) ≤ Ā_Y(p),  0 ≤ p ≤ 1.
The remainder of the proof is analogous to that of Theorem 2. □
It should be noted that Theorems 2 and 3 imply that if X ≤_dil Y and CE(X) = CE(Y) (or CRE(X) = CRE(Y)), then X =_dil Y, meaning X and Y are equal in distribution up to a location shift.

3. Statistical Evidence for the Dilation Order via Cumulative Entropies

Economic and social processes often influence the variability of distributions, such as household spending before and after tax reforms, or stock returns before and after financial policy changes. A natural question in these contexts is whether such changes significantly alter variability. To address this, we develop tests of the null hypothesis H_0: X =_dil Y (variability remains unchanged) against the alternative H_1: X ≤_dil Y and X ≠_dil Y (variability increases). Note that X ≤_dil Y and Y ≤_dil X, i.e., X =_dil Y, if and only if F(x) = G(x + d) for some real constant d and for all x. According to Theorems 1–3, this comparison can be expressed in terms of entropy-based evidence functions. In particular, the functionals
D_1(X, Y) = CRE(Y) − CRE(X)  and  D_2(X, Y) = CE(Y) − CE(X),
serve as natural measures of departure from H_0 in favor of H_1. Thus, the null hypothesis should be rejected if the absolute values of D_1(X, Y) or D_2(X, Y) exceed their corresponding critical thresholds. Since the true values of CRE(X), CRE(Y), CE(X), and CE(Y) are generally unknown, we estimate them from independent random samples X_1, X_2, …, X_n and Y_1, Y_2, …, Y_m. Replacing the population entropies with their empirical counterparts, we obtain the test statistics
D̂_1(n, m) = CRÊ_m(Y) − CRÊ_n(X),
and
D̂_2(n, m) = CÊ_m(Y) − CÊ_n(X),
where CRÊ_n(·), CRÊ_m(·), CÊ_n(·), and CÊ_m(·) denote the empirical estimators of CRE and CE for the random samples from X and Y, respectively. Under the null hypothesis X =_dil Y, i.e., when the distributions of X and Y differ at most by a location shift, both population contrasts D_1(X, Y) and D_2(X, Y) are equal to zero (see Theorems 2 and 3). Large absolute values of the estimators D̂_1(n, m) and D̂_2(n, m) thus provide evidence against the null hypothesis. Importantly, while X ≤_dil Y implies D_1(X, Y) > 0 and D_2(X, Y) > 0 (see Theorem 1), the converse does not hold (Example 2). Therefore, the sign of the estimated contrast should not be interpreted as definitive evidence for a specific direction of the dilation order. Instead, the statistics function as evidence functions, with the formal two-sided test providing protection against false claims of difference when X =_dil Y. Thus, the null hypothesis should be rejected if |D̂_1(n, m)| > c_1 or |D̂_2(n, m)| > c_2, where the rejection thresholds c_1 and c_2 are determined by the null distributions of D̂_1(n, m) and D̂_2(n, m), studied below. Let X_1, X_2, …, X_n be a sequence of independent and identically distributed (i.i.d.) continuous nonnegative random variables, with order statistics X_{1:n} ≤ X_{2:n} ≤ ⋯ ≤ X_{n:n}. The empirical distribution function corresponding to F(x) is defined as
F̂_n(x) = (1/n) Σ_{i=1}^{n} I(X_i ≤ x),
which can equivalently be expressed as
F̂_n(x) = 0 for x < X_{1:n};  F̂_n(x) = i/n for X_{i:n} ≤ x < X_{i+1:n}, i = 1, 2, …, n − 1;  and F̂_n(x) = 1 for x ≥ X_{n:n},
where I(A) denotes the indicator function of the event A. An estimator of the CRE, based on a nonparametric approach and derived from the L-functional representation, is then given by
CRÊ_n(X) = ∫_0^∞ x J(F̂_n(x)) dF̂_n(x) = (1/n) Σ_{i=0}^{n−1} J(i/n) X_{i+1:n},
where J(u) = −log(1 − u) − 1, for 0 < u < 1. Similar arguments can be applied to obtain the estimator CRÊ_m(Y). The following theorem demonstrates that the mean, variance, and RMSE of CRÊ_n(X) are invariant to shifts in the random variable X, but not to scale transformations.
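A minimal implementation of the empirical CRE can make the estimator concrete; pairing the weight J(i/n) with the (i + 1)-th order statistic is one plausible reading of the indexing, adopted here as an assumption. The sketch demonstrates consistency on Exp(1) data, for which CRE(X) = 1, and the behavior under an affine map aX + b:

```python
# L-functional CRE estimator sketch; the pairing of J(i/n) with the
# (i+1)-th order statistic is an assumed reading of the index convention,
# and the sample size, seed, and constants a, b are illustrative.
import math
import random

def cre_hat(sample):
    xs = sorted(sample)                 # xs[i] is the (i+1)-th order statistic
    n = len(xs)
    # J(u) = -log(1 - u) - 1, evaluated at u = i/n for i = 0, ..., n-1
    return sum((-math.log(1.0 - i / n) - 1.0) * xs[i] for i in range(n)) / n

random.seed(7)
x = [random.expovariate(1.0) for _ in range(20_000)]   # Exp(1): CRE(X) = 1

a, b = 2.5, 4.0
print(round(cre_hat(x), 2))                            # ≈ 1.0
print(round(cre_hat([a * v + b for v in x]), 2))       # ≈ a times the first value
```

Consistent with the Riemann-sum argument used in the proof that follows, the shift b contributes essentially nothing for large n, so the second estimate is approximately a times the first.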
Theorem 4.
Assume that X_1, X_2, …, X_n is a random sample of size n taken from a population with pdf f and cdf F, and let a > 0 and b be constants. Then, when n is sufficiently large, the following properties apply:
(i) 
E[CRÊ_n(aX + b)] = a E[CRÊ_n(X)],
(ii) 
Var[CRÊ_n(aX + b)] = a² Var[CRÊ_n(X)],
(iii) 
RMSE[CRÊ_n(aX + b)] = a RMSE[CRÊ_n(X)].
Proof. 
It is not hard to see that
CRÊ_n(aX + b) = (1/n) Σ_{i=0}^{n−1} J(i/n)(a X_{i+1:n} + b) = (a/n) Σ_{i=0}^{n−1} J(i/n) X_{i+1:n} + (b/n) Σ_{i=0}^{n−1} J(i/n) = a CRÊ_n(X).
The last equality is obtained by noting that
(1/n) Σ_{i=0}^{n−1} J(i/n) = (1/n) Σ_{i=0}^{n−1} [−log((n − i)/n) − 1] ≈ 0,
since
(1/n) Σ_{i=0}^{n−1} [−log((n − i)/n) − 1]
is a Riemann sum for the integral ∫_0^1 [−log(1 − x) − 1] dx = 0, and hence vanishes as n grows large. The proof is then completed by leveraging the properties of the mean, variance, and RMSE under the relation CRÊ_n(aX + b) = a CRÊ_n(X). □
Since D̂_1(n, m) is a linear combination of the dispersion measures CRÊ_n(X) and CRÊ_m(Y), the analogues of Theorem 4 for D̂_1(n, m) follow directly from the corresponding properties of these estimators. The following theorem establishes the asymptotic normality of the test statistic D̂_1(n, m), providing the theoretical foundation for its use as an evidence function in testing the dilation order.
Theorem 5.
Assume that E(X²) < ∞ and E(Y²) < ∞, with σ²(F) > 0 and σ²(G) > 0. Let S = m + n and suppose that, for some 0 < τ < 1, n/S → τ and m/S → 1 − τ as min(n, m) → ∞. Then, as min{n, m} → ∞, √S [D̂_1(n, m) − D_1(X, Y)] →_d N(0, σ²(F, G)), where
σ²(F, G) = σ²(F)/τ + σ²(G)/(1 − τ),
with
σ²(F) = ∫_0^∞ ∫_0^∞ [F(min(x, y)) − F(x) F(y)] J(F(x)) J(F(y)) dx dy,
and σ²(G) defined analogously.
Proof. 
Since the function J is bounded and continuous, Theorems 2 and 3 of [30] imply that √n [CRÊ_n(X) − CRE(X)] converges in distribution to a normal law with mean zero and finite variance σ²(F) > 0 as n → ∞. A similar convergence holds for CRÊ_m(Y); since the two samples are independent, asymptotic normality carries over to the difference D̂_1(n, m).
To address the dependence on the unknown distribution function, we employ a consistent estimator of the variance. Following the representation of [31], we define
σ̂_n²(F) = Σ_{i=1}^{n−1} Σ_{j=1}^{n−1} [min(i/n, j/n) − (i/n)(j/n)] J(i/n) J(j/n) (X_{i+1:n} − X_{i:n})(X_{j+1:n} − X_{j:n}),
with σ̂_m²(G) defined analogously. The decision rule for rejecting H_0 in favor of H_1 at significance level q is:
|D̂_1(n, m)| / √(σ̂_n²(F)/n + σ̂_m²(G)/m) > z_{1−q/2},  (19)
where z_{1−q/2} represents the (1 − q/2)-quantile of the standard normal distribution. □
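Under the assumptions of Theorem 5, the entire CRE-based procedure, point estimate, plug-in variance, and normal decision rule, fits in a few lines. The sketch below uses illustrative exponential samples and seed, and the order-statistic indexing is one plausible reading:

```python
# Sketch of the CRE-based evidence test of Theorem 5: D1-hat, the plug-in
# variance estimator, and the normal rejection rule. Distributions, sizes,
# seed, and the index convention are illustrative assumptions.
import math
import random
from statistics import NormalDist

def cre_hat(sample):
    xs = sorted(sample)
    n = len(xs)
    return sum((-math.log(1.0 - i / n) - 1.0) * xs[i] for i in range(n)) / n

def var_hat(sample):
    """Plug-in estimator of the asymptotic variance of CRE-hat."""
    xs = sorted(sample)
    n = len(xs)
    J = [-math.log(1.0 - i / n) - 1.0 for i in range(n)]
    gaps = [xs[i] - xs[i - 1] for i in range(1, n)]   # spacings X_{i+1:n} - X_{i:n}
    total = 0.0
    for i in range(1, n):
        for j in range(1, n):
            u, v = i / n, j / n
            total += (min(u, v) - u * v) * J[i] * J[j] * gaps[i - 1] * gaps[j - 1]
    return total

def d1_test(x, y, q=0.05):
    d1 = cre_hat(y) - cre_hat(x)
    se = math.sqrt(var_hat(x) / len(x) + var_hat(y) / len(y))
    z = d1 / se
    return z, abs(z) > NormalDist().inv_cdf(1.0 - q / 2.0)

random.seed(11)
x = [random.expovariate(1.0) for _ in range(300)]
y_null = [random.expovariate(1.0) + 2.0 for _ in range(300)]  # location shift only
y_alt = [random.expovariate(0.5) for _ in range(300)]         # genuinely more dispersed

print(d1_test(x, y_null)[1])   # shift leaves CRE unchanged, so D1 targets zero
print(d1_test(x, y_alt)[1])    # dispersion increase pushes D1 away from zero
```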
We now present the analogous result for the cumulative entropy. To this end, we propose a nonparametric estimator of CE, derived from the L -functional representation, defined as:
CÊ_n(X) = ∫_0^∞ x J̄(F̂_n(x)) dF̂_n(x) = (1/n) Σ_{i=1}^{n} J̄(i/n) X_{i:n},
where J̄(u) = log(u) + 1, for 0 < u < 1. A similar estimator can be constructed for CÊ_m(Y). A result analogous to Theorem 4 can also be obtained for the estimator CÊ_n(X).
Theorem 6.
Assume that X_1, X_2, …, X_n is a random sample of size n taken from a population with pdf f and cdf F, and let a > 0 and b be constants. Then, when n is sufficiently large, the following properties apply:
(i) 
E[CÊ_n(aX + b)] = a E[CÊ_n(X)],
(ii) 
Var[CÊ_n(aX + b)] = a² Var[CÊ_n(X)],
(iii) 
RMSE[CÊ_n(aX + b)] = a RMSE[CÊ_n(X)].
Similar arguments can be applied to obtain the estimator CÊ_m(Y). Since D̂_2(n, m) is a linear combination of the dispersion measures CÊ_n(X) and CÊ_m(Y), the analogues of Theorem 6 for D̂_2(n, m) follow directly from the corresponding properties of these estimators. The asymptotic normality of the CE-based test statistic D̂_2(n, m) is established in the following theorem. Since its proof closely parallels that of Theorem 5, it is omitted here for brevity.
Theorem 7.
Assume that E(X²) < ∞ and E(Y²) < ∞, such that σ̄²(F) > 0 and σ̄²(G) > 0. Let S = m + n and suppose that, for some 0 < τ < 1, we have
n/S → τ,  m/S → 1 − τ,  as min(n, m) → ∞.
Then, as min{n, m} → ∞, √S [D̂_2(n, m) − D_2(X, Y)] is asymptotically normal with mean zero and finite variance
σ̄²(F, G) = σ̄²(F)/τ + σ̄²(G)/(1 − τ),
where
σ̄²(F) = ∫_0^∞ ∫_0^∞ [F(min(x, y)) − F(x) F(y)] J̄(F(x)) J̄(F(y)) dx dy,
and σ̄²(G) is defined analogously.
The estimator for  σ ¯ 2 ( F )  is defined as:
σ̄̂_n²(F) = Σ_{i=1}^{n−1} Σ_{j=1}^{n−1} [min(i/n, j/n) − (i/n)(j/n)] J̄(i/n) J̄(j/n) (X_{i+1:n} − X_{i:n})(X_{j+1:n} − X_{j:n}).
Similarly, we estimate σ̄²(G) by σ̄̂_m²(G). Consequently, the decision rule for rejecting H_0 in favor of H_1 at significance level q is:
|D̂_2(n, m)| / √(σ̄̂_n²(F)/n + σ̄̂_m²(G)/m) > z_{1−q/2},  (21)
where z_{1−q/2} is defined previously.
Remark 2.
An important feature of the dilation order is that it provides a natural framework for characterizing the harmonic new better than used in expectation (HNBUE) and harmonic new worse than used in expectation (HNWUE) aging classes [32,33]. Specifically, a random variable X belongs to the HNBUE (respectively, HNWUE) class if and only if X ≤_dil Y (respectively, Y ≤_dil X), where Y is exponential with mean equal to that of X, i.e., E(Y) = E(X). Building on this foundational concept, we introduce test statistics that can be employed to evaluate the null hypothesis H_0: X follows an exponential distribution vs. H_1: X belongs to the HNBUE or HNWUE class but does not conform to an exponential distribution.
If Y represents a random variable with an exponential distribution with mean E(X), we can derive
D 1 ( X , Y ) = 2 E X 0   x log 1 F x d F x ,
and
D 2 ( X , Y ) = 0   x log F x d F x E X π 2 16 2 ,
where measures quantify deviation from  H 0  to  H 1 . The measures are empirically estimated, respectively, as:
D ^ 1 ( n , m ) = 1 n i = 0 n 1   J i n X i : n ,     and     D ^ 2 ( n , m ) = 1 n i = 0 n 1   J ¯ i n X i : n ,
where  J u = 2 l o g ( 1 u )  and  J ¯ ( u ) = l o g ( u ) π 2 16 2  for  0 < u < 1 . Similar to Theorem 4 can also be obtained for the estimators  D ^ 1 ( n , m )  and  D ^ 2 ( n , m ) . To obtain scale invariant tests, we can use the statistics  D ^ 1 H N B U E ( n , m ) = D ^ 1 ( n , m ) / X ¯  and  D ^ 2 H N B U E ( n , m ) = D ^ 2 ( n , m ) / X ¯ , where  X ¯  represents the sample mean. By similar arguments as in the proofs of Theorems 5 and 7, and applying Slutsky’s theorem, we obtain the following asymptotic distributions:
√n [D̂_1^HNBUE(n, m) − D_1(X, Y)/E(X)] →_d N(0, σ_J²(F)/E²(X)),
and
√n [D̂_2^HNBUE(n, m) − D_2(X, Y)/E(X)] →_d N(0, σ_J̄²(F)/E²(X)).
The null hypothesis H_0 should be rejected by the CRE- and CE-based tests, respectively, if
√(n/σ̂_{n,J}²(F)) |D̂_1(n, m)| > z_{1−q/2},  and  √(n/σ̂_{n,J̄}²(F)) |D̂_2(n, m)| > z_{1−q/2},
where σ̂_{n,J}²(F) and σ̂_{n,J̄}²(F) are the estimators of σ_J²(F) and σ_J̄²(F), respectively.
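The exponentiality statistics of this remark admit a compact sketch; starting each sum where the weight is finite is an implementation choice, and the sample size and seed are illustrative. For exponential data, both statistics should hover near zero:

```python
# Sketch of the exponentiality statistics of Remark 2, with
# J(u) = 2 + log(1-u) and J-bar(u) = -log(u) + pi^2/6 - 2; sum ranges are
# chosen so every weight is finite, an implementation assumption.
import math
import random

def hnbue_stats(sample):
    xs = sorted(sample)
    n = len(xs)
    d1 = sum((2.0 + math.log(1.0 - i / n)) * xs[i] for i in range(n)) / n
    d2 = sum((-math.log(i / n) + math.pi**2 / 6.0 - 2.0) * xs[i - 1]
             for i in range(1, n + 1)) / n
    return d1, d2

random.seed(3)
exp_sample = [random.expovariate(1.0) for _ in range(50_000)]
d1, d2 = hnbue_stats(exp_sample)
print(round(d1, 2), round(d2, 2))   # both ≈ 0 under H0 (exponential data)
```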

3.1. Simulation Study

To assess the finite-sample performance of the proposed tests in (19) and (21), we carried out a simulation study comparing their power functions across a range of representative probability models. The chosen distributions are widely applied in economics, finance, insurance, and reliability, and together they span scenarios from light-tailed to heavy-tailed behavior. As a natural benchmark, we first considered the exponential distribution, a standard reference model in reliability theory whose tail behavior provides a baseline for detecting departures toward heavier-tailed alternatives. To capture such heavy-tailed phenomena, we included the Pareto distribution, commonly employed in economics, finance, and insurance to model extreme events. Its scale and shape parameters strongly affect dispersion, making it particularly relevant for testing under the dilation order. We also examined the gamma distribution, a versatile model frequently used in econometrics, Bayesian analysis, and life-testing. Its shape–scale parameterization offers flexibility in modeling waiting times, and in the special case of integer shape parameters it reduces to the Erlang distribution. Finally, we incorporated the Weibull distribution, another classical lifetime model with broad applications in reliability and survival analysis, well known for its ability to describe diverse aging behaviors. Together, these four families, exponential, Pareto, gamma, and Weibull, provide a balanced experimental design that reflects both exponential-tail and long-tail settings. For comparability and to ensure meaningful stochastic orderings, all simulated distributions were standardized to share a common mean, although this constraint is not required for the theoretical validity of the proposed tests. We evaluated our proposed tests by comparing their empirical power against four recent tests for the dilation order. 
Specifically, we compared our statistics $\hat{D}_1(n,m)$ and $\hat{D}_2(n,m)$ with the following test statistics:
  • Aly’s $t_N$ statistic [28]:
    $$ t_N = \delta_m(G) - \delta_n(F), $$
    where
    $$ \delta_m(G) = \frac{1}{m^2}\sum_{i=2}^{m}(i-1)(m-i+1)\left(Y_{i:m} - Y_{i-1:m}\right), $$
    and $\delta_n(F)$ is defined analogously from the ordered $X$ sample.
  • The test statistic $\hat{\Delta}_{\mathrm{dil}}^{\alpha_1}(X,Y)$ proposed by Belzunce et al. [32]:
    $$ \hat{\Delta}_{\mathrm{dil}}^{\alpha_1}(X,Y) = \frac{1}{n}\sum_{i=1}^{n}\left(c_{i,n}^{\alpha_1} - \frac{1-\alpha_1^2}{6}\right)X_{i:n} - \frac{1}{m}\sum_{i=1}^{m}\left(c_{i,m}^{\alpha_1} - \frac{1-\alpha_1^2}{6}\right)Y_{i:m}, $$
    where
    $$ c_{i,r}^{\alpha_1} = \begin{cases} \dfrac{1}{\alpha_1}\,\dfrac{3r^2\alpha_1 - 3i^2 + 3i - 1}{6r^2}, & \text{if } i < k+1,\\[6pt] \dfrac{3\alpha_1(r-k-1)^2 + (k - r\alpha_1)^3 + \alpha_1(3r - 3k - 2)}{6r^2\alpha_1}, & \text{if } i = k+1,\\[6pt] \dfrac{3(r-i)^2 + 3(r-i) + 1}{6r^2}, & \text{if } i > k+1, \end{cases} $$
    with $k \le r\alpha_1 < k+1 \le r$.
  • The test statistic $\hat{\Delta}(n,m)$ introduced by [33], which is based on Gini’s mean difference:
    $$ \hat{\Delta}_{n,m} = \frac{m+1}{m(m-1)}\sum_{i=1}^{m} 2\left(\frac{2i}{m+1}-1\right)Y_{i:m} - \frac{n+1}{n(n-1)}\sum_{i=1}^{n} 2\left(\frac{2i}{n+1}-1\right)X_{i:n}. $$
  • Zardasht’s $T_{nm}^{\alpha_2}$ statistic [29], defined as
    $$ T_{nm}^{\alpha_2} = \frac{1}{m}\sum_{i=1}^{m} J_{\alpha_2}\!\left(\frac{i}{m}\right)Y_{i:m} - \frac{1}{n}\sum_{i=1}^{n} J_{\alpha_2}\!\left(\frac{i}{n}\right)X_{i:n}, $$
    where $J_{\alpha_2}(u) = \frac{1}{1-\alpha_2}\left[1 - \alpha_2(1-u)^{\alpha_2 - 1}\right]$ for all $\alpha_2 > 0$ with $\alpha_2 \neq 1$.
The statistic $\hat{\Delta}_{\mathrm{dil}}^{\alpha_1}(X,Y)$ from [32] depends on a parameter $\alpha_1 \in (0,1)$; since its performance remains largely consistent across different $\alpha_1$ values, we adopted $\alpha_1 = 0.5$ for our analysis. Similarly, for $T_{nm}^{\alpha_2}$ from [29], we chose $\alpha_2 = 2$. We also simulated the following scenarios, which are tabulated in Table 1, and compared the empirical powers of the test statistics.
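For concreteness, Aly's $t_N$ is the simplest of these competitors to implement directly from the order statistics. The sketch below follows the definition above; the function names `aly_delta` and `aly_tN` are ours, and the functions accept raw, unsorted samples.

```python
import numpy as np

def aly_delta(sample):
    """Aly's dispersion functional for a single sample of size r:
    delta_r = (1/r^2) * sum_{i=2}^{r} (i-1)(r-i+1) * (x_(i) - x_(i-1))."""
    x = np.sort(np.asarray(sample, dtype=float))
    r = x.size
    i = np.arange(2, r + 1)          # i = 2, ..., r
    gaps = x[1:] - x[:-1]            # spacings x_(i) - x_(i-1)
    return np.sum((i - 1) * (r - i + 1) * gaps) / r**2

def aly_tN(x_sample, y_sample):
    """t_N = delta_m(G) - delta_n(F); large positive values indicate
    that the Y sample is the more dispersed of the two."""
    return aly_delta(y_sample) - aly_delta(x_sample)
```

Because the functional is built from weighted sample spacings, `aly_delta` is zero for a degenerate sample and grows with the spread of the order statistics, which is what makes $t_N$ sensitive to dispersion differences.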
Table 1. Probability distributions with the shape parameter β and the scale parameter γ.
(i) Exponential distribution: $X \sim E(1)$ and $Y \sim E(1/\beta)$, where $\beta$ is varied from 1 to 2. The null hypothesis corresponds to $\beta = 1$.
(ii) Pareto distribution: $X \sim P(10, 3)$ and $Y \sim P(10/\beta, 3)$, where $\beta$ is varied from 1 to 2. The null hypothesis corresponds to $\beta = 1$.
(iii) Gamma distribution: $X \sim G(2, 1)$ and $Y \sim G(\beta, 1)$, where $\beta$ is varied from 2 to 3. The null hypothesis corresponds to $\beta = 2$.
(iv) Weibull distribution: $X \sim W(2, 1)$ and $Y \sim W(\beta, 1)$, where $\beta$ is varied from 1 to 2. The null hypothesis corresponds to $\beta = 2$.
(v) Mixture Weibull distribution: for this scenario,
$$ f_Z(z) = 0.5\left[f_X(z) + f_Y(z)\right], \quad z > 0, $$
where $X \sim W(2, 1)$ and $Y \sim W(\beta, 1)$, with $\beta$ varied from 1 to 2. The null hypothesis corresponds to $\beta = 2$.
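Scenario (v)'s equal-weight mixture can be simulated by first picking a component at random and then drawing from the corresponding Weibull. A minimal sketch (the function name `rmixweibull` is ours), using NumPy's standard Weibull generator with unit scale:

```python
import numpy as np

def rmixweibull(size, beta, rng=None):
    """Draw `size` values from the 50/50 mixture f_Z = 0.5*(f_X + f_Y),
    where X ~ Weibull(shape=2, scale=1) and Y ~ Weibull(shape=beta, scale=1).
    Each draw picks a component with probability 1/2, then samples it."""
    rng = np.random.default_rng(rng)
    shapes = np.where(rng.random(size) < 0.5, 2.0, beta)
    return rng.weibull(shapes)   # unit scale, so no rescaling is needed
```

At $\beta = 2$ the two components coincide and the mixture collapses to $W(2,1)$, recovering the null configuration.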
The asymptotic distribution of the test statistics is crucial for determining the critical values used to decide on the null hypothesis. To examine the empirical densities of the estimators $\hat{D}_1(n,m)$ and $\hat{D}_2(n,m)$ under the null hypothesis, we conducted an extensive Monte Carlo simulation study. Specifically, we generated 20,000 independent pairs of random samples from each of the four distributions listed in Table 1, as well as the mixture Weibull distribution. For each distribution, we considered three sample-size configurations, $n = m = 25, 50$, and 100, under the null hypothesis. The Q-Q plots of the two estimators are presented in Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6. These plots provide valuable insight into the shape, center, and spread of the estimators’ distributions under $H_0$ and illustrate the convergence toward normality as the sample size increases, consistent with the asymptotic theory.
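The normality check behind these Q-Q plots can also be summarized numerically, without plotting: simulate the statistic under the null, standardize the sorted values, and correlate them with standard normal quantiles. The sketch below is ours; `stat_fn` is a placeholder for any two-sample statistic (the paper's $\hat{D}_1$ and $\hat{D}_2$ are defined earlier), and we use a simple mean-difference statistic only to illustrate the mechanics.

```python
import numpy as np
from statistics import NormalDist

def qq_correlation(stat_fn, sampler, n, m, reps=2000, rng=None):
    """Correlation between ordered, standardized null statistics and the
    corresponding standard normal quantiles: a numeric Q-Q plot summary.
    Values near 1 indicate approximate normality under H0."""
    g = np.random.default_rng(rng)
    stats = np.sort([stat_fn(sampler(n, g), sampler(m, g)) for _ in range(reps)])
    stats = (stats - stats.mean()) / stats.std()
    probs = (np.arange(1, reps + 1) - 0.5) / reps      # plotting positions
    normal_q = np.array([NormalDist().inv_cdf(p) for p in probs])
    return float(np.corrcoef(stats, normal_q)[0, 1])
```

A correlation drifting toward 1 as $n = m$ grows mirrors what the Q-Q plots in Figures 2 through 6 show graphically.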
Figure 2. The Q-Q plots of the estimators D ^ 1 ( n , m ) and D ^ 2 ( n , m ) obtained from simulations under the exponential distribution for various sample sizes.
Figure 3. The Q-Q plots of the estimators D ^ 1 ( n , m ) and D ^ 2 ( n , m ) obtained from simulations under the Pareto distribution for various sample sizes.
Figure 4. The Q-Q plots of the estimators D ^ 1 ( n , m ) and D ^ 2 ( n , m ) obtained from simulations under the Weibull distribution for various sample sizes.
Figure 5. The Q-Q plots of the estimators D ^ 1 ( n , m ) and D ^ 2 ( n , m ) obtained from simulations under the gamma distribution for various sample sizes.
Figure 6. The Q-Q plots of the estimators D ^ 1 ( n , m ) and D ^ 2 ( n , m ) obtained from simulations under the mixture Weibull distribution for various sample sizes.
To obtain the critical values of $\hat{D}_1(n,m)$ and $\hat{D}_2(n,m)$, we generated 5000 pairs of samples of size $n = m$ from the null-hypothesis distributions. From the resulting 5000 values of each estimator, the $(1-q/2)$th quantile yields the critical value corresponding to sample size $n = m$ at significance level $q$. Denoting the two-sided critical values by $\hat{D}_{1,1-q/2}(n,m)$ and $\hat{D}_{2,1-q/2}(n,m)$, the first test rejects the null hypothesis at size $q$ whenever $|\hat{D}_1(n,m)| > \hat{D}_{1,1-q/2}(n,m)$, and the second whenever $|\hat{D}_2(n,m)| > \hat{D}_{2,1-q/2}(n,m)$. These critical values were computed from 5000 samples at each sample size generated from the null distribution at significance level $q = 0.05$. The empirical power of the proposed test statistics was then evaluated under the five distribution functions (exponential, Pareto, gamma, Weibull, and mixture Weibull), using 5000 independent pairs of random samples for each configuration, with equal sample sizes $n = m = 25, 50$, and 100. For each replication, the null hypothesis of dilation equivalence was tested, and the empirical power was calculated as the proportion of rejections among the 5000 simulations. The results, summarized in Table 2, Table 3, Table 4, Table 5 and Table 6, confirm the expected consistency: power generally increases with sample size for all methods.
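The critical-value and power computations just described can be sketched generically. In the sketch below (function names are ours), `stat_fn` stands in for any of the test statistics, such as $\hat{D}_1$ or $\hat{D}_2$; the critical value is the $(1-q/2)$th quantile of the statistic over null replications, matching the two-sided rejection rule above.

```python
import numpy as np

def critical_value(stat_fn, null_sampler, n, m, q=0.05, reps=5000, rng=None):
    """(1 - q/2)th quantile of the statistic over `reps` null replications,
    used as the two-sided critical value for |statistic|."""
    g = np.random.default_rng(rng)
    stats = [stat_fn(null_sampler(n, g), null_sampler(m, g)) for _ in range(reps)]
    return np.quantile(stats, 1 - q / 2)

def empirical_power(stat_fn, x_sampler, y_sampler, crit, n, m, reps=5000, rng=None):
    """Proportion of replications in which |statistic| exceeds the critical value."""
    g = np.random.default_rng(rng)
    rejections = sum(
        abs(stat_fn(x_sampler(n, g), y_sampler(m, g))) > crit for _ in range(reps)
    )
    return rejections / reps
```

Calling `empirical_power` with the same sampler for both groups should return approximately the nominal level $q$, which is a quick sanity check on the whole pipeline.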
Table 2. Power comparisons of the tests in significance level q = 0.05.
Table 3. Power comparisons of the tests in significance level q = 0.05.
Table 4. Power comparisons of the tests in significance level q = 0.05.
Table 5. Power comparisons of the tests in significance level q = 0.05.
Table 6. Power comparisons of the tests in significance level q = 0.05.
Notably, the CE-based statistic $\hat{D}_2(n,m)$ exhibits superior power in most scenarios, particularly for the exponential, Pareto, and gamma distributions. This underscores its effectiveness in detecting deviations from exponentiality toward heavier-tailed or more dispersed alternatives within these families. However, $\hat{D}_2(n,m)$ shows weaker performance under the Weibull and mixture Weibull alternatives, indicating limited sensitivity to the specific dispersion characteristics of these models. In contrast, the CRE-based statistic $\hat{D}_1(n,m)$ demonstrates modest power in small samples but improves markedly as the sample size grows, especially in the Weibull and mixture Weibull settings, where it often outperforms competing methods, including $\hat{D}_2(n,m)$, at $n = m = 100$. This suggests that $\hat{D}_1(n,m)$ gains reliability and robustness with larger datasets for these distributions.
Overall, while $\hat{D}_2(n,m)$ emerges as a versatile and powerful tool across a broad range of distributions, its suitability depends on the underlying data-generating process. Likewise, $\hat{D}_1(n,m)$ offers complementary strengths, particularly in Weibull-type or heavy-tailed mixture contexts. Thus, the two statistics are best viewed as complementary evidence functions, with the choice between them guided by the nature of the distribution and the available sample size.

3.2. Real-Data Analysis

To illustrate the practical utility of the proposed methodology, we analyze a real dataset on survival times of male RFM strain mice, originally reported by [34]. The study considered two groups: the first group ($X$) consisted of mice raised under conventional laboratory conditions, while the second group ($Y$) was raised in a germ-free environment. In both groups, death was due to thymic lymphoma, allowing a direct comparison of survival variability under different environmental conditions. This dataset is particularly valuable in survival and reliability analysis because it allows us to investigate how external factors, in this case environmental exposure, affect the dispersion of lifetimes. Specifically, our interest lies in testing whether the survival distribution of mice raised in germ-free conditions ($Y$) is more dispersed than that of mice raised conventionally ($X$), which corresponds to verifying the dilation order relationship $X \leq_{\mathrm{dil}} Y$.
As a preliminary step, we followed the graphical approach recommended by [5], which suggested evidence consistent with the dilation ordering $X \leq_{\mathrm{dil}} Y$. Building on this, we applied six test statistics, including the proposed entropy-based measures $\hat{D}_1(n,m)$ and $\hat{D}_2(n,m)$, to formally assess the hypothesis. The results, summarized in Table 7, provide strong statistical support for the dilation order. In particular, all six test statistics yielded small p-values, leading to rejection of the null hypothesis of equality and confirming the alternative $X \leq_{\mathrm{dil}} Y$ at significance level $q = 0.05$. The entropy-based test $\hat{D}_2$ was especially effective, delivering the strongest evidence among the six statistics.
Table 7. Statistical test results for real dataset.
This real-data application highlights three key insights. First, it demonstrates how cumulative entropies can function as practical evidence measures, capable of validating stochastic orderings in empirical settings. Second, it shows that entropy-based tests can reveal differences in variability between populations that may not be apparent from mean comparisons alone. Third, it illustrates the robustness and versatility of the proposed methodology in survival data analysis, with implications extending to biomedical research, reliability engineering, and actuarial science. This validation on real data underscores that entropy-based evidence functions are not only theoretically sound but also practically reliable, even in complex biological survival settings.

4. Conclusions

This paper introduced novel classes of entropy-based test statistics for assessing the dilation order, rooted in an evidential interpretation of stochastic variability. By leveraging CRE and CE, we constructed evidence functions that quantify the degree to which one distribution exhibits greater variability than another, without requiring parametric assumptions. These statistics not only offer a principled measure of divergence aligned with dilation ordering but also serve as interpretable evidence metrics in hypothesis testing. Moreover, we established the theoretical foundation of the proposed methods by deriving their asymptotic distributions and analyzing their large-sample behavior, ensuring validity under standard regularity conditions. Extensive simulation studies demonstrated that the CE-based statistic achieves high power and strong consistency across diverse alternatives, even in moderate samples, while the CRE-based counterpart exhibits notable robustness and improved performance as the sample size increases. This complementary behavior underscores their joint utility across both small- and large-sample regimes. The practical relevance of the framework is illustrated through an analysis of survival times from RFM strain mice, where the proposed tests provide statistically significant evidence of dilation ordering between treatment groups. This real-world application highlights the methodology’s potential in survival analysis, reliability engineering, and biomedical research, where assessing variability differences is often more informative than mean comparisons.
Furthermore, the approach extends to the classical problem of testing exponentiality against HNBUE and HNWUE alternatives, a critical task in reliability and actuarial science. The resulting tests inherit the evidential structure of the main framework, thereby bridging stochastic orders, aging properties, and information-theoretic measures in a unified setting. Despite these advances, several promising directions remain open for future work:
  • Refined inference procedures, such as bootstrap or permutation-based methods, to enhance small-sample accuracy in estimating the strength of evidence, quantifying its uncertainty, and controlling the Type I error of the tests.
  • Extension to multivariate settings, where notions of dilation order and entropy must be generalized to account for dependence and dimensionality.
  • Adaptation to time-dependent and censored data, including survival models with covariates, recurrent events, or temporal dependence structures (e.g., Markov or stationary processes).
  • Robustness analysis under model misspecification and integration into Bayesian evidence frameworks, potentially via entropy-based Bayes factors.
  • Generalization to alternative entropy measures, such as Rényi or Tsallis entropies, which may yield more flexible evidence functions adaptable to heavy-tailed or asymmetric distributions.
Together, these future avenues promise to broaden the scope, rigor, and applicability of entropy-based evidential testing in both theoretical and applied statistics.

Funding

The author would like to thank the Ongoing Research Funding Program (ORFFT-2025-129-2), King Saud University, Riyadh, Saudi Arabia for financial support.

Institutional Review Board Statement

This study did not involve human participants or animals.

Data Availability Statement

The data generated or analyzed during this study are included in the article.

Acknowledgments

The author would like to thank the Academic Editors and the three anonymous reviewers for their helpful, constructive, and valuable comments that greatly strengthened this manuscript.

Conflicts of Interest

The author declares that there are no known competing financial interests that could have appeared to influence the work reported in this paper.

References

  1. Hickey, R.J. Concepts of dispersion in distributions: A comparative note. J. Appl. Probab. 1986, 23, 914–921.
  2. Shaked, M.; Shanthikumar, J.G. Stochastic Orders; Springer Science & Business Media: New York, NY, USA, 2007.
  3. Sordo, M.A.; de Souza, M.C.; Suárez-Llorens, A. Testing variability orderings by using Gini’s mean differences. Stat. Methodol. 2016, 32, 63–76.
  4. Ramos, H.M.; Sordo, M.A. Dispersion measures and dispersive orderings. Stat. Probab. Lett. 2003, 61, 123–131.
  5. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
  6. Di Crescenzo, A.; Longobardi, M. On weighted residual and past entropies. arXiv 2007, arXiv:math/0703489.
  7. Schroeder, M.J. An alternative to entropy in the measurement of information. Entropy 2004, 6, 388–412.
  8. Rao, M.; Chen, Y.; Vemuri, B.C.; Wang, F. Cumulative residual entropy: A new measure of information. IEEE Trans. Inf. Theory 2004, 50, 1220–1228.
  9. Rao, M. More on a new concept of entropy and information. J. Theor. Probab. 2005, 18, 967–981.
  10. Asadi, M.; Zohrevand, Y. On the dynamic cumulative residual entropy. J. Stat. Plan. Inference 2007, 137, 1931–1941.
  11. Baratpour, S. Characterizations based on cumulative residual entropy of first-order statistics. Commun. Stat. Theory Methods 2010, 39, 3645–3651.
  12. Navarro, J.; del Aguila, Y.; Asadi, M. Some new results on the cumulative residual entropy. J. Stat. Plan. Inference 2010, 140, 310–322.
  13. Baratpour, S.; Habibi Rad, A. Testing goodness-of-fit for exponential distribution based on cumulative residual entropy. Commun. Stat. Theory Methods 2012, 41, 1387–1396.
  14. Di Crescenzo, A.; Longobardi, M. On cumulative entropies. J. Stat. Plan. Inference 2009, 139, 4072–4087.
  15. Klein, I.; Mangold, B.; Doll, M. Cumulative paired φ-entropy. Entropy 2016, 18, 248.
  16. Klein, I.; Doll, M. (Generalized) maximum cumulative direct, residual, and paired Φ entropy approach. Entropy 2020, 22, 91.
  17. Fitousi, D. Quantifying entropy in response times (RT) distributions using the cumulative residual entropy (CRE) function. Entropy 2023, 25, 1239.
  18. Barberi, E.; D’Alessandro, A.; Di Stefano, A.; Foti, C.; La Rosa, F.; Sciacca, G. DECI: A differential entropy-based compactness index for point clouds analysis: Method and potential applications. Eng. Proc. 2023, 56, 273.
  19. Kayid, M.; Alshehri, M.A. Cumulative residual entropy of the residual lifetime of a mixed system at the system level. Entropy 2023, 25, 1033.
  20. Benmahmoud, S. A new class of fractional cumulative residual entropy—Some theoretical results. J. Telecommun. Inf. Technol. 2023, 2023, 45–53.
  21. Taper, M.L.; Lele, S.R. Evidence, evidence functions, and error probabilities. In Philosophy of Statistics; North-Holland: Amsterdam, The Netherlands, 2011; pp. 513–532.
  22. Dennis, B.; Ponciano, J.M.; Taper, M.L.; Lele, S.R. Errors in statistical inference under model misspecification: Evidence, hypothesis testing, and AIC. Front. Ecol. Evol. 2019, 7, 372.
  23. Taper, M.L.; Lele, S.R.; Ponciano, J.M.; Dennis, B.; Jerde, C.L. Assessing the global and local uncertainty of scientific evidence in the presence of model misspecification. Front. Ecol. Evol. 2021, 9, 679155.
  24. Taper, M.L.; Ponciano, J.M.; Dennis, B. Entropy, statistical evidence, and scientific inference: Evidence functions in theory and applications. Entropy 2022, 24, 1273.
  25. Dennis, B.; Taper, M.L.; Ponciano, J.M. Evidential analysis: An alternative to hypothesis testing in normal linear models. Entropy 2024, 26, 964.
  26. Cahusac, P.M. Likelihood ratio test and the evidential approach for 2 × 2 tables. Entropy 2024, 26, 375.
  27. Powell, J.H.; Kalinowski, S.T.; Taper, M.L.; Rotella, J.J.; Davis, C.S.; Garrott, R.A. Evidence of an absence of inbreeding depression in a wild population of Weddell seals (Leptonychotes weddellii). Entropy 2023, 25, 403.
  28. Aly, E.-E.A.A. A simple test for dispersive ordering. Stat. Probab. Lett. 1990, 9, 323–325.
  29. Zardasht, V. Testing the dilation order by using cumulative residual Tsallis entropy. J. Stat. Comput. Simul. 2019, 89, 1516–1525.
  30. Stigler, S.M. Linear functions of order statistics with smooth weight functions. Ann. Stat. 1974, 2, 676–693.
  31. Jones, B.L.; Zitikis, R. Empirical estimation of risk measures and related quantities. N. Am. Actuar. J. 2003, 7, 44–54.
  32. Belzunce, F.; Pinar, J.F.; Ruiz, J.M. On testing the dilation order and HNBUE alternatives. Ann. Inst. Stat. Math. 2005, 57, 803–815.
  33. Belzunce, F.; Candel, J.; Ruiz, J.M. Testing mean residual alternatives by dispersion of residual lives. J. Stat. Plan. Inference 2000, 86, 113–127.
  34. Hoel, D.G. A representation of mortality data by competing risks. Biometrics 1972, 28, 475–488.