
Covariance Representations and Coherent Measures for Some Entropies

School of Statistics and Data Science, Qufu Normal University, Qufu 273165, China
*
Author to whom correspondence should be addressed.
Entropy 2023, 25(11), 1525; https://doi.org/10.3390/e25111525
Submission received: 25 September 2023 / Revised: 3 November 2023 / Accepted: 4 November 2023 / Published: 7 November 2023
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract
We obtain covariance and Choquet integral representations for some entropies and give upper bounds for those entropies. The coherent properties of those entropies are discussed. Furthermore, we propose the tail-based cumulative residual Tsallis entropy of order α (TCRTE) and the tail-based right-tail deviation (TRTD); we then define the shortfall of cumulative residual Tsallis entropy (CRTES) and the shortfall of right-tail deviation entropy (RTDS) and provide some equivalent results. As illustrative examples, the CRTESs of the elliptical, inverse Gaussian, gamma and beta distributions are simulated.

1. Introduction

A risk measure is a functional $\rho$ that maps a convex cone $\mathcal{X}$ of risks to $\overline{\mathbb{R}} = [-\infty, +\infty]$ on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$. A good risk measure should satisfy some desirable properties (see, e.g., [1,2,3]). Several standard properties for general risk measures are as follows:
(A)
Law invariance: For $V, W \in \mathcal{X}$, if $V \stackrel{d}{=} W$, then $\rho(V) = \rho(W)$;
(A1)
Monotonicity: For $V, W \in \mathcal{X}$, if $V \le W$, then $\rho(V) \le \rho(W)$;
(A2)
Translation invariance: For $V \in \mathcal{X}$ and any $c \in \mathbb{R}$, we have $\rho(V + c) = \rho(V) + c$;
(A3)
Positive homogeneity: For $V \in \mathcal{X}$ and any $\lambda \in \mathbb{R}_+ = [0, \infty)$, we have $\rho(\lambda V) = \lambda\, \rho(V)$;
(A4)
Subadditivity: For $V, W \in \mathcal{X}$, we have $\rho(V + W) \le \rho(V) + \rho(W)$;
(A5)
Comonotonic additivity: If $V$ and $W$ are comonotonic, then $\rho(V + W) = \rho(V) + \rho(W)$.
To estimate and identify a risk measure, law invariance (A) is an essential requirement. When a risk measure $\rho$ further satisfies (A1)–(A4), $\rho$ is said to be coherent. It is well known that value-at-risk (VaR) and expected shortfall (ES) are two extremely important risk measures used in banking and insurance. The VaR and ES at confidence level $p \in (0,1)$ for a random variable (r.v.) $V$ with cumulative distribution function (cdf) $F_V$ are defined as
$$\mathrm{VaR}_p(V) = F_V^{-1}(p) := \inf\{x : F_V(x) \ge p\},$$
and
$$\mathrm{ES}_p(V) = \frac{1}{1-p}\int_p^1 \mathrm{VaR}_q(V)\, dq,$$
respectively. If $F_V$ is continuous, then ES equals the tail conditional expectation (TCE), which is written as
$$\mathrm{TCE}_p(V) = E[V \mid V > v_p],$$
where $v_p = \mathrm{VaR}_p(V)$.
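These quantities are simple to estimate empirically. The following Python sketch (our own illustration; function names are not from the paper) computes the empirical VaR and ES of a sample, where ES is obtained by averaging the empirical quantiles above level $p$; for a continuous $F_V$, this also approximates $\mathrm{TCE}_p(V)$.

```python
import numpy as np

def var_p(sample, p):
    """Empirical VaR_p(V) = inf{x : F_V(x) >= p} from a sample."""
    xs = np.sort(np.asarray(sample, dtype=float))
    k = int(np.ceil(p * len(xs)))      # smallest k with k/n >= p
    return xs[k - 1]

def es_p(sample, p):
    """Empirical ES_p(V) = (1/(1-p)) * integral_p^1 VaR_q(V) dq."""
    xs = np.sort(np.asarray(sample, dtype=float))
    n = len(xs)
    qs = (np.arange(n) + 0.5) / n      # quantile levels of the order statistics
    return xs[qs > p].mean()           # average of the quantiles above level p

sample = np.arange(1, 101)             # toy sample whose empirical F^{-1}(q) is about 100q
print(var_p(sample, 0.9))              # 90
print(es_p(sample, 0.9))               # 95.5 = mean of {91, ..., 100}
```

On this toy sample, the empirical $F$ satisfies $F(90) = 0.9$, so $\mathrm{VaR}_{0.9} = 90$, and the ES averages the top ten observations.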
Some risk measures have other desirable properties, for example: (B1) Standardization: for any $c \in \mathbb{R}$, we have $\nu(c) = 0$; (B2) Location invariance: for $V \in \mathcal{X}$ and any $c \in \mathbb{R}$, we have $\nu(V + c) = \nu(V)$. If a functional $\nu : \mathcal{X} \to \overline{\mathbb{R}}_+ = [0, \infty]$ satisfies law invariance (A), (B1) and (B2), we say that $\nu$ is a measure of variability. If $\nu$ further satisfies properties (A3) and (A4), then we say $\nu$ is a coherent measure of variability.
To capture the variability of the risk $V$ beyond the quantile $v_p$, Furman and Landsman [4] proposed the tail standard deviation (TSD) risk measure
$$\mathrm{TSD}_p^\lambda(V) = \mathrm{TCE}_p(V) + \lambda\, \mathrm{SD}_p(V),$$
where $p \in (0,1)$, $\lambda \ge 0$ denotes the loading parameter and $\mathrm{SD}_p(V)$ the tail standard deviation measure defined as
$$\mathrm{SD}_p(V) = \sqrt{\mathrm{TV}_p(V)}.$$
Here, $\mathrm{TV}_p(V) = E\big[(V - \mathrm{TCE}_p(V))^2 \mid V > v_p\big]$ is the tail variance of $V$. As its extension, Furman et al. [5] introduced the Gini shortfall (GS), which is defined by
$$\mathrm{GS}_p^\lambda(V) = \mathrm{ES}_p(V) + \lambda\, \mathrm{TGini}_p(V),$$
where $\mathrm{TGini}_p(V) = \frac{2}{(1-p)^2}\int_p^1 F_V^{-1}(s)\big(2s - (1+p)\big)\, ds$ is the tail-Gini functional. Recently, Hu and Chen [6] further proposed a shortfall of cumulative residual entropy (CRE), defined by
$$\mathrm{CRES}_p^\lambda(V) = \mathrm{ES}_p(V) + \lambda\, \mathrm{TCRE}_p(V),$$
where $\mathrm{TCRE}_p(V) = \int_0^1 F_V^{-1}(s)\, dh_{v_p}(s)$ is the tail-based CRE of $V$. Here, $h_{v_p}(s) = 0$ for $s \in [0, p)$, and $h_{v_p}(s) = -\frac{1-s}{1-p}\log\frac{1-s}{1-p}$ for $s \in [p, 1]$.
Inspired by those works, our main motivation is to find coherent shortfalls of entropy that generalize the TSD, GS and CRES. These entropy shortfalls can be used to capture the variability of a financial position; for specific financial applications, we refer to [5,7,8]. To this aim, we give covariance and Choquet integral representations for some entropies and provide upper bounds for those entropies. These representations not only make it easier to judge their coherence, but also facilitate extending these results to two- and multi-dimensional cases in the future. Furthermore, we define the TCRTE and TRTD, and propose the CRTES and RTDS. As illustrative examples, the CRTESs of the elliptical, inverse Gaussian, gamma and beta distributions are simulated.
The remainder of this paper is structured as follows. Section 2 provides the covariance and Choquet integral representations for some entropies. Section 3 introduces some tail-based entropies. In Section 4, we propose two shortfalls of entropy, and give some equivalent results. The CRTESs of some parametric distributions are presented in Section 5. Finally, Section 6 concludes this paper and summarizes two possible research studies in the future.
Throughout the paper, let $(\Omega, \mathcal{F}, \mathbb{P})$ be an atomless probability space. For a random variable $V$ with cumulative distribution function (cdf) $F_V$, we use $U_V$ to denote any uniform $[0,1]$ random variable such that $F_V^{-1}(U_V) = V$ holds almost surely. Let $L^k = L^k(\Omega, \mathcal{F}, \mathbb{P})$, $k \in \mathbb{R}_+$, be the set of all random variables on $(\Omega, \mathcal{F}, \mathbb{P})$ with a finite $k$th moment. Denote by $L^0_+$ the set of all non-negative random variables. $g'$ denotes the first derivative of $g$. We write $v_+ = \max\{v, 0\}$, and $\mathbf{1}_A(\cdot)$ is the indicator function of the set $A$.

2. Covariance and Choquet Integral Representations for Some Entropies

In this section, we derive covariance and Choquet integral representations for some entropies, which include initial, weighted and dynamic entropies. In addition, the upper bounds of these entropies are established.
Given $g$ defined on $[0,1]$ with $g(0) = g(1) = 0$, a weight function $\psi(\cdot)$ and an r.v. $X$ with cdf $F_X$, the initial and weighted entropies (forms) are defined as, respectively,
$$\int_{-\infty}^{+\infty} g(F_X(x))\, dx,$$
and
$$\int_{-\infty}^{+\infty} \psi(x)\, g(F_X(x))\, dx.$$
Further, given the two r.v.s $X_t = [X - t \mid X > t]$ and $X_{(t)} = [X \mid X \le t]$, the dynamic entropies (forms) are defined as
$$\int_{-\infty}^{+\infty} g(F_{X_t}(x))\, dx,$$
and
$$\int_{-\infty}^{+\infty} g(F_{X_{(t)}}(x))\, dx.$$
To derive the covariance representation of entropy, we first introduce the following lemma.
Lemma 1
([5]). Let $g$ be an almost everywhere (a.e.) differentiable function on $[0,1]$ with $l = g'$ (a.e.) and $g(0) = g(1) = 0$. Suppose that $X \in L^2$ and $l(U_X) \in L^2$. Then, we have
$$\int_0^1 F_X^{-1}(v)\, dg(v) = \mathrm{Cov}\big(X, l(U_X)\big).$$
Further,
$$\left|\int_0^1 F_X^{-1}(v)\, dg(v)\right| \le \sqrt{\mathrm{Var}(X)\,\mathrm{Var}(l(U_X))}.$$
Proof. 
Since $g$ is almost everywhere differentiable on $[0,1]$, we let $l(\cdot) = g'(\cdot)$ (a.e.). Then,
$$\int_0^1 F_X^{-1}(v)\, dg(v) = \int_0^1 F_X^{-1}(v)\, l(v)\, dv = E\big[X\, l(U_X)\big].$$
Note that $E[l(U_X)] = \int_0^1 l(v)\, dv = g(1) - g(0) = 0$. Therefore,
$$\int_0^1 F_X^{-1}(v)\, dg(v) = \mathrm{Cov}\big(X, l(U_X)\big).$$
Finally, with the correlation coefficient $\varrho = \frac{\mathrm{Cov}(X,\, l(U_X))}{\sqrt{\mathrm{Var}(X)\,\mathrm{Var}(l(U_X))}}$, $-1 \le \varrho \le 1$, the last inequality is immediately obtained. □
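Lemma 1 is easy to check numerically. The sketch below (our own illustration, not from the paper) verifies $\int_0^1 F_X^{-1}(v)\, dg(v) = \mathrm{Cov}(X, l(U_X))$ for $g(v) = v(1-v)$, $l(v) = 1 - 2v$, and $X \sim \mathrm{Exp}(1)$, for which $F_X^{-1}(v) = -\log(1-v)$; a direct calculation shows both sides equal $-1/2$.

```python
import numpy as np

# g(v) = v(1-v), so dg(v) = l(v) dv with l(v) = 1 - 2v; X ~ Exp(1), F^{-1}(v) = -log(1-v)
F_inv = lambda v: -np.log1p(-v)
l = lambda v: 1.0 - 2.0 * v

# Left-hand side: integral_0^1 F^{-1}(v) l(v) dv, midpoint rule
m = 200_000
v = (np.arange(m) + 0.5) / m
lhs = np.mean(F_inv(v) * l(v))

# Right-hand side: Cov(X, l(U_X)) by Monte Carlo, using U_X = F_X(X)
rng = np.random.default_rng(0)
u = rng.random(2_000_000)
x = F_inv(u)                        # inverse-transform sampling: X = F^{-1}(U)
rhs = np.cov(x, l(u))[0, 1]

print(lhs, rhs)                     # both approximately -0.5
```

The agreement of the deterministic quadrature and the Monte Carlo covariance illustrates why $E[l(U_X)] = 0$ lets the integral be read as a covariance.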

2.1. Initial Entropy

To find the covariance representation of the initial entropy, we give the following theorem.
Theorem 1.
Let $g$ be a continuous and almost everywhere differentiable function on $[0,1]$ with $l = g'$ (a.e.) and $g(0) = g(1) = 0$. Further, suppose there exists a unique minimum point $t_0 \in (0,1)$ such that $g$ is decreasing on $[0, t_0]$ and increasing on $[t_0, 1]$ (or a unique maximum point $t_0 \in (0,1)$ such that $g$ is increasing on $[0, t_0]$ and decreasing on $[t_0, 1]$). Suppose that $X \in L^2$ and $l(U_X) \in L^2$. Then, we have
$$\int_{-\infty}^{+\infty} g(F_X(x))\, dx = -\int_0^1 F_X^{-1}(u)\, dg(u) = -\mathrm{Cov}\big(X, l(U_X)\big).$$
Further,
$$\left|\int_{-\infty}^{+\infty} g(F_X(x))\, dx\right| \le \sqrt{\mathrm{Var}(X)\,\mathrm{Var}(l(U_X))}.$$
Proof. 
Since $g(u)$ is almost everywhere differentiable on $[0,1]$ and $g(0) = g(1) = 0$, there exists a unique minimum (or maximum) point $t_0 \in (0,1)$. Hence, $g$ induces Lebesgue–Stieltjes measures on the Borel measurable spaces $([0, t_0], \mathcal{B}([0, t_0]))$ and $([t_0, 1], \mathcal{B}([t_0, 1]))$, respectively. Denote $x_0 = F_X^{-1}(t_0)$; for the case of a maximum point (the minimum case is analogous), we have
$$\begin{aligned}
\int_{-\infty}^{+\infty} g(F_X(x))\, dx &= \int_{-\infty}^{x_0} g(F_X(x))\, dx + \int_{x_0}^{+\infty} g(F_X(x))\, dx \\
&= \int_{-\infty}^{x_0} \int_{(0,\, F_X(x)]} dg(t)\, dx - \int_{x_0}^{+\infty} \int_{(F_X(x),\, 1]} dg(t)\, dx \\
&= -\int_0^{t_0} \big[F_X^{-1}(t) - x_0\big]\, dg(t) - \int_{t_0}^1 \big[F_X^{-1}(t) - x_0\big]\, dg(t) \\
&= -\int_0^1 F_X^{-1}(u)\, dg(u),
\end{aligned}$$
where we have used Fubini’s theorem in the third equality. Further, using Lemma 1, we obtain
$$-\int_0^1 F_X^{-1}(u)\, dg(u) = -\mathrm{Cov}\big(X, l(U_X)\big). \qquad \square$$
Remark 1.
Note that the function $g$ is of bounded variation, since $g$ admits the representation $g = g_1 + g_2$, where $g_1$ is increasing and $g_2$ is decreasing. A similar result can be found in Lemma 3 of [9]; however, our result differs from theirs: the integration interval and the integrand are different, with one integrand being a function of $F$ (i.e., $g(F)$) and the other a function of $\bar F$ (i.e., $g(\bar F)$). Hence, our result cannot be obtained from theirs.
Corollary 1.
Let $g$ be a concave function on $[0,1]$ with $l = g'$ (a.e.) and $g(0) = g(1) = 0$. Suppose that $X \in L^2$ and $l(U_X) \in L^2$. Then, we have
$$\int_{-\infty}^{+\infty} g(F_X(x))\, dx = -\int_0^1 F_X^{-1}(u)\, dg(u) = -\mathrm{Cov}\big(X, l(U_X)\big).$$
Hence,
$$\left|\int_{-\infty}^{+\infty} g(F_X(x))\, dx\right| \le \sqrt{\mathrm{Var}(X)\,\mathrm{Var}(l(U_X))}.$$

2.2. Weighted Entropy

Weighted entropy is an extension of the initial entropy: it is the initial entropy associated with a weight function. We have the corresponding theorem as follows.
Theorem 2.
Let $g$ be a continuous and almost everywhere differentiable function on $[0,1]$ with $l = g'$ (a.e.) and $g(0) = g(1) = 0$. Further, suppose there exists a unique minimum (or maximum) point $t_0 \in (0,1)$, and let $\Psi$ be an antiderivative of $\psi$, i.e., $\psi(t) = \Psi'(t)$. Suppose that $\Psi(X) \in L^2$ and $l(U_X) \in L^2$. Then, we have
$$\int_{-\infty}^{+\infty} \psi(x)\, g(F_X(x))\, dx = -\int_0^1 \Psi\big(F_X^{-1}(u)\big)\, dg(u) = -\mathrm{Cov}\big[\Psi(X), l(U_X)\big].$$
Further,
$$\left|\int_{-\infty}^{+\infty} \psi(x)\, g(F_X(x))\, dx\right| \le \sqrt{\mathrm{Var}(\Psi(X))\,\mathrm{Var}(l(U_X))}.$$
Proof. 
Similar to the proof of Theorem 1 (stated for the maximum case), we have
$$\begin{aligned}
\int_{-\infty}^{+\infty} \psi(x)\, g(F_X(x))\, dx &= \int_{-\infty}^{x_0} \psi(x)\, g(F_X(x))\, dx + \int_{x_0}^{+\infty} \psi(x)\, g(F_X(x))\, dx \\
&= \int_{-\infty}^{x_0} \psi(x) \int_{(0,\, F_X(x)]} dg(t)\, dx - \int_{x_0}^{+\infty} \psi(x) \int_{(F_X(x),\, 1]} dg(t)\, dx \\
&= \int_0^{t_0} \int_{F_X^{-1}(t)}^{x_0} \psi(x)\, dx\, dg(t) - \int_{t_0}^1 \int_{x_0}^{F_X^{-1}(t)} \psi(x)\, dx\, dg(t) \\
&= -\int_0^1 \Psi\big(F_X^{-1}(u)\big)\, dg(u),
\end{aligned}$$
where we have used Fubini’s theorem in the third equality.
Note that
$$-\int_0^1 \Psi\big(F_X^{-1}(u)\big)\, dg(u) = -\int_0^1 \Psi\big(F_X^{-1}(u)\big)\, l(u)\, du = -E\big[\Psi(X)\, l(U_X)\big],$$
and $E[l(U_X)] = \int_0^1 l(u)\, du = g(1) - g(0) = 0$.
Therefore, we obtain
$$-\int_0^1 \Psi\big(F_X^{-1}(u)\big)\, dg(u) = -\mathrm{Cov}\big[\Psi(X), l(U_X)\big],$$
ending the proof. □
Corollary 2.
Let $\psi(x) = x$ in Theorem 2; we have
$$\int_{-\infty}^{+\infty} x\, g(F_X(x))\, dx = -\frac{1}{2}\int_0^1 \big(F_X^{-1}(u)\big)^2\, dg(u) = -\frac{1}{2}\,\mathrm{Cov}\big(X^2, l(U_X)\big).$$
Further,
$$\left|\int_{-\infty}^{+\infty} x\, g(F_X(x))\, dx\right| \le \frac{1}{2}\sqrt{\mathrm{Var}(X^2)\,\mathrm{Var}(l(U_X))}.$$
Corollary 3.
Let $g$ be a concave function on $[0,1]$ with $l = g'$ (a.e.) and $g(0) = g(1) = 0$. Suppose that $X^2 \in L^2$ and $l(U_X) \in L^2$. Then, we have
$$\int_{-\infty}^{+\infty} x\, g(F_X(x))\, dx = -\frac{1}{2}\int_0^1 \big(F_X^{-1}(u)\big)^2\, dg(u) = -\frac{1}{2}\,\mathrm{Cov}\big(X^2, l(U_X)\big).$$
Further,
$$\left|\int_{-\infty}^{+\infty} x\, g(F_X(x))\, dx\right| \le \frac{1}{2}\sqrt{\mathrm{Var}(X^2)\,\mathrm{Var}(l(U_X))}.$$

2.3. Dynamic (Weighted) Entropy

Dynamic entropy is another generalization of the initial entropy, one that focuses on feasible choices of the range (upper tail or lower tail).
The survival function and the cdf of the random variable $X_t$ can be represented as
$$\bar F_{X_t}(x) = \begin{cases} \dfrac{\bar F_X(x)}{\bar F_X(t)}, & x > t, \\[4pt] 1, & \text{otherwise}, \end{cases} \qquad F_{X_t}(x) = \begin{cases} \dfrac{F_X(x) - F_X(t)}{1 - F_X(t)}, & x > t, \\[4pt] 0, & \text{otherwise}. \end{cases}$$
Therefore, for any $v \in (0,1)$,
$$F_{X_t}^{-1}(v) = \inf\left\{x \in \mathbb{R} : \frac{F_X(x) - F_X(t)}{1 - F_X(t)} \ge v\right\} = \inf\big\{x \in \mathbb{R} : F_X(x) \ge F_X(t) + (1 - F_X(t))\, v\big\} = F_X^{-1}\big(F_X(t) + (1 - F_X(t))\, v\big).$$
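For the exponential distribution, this tail quantile has a closed form that reflects the memoryless property: if $X \sim \mathrm{Exp}(\lambda)$, then $F_X^{-1}\big(F_X(t) + (1 - F_X(t))v\big) = t + F_X^{-1}(v)$, so the conditional quantile is the unconditional one shifted by $t$. A small Python check (our own illustration, with an arbitrary rate):

```python
import numpy as np

lam = 0.5                                   # rate of the exponential distribution (test value)
F = lambda x: 1.0 - np.exp(-lam * x)        # cdf
F_inv = lambda v: -np.log1p(-v) / lam       # quantile function

t = 2.0
v = np.linspace(0.01, 0.99, 99)

# Quantile of the tail distribution: F^{-1}(F(t) + (1 - F(t)) v)
tail_q = F_inv(F(t) + (1.0 - F(t)) * v)

# Memorylessness: identical to shifting the unconditional quantile by t
ok = np.allclose(tail_q, t + F_inv(v))
print(ok)                                   # True
```

The identity follows because $F_X(t) + (1 - F_X(t))v = 1 - e^{-\lambda t}(1 - v)$, so applying $F_X^{-1}$ gives $t - \log(1-v)/\lambda$.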
Theorem 3.
Let $g$ be a continuous and almost everywhere differentiable function on $[0,1]$ with $l = g'$ (a.e.) and $g(1) = g(0) = 0$. Further, suppose there is a unique minimum (or maximum) point $t_0 \in (0,1)$, and let $\Psi$ be an antiderivative of $\psi$, i.e., $\psi(t) = \Psi'(t)$. Suppose that $\Psi(X) \in L^2$ and $l(U_X) \in L^2$. Then, we have
$$\int_{-\infty}^{+\infty} \psi(x)\, g(F_{X_t}(x))\, dx = -\int_0^1 \Psi\big(F_X^{-1}(v)\big)\, dg_t(v) = -\mathrm{Cov}\left[\Psi(X),\; l\!\left(\frac{F_X(X) - F_X(t)}{1 - F_X(t)}\right) \Big|\; X > t\right],$$
where $g_t(v) = 0$ for $v \in [0, F_X(t))$, and $g_t(v) = g\!\left(\frac{v - F_X(t)}{1 - F_X(t)}\right)$ for $v \in [F_X(t), 1]$.
Proof. 
Using Equation (9) and the same argument as in Theorem 2, we can easily obtain Theorem 3. □
Corollary 4.
Let $\psi(x) = 1$ in Theorem 3; we have
$$\int_{-\infty}^{+\infty} g(F_{X_t}(x))\, dx = -\int_0^1 F_X^{-1}(v)\, dg_t(v) = -\mathrm{Cov}\left[X,\; l\!\left(\frac{F_X(X) - F_X(t)}{1 - F_X(t)}\right) \Big|\; X > t\right],$$
where $g_t(v) = 0$ for $v \in [0, F_X(t))$, and $g_t(v) = g\!\left(\frac{v - F_X(t)}{1 - F_X(t)}\right)$ for $v \in [F_X(t), 1]$.
Corollary 5.
Let $g$ be a concave function on $[0,1]$ with $l = g'$ (a.e.) and $g(0) = g(1) = 0$. Suppose that $X \in L^2$ and $l(U_X) \in L^2$. Then, we have
$$\int_{-\infty}^{+\infty} g(F_{X_t}(x))\, dx = -\int_0^1 F_X^{-1}(u)\, dg_t(u) = -\mathrm{Cov}\left[X,\; l\!\left(\frac{F_X(X) - F_X(t)}{1 - F_X(t)}\right) \Big|\; X > t\right],$$
where $g_t(v) = 0$ for $v \in [0, F_X(t))$, and $g_t(v) = g\!\left(\frac{v - F_X(t)}{1 - F_X(t)}\right)$ for $v \in [F_X(t), 1]$.
The distribution function of the random variable $X_{(t)} = [X \mid X \le t]$ can be written as
$$F_{X_{(t)}}(x) = \begin{cases} \dfrac{F_X(x)}{F_X(t)}, & x \le t, \\[4pt] 1, & \text{otherwise}. \end{cases}$$
Therefore, for any $s \in (0,1)$,
$$F_{X_{(t)}}^{-1}(s) = \inf\left\{x \in \mathbb{R} : \frac{F_X(x)}{F_X(t)} \ge s\right\} = \inf\big\{x \in \mathbb{R} : F_X(x) \ge F_X(t)\, s\big\} = F_X^{-1}\big(F_X(t)\, s\big).$$
Theorem 4.
Let $g$ be a continuous and almost everywhere differentiable function on $[0,1]$ with $l = g'$ (a.e.) and $g(1) = g(0) = 0$. Further, suppose there is a unique minimum (or maximum) point $t_0 \in (0,1)$, and let $\Psi$ be an antiderivative of $\psi$, i.e., $\psi(t) = \Psi'(t)$. Suppose that $\Psi(X) \in L^2$ and $l(U_X) \in L^2$. Then, we have
$$\int_{-\infty}^{+\infty} \psi(x)\, g(F_{X_{(t)}}(x))\, dx = -\int_0^1 \Psi\big(F_X^{-1}(v)\big)\, dg_{(t)}(v) = -\mathrm{Cov}\left[\Psi(X),\; l\!\left(\frac{F_X(X)}{F_X(t)}\right) \Big|\; X \le t\right],$$
where $g_{(t)}(v) = 0$ for $v \in (F_X(t), 1]$, and $g_{(t)}(v) = g\!\left(\frac{v}{F_X(t)}\right)$ for $v \in [0, F_X(t)]$.
Proof. 
Using Equation (13), Theorem 2 and the substitution $v = F_X(t)\, u$, we can obtain Theorem 4. □
Corollary 6.
Let $\psi(x) = 1$ in Theorem 4; we have
$$\int_{-\infty}^{+\infty} g(F_{X_{(t)}}(x))\, dx = -\int_0^1 F_X^{-1}(v)\, dg_{(t)}(v) = -\mathrm{Cov}\left[X,\; l\!\left(\frac{F_X(X)}{F_X(t)}\right) \Big|\; X \le t\right],$$
where $g_{(t)}(v) = 0$ for $v \in (F_X(t), 1]$, and $g_{(t)}(v) = g\!\left(\frac{v}{F_X(t)}\right)$ for $v \in [0, F_X(t)]$.
Corollary 7.
Let $g$ be a concave function on $[0,1]$ with $l = g'$ (a.e.) and $g(0) = g(1) = 0$. Suppose that $X \in L^2$ and $l(U_X) \in L^2$. Then, we have
$$\int_{-\infty}^{+\infty} g(F_{X_{(t)}}(x))\, dx = -\int_0^1 F_X^{-1}(u)\, dg_{(t)}(u) = -\mathrm{Cov}\left[X,\; l\!\left(\frac{F_X(X)}{F_X(t)}\right) \Big|\; X \le t\right],$$
where $g_{(t)}(v) = 0$ for $v \in (F_X(t), 1]$, and $g_{(t)}(v) = g\!\left(\frac{v}{F_X(t)}\right)$ for $v \in [0, F_X(t)]$.

2.4. Examples

Example 1.
Let $g(t) = \frac{1}{\alpha-1}(t - t^\alpha)$, $\alpha > 0$, $\alpha \neq 1$, $t \in [0,1]$, in Corollary 1, with Equation (6) denoted by $CT_\alpha(X)$; we have
$$CT_\alpha(X) = \int_{-\infty}^{+\infty} \frac{1}{\alpha-1}\big(F_X(x) - (F_X(x))^\alpha\big)\, dx = -\int_0^1 F_X^{-1}(u)\, dg(u) = \frac{\alpha}{\alpha-1}\,\mathrm{Cov}\big(X, (U_X)^{\alpha-1}\big).$$
Further,
$$CT_\alpha(X) \le \left|\frac{\alpha}{\alpha-1}\right|\sqrt{\mathrm{Var}(X)\,\mathrm{Var}\big((U_X)^{\alpha-1}\big)} = \sqrt{\frac{\mathrm{Var}(X)}{2\alpha-1}}, \quad \alpha > \frac{1}{2}.$$
In particular, when $X \in L^0_+$, the above measure is the cumulative Tsallis past entropy introduced by Calì et al. [10]. Note that when $\alpha \to 1$, it reduces to the cumulative entropy ($\mathcal{CE}(X)$), defined as (see [11])
$$\mathcal{CE}(X) = -\int_0^{+\infty} F_X(x)\log(F_X(x))\, dx = \int_0^1 F_X^{-1}(u)\, dh(u) = \mathrm{Cov}\big(X, \log(U_X)\big),$$
where $h(u) = u\log u$. Further,
$$\mathcal{CE}(X) \le \sqrt{\mathrm{Var}(X)}.$$
In particular, when $\alpha = 2$, $g(t) = t(1-t)$, and we obtain the Gini mean semi-difference, denoted by $\mathrm{Gini}(X)$ (for details, see [6,12]):
$$\mathrm{Gini}(X) = CT_2(X) = \int_{-\infty}^{+\infty} F_X(x)\,\bar F_X(x)\, dx = -\int_0^1 F_X^{-1}(u)\, dg(u) = 2\,\mathrm{Cov}(X, U_X).$$
Further,
$$\mathrm{Gini}(X) \le \sqrt{\frac{\mathrm{Var}(X)}{3}}.$$
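The identity $\mathrm{Gini}(X) = 2\,\mathrm{Cov}(X, U_X)$ and the bound $\sqrt{\mathrm{Var}(X)/3}$ can be checked directly for $X \sim U(0,1)$, where $U_X = X$: here $\int_0^1 u(1-u)\, du = 1/6$, $2\,\mathrm{Cov}(X, X) = 2/12 = 1/6$, and $\sqrt{(1/12)/3} = 1/6$, so the bound is attained. A quick numerical confirmation (our own sketch, not from the paper):

```python
import numpy as np

# X ~ U(0,1): F_X(x) = x on [0,1], and U_X = X
m = 100_000
x = (np.arange(m) + 0.5) / m            # midpoint grid approximating U(0,1)

gini_integral = np.mean(x * (1.0 - x))  # integral_0^1 F(x)(1 - F(x)) dx
gini_cov = 2.0 * np.cov(x, x)[0, 1]     # 2 Cov(X, U_X) on the same grid
bound = np.sqrt(np.var(x) / 3.0)        # sqrt(Var(X)/3)

print(gini_integral, gini_cov, bound)   # all approximately 1/6
```

The uniform distribution is thus an equality case of the variance bound, since $l(U_X) = 1 - 2U_X$ is then perfectly (negatively) correlated with $X$.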
Example 2.
Let $g(t) = \frac{1}{\alpha-1}\big[(1-t) - (1-t)^\alpha\big]$, $\alpha > 0$, $\alpha \neq 1$, $t \in [0,1]$, in Corollary 1, with Equation (6) denoted by $CRT_\alpha(X)$; we have
$$CRT_\alpha(X) = \int_{-\infty}^{+\infty} \frac{1}{\alpha-1}\big(\bar F_X(x) - (\bar F_X(x))^\alpha\big)\, dx = -\int_0^1 F_X^{-1}(u)\, dg(u) = -\frac{\alpha}{\alpha-1}\,\mathrm{Cov}\big(X, (1-U_X)^{\alpha-1}\big).$$
Further,
$$CRT_\alpha(X) \le \left|\frac{\alpha}{\alpha-1}\right|\sqrt{\mathrm{Var}(X)\,\mathrm{Var}\big((1-U_X)^{\alpha-1}\big)} = \sqrt{\frac{\mathrm{Var}(X)}{2\alpha-1}}, \quad \alpha > \frac{1}{2}.$$
In particular, when $X \in L^0_+$, the above measure is the cumulative residual Tsallis entropy of order α introduced by Rajesh and Sunoj [13]. Note that when $\alpha \to 1$, it reduces to the cumulative residual entropy ($\mathcal{E}(X)$), defined as (see [14])
$$\mathcal{E}(X) = -\int_0^{+\infty} \bar F_X(x)\log(\bar F_X(x))\, dx = -\int_0^1 F_X^{-1}(u)\, dh(u) = -\mathrm{Cov}\big(X, \log(1-U_X)\big),$$
where $h(u) = -(1-u)\log(1-u)$. Further,
$$\mathcal{E}(X) \le \sqrt{\mathrm{Var}(X)}.$$
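For the standard exponential distribution, the cumulative residual entropy admits a closed-form check: if $X \sim \mathrm{Exp}(1)$, then $\mathcal{E}(X) = \int_0^\infty x e^{-x}\, dx = 1$, and since $X = -\log(1 - U_X)$, the covariance side gives $-\mathrm{Cov}(X, \log(1-U_X)) = \mathrm{Var}(X) = 1$. A numerical confirmation (our own sketch, not from the paper):

```python
import numpy as np

# X ~ Exp(1): survival function is exp(-x), and X = -log(1 - U_X)
m = 2_000_000
x = (np.arange(m) + 0.5) * (60.0 / m)                # midpoint grid on [0, 60]
cre_integral = np.sum(x * np.exp(-x)) * (60.0 / m)   # -integral F_bar log F_bar dx = 1

# Covariance representation: -Cov(X, log(1 - U_X)) = Var(X) = 1
u = (np.arange(1_000_000) + 0.5) / 1_000_000
cre_cov = -np.cov(-np.log1p(-u), np.log1p(-u))[0, 1]

print(cre_integral, cre_cov)                         # both approximately 1
```

Truncating the defining integral at $x = 60$ and the quantile grid near $u = 1$ introduces only a negligible tail error here.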
Example 3.
Let $g(t) = (1-t)\big[-\log(1-t)\big]^\alpha$, $0 < \alpha \le 1$, $t \in [0,1]$, in Corollary 1, with Equation (6) denoted by $FE_\alpha(X)$, so that $l(t) = \big[-\log(1-t)\big]^{\alpha-1}\big[\alpha + \log(1-t)\big]$. Then, we have
$$FE_\alpha(X) = \int_{-\infty}^{+\infty} \bar F_X(x)\big[-\log \bar F_X(x)\big]^\alpha\, dx = -\int_0^1 F_X^{-1}(u)\, dg(u) = -\mathrm{Cov}\big(X, l(U_X)\big).$$
Further,
$$FE_\alpha(X) \le \alpha\sqrt{\mathrm{Var}(X)\, E\big[(-\log(1-U_X))^{2\alpha-2}\big]}.$$
In particular, when $X \in (0, c)$, the above measure is called the fractional cumulative residual entropy of $X$ by Xiong et al. [15].
Example 4.
Let $g(t) = (1-t)^{1/2} - (1-t)$ in Corollary 1, with Equation (6) denoted by $D(X)$; we have
$$D(X) = \int_{-\infty}^{+\infty} \Big[\big(\bar F_X(x)\big)^{1/2} - \bar F_X(x)\Big]\, dx = -\int_0^1 F_X^{-1}(u)\, dg(u).$$
In particular, when $X \in L^0_+$, the above measure is the right-tail deviation introduced by Wang [16].
Example 5.
Let $g(t) = -2\big[t + (1-t)^r - 1\big]$, $r > 1$, in Corollary 1, with Equation (6) denoted by $\mathrm{EGini}_r(X)$; we have
$$\mathrm{EGini}_r(X) = \int_{-\infty}^{+\infty} g(F_X(x))\, dx = -\int_0^1 F_X^{-1}(u)\, dg(u) = -2r\,\mathrm{Cov}\big(X, (1-U_X)^{r-1}\big).$$
Further,
$$\int_{-\infty}^{+\infty} g(F_X(x))\, dx \le \frac{2(r-1)}{\sqrt{2r-1}}\sqrt{\mathrm{Var}(X)}.$$
In particular, when $X \in L^0_+$, the above measure is the extended Gini coefficient (see [7]). As a special case, when $r = 2$, the extended Gini coefficient becomes the simple Gini (see [5]).
Example 6.
Let $g(t) = \frac{1}{\Gamma(\alpha+1)}(1-t)\big[-\log(1-t)\big]^\alpha$, $\alpha > 0$, $t \in [0,1]$, in Theorem 1, with Equation (5) denoted by $FGRE_\alpha(X)$, so that $l(t) = \frac{1}{\Gamma(\alpha+1)}\big[-\log(1-t)\big]^{\alpha-1}\big[\alpha + \log(1-t)\big]$. Then, we have
$$FGRE_\alpha(X) = \frac{1}{\Gamma(\alpha+1)}\int_{-\infty}^{+\infty} \bar F_X(x)\big[-\log \bar F_X(x)\big]^\alpha\, dx = -\int_0^1 F_X^{-1}(u)\, dg(u) = -\mathrm{Cov}\big(X, l(U_X)\big).$$
Further,
$$FGRE_\alpha(X) \le \frac{1}{\Gamma(\alpha)}\sqrt{\mathrm{Var}(X)\, E\big[(-\log(1-U_X))^{2\alpha-2}\big]}.$$
In particular, when $X \in (0, c)$, the above measure is called the fractional generalized cumulative residual entropy of $X$ by Di Crescenzo et al. [17].
In particular, if α is a positive integer, say $\alpha = n \in \mathbb{N}$, then $l(t) = \frac{1}{n!}\big[-\log(1-t)\big]^{n-1}\big[n + \log(1-t)\big]$, and $FGRE_\alpha(X)$ coincides with the generalized cumulative residual entropy ($GCRE_n(X)$) that has been introduced by Psarrakos and Navarro [18], i.e.,
$$GCRE_n(X) = \frac{1}{n!}\int_0^{+\infty} \bar F_X(x)\big[-\log \bar F_X(x)\big]^n\, dx = -\int_0^1 F_X^{-1}(u)\, dg(u) = -\mathrm{Cov}\big(X, l(U_X)\big).$$
Further,
$$GCRE_n(X) \le \frac{\sqrt{(2n-2)!}}{(n-1)!}\sqrt{\mathrm{Var}(X)}.$$
Example 7.
Let $g(t) = \frac{1}{\Gamma(\alpha+1)}\, t\big[-\log t\big]^\alpha$, $\alpha > 0$, $t \in [0,1]$, in Theorem 1, with Equation (5) denoted by $FGE_\alpha(X)$, so that $l(t) = -\frac{1}{\Gamma(\alpha+1)}\big[-\log t\big]^{\alpha-1}\big[\alpha + \log t\big]$. Then, we have
$$FGE_\alpha(X) = \frac{1}{\Gamma(\alpha+1)}\int_{-\infty}^{+\infty} F_X(x)\big[-\log F_X(x)\big]^\alpha\, dx = -\int_0^1 F_X^{-1}(u)\, dg(u) = -\mathrm{Cov}\big(X, l(U_X)\big).$$
Further,
$$FGE_\alpha(X) \le \frac{1}{\Gamma(\alpha)}\sqrt{\mathrm{Var}(X)\, E\big[(-\log(U_X))^{2\alpha-2}\big]}.$$
In particular, when $X \in (0, c)$, the above measure is called the fractional generalized cumulative entropy of $X$ by Di Crescenzo et al. [17].
In particular, if α is a positive integer, say $\alpha = n \in \mathbb{N}$, then $l(t) = -\frac{1}{n!}\big[-\log t\big]^{n-1}\big[n + \log t\big]$, and $FGE_\alpha(X)$ coincides with the generalized cumulative entropy ($GCE_n(X)$) that has been introduced by Kayal [19] (see also [20]), i.e.,
$$GCE_n(X) = \frac{1}{n!}\int_0^{+\infty} F_X(x)\big[-\log F_X(x)\big]^n\, dx = -\int_0^1 F_X^{-1}(u)\, dg(u) = -\mathrm{Cov}\big(X, l(U_X)\big).$$
Further,
$$GCE_n(X) \le \frac{\sqrt{(2n-2)!}}{(n-1)!}\sqrt{\mathrm{Var}(X)}.$$
Example 8.
Let $g(t) = \frac{1}{\alpha-1}(t - t^\alpha)$, $\alpha > 0$, $\alpha \neq 1$, $t \in [0,1]$, in Corollary 3, with Equation (8) denoted by $WCT_\alpha(X)$; we have
$$WCT_\alpha(X) = \int_{-\infty}^{+\infty} \frac{x}{\alpha-1}\big(F_X(x) - (F_X(x))^\alpha\big)\, dx = -\frac{1}{2}\int_0^1 \big(F_X^{-1}(u)\big)^2\, dg(u) = \frac{\alpha}{2(\alpha-1)}\,\mathrm{Cov}\big(X^2, (U_X)^{\alpha-1}\big).$$
Further,
$$WCT_\alpha(X) \le \left|\frac{\alpha}{2(\alpha-1)}\right|\sqrt{\mathrm{Var}(X^2)\,\mathrm{Var}\big((U_X)^{\alpha-1}\big)} = \frac{1}{2}\sqrt{\frac{\mathrm{Var}(X^2)}{2\alpha-1}}, \quad \alpha > \frac{1}{2}.$$
In particular, when $X \in L^0_+$, the above measure is the weighted cumulative Tsallis entropy of order α introduced by Chakraborty and Pradhan [21]. Note that when $\alpha \to 1$, it reduces to the weighted cumulative entropy ($\mathcal{CE}^w(X)$), defined as (see [22,23])
$$\mathcal{CE}^w(X) = -\int_0^{+\infty} x\, F_X(x)\log(F_X(x))\, dx = \frac{1}{2}\int_0^1 \big(F_X^{-1}(u)\big)^2\, dh(u) = \frac{1}{2}\,\mathrm{Cov}\big(X^2, \log(U_X)\big),$$
where $h(u) = u\log u$. Further,
$$\mathcal{CE}^w(X) \le \frac{1}{2}\sqrt{\mathrm{Var}(X^2)}.$$
In particular, when $\alpha = 2$, $g(t) = t(1-t)$, and we obtain
$$WCT_2(X) = \int_{-\infty}^{+\infty} x\, F_X(x)\,\bar F_X(x)\, dx = -\frac{1}{2}\int_0^1 \big(F_X^{-1}(u)\big)^2\, dg(u) = \mathrm{Cov}\big(X^2, U_X\big).$$
Further,
$$WCT_2(X) \le \frac{1}{2}\sqrt{\frac{\mathrm{Var}(X^2)}{3}}.$$
Example 9.
Let $g(t) = \frac{1}{\alpha-1}\big[(1-t) - (1-t)^\alpha\big]$, $\alpha > 0$, $\alpha \neq 1$, $t \in [0,1]$, in Corollary 3, with Equation (8) denoted by $WCRT_\alpha(X)$; we have
$$WCRT_\alpha(X) = \int_{-\infty}^{+\infty} \frac{x}{\alpha-1}\big(\bar F_X(x) - (\bar F_X(x))^\alpha\big)\, dx = -\frac{1}{2}\int_0^1 \big(F_X^{-1}(u)\big)^2\, dg(u) = -\frac{\alpha}{2(\alpha-1)}\,\mathrm{Cov}\big(X^2, (1-U_X)^{\alpha-1}\big).$$
Further,
$$WCRT_\alpha(X) \le \left|\frac{\alpha}{2(\alpha-1)}\right|\sqrt{\mathrm{Var}(X^2)\,\mathrm{Var}\big((1-U_X)^{\alpha-1}\big)} = \frac{1}{2}\sqrt{\frac{\mathrm{Var}(X^2)}{2\alpha-1}}, \quad \alpha > \frac{1}{2}.$$
In particular, when $X \in L^0_+$, the above measure is the weighted cumulative residual Tsallis entropy of order α introduced by Chakraborty and Pradhan [21]. Note that when $\alpha \to 1$, it reduces to the weighted cumulative residual entropy ($\mathcal{E}^w(X)$), defined as (see [23,24])
$$\mathcal{E}^w(X) = -\int_0^{+\infty} x\,\bar F_X(x)\log(\bar F_X(x))\, dx = -\frac{1}{2}\int_0^1 \big(F_X^{-1}(u)\big)^2\, dh(u) = -\frac{1}{2}\,\mathrm{Cov}\big(X^2, \log(1-U_X)\big),$$
where $h(u) = -(1-u)\log(1-u)$. Further,
$$\mathcal{E}^w(X) \le \frac{1}{2}\sqrt{\mathrm{Var}(X^2)}.$$
Example 10.
Let $g(t) = \frac{1}{n!}(1-t)\big[-\log(1-t)\big]^n$, $t \in [0,1]$, in Theorem 2, with Equation (7) denoted by $WGCRE_{n,\psi}(X)$, so that $l(t) = \frac{1}{n!}\big[-\log(1-t)\big]^{n-1}\big[n + \log(1-t)\big]$. Then, we have
$$WGCRE_{n,\psi}(X) = \frac{1}{n!}\int_{-\infty}^{+\infty} \psi(x)\,\bar F_X(x)\big[-\log \bar F_X(x)\big]^n\, dx = -\int_0^1 \Psi\big(F_X^{-1}(u)\big)\, dg(u) = -\mathrm{Cov}\big[\Psi(X), l(U_X)\big].$$
Hence,
$$WGCRE_{n,\psi}(X) \le \frac{\sqrt{(2n-2)!}}{(n-1)!}\sqrt{\mathrm{Var}(\Psi(X))}.$$
In particular, when $X \in L^0_+$, the above measure is the weighted generalized cumulative residual entropy introduced by Tahmasebi [25] (see also [26]). As a special case, when $\psi(x) = x$, $WGCRE_{n,\psi}(X)$ reduces to the shift-dependent GCRE of order $n$ ($WGCRE_n(X)$) defined by Kayal [27], i.e.,
$$WGCRE_n(X) = \frac{1}{n!}\int_0^{+\infty} x\,\bar F_X(x)\big[-\log \bar F_X(x)\big]^n\, dx = -\frac{1}{2}\int_0^1 \big(F_X^{-1}(u)\big)^2\, dg(u) = -\frac{1}{2}\,\mathrm{Cov}\big[X^2, l(U_X)\big].$$
Further,
$$WGCRE_n(X) \le \frac{\sqrt{(2n-2)!}}{2\,(n-1)!}\sqrt{\mathrm{Var}(X^2)}.$$
In particular, when $n = 1$, $g(t) = -(1-t)\log(1-t)$, and $WGCRE_{n,\psi}(X)$ reduces to the weighted cumulative residual entropy with weight function ψ ($WGCRE_\psi(X)$) defined by Suhov and Yasaei Sekeh [28], i.e.,
$$WGCRE_\psi(X) = -\int_0^{+\infty} \psi(x)\,\bar F_X(x)\log \bar F_X(x)\, dx = -\int_0^1 \Psi\big(F_X^{-1}(u)\big)\, dg(u) = -\mathrm{Cov}\big[\Psi(X), \log(1-U_X)\big].$$
Further,
$$WGCRE_\psi(X) \le \sqrt{\mathrm{Var}(\Psi(X))}.$$
They also define the weighted cumulative entropy with weight function ψ ($WGCE_\psi(X)$; in this case, $g(t) = -t\log t$):
$$WGCE_\psi(X) = -\int_0^{+\infty} \psi(x)\, F_X(x)\log F_X(x)\, dx = -\int_0^1 \Psi\big(F_X^{-1}(u)\big)\, dg(u) = \mathrm{Cov}\big[\Psi(X), \log(U_X)\big].$$
Further,
$$WGCE_\psi(X) \le \sqrt{\mathrm{Var}(\Psi(X))}.$$
Example 11.
Let $g(u) = -(1-u)\log(1-u)$ in Corollary 5, with Equation (12) denoted by $DE_t(X)$; we have
$$DE_t(X) = -\int_t^{+\infty} \frac{\bar F_X(x)}{\bar F_X(t)}\log\frac{\bar F_X(x)}{\bar F_X(t)}\, dx = -\int_0^1 F_X^{-1}(v)\, dg_t(v) = -\mathrm{Cov}\big[X, \log \bar F_X(X) \mid X > t\big],$$
where $g_t(v) = 0$ for $v \in [0, F_X(t))$, and $g_t(v) = -\frac{1-v}{1-F_X(t)}\log\frac{1-v}{1-F_X(t)}$ for $v \in [F_X(t), 1]$.
In particular, when $X \in L^0_+$, the above measure is the dynamic cumulative residual entropy defined by Asadi and Zohrevand [29].
Example 12.
Let $g(t) = (1-t)\big[-\log(1-t)\big]^\alpha$, $0 < \alpha \le 1$, $t \in [0,1]$, in Corollary 5, with Equation (12) denoted by $\mathrm{DFCRE}_{\alpha,t}(X)$, so that $l(t) = \big[-\log(1-t)\big]^{\alpha-1}\big[\alpha + \log(1-t)\big]$. Then, we have
$$\mathrm{DFCRE}_{\alpha,t}(X) = \int_t^{+\infty} \frac{\bar F_X(x)}{\bar F_X(t)}\left[-\log\frac{\bar F_X(x)}{\bar F_X(t)}\right]^\alpha dx = -\int_0^1 F_X^{-1}(u)\, dg_t(u) = -\mathrm{Cov}\left[X,\; l\!\left(\frac{F_X(X) - F_X(t)}{1 - F_X(t)}\right) \Big|\; X > t\right],$$
where $g_t(v) = 0$ for $v \in [0, F_X(t))$, and $g_t(v) = \frac{1-v}{1-F_X(t)}\left[-\log\frac{1-v}{1-F_X(t)}\right]^\alpha$ for $v \in [F_X(t), 1]$.
Example 13.
Let $g(t) = \frac{1}{\alpha-1}\big[(1-t) - (1-t)^\alpha\big]$, $\alpha > 0$, $\alpha \neq 1$, $t \in [0,1]$, in Corollary 5, with Equation (12) denoted by $\mathrm{DCRT}_{\alpha,t}(X)$; we have
$$\mathrm{DCRT}_{\alpha,t}(X) = \int_t^{+\infty} \frac{1}{\alpha-1}\left[\frac{\bar F_X(x)}{\bar F_X(t)} - \left(\frac{\bar F_X(x)}{\bar F_X(t)}\right)^{\!\alpha}\right] dx = -\int_0^1 F_X^{-1}(v)\, dg_t(v) = -\frac{\alpha}{\alpha-1}\,\mathrm{Cov}\left[X, \left(\frac{\bar F_X(X)}{\bar F_X(t)}\right)^{\!\alpha-1} \Big|\; X > t\right],$$
where $g_t(v) = 0$ for $v \in [0, F_X(t))$, and $g_t(v) = \frac{1}{\alpha-1}\left[\frac{1-v}{1-F_X(t)} - \left(\frac{1-v}{1-F_X(t)}\right)^{\!\alpha}\right]$ for $v \in [F_X(t), 1]$.
In particular, when $X \in L^0_+$, the above measure is the dynamic cumulative residual Tsallis entropy of order α introduced by Rajesh and Sunoj [13].
In particular, when $X \in L^0_+$ and $\alpha = 2$, we obtain the dynamic Gini mean semi-difference
$$\mathrm{DCRT}_{2,t}(X) = \int_t^{+\infty} \left[\frac{\bar F_X(x)}{\bar F_X(t)} - \left(\frac{\bar F_X(x)}{\bar F_X(t)}\right)^{\!2}\right] dx = -\int_0^1 F_X^{-1}(v)\, dg_t(v) = -2\,\mathrm{Cov}\left[X, \frac{\bar F_X(X)}{\bar F_X(t)} \Big|\; X > t\right],$$
where $g_t(v) = 0$ for $v \in [0, F_X(t))$, and $g_t(v) = \frac{1-v}{1-F_X(t)} - \left(\frac{1-v}{1-F_X(t)}\right)^{\!2}$ for $v \in [F_X(t), 1]$.
Example 14.
Let $g(t) = (1-t)^{1/2} - (1-t)$, $t \in [0,1]$, in Corollary 5; we have
$$\int_t^{+\infty} \left[\left(\frac{\bar F_X(x)}{\bar F_X(t)}\right)^{\!1/2} - \frac{\bar F_X(x)}{\bar F_X(t)}\right] dx = -\int_0^1 F_X^{-1}(u)\, dg_t(u) = \frac{1}{2}\,\mathrm{Cov}\left[X, \left(\frac{\bar F_X(X)}{\bar F_X(t)}\right)^{\!-1/2} \Big|\; X > t\right],$$
where $g_t(v) = 0$ for $v \in [0, F_X(t))$, and $g_t(v) = \left(\frac{1-v}{1-F_X(t)}\right)^{\!1/2} - \frac{1-v}{1-F_X(t)}$ for $v \in [F_X(t), 1]$.
Example 15.
Let $g(t) = -2\big[t + (1-t)^r - 1\big]$, $r > 1$, $t \in [0,1]$, in Corollary 5; we have
$$-2\int_t^{+\infty} \left[\frac{F_X(x) - F_X(t)}{\bar F_X(t)} + \left(\frac{\bar F_X(x)}{\bar F_X(t)}\right)^{\!r} - 1\right] dx = -\int_0^1 F_X^{-1}(u)\, dg_t(u) = -2r\,\mathrm{Cov}\left[X, \left(\frac{\bar F_X(X)}{\bar F_X(t)}\right)^{\!r-1} \Big|\; X > t\right],$$
where $g_t(v) = 0$ for $v \in [0, F_X(t))$, and $g_t(v) = -2\left[\frac{v - F_X(t)}{1 - F_X(t)} + \left(\frac{1-v}{1-F_X(t)}\right)^{\!r} - 1\right]$ for $v \in [F_X(t), 1]$.
Example 16.
Let $g(t) = \frac{1}{n!}(1-t)\big[-\log(1-t)\big]^n$, $n \in \{1, 2, \dots\}$, $t \in [0,1]$, in Corollary 4, with Equation (11) denoted by $\mathrm{DGCRE}_{n,t}(X)$, so that $l(t) = \frac{1}{n!}\big[-\log(1-t)\big]^{n-1}\big[n + \log(1-t)\big]$. Then, we have
$$\mathrm{DGCRE}_{n,t}(X) = \frac{1}{n!}\int_t^{+\infty} \frac{\bar F_X(x)}{\bar F_X(t)}\left[-\log\frac{\bar F_X(x)}{\bar F_X(t)}\right]^n dx = -\int_0^1 F_X^{-1}(u)\, dg_t(u) = -\mathrm{Cov}\left[X,\; l\!\left(\frac{F_X(X) - F_X(t)}{1 - F_X(t)}\right) \Big|\; X > t\right],$$
where $g_t(v) = 0$ for $v \in [0, F_X(t))$, and $g_t(v) = \frac{1}{n!}\,\frac{1-v}{1-F_X(t)}\left[-\log\frac{1-v}{1-F_X(t)}\right]^n$ for $v \in [F_X(t), 1]$.
In particular, when $X \in L^0_+$, the above measure is the dynamic generalized cumulative residual entropy introduced by Psarrakos and Navarro [18].
Example 17.
Let $g(t) = \frac{1}{n!}(1-t)\big[-\log(1-t)\big]^n$, $t \in [0,1]$, in Theorem 3, with Equation (10) denoted by $\mathrm{DWGCRE}_{n,\psi,t}(X)$, so that $l(t) = \frac{1}{n!}\big[-\log(1-t)\big]^{n-1}\big[n + \log(1-t)\big]$. Then, we have
$$\mathrm{DWGCRE}_{n,\psi,t}(X) = \frac{1}{n!}\int_t^{+\infty} \psi(x)\,\frac{\bar F_X(x)}{\bar F_X(t)}\left[-\log\frac{\bar F_X(x)}{\bar F_X(t)}\right]^n dx = -\int_0^1 \Psi\big(F_X^{-1}(u)\big)\, dg_t(u) = -\mathrm{Cov}\left[\Psi(X),\; l\!\left(\frac{F_X(X) - F_X(t)}{1 - F_X(t)}\right) \Big|\; X > t\right],$$
where $g_t(v) = 0$ for $v \in [0, F_X(t))$, and $g_t(v) = \frac{1}{n!}\,\frac{1-v}{1-F_X(t)}\left[-\log\frac{1-v}{1-F_X(t)}\right]^n$ for $v \in [F_X(t), 1]$.
In particular, when $X \in L^0_+$, the above measure is the dynamic WGCRE introduced by Tahmasebi [25].
Particularly, when $n = 1$ and $\psi(x) = x$, $\mathrm{DWGCRE}_{n,\psi,t}(X)$ reduces to the dynamic weighted cumulative residual entropy ($\mathrm{DWCRE}_t(X)$), defined as
$$\mathrm{DWCRE}_t(X) = -\int_t^{+\infty} x\,\frac{\bar F_X(x)}{\bar F_X(t)}\log\frac{\bar F_X(x)}{\bar F_X(t)}\, dx = -\frac{1}{2}\int_0^1 \big(F_X^{-1}(u)\big)^2\, dg_t(u) = -\frac{1}{2}\,\mathrm{Cov}\big[X^2, \log \bar F_X(X) \mid X > t\big],$$
where $g_t(v) = 0$ for $v \in [0, F_X(t))$, and $g_t(v) = -\frac{1-v}{1-F_X(t)}\log\frac{1-v}{1-F_X(t)}$ for $v \in [F_X(t), 1]$. As a special case, when $X \in L^0_+$, the above measure is introduced by Miralia and Baratpour [30]. They also defined $\mathrm{DWGCRE}_{n,\psi,t}(X)$ with $n = 1$, i.e.,
$$\mathrm{DWGCRE}_{\psi,t}(X) = -\int_t^{+\infty} \psi(x)\,\frac{\bar F_X(x)}{\bar F_X(t)}\log\frac{\bar F_X(x)}{\bar F_X(t)}\, dx = -\int_0^1 \Psi\big(F_X^{-1}(u)\big)\, dg_t(u) = -\mathrm{Cov}\big[\Psi(X), \log \bar F_X(X) \mid X > t\big],$$
where $g_t(v) = 0$ for $v \in [0, F_X(t))$, and $g_t(v) = -\frac{1-v}{1-F_X(t)}\log\frac{1-v}{1-F_X(t)}$ for $v \in [F_X(t), 1]$.
Example 18.
Let $g(t) = \frac{1}{\alpha-1}(t - t^\alpha)$, $\alpha > 0$, $\alpha \neq 1$, $t \in [0,1]$, in Corollary 7, with Equation (16) denoted by $\mathrm{DCT}_{\alpha,(t)}(X)$; we have
$$\mathrm{DCT}_{\alpha,(t)}(X) = \int_{-\infty}^{t} \frac{1}{\alpha-1}\left[\frac{F_X(x)}{F_X(t)} - \left(\frac{F_X(x)}{F_X(t)}\right)^{\!\alpha}\right] dx = -\int_0^1 F_X^{-1}(u)\, dg_{(t)}(u) = \frac{\alpha}{\alpha-1}\,\mathrm{Cov}\left[X, \left(\frac{F_X(X)}{F_X(t)}\right)^{\!\alpha-1} \Big|\; X \le t\right],$$
where $g_{(t)}(v) = 0$ for $v \in (F_X(t), 1]$, and $g_{(t)}(v) = \frac{1}{\alpha-1}\left[\frac{v}{F_X(t)} - \left(\frac{v}{F_X(t)}\right)^{\!\alpha}\right]$ for $v \in [0, F_X(t)]$.
In particular, when $X \in L^0_+$, the above measure is a generalization of the dynamic cumulative Tsallis entropy introduced by Calì et al. [10]. Note that when $\alpha \to 1$, it reduces to (a generalization of) the cumulative past entropy, defined as (see, e.g., [31])
$$\mathcal{CE}_{(t)}(X) = -\int_0^{t} \frac{F_X(x)}{F_X(t)}\log\frac{F_X(x)}{F_X(t)}\, dx = \int_0^1 F_X^{-1}(v)\, dh_{(t)}(v) = \mathrm{Cov}\big[X, \log F_X(X) \mid X \le t\big],$$
where $h_{(t)}(v) = 0$ for $v \in (F_X(t), 1]$, and $h_{(t)}(v) = \frac{v}{F_X(t)}\log\frac{v}{F_X(t)}$ for $v \in [0, F_X(t)]$.
Example 19.
Let $g(t) = \frac{1}{n!}\, t\big[-\log t\big]^n$, $n \in \{1, 2, \dots\}$, $t \in [0,1]$, in Corollary 6, with Equation (15) denoted by $\mathrm{DGCE}_{n,(t)}(X)$, so that $l(t) = -\frac{1}{n!}\big[-\log t\big]^{n-1}\big[n + \log t\big]$. Then, we have
$$\mathrm{DGCE}_{n,(t)}(X) = \frac{1}{n!}\int_{-\infty}^{t} \frac{F_X(x)}{F_X(t)}\left[-\log\frac{F_X(x)}{F_X(t)}\right]^n dx = -\int_0^1 F_X^{-1}(u)\, dg_{(t)}(u) = -\mathrm{Cov}\left[X,\; l\!\left(\frac{F_X(X)}{F_X(t)}\right) \Big|\; X \le t\right],$$
where $g_{(t)}(v) = 0$ for $v \in (F_X(t), 1]$, and $g_{(t)}(v) = \frac{1}{n!}\,\frac{v}{F_X(t)}\left[-\log\frac{v}{F_X(t)}\right]^n$ for $v \in [0, F_X(t)]$.
In particular, when $X \in L^0_+$, the above measure is the dynamic generalized cumulative entropy introduced by Kayal [19].
Example 20.
Let $g(t) = \frac{1}{n!}\, t\big[-\log t\big]^n$, $n \in \{1, 2, \dots\}$, $t \in [0,1]$, in Theorem 4, with Equation (14) denoted by $\mathrm{DGCE}_{n,\psi,(t)}(X)$, so that $l(t) = -\frac{1}{n!}\big[-\log t\big]^{n-1}\big[n + \log t\big]$. Then, we have
$$\mathrm{DGCE}_{n,\psi,(t)}(X) = \frac{1}{n!}\int_{-\infty}^{t} \psi(x)\,\frac{F_X(x)}{F_X(t)}\left[-\log\frac{F_X(x)}{F_X(t)}\right]^n dx = -\int_0^1 \Psi\big(F_X^{-1}(u)\big)\, dg_{(t)}(u) = -\mathrm{Cov}\left[\Psi(X),\; l\!\left(\frac{F_X(X)}{F_X(t)}\right) \Big|\; X \le t\right],$$
where $g_{(t)}(v) = 0$ for $v \in (F_X(t), 1]$, and $g_{(t)}(v) = \frac{1}{n!}\,\frac{v}{F_X(t)}\left[-\log\frac{v}{F_X(t)}\right]^n$ for $v \in [0, F_X(t)]$.
In particular, when $n = 1$ and $X \in L^0_+$, $\mathrm{DGCE}_{n,\psi,(t)}(X)$ reduces to (see [30])
$$\mathrm{DGCE}_{\psi,(t)}(X) = -\int_0^{t} \psi(x)\,\frac{F_X(x)}{F_X(t)}\log\frac{F_X(x)}{F_X(t)}\, dx = \int_0^1 \Psi\big(F_X^{-1}(u)\big)\, dh_{(t)}(u) = \mathrm{Cov}\big[\Psi(X), \log F_X(X) \mid X \le t\big],$$
where $h_{(t)}(v) = 0$ for $v \in (F_X(t), 1]$, and $h_{(t)}(v) = \frac{v}{F_X(t)}\log\frac{v}{F_X(t)}$ for $v \in [0, F_X(t)]$.

2.5. Discussion

Note that the above entropy risk measures satisfy (B1) standardization by their covariance representations. For any $c \in \mathbb{R}$, using $F_{X+c}^{-1}(u) = F_X^{-1}(u) + c$, we see that the initial and simple dynamic entropy risk measures satisfy (B2) location invariance, whereas the weighted entropy risk measures do not. Therefore, the initial and simple dynamic entropy risk measures are measures of variability.
For any $\lambda \in \mathbb{R}_+$, using $F_{\lambda X}^{-1}(u) = \lambda\, F_X^{-1}(u)$, we see that the initial and simple dynamic entropy risk measures also satisfy (A3) positive homogeneity.
When $g : [0,1] \to \mathbb{R}$ is of finite variation with $g(1) = g(0) = 0$, the signed Choquet integral is defined by
$$I(X) = \int_{-\infty}^{+\infty} g(F_X(x))\, dx, \quad \text{for all } X \in \mathcal{X}.$$
When $g$ is right-continuous, Equation (20) can be expressed as
$$I(X) = -\int_0^1 F_X^{-1}(s)\, dg(s).$$
Furthermore, when $g$ is absolutely continuous, with $dg(s) = l(s)\, ds$, Equation (21) can be expressed as
$$I(X) = -\int_0^1 F_X^{-1}(s)\, l(s)\, ds.$$
From Equation (21), since comonotonic random variables $V$ and $W$ satisfy $F_{V+W}^{-1} = F_V^{-1} + F_W^{-1}$, the signed Choquet integral satisfies the comonotonic additivity property ([32]). Thus, the initial and simple dynamic entropy risk measures are comonotonically additive measures of variability.
The functional $I(\cdot)$ defined by Equation (20) is subadditive if and only if $g$ is concave; equivalently, by Equation (21), $I(X) = \int_0^1 F_X^{-1}(s)\, d(-g)(s)$ is subadditive if and only if $-g$ is convex (e.g., [33,34]). Hence, the initial entropy risk measures shown in Examples 1–5 and Equations (17)–(19), whose $g$ are concave, satisfy (A4) subadditivity. Therefore, Examples 1–5 and Equations (17)–(19) are comonotonically additive and coherent measures of variability.
These initial entropy risk measures can be applied to the predictability of the failure time of a system (see [11,14]). The weighted entropy risk measures are shift-dependent measures of uncertainty, and can be applied to some practical situations of reliability and neurobiology (see [35,36]). The dynamic entropy risk measures can be used to capture effects of the age t of an individual or an item under study on the information about the residual lifetime (see [29]).
The initial, weighted and dynamic entropy risk measures are closely related, as shown in Figure 1:
From a risk measure point of view, the initial entropy risk measures can capture the variability of a financial position as a whole. The dynamic entropy risk measures can depict the variability of a financial position focused on feasible choices of the ranges (upper tail or lower tail).
In finance and risk management, Markowitz's mean–variance framework plays a vital role in modern portfolio theory. Since the initial entropy and simple dynamic entropy risk measures are measures of variability, either can replace variance in this framework. The initial entropy risk measures capture the ordinary (overall) risk of a financial position, such as a firm's ordinary business and the shareholders' interests, and are therefore favored by investors. The dynamic entropy risk measures depict the tails of risks (extreme events), aiming to reduce or avoid the impact of extreme events, and are favored by regulators and decision makers (see [37]). For example, we give $\mathrm{CRTES}_{\alpha,p}^{\lambda}(X)$ for different distributions in Section 5, and use the R software to compute $\mathrm{CRTES}_{\alpha,p}^{\lambda}(X)$ for $p\in[0.9,1)$, as shown in Figures 2–8. When $\alpha\to1$, $\mathrm{CRTES}_{\alpha,p}^{\lambda}(X)$ reduces to the corresponding measure based on $\mathrm{TCRE}_p(X)$ given by Hu and Chen [6], so the difference between our results and theirs can be observed in Figures 2–8. Other potential applications of these entropy risk measures remain to be explored in future work.

3. Tail-Based Entropies

Let $t=x_p$ in Example 13; we obtain the tail-based cumulative residual Tsallis entropy of order $\alpha$:
$$\mathrm{TCRTE}_{\alpha,p}(X)=\frac{1}{\alpha-1}\int_{x_p}^{+\infty}\left[\frac{\bar F_X(x)}{1-p}-\left(\frac{\bar F_X(x)}{1-p}\right)^{\alpha}\right]\mathrm{d}x=\int_0^1 F_X^{-1}(s)\,\mathrm{d}g_p(s)=-\frac{\alpha}{\alpha-1}\operatorname{Cov}\left(X,\left(\frac{\bar F_X(X)}{1-p}\right)^{\alpha-1}\Bigg|\,X>x_p\right), \qquad (23)$$
where $g_p(s)=0$ for $s\in[0,p)$, and $g_p(s)=\frac{1}{\alpha-1}\left[\left(\frac{1-s}{1-p}\right)^{\alpha}-\frac{1-s}{1-p}\right]$ for $s\in[p,1]$.
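The quantile-integral and covariance representations in (23) can be cross-checked numerically (our own Python sketch; the paper's computations use R, and the choices $X\sim N(0,1)$, $\alpha=3/2$, $p=0.9$ are ours). Conditionally on $X>x_p$, $U=F_X(X)$ is uniform on $(p,1)$, so both sides reduce to integrals over $[p,1]$:

```python
from statistics import NormalDist

inv_cdf = NormalDist().inv_cdf
alpha, p, n = 1.5, 0.9, 200_000
h = (1.0 - p) / n
s = [p + (i + 0.5) * h for i in range(n)]        # midpoint grid on [p, 1]
q = [(1.0 - si) / (1.0 - p) for si in s]         # survival ratio (1-s)/(1-p)
x = [inv_cdf(si) for si in s]                    # F^{-1}(s)

# (i) quantile representation: integral of F^{-1}(s) g_p'(s) over [p, 1]
w = [(1.0 - alpha * qi ** (alpha - 1.0)) / ((alpha - 1.0) * (1.0 - p)) for qi in q]
tcrte_quantile = h * sum(xi * wi for xi, wi in zip(x, w))

# (ii) covariance representation, with U | U > p uniform on (p, 1):
#      TCRTE = -alpha/(alpha-1) * Cov(F^{-1}(U), ((1-U)/(1-p))^{alpha-1} | U > p)
g = [qi ** (alpha - 1.0) for qi in q]
mx, mg = sum(x) / n, sum(g) / n
cov = sum(xi * gi for xi, gi in zip(x, g)) / n - mx * mg
tcrte_cov = -alpha / (alpha - 1.0) * cov

print(tcrte_quantile, tcrte_cov)
```

Both values agree and are positive, as a variability measure should be.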
Let $t=x_p$ in Example 12; in this case, $l(t)=[-\log(1-t)]^{\alpha-1}\,[\alpha+\log(1-t)]$, $0<\alpha\le1$, and we obtain the tail-based fractional cumulative residual entropy:
$$\mathrm{TFCRE}_{\alpha,p}(X)=\int_{x_p}^{+\infty}\frac{\bar F_X(x)}{1-p}\left[-\log\frac{\bar F_X(x)}{1-p}\right]^{\alpha}\mathrm{d}x=\int_0^1 F_X^{-1}(s)\,\mathrm{d}g_p(s)=\operatorname{Cov}\left(X,\,l\!\left(\frac{F_X(X)-p}{1-p}\right)\Bigg|\,X>x_p\right), \qquad (24)$$
where $g_p(s)=0$ for $s\in[0,p)$, and $g_p(s)=-\frac{1-s}{1-p}\left[-\log\frac{1-s}{1-p}\right]^{\alpha}$ for $s\in[p,1]$.
Remark 2.
Let $\alpha=1$ in (24) (or $\alpha\to1$ in (23)); we obtain the tail-based cumulative residual entropy ($\mathrm{TCRE}_p(X)$) given by Hu and Chen [6]:
$$\mathrm{TCRE}_p(X)=-\int_{x_p}^{+\infty}\frac{\bar F_X(x)}{1-p}\log\frac{\bar F_X(x)}{1-p}\,\mathrm{d}x=\int_0^1 F_X^{-1}(s)\,\mathrm{d}h_{x_p}(s)=\operatorname{Cov}\left(X,\,-\log\bar F_X(X)\mid X>x_p\right),$$
where $h_{x_p}(s)=0$ for $s\in[0,p)$, and $h_{x_p}(s)=\frac{1-s}{1-p}\log\frac{1-s}{1-p}$ for $s\in[p,1]$.
Let $t=x_p$ in Example 14; we obtain the tail-based right-tail deviation:
$$\mathrm{TRTD}_p(X)=\int_{x_p}^{+\infty}\left[\left(\frac{\bar F_X(x)}{1-p}\right)^{1/2}-\frac{\bar F_X(x)}{1-p}\right]\mathrm{d}x=\int_0^1 F_X^{-1}(s)\,\mathrm{d}g_p(s)=\frac{1}{2}\operatorname{Cov}\left(X,\left(\frac{\bar F_X(X)}{1-p}\right)^{-1/2}\Bigg|\,X>x_p\right), \qquad (25)$$
where $g_p(s)=0$ for $s\in[0,p)$, and $g_p(s)=\frac{1-s}{1-p}-\left(\frac{1-s}{1-p}\right)^{1/2}$ for $s\in[p,1]$.
Remark 3.
Let $\alpha=\tfrac12$ in (23); we observe that
$$\mathrm{TRTD}_p(X)=\tfrac{1}{2}\,\mathrm{TCRTE}_{\frac12,p}(X).$$
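Both the quantile and covariance forms of $\mathrm{TRTD}_p$ — and hence, by Remark 3, $\tfrac12\mathrm{TCRTE}_{\frac12,p}$ — can be approximated numerically (our own Python sketch; $X\sim N(0,1)$ and $p=0.9$ are illustrative choices; the exponent $-1/2$ makes the integrand mildly singular near $s=1$, so the two quadratures agree only to a few decimals):

```python
from statistics import NormalDist

inv_cdf = NormalDist().inv_cdf
p, n = 0.9, 200_000
h = (1.0 - p) / n
s = [p + (i + 0.5) * h for i in range(n)]
q = [(1.0 - si) / (1.0 - p) for si in s]
x = [inv_cdf(si) for si in s]

# quantile form: weight g_p'(s), with g_p(s) = (1-s)/(1-p) - ((1-s)/(1-p))^{1/2}
w = [(0.5 * qi ** -0.5 - 1.0) / (1.0 - p) for qi in q]
trtd_quantile = h * sum(xi * wi for xi, wi in zip(x, w))

# covariance form: TRTD = (1/2) Cov(X, (Fbar(X)/(1-p))^{-1/2} | X > x_p)
g = [qi ** -0.5 for qi in q]
mx, mg = sum(x) / n, sum(g) / n
trtd_cov = 0.5 * (sum(xi * gi for xi, gi in zip(x, g)) / n - mx * mg)
print(trtd_quantile, trtd_cov)
```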
Let $t=x_p$ in Example 15; we obtain the tail-based extended Gini coefficient (see [7]):
$$\mathrm{TEGini}_{r,p}(X)=-2\int_{x_p}^{+\infty}\left[\frac{F_X(x)-p}{1-p}+\left(\frac{\bar F_X(x)}{1-p}\right)^{r}-1\right]\mathrm{d}x=\int_0^1 F_X^{-1}(s)\,\mathrm{d}g_p(s)=-2r\operatorname{Cov}\left(X,\left(\frac{\bar F_X(X)}{1-p}\right)^{r-1}\Bigg|\,X>x_p\right), \qquad (26)$$
where $g_p(s)=0$ for $s\in[0,p)$, and $g_p(s)=2\left[\frac{s-p}{1-p}+\left(\frac{1-s}{1-p}\right)^{r}-1\right]$ for $s\in[p,1]$.
Remark 4.
Let $r=2$ in (26); we obtain the tail-Gini (see [5]):
$$\mathrm{TGini}_p(X)=2\int_{x_p}^{+\infty}\left[\frac{\bar F_X(x)}{1-p}-\left(\frac{\bar F_X(x)}{1-p}\right)^{2}\right]\mathrm{d}x=\int_0^1 F_X^{-1}(s)\,\mathrm{d}g_p(s)=-4\operatorname{Cov}\left(X,\,\frac{\bar F_X(X)}{1-p}\,\Bigg|\,X>x_p\right),$$
where $g_p(s)=0$ for $s\in[0,p)$, and $g_p(s)=2\left[\frac{s-1}{1-p}+\left(\frac{1-s}{1-p}\right)^{2}\right]$ for $s\in[p,1]$.

4. Shortfalls of Entropy

We now introduce two entropy shortfall risk measures, defined as linear combinations of $\mathrm{ES}_p$ with $\mathrm{TCRTE}_{\alpha,p}$ and with $\mathrm{TRTD}_p$, respectively:
$$\mathrm{CRTES}_{\alpha,p}^{\lambda}(X)=\mathrm{ES}_p(X)+\lambda\,\mathrm{TCRTE}_{\alpha,p}(X),$$
$$\mathrm{RTDS}_p^{\lambda}(X)=\mathrm{ES}_p(X)+\lambda\,\mathrm{TRTD}_p(X),$$
where $p\in[0,1)$ is a confidence level and $\lambda\ge0$ is a loading parameter.
Theorem 5.
Assume that $p\in(0,1)$, $\lambda\in[0,\infty)$ and the convex cone $\mathcal{X}=L^2$.
(1) 
$\mathrm{CRTES}_{\alpha,p}^{\lambda}$ can be represented as
$$\mathrm{CRTES}_{\alpha,p}^{\lambda}(X)=\int_0^1 F_X^{-1}(v)\,\mathrm{d}g_{p,\lambda}(v) \qquad (27)$$
$$\phantom{\mathrm{CRTES}_{\alpha,p}^{\lambda}(X)}=\int_0^1 F_X^{-1}(v)\,\varpi_{p,\lambda}(v)\,\mathrm{d}v, \qquad (28)$$
where, for $\alpha>0$, $\alpha\ne1$,
$$g_{p,\lambda}(v)=\left\{\frac{v-p}{1-p}+\frac{\lambda}{\alpha-1}\left[\left(\frac{1-v}{1-p}\right)^{\alpha}-\frac{1-v}{1-p}\right]\right\}\mathbf{1}_{[p,1]}(v),$$
$$\varpi_{p,\lambda}(v)=\left\{\frac{1}{1-p}+\frac{\lambda}{\alpha-1}\left[\frac{1}{1-p}-\frac{\alpha}{1-p}\left(\frac{1-v}{1-p}\right)^{\alpha-1}\right]\right\}\mathbf{1}_{[p,1]}(v).$$
(2) 
$\mathrm{CRTES}_{\alpha,p}^{\lambda}$ satisfies translation invariance, positive homogeneity and comonotonic additivity.
(3) 
The following statements are equivalent: (i) $\mathrm{CRTES}_{\alpha,p}^{\lambda}$ is monotone; (ii) $\mathrm{CRTES}_{\alpha,p}^{\lambda}$ is sub-additive; (iii) $\mathrm{CRTES}_{\alpha,p}^{\lambda}$ preserves the increasing convex order; (iv) $\mathrm{CRTES}_{\alpha,p}^{\lambda}$ is a coherent risk measure; (v) $\lambda\in[0,1]$.
Proof. 
(1)
Using representations (1) and (23) of $\mathrm{ES}_p(X)$ and $\mathrm{TCRTE}_{\alpha,p}(X)$, we immediately obtain (27) and (28).
(2)
From (23), the positive homogeneity and comonotonic additivity of $\mathrm{CRTES}_{\alpha,p}^{\lambda}$ are obtained. Further, since
$$\mathrm{ES}_p(X+c)=\mathrm{ES}_p(X)+c,\qquad \mathrm{TCRTE}_{\alpha,p}(X+c)=\mathrm{TCRTE}_{\alpha,p}(X),\qquad c\in\mathbb{R},$$
the translation invariance of $\mathrm{CRTES}_{\alpha,p}^{\lambda}$ follows.
(3)
Note that $\varpi_{p,\lambda}(v)=0$ for $v\in[0,p)$ and that $\varpi_{p,\lambda}$ is increasing on $[p,1]$ with $\varpi_{p,\lambda}(p)=\frac{1-\lambda}{1-p}$. Hence, $\varpi_{p,\lambda}$ is non-negative on $[0,1]$ if and only if $\lambda\in[0,1]$, and $\varpi_{p,\lambda}$ is non-decreasing on $[0,1]$ if and only if $\lambda\in[0,1]$. Using Lemma 4.2 of [5], one obtains that $\mathrm{CRTES}_{\alpha,p}^{\lambda}$ is monotone if and only if $\varpi_{p,\lambda}(v)\ge0$ for all $v\in[0,1]$, and that $\mathrm{CRTES}_{\alpha,p}^{\lambda}$ is sub-additive if and only if $\varpi_{p,\lambda}(v)$ is increasing in $v\in[0,1]$. Therefore, $(i)\Leftrightarrow(ii)\Leftrightarrow(v)$.
Next, by Theorem 2.1 of [38] and (28), if $\varpi_{p,\lambda}(v)$ is increasing in $v\in[0,1]$, then $(iii)$ follows; that is, $(ii)\Rightarrow(iii)$. Moreover, since the almost sure order implies the increasing convex order, $(iii)\Rightarrow(i)$.
Furthermore, by Corollary 4.65 of [3], a law-invariant coherent risk measure preserves the increasing convex order. This shows that $(iv)\Rightarrow(iii)$, and hence $(iv)\Rightarrow(v)$. Conversely, since $\mathrm{CRTES}_{\alpha,p}^{\lambda}$ satisfies translation invariance and positive homogeneity, and $(v)$ implies $(i)$ and $(ii)$, we have $(v)\Rightarrow(iv)$. Hence, $(iv)\Leftrightarrow(v)$. □
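The boundary $\lambda=1$ in part (3) can be seen directly from the weight function $\varpi_{p,\lambda}$: its minimum on $[p,1]$ is attained at $v=p$ and equals $(1-\lambda)/(1-p)$. A small Python check of this (our own illustration; $p=0.9$ and $\alpha=3/2$ are arbitrary choices):

```python
def varpi(v, p=0.9, lam=1.0, alpha=1.5):
    # weight function of CRTES from Theorem 5 (zero below p)
    if v < p:
        return 0.0
    q = (1.0 - v) / (1.0 - p)
    return (1.0 / (1.0 - p)
            + lam / ((alpha - 1.0) * (1.0 - p)) * (1.0 - alpha * q ** (alpha - 1.0)))

p = 0.9
grid = [i / 10_000 for i in range(10_000)]
results = {}
for lam in (0.5, 1.0, 1.2):
    vals = [varpi(v, p, lam) for v in grid]
    nonneg = all(w >= 0.0 for w in vals)
    nondec = all(b >= a for a, b in zip(vals, vals[1:]))
    results[lam] = nonneg and nondec      # coherence criterion of Theorem 5
print(results)   # {0.5: True, 1.0: True, 1.2: False}
```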
Remark 5.
Let $\alpha=2$ in Theorem 5; we obtain the tail-Gini shortfall (see [6,39]), which differs from the results in [5]. In addition, letting $\alpha\to1$ in Theorem 5, we obtain the CRE shortfall in [6].
Theorem 6.
Assume that $p\in(0,1)$, $\lambda\in[0,\infty)$ and the convex cone $\mathcal{X}=L^2$.
(1) 
$\mathrm{RTDS}_p^{\lambda}$ can be represented as
$$\mathrm{RTDS}_p^{\lambda}(X)=\int_0^1 F_X^{-1}(v)\,\mathrm{d}g_{p,\lambda}(v)=\int_0^1 F_X^{-1}(v)\,\varpi_{p,\lambda}(v)\,\mathrm{d}v,$$
where
$$g_{p,\lambda}(v)=\left\{\frac{v-p}{1-p}+\lambda\left[\frac{1-v}{1-p}-\left(\frac{1-v}{1-p}\right)^{1/2}\right]\right\}\mathbf{1}_{[p,1]}(v),$$
$$\varpi_{p,\lambda}(v)=\left\{\frac{1}{1-p}+\frac{\lambda}{1-p}\left[\frac{1}{2}\left(\frac{1-v}{1-p}\right)^{-1/2}-1\right]\right\}\mathbf{1}_{[p,1]}(v).$$
(2) 
$\mathrm{RTDS}_p^{\lambda}$ satisfies translation invariance, positive homogeneity and comonotonic additivity.
(3) 
The following statements are equivalent: (i) $\mathrm{RTDS}_p^{\lambda}$ is monotone; (ii) $\mathrm{RTDS}_p^{\lambda}$ is sub-additive; (iii) $\mathrm{RTDS}_p^{\lambda}$ preserves the increasing convex order; (iv) $\mathrm{RTDS}_p^{\lambda}$ is a coherent risk measure; (v) $\lambda\in[0,2]$.
Proof. 
Let α = 1 2 in Theorem 5; combining with (25), we obtain the desired results. □
Theorem 7.
Assume that $U\sim U(0,1)$ and $X\in L^2$ are independent. Then,
$$\mathrm{TCRTE}_{\alpha,p}(X)=\alpha\,\mathbb{E}\left[\left(\frac{1-U}{1-p}\right)^{\alpha-1}\mathrm{ES}_U(X)\,\Bigg|\,U>p\right]-\mathrm{ES}_p(X). \qquad (29)$$
Furthermore,
$$\mathrm{CRTES}_{\alpha,p}^{\lambda}(X)=\lambda\alpha\,\mathbb{E}\left[\left(\frac{1-U}{1-p}\right)^{\alpha-1}\mathrm{ES}_U(X)\,\Bigg|\,U>p\right]+(1-\lambda)\,\mathrm{ES}_p(X). \qquad (30)$$
Proof. 
By (23), we have
$$\begin{aligned}\mathrm{TCRTE}_{\alpha,p}(X)&=-\frac{\alpha}{\alpha-1}\operatorname{Cov}\left(F_X^{-1}(U),\left(\frac{1-U}{1-p}\right)^{\alpha-1}\Bigg|\,U>p\right)\\&=-\frac{\alpha}{\alpha-1}\left\{\frac{1}{1-p}\int_p^1 F_X^{-1}(u)\left(\frac{1-u}{1-p}\right)^{\alpha-1}\mathrm{d}u-\mathrm{ES}_p(X)\,\mathbb{E}\left[\left(\frac{1-U}{1-p}\right)^{\alpha-1}\Bigg|\,U>p\right]\right\}\\&=-\frac{\alpha}{(\alpha-1)(1-p)}\int_p^1 F_X^{-1}(u)\left(\frac{1-u}{1-p}\right)^{\alpha-1}\mathrm{d}u+\frac{1}{\alpha-1}\,\mathrm{ES}_p(X),\end{aligned}$$
where the last equality follows from the relation
$$\mathbb{E}\left[\left(\frac{1-U}{1-p}\right)^{\alpha-1}\Bigg|\,U>p\right]=\frac{1}{\alpha}. \qquad (31)$$
Note that
$$\begin{aligned}-\frac{\alpha}{(\alpha-1)(1-p)}\int_p^1 F_X^{-1}(u)\left(\frac{1-u}{1-p}\right)^{\alpha-1}\mathrm{d}u&=-\frac{\alpha}{(\alpha-1)(1-p)^{\alpha}}\int_p^1 F_X^{-1}(u)\int_u^1(\alpha-1)(1-v)^{\alpha-2}\,\mathrm{d}v\,\mathrm{d}u\\&=-\frac{\alpha}{(1-p)^{\alpha}}\int_p^1\int_p^v F_X^{-1}(u)\,\mathrm{d}u\,(1-v)^{\alpha-2}\,\mathrm{d}v\\&=-\frac{\alpha}{(1-p)^{\alpha}}\int_p^1\big[(1-p)\,\mathrm{ES}_p(X)-(1-v)\,\mathrm{ES}_v(X)\big](1-v)^{\alpha-2}\,\mathrm{d}v\\&=-\frac{\alpha}{\alpha-1}\,\mathrm{ES}_p(X)+\alpha\,\mathbb{E}\left[\left(\frac{1-U}{1-p}\right)^{\alpha-1}\mathrm{ES}_U(X)\,\Bigg|\,U>p\right];\end{aligned}$$
combining this with the previous display, we obtain (29), and therefore (30), ending the proof. □
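Identity (29) can be verified numerically for a concrete case (our own Python check, not from the paper; we take $X\sim N(0,1)$, $\alpha=3/2$, $p=0.9$, and use the standard normal fact $\mathrm{ES}_u(X)=\phi(\Phi^{-1}(u))/(1-u)$):

```python
from statistics import NormalDist

nd = NormalDist()
alpha, p, n = 1.5, 0.9, 200_000
h = (1.0 - p) / n
s = [p + (i + 0.5) * h for i in range(n)]       # midpoint grid on [p, 1]
q = [(1.0 - si) / (1.0 - p) for si in s]

def es(u):
    # ES_u for the standard normal: phi(Phi^{-1}(u)) / (1 - u)
    return nd.pdf(nd.inv_cdf(u)) / (1.0 - u)

# left side of (29): TCRTE via its quantile representation
lhs = h * sum(nd.inv_cdf(si) * (1.0 - alpha * qi ** (alpha - 1.0))
              / ((alpha - 1.0) * (1.0 - p)) for si, qi in zip(s, q))

# right side of (29): alpha * E[((1-U)/(1-p))^{alpha-1} ES_U | U > p] - ES_p
rhs = alpha * h / (1.0 - p) * sum(qi ** (alpha - 1.0) * es(si)
                                  for si, qi in zip(s, q)) - es(p)
print(lhs, rhs)
```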
Theorem 8.
Assume that $U\sim U(0,1)$ and $X\in L^2$ are independent. Then,
$$\mathrm{TRTD}_p(X)=\frac{1}{4}\,\mathbb{E}\left[\left(\frac{1-U}{1-p}\right)^{-1/2}\mathrm{ES}_U(X)\,\Bigg|\,U>p\right]-\frac{1}{2}\,\mathrm{ES}_p(X). \qquad (32)$$
Furthermore,
$$\mathrm{RTDS}_p^{\lambda}(X)=\frac{\lambda}{4}\,\mathbb{E}\left[\left(\frac{1-U}{1-p}\right)^{-1/2}\mathrm{ES}_U(X)\,\Bigg|\,U>p\right]+\left(1-\frac{\lambda}{2}\right)\mathrm{ES}_p(X). \qquad (33)$$
Proof. 
Let α = 1 2 in Theorem 7; combining with (25), we obtain the desired results. □

5. CRTES α , p λ for Some Distributions

5.1. Elliptical Distributions

Consider an elliptical random variable $X\sim E_1(\mu,\sigma^2,g_1)$. If the probability density function (pdf) of $X$ exists, it takes the form (see [40])
$$f_X(v)=\frac{c_1}{\sigma}\,g_1\!\left(\frac{1}{2}\left(\frac{v-\mu}{\sigma}\right)^{2}\right),\quad v\in\mathbb{R},$$
where $\mu$ and $\sigma>0$ are location and scale parameters, respectively, and $g_1(t)$, $t\ge0$, is the density generator of $X$. The density generator $g_1$ satisfies the condition
$$\int_0^{\infty}s^{-1/2}g_1(s)\,\mathrm{d}s<\infty,$$
and the normalizing constant $c_1$ is given by
$$c_1=\left(\sqrt{2}\int_0^{\infty}s^{-1/2}g_1(s)\,\mathrm{d}s\right)^{-1}.$$
The cumulative generator $\bar G_1(u)$ and normalizing constant $c_1^*$ are, respectively, defined as
$$\bar G_1(u)=\int_u^{\infty}g_1(v)\,\mathrm{d}v$$
and
$$c_1^*=\left(\sqrt{2}\int_0^{\infty}s^{-1/2}\bar G_1(s)\,\mathrm{d}s\right)^{-1}<\infty.$$
Landsman and Valdez [40] proved that
$$\mathrm{ES}_p(X)=\mu+\frac{c_1\sigma\,\bar G_1\!\left(\frac{1}{2}w_p^{2}\right)}{1-p},$$
where $w_p=\frac{x_p-\mu}{\sigma}$.
Now, several important cases, including normal, Student-t, logistic and Laplace distributions, are given as follows.
Example 21.
(Normal distribution) Let $X\sim N_1(\mu,\sigma^2)$. In this case, the density generators are
$$g_1(s)=\bar G_1(s)=\exp\{-s\},$$
and the normalizing constants are given by
$$c_1=c_1^*=(2\pi)^{-1/2}.$$
Then,
$$\mathrm{ES}_p(X)=\mu+\frac{(2\pi)^{-1/2}\sigma\exp\left(-\frac{1}{2}w_p^{2}\right)}{1-p}, \qquad (34)$$
where $w_p=\frac{x_p-\mu}{\sigma}$.
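Formula (34) can be checked against the defining integral $\mathrm{ES}_p(X)=\frac{1}{1-p}\int_p^1 F_X^{-1}(u)\,\mathrm{d}u$ (our own Python cross-check; $p=0.95$ is an arbitrary choice, and the paper's figures are produced with R):

```python
from math import exp, pi, sqrt
from statistics import NormalDist

nd = NormalDist()
p = 0.95
w = nd.inv_cdf(p)                        # x_p for N(0,1), i.e. mu = 0, sigma = 1

# closed form (34): ES_p = mu + sigma * (2*pi)^{-1/2} * exp(-w_p^2/2) / (1-p)
es_closed = exp(-0.5 * w * w) / (sqrt(2.0 * pi) * (1.0 - p))

# definition: ES_p = (1/(1-p)) * integral_p^1 F^{-1}(u) du (midpoint rule)
n = 200_000
h = (1.0 - p) / n
es_num = h / (1.0 - p) * sum(nd.inv_cdf(p + (i + 0.5) * h) for i in range(n))
print(es_closed, es_num)
```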
For the convenience of simulation, let $X\sim N_1(0,1)$; by Equations (29), (30) and (34), we can use the R software to compute $\mathrm{TCRTE}_{\alpha,p}(X)$ and $\mathrm{CRTES}_{\alpha,p}^{\lambda}(X)$ for $p\in[0.9,1)$, and the results are shown in Figure 2.
From Figure 2a, we find that when $\alpha$ is fixed, $\mathrm{TCRTE}_{\alpha,p}$ is decreasing in $p$. Moreover, when $p$ is fixed, $\mathrm{TCRTE}_{\alpha,p}$ is also decreasing in $\alpha$. As we can see in Figure 2b, when $\alpha$ is fixed, $\mathrm{CRTES}_{\alpha,p}^{\lambda}$ is increasing in $p$, while $\mathrm{CRTES}_{\alpha,p}^{\lambda}$ is decreasing in $\alpha$ when $p$ is fixed. In Figure 2c, we observe that when $\lambda$ is fixed, $\mathrm{CRTES}_{\alpha,p}^{\lambda}$ is increasing in $p$. Moreover, when $p$ is fixed, $\mathrm{CRTES}_{\alpha,p}^{\lambda}$ is also increasing in $\lambda$.
Example 22.
(Student-$t$ distribution). Let $X\sim St_1(\mu,\sigma^2,m)$. In this case, the density generators are
$$g_1(s)=\left(1+\frac{2s}{m}\right)^{-(m+1)/2},\qquad \bar G_1(s)=\frac{m}{m-1}\left(1+\frac{2s}{m}\right)^{-(m-1)/2}.$$
The normalizing constants are given by
$$c_1=\frac{\Gamma\left(\frac{m+1}{2}\right)}{\Gamma\left(\frac{m}{2}\right)(m\pi)^{1/2}},\qquad c_1^*=\frac{m-1}{\sqrt{2}\,m}\left(\int_0^{\infty}s^{-1/2}\left(1+\frac{2s}{m}\right)^{-(m-1)/2}\mathrm{d}s\right)^{-1}=\frac{m-1}{m^{3/2}\,B\!\left(\frac{1}{2},\frac{m-2}{2}\right)},\quad\text{if }m>2,$$
where $\Gamma(\cdot)$ and $B(\cdot,\cdot)$ denote the gamma and beta functions, respectively. Then,
$$\mathrm{ES}_p(X)=\mu+\frac{\sigma m}{(1-p)(m-1)}\cdot\frac{\Gamma\left(\frac{m+1}{2}\right)}{\Gamma\left(\frac{m}{2}\right)(m\pi)^{1/2}}\left(1+\frac{w_p^{2}}{m}\right)^{-(m-1)/2}, \qquad (35)$$
where $w_p=\frac{x_p-\mu}{\sigma}$.
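Formula (35) can be cross-checked directly from the Student-$t$ density (our own Python sketch; $\mu=0$, $\sigma=1$, $m=4$ and the candidate quantile $t=2$ are illustrative choices, with $p=F(t)$ computed numerically rather than fixed in advance):

```python
from math import gamma, pi, sqrt

m = 4                                            # degrees of freedom
c1 = gamma((m + 1) / 2) / (gamma(m / 2) * sqrt(m * pi))
f = lambda x: c1 * (1.0 + x * x / m) ** (-(m + 1) / 2)   # St(0,1,m) density

t = 2.0                                          # candidate quantile x_p
n = 400_000
# integrate f and x*f over (t, inf) via the substitution x = 1/u, u in (0, 1/t)
h = (1.0 / t) / n
sf = sxf = 0.0
for i in range(n):
    u = (i + 0.5) * h
    x = 1.0 / u
    d = f(x) / (u * u)                           # f(1/u) * |dx/du|
    sf += d
    sxf += x * d
surv = sf * h                                    # P(X > t) = 1 - p
es_num = (sxf * h) / surv

# closed form (35) with mu = 0, sigma = 1 and 1 - p = surv:
es_closed = m / (surv * (m - 1)) * c1 * (1.0 + t * t / m) ** (-(m - 1) / 2)
print(es_num, es_closed)
```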
Let X S t 1 0 , 1 , m ; by Equations (29), (30) and (35), we can use the R software to compute TCRTE α , p ( X ) and CRTES α , p λ ( X ) for p [ 0.9 , 1 ) , and the results are shown in Figure 3.
From Figure 3a, we find that the degree of freedom $m$ has a great impact on $\mathrm{TCRTE}_{\alpha,p}$: $\mathrm{TCRTE}_{\alpha,p}$ is decreasing in $m$. When $m$ is small, $\mathrm{TCRTE}_{\alpha,p}$ is increasing in $p$, whereas it becomes decreasing in $p$ once $m$ exceeds a threshold. From Figure 3b, we find that when $\alpha$ is fixed, $\mathrm{TCRTE}_{\alpha,p}$ is increasing in $p$; however, when $p$ is fixed, $\mathrm{TCRTE}_{\alpha,p}$ is decreasing in $\alpha$. In Figure 3c, we observe that when $\alpha$ is fixed, $\mathrm{CRTES}_{\alpha,p}^{\lambda}$ is increasing in $p$; however, when $p$ is fixed, $\mathrm{CRTES}_{\alpha,p}^{\lambda}$ is decreasing in $\alpha$. From Figure 3d, we find that when $\lambda$ is fixed, $\mathrm{CRTES}_{\alpha,p}^{\lambda}$ is increasing in $p$; moreover, when $p$ is fixed, $\mathrm{CRTES}_{\alpha,p}^{\lambda}$ is also increasing in $\lambda$.
Example 23.
(Logistic distribution). Let $X\sim Lo_1(\mu,\sigma^2)$. In this case, the density generators are
$$g_1(s)=\frac{\exp\left(-\sqrt{2s}\right)}{\left[1+\exp\left(-\sqrt{2s}\right)\right]^{2}},\qquad \bar G_1(s)=\sqrt{2s}\,\frac{\exp\left(-\sqrt{2s}\right)}{1+\exp\left(-\sqrt{2s}\right)}+\log\left[1+\exp\left(-\sqrt{2s}\right)\right].$$
The normalizing constants are given by
$$c_1=1,\qquad c_1^*=\left(\sqrt{2}\int_0^{\infty}s^{-1/2}\left\{\sqrt{2s}\,\frac{\exp\left(-\sqrt{2s}\right)}{1+\exp\left(-\sqrt{2s}\right)}+\log\left[1+\exp\left(-\sqrt{2s}\right)\right]\right\}\mathrm{d}s\right)^{-1}.$$
Then,
$$\mathrm{ES}_p(X)=\mu+\frac{\sigma}{1-p}\left[w_p\,\frac{\exp(-w_p)}{1+\exp(-w_p)}+\log\left(1+\exp(-w_p)\right)\right], \qquad (36)$$
where $w_p=\frac{x_p-\mu}{\sigma}$.
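Formula (36) can be cross-checked against the defining integral using the logistic quantile function $F^{-1}(u)=\log\frac{u}{1-u}$ (our own Python sketch; $\mu=0$, $\sigma=1$ in the parametrization above and $p=0.95$ are illustrative assumptions):

```python
from math import exp, log

p = 0.95
w = log(p / (1.0 - p))                # x_p for the standard logistic Lo(0, 1)

# closed form (36):
es_closed = (w * exp(-w) / (1.0 + exp(-w)) + log(1.0 + exp(-w))) / (1.0 - p)

# definition via the quantile function F^{-1}(u) = log(u / (1-u))
n = 200_000
h = (1.0 - p) / n
es_num = h / (1.0 - p) * sum(log((p + (i + 0.5) * h) / (1.0 - p - (i + 0.5) * h))
                             for i in range(n))
print(es_closed, es_num)
```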
Let X L o 1 0 , 1 ; by Equations (29), (30) and (36), we can use the R software to compute TCRTE α , p ( X ) and CRTES α , p λ ( X ) for p [ 0.9 , 1 ) , and the results are shown in Figure 4.
It is seen from Figure 4a that $\alpha$ has little impact on the values of $\mathrm{TCRTE}_{\alpha,p}$; for fixed $p$, $\mathrm{TCRTE}_{\alpha,p}$ is decreasing in $\alpha$. From Figure 4b, we observe that when $\alpha$ is fixed, $\mathrm{CRTES}_{\alpha,p}^{\lambda}$ is increasing in $p$; however, when $p$ is fixed, $\mathrm{CRTES}_{\alpha,p}^{\lambda}$ is decreasing in $\alpha$. In Figure 4c, we find that when $\lambda$ is fixed, $\mathrm{CRTES}_{\alpha,p}^{\lambda}$ is increasing in $p$; moreover, when $p$ is fixed, $\mathrm{CRTES}_{\alpha,p}^{\lambda}$ is also increasing in $\lambda$.
Example 24.
(Laplace distribution). Let $X\sim La_1(\mu,\sigma^2)$. In this case, the density generators are
$$g_1(s)=\exp\left(-\sqrt{2s}\right)$$
and
$$\bar G_1(s)=\left(1+\sqrt{2s}\right)\exp\left(-\sqrt{2s}\right).$$
The corresponding normalizing constants are given by
$$c_1=\frac{1}{2},\qquad c_1^*=\frac{1}{4}.$$
Then,
$$\mathrm{ES}_p(X)=\mu+\frac{\sigma\left(1+|w_p|\right)\exp\left(-|w_p|\right)}{2(1-p)}, \qquad (37)$$
where $w_p=\frac{x_p-\mu}{\sigma}$.
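Formula (37) admits a simple cross-check for the standard Laplace distribution, whose upper-tail quantile is $x_p=-\log(2(1-p))$ for $p\ge\tfrac12$ (our own Python sketch with $\mu=0$, $\sigma=1$, $p=0.95$; note that $e^{-x_p}=2(1-p)$, so (37) collapses to $\mathrm{ES}_p(X)=1+x_p$):

```python
from math import exp, log

p = 0.95
x_p = -log(2.0 * (1.0 - p))           # upper-tail quantile of the standard Laplace

# closed form (37) with mu = 0, sigma = 1 (here e^{-x_p} = 2(1-p)):
es_closed = (1.0 + x_p) * exp(-x_p) / (2.0 * (1.0 - p))

# definition via the quantile function F^{-1}(u) = -log(2(1-u)), u >= 1/2
n = 200_000
h = (1.0 - p) / n
es_num = h / (1.0 - p) * sum(-log(2.0 * (1.0 - p - (i + 0.5) * h))
                             for i in range(n))
print(es_closed, es_num)
```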
Let X L a 1 0 , 1 ; by Equations (29), (30) and (37), we can use the R software to compute TCRTE α , p ( X ) and CRTES α , p λ ( X ) for p [ 0.9 , 1 ) , and the results are shown in Figure 5.
It is seen from Figure 5a that $p$ has almost no impact on $\mathrm{TCRTE}_{\alpha,p}$: for fixed $\alpha$, $\mathrm{TCRTE}_{\alpha,p}$ is almost constant in $p$. However, $\alpha$ has a great impact: for fixed $p$, $\mathrm{TCRTE}_{\alpha,p}$ is decreasing in $\alpha$. In Figure 5b, we observe that when $\alpha$ is fixed, $\mathrm{CRTES}_{\alpha,p}^{\lambda}$ is increasing in $p$; however, when $p$ is fixed, $\mathrm{CRTES}_{\alpha,p}^{\lambda}$ is decreasing in $\alpha$. From Figure 5c, we find that when $\lambda$ is fixed, $\mathrm{CRTES}_{\alpha,p}^{\lambda}$ is increasing in $p$; moreover, when $p$ is fixed, $\mathrm{CRTES}_{\alpha,p}^{\lambda}$ is also increasing in $\lambda$.

5.2. Inverse Gaussian, Gamma and Beta Distributions

Example 25.
(Inverse Gaussian distribution) An inverse Gaussian random variable $X\sim IG(\beta,\mu)$, with parameters $\beta>0$ and $\mu>0$, has probability density function (pdf)
$$f(x;\beta,\mu)=\sqrt{\frac{\beta}{2\pi x^{3}}}\exp\left\{-\frac{\beta(x-\mu)^{2}}{2\mu^{2}x}\right\},\quad x>0.$$
From Example 4.3 of Landsman and Valdez [41], we can obtain
$$\mathrm{ES}_p(X)=\mu+\frac{\mu}{\beta(1-p)}\left[e^{2\beta/\mu}\left(2\beta\,\Phi(-b_p)+\sqrt{\beta x_p}\,\phi(b_p)\right)-\sqrt{\beta x_p}\,\phi(a_p)\right], \qquad (38)$$
where $a_p=\sqrt{\frac{\beta}{x_p}}\,\frac{x_p-\mu}{\mu}$, $b_p=\sqrt{\frac{\beta}{x_p}}\,\frac{x_p+\mu}{\mu}$, $\phi$ and $\Phi$ denote the standard normal pdf and cdf, and $x_p$ is the $p$th quantile of $X$. Since $b_p^2-a_p^2=4\beta/\mu$ implies $e^{2\beta/\mu}\phi(b_p)=\phi(a_p)$, Equation (38) simplifies to $\mathrm{ES}_p(X)=\mu+\frac{2\mu\,e^{2\beta/\mu}\,\Phi(-b_p)}{1-p}$.
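Equation (38), as reconstructed here, is equivalent to $\mathrm{ES}_p(X)=\mu\left[\Phi(-a_p)+e^{2\beta/\mu}\Phi(-b_p)\right]/(1-p)$, using $1-p=\Phi(-a_p)-e^{2\beta/\mu}\Phi(-b_p)$ from the inverse Gaussian cdf. We cross-check this against direct numerical integration of the density (our own Python sketch; $\beta=10$, $\mu=1$ and the candidate quantile $t=1.5$ are illustrative choices):

```python
from math import exp, pi, sqrt
from statistics import NormalDist

Phi = NormalDist().cdf
beta, mu = 10.0, 1.0
t = 1.5                                   # candidate quantile x_p

a = sqrt(beta / t) * (t - mu) / mu
b = sqrt(beta / t) * (t + mu) / mu
surv = Phi(-a) - exp(2.0 * beta / mu) * Phi(-b)       # P(X > t) from the IG cdf
es_closed = mu * (Phi(-a) + exp(2.0 * beta / mu) * Phi(-b)) / surv

# direct numerical check from the density: ES = E[X 1{X > t}] / P(X > t)
def f(x):
    return sqrt(beta / (2.0 * pi * x ** 3)) * exp(-beta * (x - mu) ** 2
                                                  / (2.0 * mu ** 2 * x))

n, upper = 400_000, 60.0                  # tail beyond `upper` is negligible here
h = (upper - t) / n
sf = sxf = 0.0
for i in range(n):
    x = t + (i + 0.5) * h
    d = f(x)
    sf += d
    sxf += x * d
es_num = sxf / sf
print(es_closed, es_num)
```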
Let X I G 10 , 1 ; by Equations (29), (30) and (38), we can use the R software to compute TCRTE α , p ( X ) and CRTES α , p λ ( X ) for p [ 0.9 , 1 ) , and the results are shown in Figure 6.
Example 26.
(Gamma distribution) A random variable $X\sim\Gamma(\beta,\gamma)$, with parameters $\beta>0$ and $\gamma>0$, follows a gamma distribution if its pdf is
$$f(x;\beta,\gamma)=\frac{x^{\beta-1}}{\Gamma(\beta)}\exp\{-\gamma x+\beta\log\gamma\},\quad x>0.$$
Landsman and Valdez [41] showed that
$$\mathrm{ES}_p(X)=\frac{\beta\,\bar F(x_p;\beta+1,\gamma)}{\gamma(1-p)}, \qquad (39)$$
where $\bar F(x_p;\beta+1,\gamma)=1-F(x_p;\beta+1,\gamma)$ is the tail distribution function of $Y\sim\Gamma(\beta+1,\gamma)$, and $x_p$ is the $p$th quantile of $X$.
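Equation (39) rests on the identity $x\,f(x;\beta,\gamma)=(\beta/\gamma)\,f(x;\beta+1,\gamma)$, and can be checked numerically (our own Python sketch; the shape $\beta=5$, rate $\gamma=1$ and candidate quantile $t=8$ are illustrative; for integer shape the gamma survival function is the Poisson partial sum $e^{-t}\sum_{j<k}t^j/j!$):

```python
from math import exp, gamma as Gamma

beta, rate = 5, 1.0
def f(x, k):
    return rate ** k * x ** (k - 1) * exp(-rate * x) / Gamma(k)

def surv(t, k):
    # survival of Gamma(k, rate = 1) at t for integer k: e^{-t} sum_{j<k} t^j / j!
    term, s = 1.0, 0.0
    for j in range(k):
        s += term
        term *= t / (j + 1)
    return exp(-t) * s

t = 8.0                                          # candidate quantile x_p
one_minus_p = surv(t, beta)
es_closed = beta * surv(t, beta + 1) / (rate * one_minus_p)   # Equation (39)

# direct check against the definition, truncating the far tail
n, upper = 400_000, 80.0
h = (upper - t) / n
es_num = h * sum((t + (i + 0.5) * h) * f(t + (i + 0.5) * h, beta)
                 for i in range(n)) / one_minus_p
print(es_closed, es_num)
```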
Let X Γ 5 , 1 ; by Equations (29), (30) and (39), we can use the R software to compute TCRTE α , p ( X ) and CRTES α , p λ ( X ) for p [ 0.9 , 1 ) , and the results are shown in Figure 7.
Example 27.
(Beta distribution) A random variable $X\sim B(\beta,\gamma)$, with parameters $\beta>0$ and $\gamma>0$, follows a beta distribution if its pdf is
$$f(x;\beta,\gamma)=\frac{\Gamma(\beta+\gamma)}{\Gamma(\beta)\Gamma(\gamma)}\,x^{\beta-1}(1-x)^{\gamma-1},\quad 0<x<1.$$
We can obtain
$$\mathrm{ES}_p(X)=\frac{\beta\,\bar F(x_p;\beta+1,\gamma)}{(\beta+\gamma)(1-p)}, \qquad (40)$$
where $\bar F(x_p;\beta+1,\gamma)=1-F(x_p;\beta+1,\gamma)$ is the tail distribution function of $Y\sim B(\beta+1,\gamma)$, and $x_p$ is the $p$th quantile of $X$.
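Equation (40) follows from the identity $x\,f(x;\beta,\gamma)=\frac{\beta}{\beta+\gamma}\,f(x;\beta+1,\gamma)$ and can be checked numerically over the finite support (our own Python sketch; $\beta=5$, $\gamma=2$ and candidate quantile $t=0.8$ are illustrative choices):

```python
from math import gamma as G

b, c = 5.0, 2.0
def f(x, b, c):
    # Beta(b, c) density
    return G(b + c) / (G(b) * G(c)) * x ** (b - 1) * (1.0 - x) ** (c - 1)

t = 0.8                                          # candidate quantile x_p
n = 200_000
h = (1.0 - t) / n
xs = [t + (i + 0.5) * h for i in range(n)]

surv_b  = h * sum(f(x, b, c) for x in xs)        # P(X > t) for B(5, 2)
surv_b1 = h * sum(f(x, b + 1.0, c) for x in xs)  # survival of B(6, 2)
es_num  = h * sum(x * f(x, b, c) for x in xs) / surv_b
es_closed = b * surv_b1 / ((b + c) * surv_b)     # Equation (40)
print(es_closed, es_num)
```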
Let X B 5 , 2 ; by Equations (29), (30) and (40), we can use the R software to compute TCRTE α , p ( X ) and CRTES α , p λ ( X ) for p [ 0.9 , 1 ) , and the results are shown in Figure 8.

6. Concluding Remarks

This paper has derived covariance and Choquet integral representations of some entropies and has proposed the entropy shortfalls CRTES and RTDS. In particular, the CRTESs of the elliptical, inverse Gaussian, gamma and beta distributions are computed. Furthermore, Hou and Wang [42] generalized the tail-Gini functional from a random variable to a two-dimensional random vector, and Sun et al. [8] extended the TCRE to two risks (a random vector). In future work, we will try to extend the TCRTE and TRTD of a random variable to the two-dimensional setting.

Author Contributions

Conceptualization, B.Z.; methodology, B.Z. and C.Y.; investigation, B.Z.; writing—original draft, B.Z.; writing—review and editing, C.Y.; software, B.Z. All authors have read and agreed to the published version of the manuscript.

Funding

The research was supported by the National Natural Science Foundation of China (No. 12071251, 12301605).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank two anonymous reviewers and the editor for their helpful comments and suggestions, which have led to the improvement of this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Artzner, P.; Delbaen, F.; Eber, J.M.; Heath, D. Coherent measures of risk. Math. Financ. 1999, 9, 203–228. [Google Scholar] [CrossRef]
  2. Delbaen, F. Monetary Utility Functions; Osaka University Press: Osaka, Japan, 2012. [Google Scholar]
  3. Föllmer, H.; Schied, A. Stochastic Finance: An Introduction in Discrete Time, 3rd ed.; Walter de Gruyter: Berlin, Germany, 2011. [Google Scholar]
  4. Furman, E.; Landsman, Z. Tail variance premium with applications for elliptical portfolio of risks. Astin Bull. 2006, 36, 433–462. [Google Scholar] [CrossRef]
  5. Furman, E.; Wang, R.; Zitikis, R. Gini-type measures of risk and variability: Gini shortfall, capital allocations, and heavy-tailed risks. J. Bank. Financ. 2017, 83, 70–84. [Google Scholar] [CrossRef]
  6. Hu, T.; Chen, O. On a family of coherent measures of variability. Insur. Math. Econ. 2020, 95, 173–182. [Google Scholar] [CrossRef]
  7. Berkhouch, M.; Lakhnatia, G.; Righi, M.B. Extended Gini-type measures of risk and variability. Appl. Math. Financ. 2018, 25, 295–314. [Google Scholar] [CrossRef]
  8. Sun, H.; Chen, Y.; Hu, T. Statistical inference for tail-based cumulative residual entropy. Insur. Math. Econ. 2022, 103, 66–95. [Google Scholar] [CrossRef]
  9. Wang, R.; Wei, Y.; Willmot, G.E. Characterization, robustness, and aggregation of signed Choquet integrals. Math. Oper. Res. 2020, 45, 993–1015. [Google Scholar] [CrossRef]
  10. Calì, C.; Longobardi, M.; Ahmadi, J. Some properties of cumulative Tsallis entropy. Phys. A Stat. Mech. Its Appl. 2017, 486, 1012–1021. [Google Scholar] [CrossRef]
  11. Di Crescenzo, A.; Longobardi, M. On cumulative entropies. J. Stat. Plan. Inference 2009, 139, 4072–4087. [Google Scholar] [CrossRef]
  12. Yin, X.; Balakrishnan, N.; Yin, C. Bounds for Gini’s mean difference based on first four moments, with some applications. Stat. Pap. 2022. [Google Scholar] [CrossRef]
  13. Rajesh, G.; Sunoj, S.M. Some properties of cumulative Tsallis entropy of order α. Stat. Pap. 2019, 60, 933–943. [Google Scholar] [CrossRef]
  14. Rao, M.; Chen, Y.; Vemuri, B.; Wang, F. Cumulative residual entropy: A new measure of information. IEEE Trans. Inf. Theory 2004, 50, 1220–1228. [Google Scholar] [CrossRef]
  15. Xiong, H.; Shang, P.; Zhang, Y. Fractional cumulative residual entropy. Commun. Nonlinear Sci. Numer. Simul. 2019, 78, 104879. [Google Scholar] [CrossRef]
  16. Wang, S. An actuarial index of the right-tail risk. N. Am. Actuar. J. 1998, 2, 88–101. [Google Scholar] [CrossRef]
  17. Di Crescenzo, A.; Kayal, S.; Meoli, A. Fractional generalized cumulative entropy and its dynamic version. Commun. Nonlinear Sci. Numer. Simul. 2021, 102, 105899. [Google Scholar] [CrossRef]
  18. Psarrakos, G.; Navarro, J. Generalized cumulative residual entropy and record values. Metrika 2013, 76, 623–640. [Google Scholar] [CrossRef]
  19. Kayal, S. On generalized cumulative entropies. Probab. Eng. Inf. Sci. 2016, 30, 640–662. [Google Scholar] [CrossRef]
  20. Calì, C.; Longobardi, M.; Navarro, J. Properties for generalized cumulative past measures of information. Probab. Eng. Inf. Sci. 2020, 34, 92–111. [Google Scholar] [CrossRef]
  21. Chakraborty, S.; Pradhan, B. On weighted cumulative Tsallis residual and past entropy measures. Commun. Stat. Simul. Comput. 2023, 52, 2058–2072. [Google Scholar] [CrossRef]
  22. Mirali, M.; Baratpour, S. Some results on weighted cumulative entropy. J. Iran. Stat. Soc. 2017, 16, 21–32. [Google Scholar]
  23. Misagh, F.; Panahi, Y.; Yari, G.H.; Shahi, R. Weighted cumulative entropy and its estimation. In Proceedings of the IEEE International Conference on Quality and Reliability (ICQR), Bangkok, Thailand, 14–17 September 2011; pp. 477–480. [Google Scholar]
  24. Mirali, M.; Baratpour, S.; Fakoor, V. On weighted cumulative residual entropy. Commun. Stat.-Theory Methods 2017, 46, 2857–2869. [Google Scholar] [CrossRef]
  25. Tahmasebi, S. Weighted extensions of generalized cumulative residual entropy and their applications. Commun. Stat.-Theory Methods 2020, 49, 5196–5219. [Google Scholar] [CrossRef]
  26. Toomaj, A.; Di Crescenzo, A. Connections between weighted generalized cumulative residual entropy and variance. Mathematics 2020, 8, 1072. [Google Scholar] [CrossRef]
  27. Kayal, S. On weighted generalized cumulative residual entropy of order n. Methodol. Comput. Appl. Probab. 2018, 20, 487–503. [Google Scholar] [CrossRef]
  28. Suhov, Y.; Yasaei Sekeh, S. Weighted cumulative entropies: An extention of CRE and CE. arXiv 2015, arXiv:1507.07051v1. [Google Scholar]
  29. Asadi, M.; Zohrevand, Y. On the dynamic cumulative residual entropy. J. Stat. Plan. Inference 2007, 137, 1931–1941. [Google Scholar] [CrossRef]
  30. Mirali, M.; Baratpour, S. Dynamic version of weighted cumulative residual entropy. Commun. Stat.-Theory Methods 2017, 46, 11047–11059. [Google Scholar] [CrossRef]
  31. Navarro, J.; Aguila, Y.; Asadi, M. Some new results on the cumulative residual entropy. J. Stat. Plan. Inference 2010, 140, 310–322. [Google Scholar] [CrossRef]
  32. Schmeidler, D. Integral representation without additivity. Proc. Am. Math. Soc. 1986, 97, 255–261. [Google Scholar] [CrossRef]
  33. Acerbi, C. Spectral measures of risk: A coherent representation of subjective risk aversion. J. Bank. Financ. 2002, 26, 1505–1518. [Google Scholar] [CrossRef]
  34. Yaari, M.E. The dual theory of choice under risk. Econometrica 1987, 55, 95–115. [Google Scholar] [CrossRef]
  35. Di Crescenzo, A.; Longobardi, M. On weighted residual and past entropies. Sci. Math. Jpn. 2006, 64, 255–266. [Google Scholar]
  36. Misagh, F.; Yari, G.H. On weighted interval entropy. Stat. Probab. Lett. 2011, 29, 167–176. [Google Scholar] [CrossRef]
  37. Zuo, B.; Yin, C.; Yao, J. Multivariate range Value-at-Risk and covariance risk measures for elliptical and log-elliptical distributions. arXiv 2023, arXiv:2305.09097. [Google Scholar]
  38. Sordo, M.A.; Ramos, H.M. Characterization of stochastic orders by L-functionals. Stat. Pap. 2007, 48, 249–263. [Google Scholar] [CrossRef]
  39. Denneberg, D. Premium calculation: Why standard deviation should be replaced by absolute deviation. Astin Bull. 1990, 20, 181–190. [Google Scholar] [CrossRef]
  40. Landsman, Z.M.; Valdez, E.A. Tail conditional expectations for elliptical distributions. N. Am. Actuar. J. 2003, 7, 55–71. [Google Scholar] [CrossRef]
  41. Landsman, Z.; Valdez, E.A. Tail conditional expectation for exponential dispersion models. Astin Bull. 2005, 35, 189–209. [Google Scholar] [CrossRef]
  42. Hou, Y.; Wang, X. Extreme and inference for tail Gini functional with applications in tail risk measurement. J. Am. Stat. Assoc. 2021, 535, 1428–1443. [Google Scholar] [CrossRef]
Figure 1. The relationship between three entropy risk measures.
Figure 2. N 1 ( 0 , 1 ) : (a) TCRTE α , p ( X ) of α = 1 2 , α 1 and α = 3 2 ; (b) CRTES α , p λ ( X ) of α = 1 2 , α 1 and α = 3 2 with λ = 0.5 ; (c) CRTES α , p λ ( X ) of λ = 0 , λ = 0.5 and λ = 1 with α = 1 2 .
Figure 3. S t 1 0 , 1 , m : (a) TCRTE α , p ( X ) of m = 2 , m = 3 , m = 4 and m = 100 with α = 3 2 ; (b) TCRTE α , p ( X ) of α = 4 5 , α 1 and α = 3 2 with m = 4 ; (c) CRTES α , p λ ( X ) of α = 4 5 , α 1 and α = 3 2 with m = 4 and λ = 0.5 ; (d) CRTES α , p λ ( X ) of λ = 0 , λ = 0.5 and λ = 1 with m = 4 and α = 3 2 .
Figure 4. L o 1 0 , 1 : (a) TCRTE α , p ( X ) of α = 1 2 , α 1 and α = 3 2 ; (b) CRTES α , p λ ( X ) of α = 1 2 , α 1 and α = 3 2 with λ = 0.5 ; (c) CRTES α , p λ ( X ) of λ = 0 , λ = 0.5 and λ = 1 with α = 3 2 .
Figure 5. L a 1 0 , 1 : (a) TCRTE α , p ( X ) of α = 1 2 , α 1 and α = 3 2 ; (b) CRTES α , p λ ( X ) of α = 1 2 , α 1 and α = 3 2 with λ = 0.5 ; (c) CRTES α , p λ ( X ) of λ = 0 , λ = 0.5 and λ = 1 with α = 3 2 .
Figure 6. I G ( 10 , 1 ) : (a) TCRTE α , p ( X ) of α = 1 2 , α 1 and α = 3 2 ; (b) CRTES α , p λ ( X ) of α = 1 2 , α 1 and α = 3 2 with λ = 0.5 .
Figure 7. Γ ( 5 , 1 ) : (a) TCRTE α , p ( X ) of α = 1 2 , α 1 and α = 3 2 ; (b) CRTES α , p λ ( X ) of α = 1 2 , α 1 and α = 3 2 with λ = 0.5 .
Figure 8. B ( 5 , 2 ) : (a) TCRTE α , p ( X ) of α = 1 2 , α 1 and α = 3 2 ; (b) CRTES α , p λ ( X ) of α = 1 2 , α 1 and α = 3 2 with λ = 0.5 .

