Article

Tail Conditional Moments for Location-Scale Mixture of Elliptical Distributions

School of Statistics and Data Science, Qufu Normal University, Qufu 273165, China
*
Author to whom correspondence should be addressed.
Mathematics 2022, 10(4), 606; https://doi.org/10.3390/math10040606
Submission received: 28 January 2022 / Revised: 11 February 2022 / Accepted: 12 February 2022 / Published: 16 February 2022

Abstract

We present general results on the univariate tail conditional moments for a location-scale mixture of elliptical distributions. Examples include the location-scale mixtures of normal, Student's t, logistic and Laplace distributions. More specifically, we give the tail variance, the tail conditional skewness and the tail conditional kurtosis of the generalised hyperbolic distribution and the Student's t–GIG mixture distribution. We conclude with an illustrative example discussing the TCE, TV, TCS and TCK of three stocks: Amazon, Google and Apple.

1. Introduction and Motivation

Let $X$ be a random variable with cumulative distribution function $F_X(x)$. The Value-at-Risk (VaR) of the random variable $X$, computed at a probability level $q\in(0,1)$, is defined as
\[ \mathrm{VaR}_q(X)=\inf\{x\in\mathbb{R}\mid P(X\le x)\ge q\}. \]
VaR is one of the best-known and most frequently used measures of financial risk. Another important risk measure is the tail conditional expectation (TCE), which is defined as
\[ \mathrm{TCE}_q(X)=E\big(X\mid X>\mathrm{VaR}_q(X)\big). \]
The TCE has been studied extensively in the literature. In this paper, we introduce the $n$th tail conditional moment (TCM) of a random variable $X$, defined as
\[ \mathrm{TCM}_q(X^n)=E\big([X-\mathrm{TCE}_q(X)]^n\mid X>x_q\big), \]
where
\[ x_q=\inf\{x\in\mathbb{R}\mid P(X\le x)\ge q\}. \]
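For intuition, the quantities defined so far are easy to estimate empirically. The following is a minimal Monte Carlo sketch (the standard normal example and all numerical choices are ours, not the paper's):

```python
import numpy as np

def var_tce(sample, q):
    """Empirical VaR_q and TCE_q from a sample (illustrative sketch)."""
    x_q = np.quantile(sample, q)      # empirical q-quantile, i.e. VaR_q
    tail = sample[sample > x_q]       # observations beyond the quantile
    return x_q, tail.mean()           # TCE_q is the mean of the tail

rng = np.random.default_rng(0)
sample = rng.standard_normal(1_000_000)
v, tce = var_tce(sample, 0.95)
# for N(0,1): VaR_0.95 ~ 1.645 and TCE_0.95 = phi(1.645)/0.05 ~ 2.063
```

For a heavy-tailed mixture distribution the same estimator applies; only the sampling step changes.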
We note that the proposed TCM takes the form $E[(X-\mathrm{TCE}_q(X))^n\mid X>x_q]$ instead of $E[(X-E(X))^n\mid X>x_q]$, because moments centred at the tail conditional expectation provide a better description of the tail trajectories: we are concerned with the behaviour of $X$ beyond $x_q$ rather than over its whole range. It is interesting to note that the TCM appears in the asymptotic expansion of the conditional characteristic function:
\[ \varphi_q(t):=E\big(e^{it(X-\mathrm{TCE}_q(X))}\mid X>x_q\big)=1-\mathrm{TV}_q(X)\frac{t^2}{2}+\sum_{j=3}^{k}\mathrm{TCM}_q(X^j)\frac{(it)^j}{j!}+o(|t|^k), \]
where
\[ \mathrm{TV}_q(X):=\mathrm{Var}\big(X\mid X>\mathrm{VaR}_q(X)\big)=\mathrm{TCM}_q(X^2) \]
is a special case of (1); it is a risk measure that quantifies the dispersion of the tail of a distribution beyond the $q$-quantile. The tail variance (TV) risk measure was proposed by Furman and Landsman [1], and was used as a measure for optimal portfolio selection in Landsman [2].
Risk measures such as VaR, TCE and TV do not provide sufficient information about the skewness and kurtosis of the tail of a distribution. We therefore consider the skewness and kurtosis of the tail of a distribution beyond the $q$-quantile, which were proposed and studied by Landsman et al. (2016b). The tail conditional skewness (TCS) and the tail conditional kurtosis (TCK) are defined, respectively, as
\[ \mathrm{TCS}_q(X)=\frac{E\big([X-\mathrm{TCE}_q(X)]^3\mid X>x_q\big)}{\mathrm{TV}_q(X)^{\frac32}}, \]
\[ \mathrm{TCK}_q(X)=\frac{E\big([X-\mathrm{TCE}_q(X)]^4\mid X>x_q\big)}{\mathrm{TV}_q(X)^{2}}-3. \]
The TCS helps us understand whether the tail distribution is left-skewed or right-skewed. If $\mathrm{TCS}_q(X)<0$, the tail leans to the left: the probability of $X$ falling below $\mathrm{TCE}_q(X)$ is higher than that of falling above it. If $\mathrm{TCS}_q(X)>0$, the tail leans to the right: the probability of $X$ falling above $\mathrm{TCE}_q(X)$ is higher than that of falling below it. Comparing the TCK with that of the normal distribution tells us about the shape, in particular the tail weight, of the probability distribution.
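The tail measures defined above can likewise be estimated from a sample. A short sketch with moments centred at the empirical TCE, as in the definitions (the standard normal test case is ours, not the paper's):

```python
import numpy as np

def tail_moments(sample, q):
    """Empirical TV, TCS, TCK with moments centred at TCE_q (a sketch)."""
    x_q = np.quantile(sample, q)
    tail = sample[sample > x_q]
    tce = tail.mean()                  # TCE_q
    d = tail - tce                     # centre at TCE_q, not at E[X]
    tv = np.mean(d ** 2)               # TV_q = TCM_q(X^2)
    tcs = np.mean(d ** 3) / tv ** 1.5  # tail conditional skewness
    tck = np.mean(d ** 4) / tv ** 2 - 3  # tail conditional (excess) kurtosis
    return tv, tcs, tck

rng = np.random.default_rng(1)
tv, tcs, tck = tail_moments(rng.standard_normal(1_000_000), 0.95)
# the normal tail beyond VaR_0.95 is right-skewed, so tcs > 0
```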
It is well known that the TCE describes extreme expected losses. Recently, there have been many studies of the TCE. For example, Landsman and Valdez [3] introduced tail conditional expectations for elliptical distributions. Ignatieva and Landsman [4] discussed conditional tail risk measures for the skewed generalised hyperbolic family. Deng and Yao [5] extended the Stein-type inequality to the multivariate generalized hyperbolic distribution. Li et al. [6] derived the conditional tail expectation for the log-multivariate generalized hyperbolic distribution. In a more recent paper, Ignatieva and Landsman [7] introduced the location-scale mixture of elliptical distributions (called the "generalised hyper-elliptical distributions") and considered tail conditional risk measures for these distributions; see also Zuo and Yin [8]. Kim [9] presented the conditional tail moments for an exponential family, and Landsman et al. [10] and Eini et al. [11] presented the conditional tail moments for elliptical distributions and generalized skew-elliptical distributions, respectively. In this paper, our main concern is the tail conditional moments for a location-scale mixture of elliptical distributions, which extends the conclusions of Zuo and Li. Since the location-scale mixture of elliptical distributions can be composed of any elliptical distribution together with the generalised inverse Gaussian distribution, it is quite general, and we find that location-scale mixtures of elliptical distributions are well suited to heavy-tailed data. The family is therefore flexible: different elliptical distributions and parameters can be chosen to fit the model.
The paper is organized as follows. In Section 2 we introduce the location-scale mixture of elliptical distributions. In Section 3 we obtain the expression of the $n$th TCM for the univariate case of the mixture of elliptical distributions. In Section 4 we give the expressions of the TV, TCS and TCK for some important special cases. Section 5 presents a numerical analysis of the GH and Student's t–GIG mixture distributions. Section 6 gives an illustrative example discussing the TCE, TV, TCS and TCK of three stocks. Section 7 offers concluding remarks.

2. Location-Scale Mixture of Elliptical Distributions

In this section, we introduce the mixture of elliptical distributions. First, let us recall the elliptical family of distributions. A random vector $\mathbf{X}$ is said to have an elliptical distribution with parameters $\boldsymbol{\mu}$ and $\Sigma$ if its characteristic function can be expressed as
\[ \varphi_{\mathbf{X}}(\mathbf{t})=\exp\big(i\mathbf{t}^{\top}\boldsymbol{\mu}\big)\,\psi\Big(\tfrac12\mathbf{t}^{\top}\Sigma\mathbf{t}\Big), \]
for some function $\psi:\mathbb{R}\to\mathbb{R}$, called the characteristic generator. We then write $\mathbf{X}\sim E_n(\boldsymbol{\mu},\Sigma,\psi)$.
An elliptically distributed random vector $\mathbf{X}$ does not necessarily have a multivariate density function $f_{\mathbf{X}}$. If $\mathbf{X}\sim E_n(\boldsymbol{\mu},\Sigma,\psi)$ has a density, it is of the form
\[ f_{\mathbf{X}}(\mathbf{x})=\frac{c_n}{\sqrt{|\Sigma|}}\,g_n\Big(\tfrac12(\mathbf{x}-\boldsymbol{\mu})^{\top}\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})\Big),\quad\mathbf{x}\in\mathbb{R}^n, \]
where $\boldsymbol{\mu}$ is an $n\times1$ location vector, $\Sigma$ is an $n\times n$ scale matrix, and $g_n(u)$, $u\ge0$, is the density generator of $\mathbf{X}$. This density generator satisfies the condition
\[ \int_0^{\infty}u^{\frac n2-1}g_n(u)\,du<\infty, \]
and the normalizing constant $c_n$ is given by
\[ c_n=\frac{\Gamma(n/2)}{(2\pi)^{n/2}}\bigg[\int_0^{\infty}t^{\frac n2-1}g_n(t)\,dt\bigg]^{-1}. \]
A necessary condition for the covariance matrix to exist is
\[ |\psi'(0)|<\infty \]
(see Fang et al. [12]). Suppose $A$ is a $k\times n$ matrix and $\mathbf{b}$ is a $k\times1$ vector. Then
\[ A\mathbf{X}+\mathbf{b}\sim E_k\big(A\boldsymbol{\mu}+\mathbf{b},\,A\Sigma A^{\top},\,g_{k,n}\big), \]
where $g_{k,n}(u)=\int_0^{\infty}s^{\frac{n-k}{2}-1}g_n(s+u)\,ds$.
We denote the cumulative generator $\overline{G}_1(u)=\int_u^{\infty}g_1(t)\,dt$, and the sequence of cumulative generators
\[ \overline{G}_1(u)=\int_u^{\infty}g_1(t)\,dt,\quad \overline{G}_2(u)=\int_u^{\infty}\overline{G}_1(t)\,dt,\ \ldots,\ \overline{G}_n(u)=\int_u^{\infty}\overline{G}_{n-1}(t)\,dt. \]
Meanwhile, assume the variance of $X$ exists, let $Z=\frac{x-\mu}{\sigma}$, and
\[ \tfrac12\sigma_Z^2=c_1\int_0^{\infty}z^2g_1\big(\tfrac12z^2\big)\,dz=-\int_0^{\infty}c_1z\,d\overline{G}_1\big(\tfrac12z^2\big)<\infty. \]
Consequently,
\[ \int_0^{\infty}c_1\overline{G}_1\big(\tfrac12z^2\big)\,dz=\tfrac12\sigma_Z^2, \]
and $\frac{c_1}{\sigma_Z^2}\overline{G}_1\big(\tfrac12z^2\big)=f_{Z_1}(z)$ is the density of a random variable $Z_1$ defined on $\mathbb{R}$. Similarly,
\[ \int_0^{\infty}c_2\overline{G}_2\big(\tfrac12z^2\big)\,dz=\tfrac12\sigma_Z^2, \]
and $\frac{c_2}{\sigma_Z^2}\overline{G}_2\big(\tfrac12z^2\big)=f_{Z_2}(z)$ is the density of a random variable $Z_2$ defined on $\mathbb{R}$.
Next, we introduce the location-scale mixture of elliptical (LSME) distributions. $\mathbf{Y}\sim LSME_n(\boldsymbol{\mu},\Sigma,\boldsymbol{\beta},\Theta,g_n)$ is an $n$-dimensional LSME random vector with location parameter $\boldsymbol{\mu}$ and positive definite scale matrix $\Sigma=(\sigma_{ij})_{i,j=1}^{n}$ if
\[ \mathbf{Y}=\boldsymbol{\mu}+\Theta\boldsymbol{\beta}+\Theta^{1/2}\Sigma^{1/2}\mathbf{X}, \]
where $\boldsymbol{\beta}\in\mathbb{R}^n$ and $\mathbf{X}\sim E_n(\mathbf{0},I_n,g_n)$. Assuming that $\mathbf{X}$ is independent of the non-negative scalar random variable $\Theta$, we have
\[ \mathbf{Y}\mid\Theta=\theta\sim E_n\big(\boldsymbol{\mu}+\theta\boldsymbol{\beta},\,\theta\Sigma,\,g_n\big). \]
When $\Theta$ has a generalised inverse Gaussian distribution, $GIG(\lambda,\chi,\psi)$, its pdf is given by
\[ \varpi_{\lambda,\chi,\psi}(\theta)=f_{\lambda,\chi,\psi}(\theta)=\frac{\chi^{-\lambda}\big(\sqrt{\chi\psi}\big)^{\lambda}}{2K_{\lambda}\big(\sqrt{\chi\psi}\big)}\,\theta^{\lambda-1}\exp\Big\{-\tfrac12\big(\chi\theta^{-1}+\psi\theta\big)\Big\},\quad\theta>0, \]
where $K_{\lambda}(\cdot)$ denotes the modified Bessel function of the third kind with index $\lambda$:
\[ K_{\lambda}(\omega)=\frac12\int_0^{\infty}\chi^{\lambda-1}\exp\Big\{-\tfrac12\omega\big(\chi^{-1}+\chi\big)\Big\}\,d\chi. \]
Here the parameters satisfy $\chi>0$, $\psi\ge0$ if $\lambda<0$; $\chi>0$, $\psi>0$ if $\lambda=0$; and $\chi\ge0$, $\psi>0$ if $\lambda>0$.
When $\Theta^{-1}\sim B(\alpha,1)$, the random vector $\mathbf{Y}$ follows a new distribution, which is also a special case of the LSME family. Here, $B(\cdot,\cdot)$ denotes the Beta distribution.
We list some examples of the mixture elliptical family, including the Location-Scale Mixture of Normal (LSMN), Location-Scale Mixture of Student’s t (LSMSt), Location-Scale Mixture of Logistic (LSMLo) and Location-Scale Mixture of Laplace (LSMLa) distributions.
Example 1.
(Mixture of normal distributions). An $n$-dimensional normal random vector $\mathbf{X}$ with location parameter $\boldsymbol{\mu}$ and scale matrix $\Sigma$ has density function
\[ f_{\mathbf{X}}(\mathbf{x})=\frac{c_n}{\sqrt{|\Sigma|}}\exp\Big\{-\tfrac12(\mathbf{x}-\boldsymbol{\mu})^{\top}\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})\Big\},\quad\mathbf{x}\in\mathbb{R}^n, \]
where $c_n=(2\pi)^{-\frac n2}$, and we write $\mathbf{X}\sim N_n(\boldsymbol{\mu},\Sigma)$. Therefore, the location-scale mixture of normal random vector $\mathbf{Y}\sim LSMN_n(\boldsymbol{\mu},\Sigma,\boldsymbol{\beta},\Theta)$ is defined as
\[ \mathbf{Y}=\boldsymbol{\mu}+\Theta\boldsymbol{\beta}+\Theta^{1/2}\Sigma^{1/2}\mathbf{X}, \]
where $\mathbf{X}\sim N_n(\mathbf{0},I_n)$, and $\boldsymbol{\mu}$, $\Sigma$, $\Theta$ and $\boldsymbol{\beta}$ are the same as those in (11).
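A univariate LSMN draw follows directly from the definition above; with GIG mixing this produces the generalised hyperbolic distribution discussed later in the paper. A simulation sketch (all parameter values are illustrative assumptions):

```python
import numpy as np
from scipy.stats import geninvgauss

def rlsmn(n, mu, sigma, beta, lam, chi, psi, rng):
    """Draw from the univariate LSMN: Y = mu + Theta*beta + sqrt(Theta)*sigma*X,
    with Theta ~ GIG(lam, chi, psi) and X ~ N(0, 1) independent."""
    theta = geninvgauss.rvs(lam, np.sqrt(chi * psi),
                            scale=np.sqrt(chi / psi), size=n, random_state=rng)
    x = rng.standard_normal(n)
    return mu + theta * beta + np.sqrt(theta) * sigma * x

rng = np.random.default_rng(7)
y = rlsmn(200_000, 0.0, 1.5, 0.011, 1.7, 2.1, 6.7, rng)
# E[Y] = mu + beta * E[Theta]; a positive beta shifts mass to the right
```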
Example 2.
(Mixture of logistic distributions). The density function of an $n$-dimensional logistic random vector $\mathbf{X}$ with location parameter $\boldsymbol{\mu}$ and scale matrix $\Sigma$ can be expressed as
\[ f_{\mathbf{X}}(\mathbf{x})=\frac{c_n}{\sqrt{|\Sigma|}}\,\frac{\exp\big\{-\tfrac12(\mathbf{x}-\boldsymbol{\mu})^{\top}\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})\big\}}{\Big[1+\exp\big\{-\tfrac12(\mathbf{x}-\boldsymbol{\mu})^{\top}\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})\big\}\Big]^2},\quad\mathbf{x}\in\mathbb{R}^n, \]
where
\[ c_n=\frac{\Gamma(n/2)}{(2\pi)^{n/2}}\bigg[\int_0^{\infty}t^{\frac n2-1}\frac{e^{-t}}{(1+e^{-t})^2}\,dt\bigg]^{-1}=\frac{1}{(2\pi)^{n/2}\,\Psi_2^*\big(-1,\tfrac n2,1\big)}, \]
and we write $\mathbf{X}\sim Lo_n(\boldsymbol{\mu},\Sigma)$. The location-scale mixture of logistic random vector $\mathbf{Y}\sim LSMLo_n(\boldsymbol{\mu},\Sigma,\boldsymbol{\beta},\Theta)$ satisfies
\[ \mathbf{Y}=\boldsymbol{\mu}+\Theta\boldsymbol{\beta}+\Theta^{1/2}\Sigma^{1/2}\mathbf{X}, \]
where $\mathbf{X}\sim Lo_n(\mathbf{0},I_n)$, and $\boldsymbol{\mu}$, $\Sigma$, $\Theta$ and $\boldsymbol{\beta}$ are the same as those in (11).
Remark 1.
Here, $\Psi_{\mu}^{*}(z,s,a)$ is the generalized Hurwitz–Lerch zeta function defined by (cf. Lin et al. (2006))
\[ \Psi_{\mu}^{*}(z,s,a)=\frac{1}{\Gamma(\mu)}\sum_{n=0}^{\infty}\frac{\Gamma(\mu+n)}{n!}\,\frac{z^n}{(n+a)^s}, \]
which has the integral representation
\[ \Psi_{\mu}^{*}(z,s,a)=\frac{1}{\Gamma(s)}\int_0^{\infty}\frac{t^{s-1}e^{-at}}{(1-ze^{-t})^{\mu}}\,dt, \]
where $\Re(a)>0$, and $\Re(s)>0$ when $|z|\le1$ ($z\ne1$), $\Re(s)>1$ when $z=1$.
Example 3.
(Mixture of Laplace distributions). The density of a Laplace random vector $\mathbf{X}$ with location parameter $\boldsymbol{\mu}$ and scale matrix $\Sigma$ is given by
\[ f_{\mathbf{X}}(\mathbf{x})=\frac{c_n}{\sqrt{|\Sigma|}}\exp\Big\{-\big[(\mathbf{x}-\boldsymbol{\mu})^{\top}\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})\big]^{1/2}\Big\},\quad\mathbf{x}\in\mathbb{R}^n, \]
where $c_n=\frac{\Gamma(n/2)}{2\pi^{n/2}\Gamma(n)}$, and we write $\mathbf{X}\sim La_n(\boldsymbol{\mu},\Sigma)$. Hence, the location-scale mixture of Laplace random vector $\mathbf{Y}\sim LSMLa_n(\boldsymbol{\mu},\Sigma,\boldsymbol{\beta},\Theta)$ is defined as
\[ \mathbf{Y}=\boldsymbol{\mu}+\Theta\boldsymbol{\beta}+\Theta^{1/2}\Sigma^{1/2}\mathbf{X}, \]
where $\mathbf{X}\sim La_n(\mathbf{0},I_n)$, and $\boldsymbol{\mu}$, $\Sigma$, $\Theta$ and $\boldsymbol{\beta}$ are the same as those in (11).

3. Tail Conditional Moments

In this section, we present the TCM for the univariate case of the mixture of elliptical distributions. We assume that the conditional and mixture distributions are continuous.
Consider $Y\sim LSME_1(\mu,\sigma^2,\beta,\Theta,g_1)$; then $Y\mid\Theta\sim E_1(\mu+\theta\beta,\theta\sigma^2,g_1)$. Before giving the TCM, we calculate the following conditional moments:
\[ E_{Y|\Theta}\big[Y^n\mid Y>y_q\big],\qquad E\big[Y^n\mid Y>y_q\big]. \]
Lemma 1.
Let $Y\sim LSME_1(\mu,\sigma^2,\beta,\Theta,g_1)$ be a univariate location-scale mixture of elliptical random variable defined as in (11), with $Y\mid\Theta\sim E_1(\mu+\theta\beta,\theta\sigma^2,g_1)$, which implies
\[ \int_0^{\infty}u^{-1/2}\overline{G}_1(u)\,du<\infty. \]
Then
\[ E_{Y|\Theta}[Y^n\mid Y>y_q]=(\mu+\theta\beta)^n+n\sqrt{\theta}\,\sigma(\mu+\theta\beta)^{n-1}\mathrm{TCE}_q(Z)+\sum_{k=2}^{n}C_n^k\big(\sqrt{\theta}\sigma\big)^k(\mu+\theta\beta)^{n-k}\Big[c_1\lambda_{1,q}z_q^{k-1}+(k-1)\sigma_Z^2r_{1,q}(z_q)\,\mathrm{TCE}_{Z_1}^{k-2}(z_q)\Big], \]
where
\[ z=\frac{x-\mu-\theta\beta}{\sqrt{\theta}\sigma},\qquad \lambda_{1,q}=\frac{\overline{G}_1\big(\tfrac12z_q^2\big)}{1-q}, \]
\[ r_{1,q}(z_q)=\frac{\overline{F}_{Z_1}(z_q)}{1-q},\qquad \mathrm{TCE}_{Z_1}^{k-2}(z_q)=E_{Z_1}\big[Z^{k-2}\mid Z>z_q\big]. \]
Proof. 
The $n$th conditional moment is
\begin{align*}
E_{Y|\Theta}[Y^n\mid Y>y_q]&=\frac{1}{\overline{F}_{Y|\Theta}(y_q)}\int_{y_q}^{\infty}\frac{c_1}{\sqrt{\theta}\sigma}\,y^n g_1\bigg(\frac12\Big(\frac{y-\mu-\theta\beta}{\sqrt{\theta}\sigma}\Big)^2\bigg)dy\\
&=\frac{1}{1-q}\int_{z_q}^{\infty}c_1\big(\sqrt{\theta}\sigma z+\mu+\theta\beta\big)^n g_1\big(\tfrac12z^2\big)dz\\
&=\frac{1}{1-q}\sum_{k=0}^{n}C_n^k\int_{z_q}^{\infty}c_1\big(\sqrt{\theta}\sigma z\big)^k(\mu+\theta\beta)^{n-k}g_1\big(\tfrac12z^2\big)dz\\
&=(\mu+\theta\beta)^n+n\sqrt{\theta}\sigma(\mu+\theta\beta)^{n-1}\mathrm{TCE}_q(Z)+\frac{1}{1-q}\sum_{k=2}^{n}C_n^kc_1\big(\sqrt{\theta}\sigma\big)^k(\mu+\theta\beta)^{n-k}\int_{z_q}^{\infty}z^kg_1\big(\tfrac12z^2\big)dz\\
&=(\mu+\theta\beta)^n+n\sqrt{\theta}\sigma(\mu+\theta\beta)^{n-1}\mathrm{TCE}_q(Z)+\sum_{k=2}^{n}C_n^k\big(\sqrt{\theta}\sigma\big)^k(\mu+\theta\beta)^{n-k}\Big[c_1\lambda_{1,q}z_q^{k-1}+(k-1)\sigma_Z^2r_{1,q}(z_q)\,\mathrm{TCE}_{Z_1}^{k-2}(z_q)\Big].\ \square
\end{align*}
Next, we calculate $E[Y^n\mid Y>y_q]$, which plays an important role in the TCM.
Lemma 2.
Let $Y\sim LSME_1(\mu,\sigma^2,\beta,\Theta,g_1)$ be a univariate location-scale mixture of elliptical random variable defined as in (11). Then
\[ E[Y^n\mid Y>y_q]=\int_{\Omega_\theta}\frac{\overline{F}_{Y|\Theta}(y_q)}{1-q}\,\varpi(\theta)\,E_{Y|\Theta}[Y^n\mid Y>y_q]\,d\theta. \]
Proof. 
The $n$th conditional moment is
\begin{align*}
E[Y^n\mid Y>y_q]&=\frac{1}{\overline{F}_Y(y_q)}\int_{y_q}^{\infty}y^nf_Y(y)\,dy=\frac{1}{1-q}\int_{y_q}^{\infty}y^n\int_{\Omega_\theta}\varpi(\theta)f_{Y|\Theta}(y\mid\theta)\,d\theta\,dy\\
&=\frac{1}{1-q}\int_{\Omega_\theta}\varpi(\theta)\int_{y_q}^{\infty}y^nf_{Y|\Theta}(y\mid\theta)\,dy\,d\theta\\
&=\frac{1}{1-q}\int_{\Omega_\theta}\varpi(\theta)\,\overline{F}_{Y|\Theta}(y_q)\,\frac{1}{\overline{F}_{Y|\Theta}(y_q)}\int_{y_q}^{\infty}y^nf_{Y|\Theta}(y\mid\theta)\,dy\,d\theta\\
&=\int_{\Omega_\theta}\frac{\overline{F}_{Y|\Theta}(y_q)}{1-q}\,\varpi(\theta)\,E_{Y|\Theta}[Y^n\mid Y>y_q]\,d\theta,
\end{align*}
where $Y\mid\Theta\sim E_1(\mu+\theta\beta,\theta\sigma^2,g_1)$. $\square$
Remark 2.
In particular, when $n=1$, the above formula reduces to
\begin{align*}
\mathrm{TCE}_q(Y)=E[Y\mid Y>y_q]&=\int_{\Omega_\theta}\frac{\overline{F}_{Y|\Theta}(y_q)}{1-q}\,\varpi(\theta)\,E_{Y|\Theta}[Y\mid Y>y_q]\,d\theta\\
&=\mu+\frac{\beta}{1-q}\int_{\Omega_\theta}\overline{F}_{Y|\Theta}(y_q)\,\theta\varpi(\theta)\,d\theta+\frac{c_1\sigma^2}{c_2(1-q)}\int_{\Omega_\theta}f_{Y_1|\Theta}(y_q)\,\theta\varpi(\theta)\,d\theta\\
&=\mu+\frac{\beta}{1-q}\delta_1+\frac{c_1\sigma^2}{c_2(1-q)}\delta_2,
\end{align*}
where $c_2$ is the normalizing constant of $Y_1$,
\[ \delta_1=\int_{\Omega_\theta}\overline{F}_{Y|\Theta}(y_q)\,\theta\varpi(\theta)\,d\theta,\qquad \delta_2=\int_{\Omega_\theta}f_{Y_1|\Theta}(y_q)\,\theta\varpi(\theta)\,d\theta, \]
and $Y_1\mid\Theta\sim E_1(\mu+\theta\beta,\theta\sigma^2,\overline{G}_1)$.
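Remark 2 can be checked numerically in the normal-mixing case, where the inner conditional expectation is available in closed form: for $Y\mid\theta\sim N(m,s^2)$, $\overline{F}_{Y|\theta}(y_q)\,E[Y\mid Y>y_q,\theta]=\overline{\Phi}(z)m+s\phi(z)$ with $z=(y_q-m)/s$. The sketch below compares the mixture representation of the TCE with a direct Monte Carlo estimate; all parameter values (including $\beta=0.011$) are illustrative assumptions:

```python
import numpy as np
from scipy import integrate
from scipy.stats import geninvgauss, norm

# illustrative parameters for a GIG normal mean-variance mixture
mu, sigma, beta = 0.0, 1.5, 0.011
lam, chi, psi, q = 1.7, 2.1, 6.7, 0.95
mix = geninvgauss(lam, np.sqrt(chi * psi), scale=np.sqrt(chi / psi))

# direct Monte Carlo estimate of TCE_q(Y)
rng = np.random.default_rng(3)
theta_s = mix.rvs(size=400_000, random_state=rng)
y = mu + theta_s * beta + np.sqrt(theta_s) * sigma * rng.standard_normal(theta_s.size)
y_q = np.quantile(y, q)
tce_mc = y[y > y_q].mean()

def integrand(theta):
    # survival-weighted conditional normal tail expectation:
    # Fbar_{Y|theta}(y_q) * E[Y | Y > y_q, theta] = sf(z)*m + s*pdf(z)
    m, s = mu + theta * beta, np.sqrt(theta) * sigma
    z = (y_q - m) / s
    return mix.pdf(theta) * (norm.sf(z) * m + s * norm.pdf(z))

tce_mix, _ = integrate.quad(integrand, 1e-8, 60.0)
tce_mix /= 1.0 - q       # the mixture representation of TCE_q(Y)
```

The two estimates agree up to Monte Carlo error, which supports the survival-weighted form of the mixture integral.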
Theorem 1.
Let $Y\sim LSME_1(\mu,\sigma^2,\beta,\Theta,g_1)$ be a univariate location-scale mixture of elliptical random variable defined as in (11). Then
\[ \mathrm{TCM}_q(Y^n)=E\big[(Y-\mathrm{TCE}_q)^n\mid Y>y_q\big]=\sum_{i=0}^{n}C_n^i(-1)^{n-i}\Big(\frac{\beta}{1-q}\delta_1+\frac{c_1\sigma^2}{c_2(1-q)}\delta_2\Big)^{n-i}\mu_i, \]
where
\[ \mu_i=\int_{\Omega_\theta}\varpi(\theta)\bigg\{\Lambda_{i,0}+i\Lambda_{i,1}\lambda_{1,q}+\sum_{l=2}^{i}C_i^l\Lambda_{i,l}\Big[c_1z_q^{l-1}\lambda_{1,q}+(l-1)\sigma_Z^2r_{1,q}(z_q)\,\mathrm{TCE}_{Z_1}^{l-2}(z_q)\Big]\bigg\}d\theta, \]
\[ \Lambda_{i,l}=\theta^{\,i-\frac l2}\sigma^l\beta^{i-l},\qquad \lambda_{1,q}=\frac{\overline{G}_1\big(\tfrac12z_q^2\big)}{1-q},\qquad r_{1,q}(z_q)=\frac{\overline{F}_{Z_1}(z_q)}{1-q}. \]
Proof. 
Using the binomial theorem, we get
\begin{align*}
E\big[(Y-\mathrm{TCE}_q)^n\mid Y>y_q\big]&=E\Big[\Big(Y-\mu-\tfrac{\beta}{1-q}\delta_1-\tfrac{c_1\sigma^2}{c_2(1-q)}\delta_2\Big)^n\,\Big|\,Y>y_q\Big]\\
&=E\Big[\Big((Y-\mu)-\big(\tfrac{\beta}{1-q}\delta_1+\tfrac{c_1\sigma^2}{c_2(1-q)}\delta_2\big)\Big)^n\,\Big|\,Y>y_q\Big]\\
&=\sum_{i=0}^{n}C_n^i(-1)^{n-i}\Big(\tfrac{\beta}{1-q}\delta_1+\tfrac{c_1\sigma^2}{c_2(1-q)}\delta_2\Big)^{n-i}E\big[(Y-\mu)^i\mid Y>y_q\big]\\
&=\sum_{i=0}^{n}C_n^i(-1)^{n-i}\Big(\tfrac{\beta}{1-q}\delta_1+\tfrac{c_1\sigma^2}{c_2(1-q)}\delta_2\Big)^{n-i}\mu_i,
\end{align*}
where $\mu_0=E[(Y-\mu)^0\mid Y>y_q]=1$, $\mu_1=E[(Y-\mu)\mid Y>y_q]=\int_{\Omega_\theta}\varpi(\theta)\big(\theta\beta+\theta^{\frac12}\sigma\lambda_{1,q}\big)d\theta$, and $\mu_i=E[(Y-\mu)^i\mid Y>y_q]$, $i\ge2$. The measures $\mu_i$ are calculated in the spirit of the proof of Lemma 1. Taking the transformation $z=\frac{x-\mu}{\sigma}$, we note that the tail function is $\overline{F}_Z(z_q)=1-q$, which is, in fact, the $(1-q)$-percentile. Furthermore, the transformation simplifies the integral in $\mu_i$ as follows:
\begin{align*}
\mu_i=E\big[(Y-\mu)^i\mid Y>y_q\big]&=\frac{1}{\overline{F}_Y(y_q)}\int_{y_q}^{\infty}(y-\mu)^if_Y(y)\,dy\\
&=\frac{1}{\overline{F}_Y(y_q)}\int_{\Omega_\theta}\varpi(\theta)\int_{y_q}^{\infty}(y-\mu)^i\frac{c_1}{\sqrt{\theta}\sigma}g_1\bigg(\frac12\Big(\frac{y-\mu-\theta\beta}{\sqrt{\theta}\sigma}\Big)^2\bigg)dy\,d\theta\\
&=\frac{1}{\overline{F}_Y(y_q)}\int_{\Omega_\theta}\varpi(\theta)\int_{z_q}^{\infty}\big(\sqrt{\theta}\sigma z+\theta\beta\big)^ic_1g_1\big(\tfrac12z^2\big)dz\,d\theta,
\end{align*}
where
\begin{align*}
\int_{z_q}^{\infty}\big(\sqrt{\theta}\sigma z+\theta\beta\big)^ic_1g_1\big(\tfrac12z^2\big)dz&=\sum_{l=0}^{i}C_i^l\,\theta^{\,i-\frac l2}\sigma^l\beta^{i-l}\int_{z_q}^{\infty}z^lc_1g_1\big(\tfrac12z^2\big)dz\\
&=\Lambda_{i,0}\overline{F}_Z(z_q)+i\Lambda_{i,1}c_1\overline{G}_1\big(\tfrac12z_q^2\big)-\sum_{l=2}^{i}C_i^l\Lambda_{i,l}c_1\int_{z_q}^{\infty}z^{l-1}\,d\overline{G}_1\big(\tfrac12z^2\big)\\
&=\Lambda_{i,0}\overline{F}_Z(z_q)+i\Lambda_{i,1}c_1\overline{G}_1\big(\tfrac12z_q^2\big)+\sum_{l=2}^{i}C_i^l\Lambda_{i,l}\Big[c_1z_q^{l-1}\overline{G}_1\big(\tfrac12z_q^2\big)+(l-1)\sigma_Z^2\overline{F}_{Z_1}(z_q)\,\mathrm{TCE}_{Z_1}^{l-2}(z_q)\Big],
\end{align*}
and here,
\[ \Lambda_{i,l}=\theta^{\,i-\frac l2}\sigma^l\beta^{i-l}. \]
Taking into account (21), we get
\[ \mu_i=\int_{\Omega_\theta}\varpi(\theta)\bigg\{\Lambda_{i,0}+i\Lambda_{i,1}\lambda_{1,q}+\sum_{l=2}^{i}C_i^l\Lambda_{i,l}\Big[c_1z_q^{l-1}\lambda_{1,q}+(l-1)\sigma_Z^2r_{1,q}(z_q)\,\mathrm{TCE}_{Z_1}^{l-2}(z_q)\Big]\bigg\}d\theta. \]
This ends the proof of Theorem 1. $\square$
The GIG mixing distribution involved in the mixture of elliptical distributions leads to different steps when calculating $\mu_i$. At the same time, there is some confusion in the normalizing constants corresponding to the different generators in Landsman (2016), to which we pay special attention in our calculations.
Remark 3.
We can express $E[(Y-\mathrm{TCE}_q)^n\mid Y>y_q]$ in another way:
\begin{align*}
E\big[(Y-\mathrm{TCE}_q)^n\mid Y>y_q\big]&=\sum_{k=0}^{n}C_n^k(-1)^{n-k}\big(\mathrm{TCE}_q(Y)\big)^{n-k}E\big[Y^k\mid Y>y_q\big]\\
&=\sum_{k=0}^{n}C_n^k(-1)^{n-k}\big(\mathrm{TCE}_q(Y)\big)^{n-k}\int_{\Omega_\theta}\frac{\overline{F}_{Y|\Theta}(y_q)}{1-q}\,\varpi(\theta)\,E_{Y|\Theta}\big[Y^k\mid Y>y_q\big]\,d\theta.
\end{align*}
Corollary 1.
The tail variance of the location-scale mixture of elliptical distribution can be derived by considering the cases $n=1$ and $n=2$. This risk measure takes the form
\[ \mathrm{TV}_q(Y)=\Big(\frac{\beta}{1-q}\delta_1+\frac{c_1\sigma^2}{c_2(1-q)}\delta_2\Big)^2-2\Big(\frac{\beta}{1-q}\delta_1+\frac{c_1\sigma^2}{c_2(1-q)}\delta_2\Big)\mu_1+\mu_2, \]
where
\[ \mu_2=\int_{\Omega_\theta}\varpi(\theta)\Big\{\Lambda_{2,0}+2\Lambda_{2,1}\lambda_{1,q}+\Lambda_{2,2}\big[c_1z_q\lambda_{1,q}+\sigma_Z^2r_{1,q}\big]\Big\}d\theta. \]
Proof. 
\begin{align*}
\mu_2=E\big[(Y-\mu)^2\mid Y>y_q\big]&=\int_{\Omega_\theta}\varpi(\theta)\Big\{\theta^2\beta^2+2\theta^{\frac32}\sigma\beta\lambda_{1,q}+\theta\sigma^2\big[c_1z_q\lambda_{1,q}+\sigma_Z^2r_{1,q}\big]\Big\}d\theta\\
&=\int_{\Omega_\theta}\varpi(\theta)\Big\{\Lambda_{2,0}+2\Lambda_{2,1}\lambda_{1,q}+\Lambda_{2,2}\big[c_1z_q\lambda_{1,q}+\sigma_Z^2r_{1,q}\big]\Big\}d\theta.
\end{align*}
For convenience, we write $\gamma=\frac{\beta}{1-q}\delta_1+\frac{c_1\sigma^2}{c_2(1-q)}\delta_2$, so that $\mathrm{TV}_q=\gamma^2-2\gamma\mu_1+\mu_2$. $\square$
Corollary 2.
The TCS of $Y$ takes the form
\[ \mathrm{TCS}_q(Y)=\frac{\sum_{i=0}^{3}C_3^i(-1)^{3-i}\gamma^{3-i}\mu_i}{\big(\gamma^2-2\gamma\mu_1+\mu_2\big)^{\frac32}}, \]
where
\[ \mu_3=\int_{\Omega_\theta}\varpi(\theta)\Big\{\Lambda_{3,0}+3\Lambda_{3,1}\lambda_{1,q}+3\Lambda_{3,2}\big[z_q\lambda_{1,q}+\sigma_Z^2r_{1,q}(z_q)\big]+\Lambda_{3,3}c_1\big[z_q^2\lambda_{1,q}+2\lambda_{2,q}\big]\Big\}d\theta. \]
Proof. 
Through (21), we can get
\begin{align*}
\mu_3=E\big[(Y-\mu)^3\mid Y>y_q\big]&=\int_{\Omega_\theta}\varpi(\theta)\Big\{\Lambda_{3,0}+3\Lambda_{3,1}\lambda_{1,q}+\sum_{l=2}^{3}C_3^l\Lambda_{3,l}\big[c_1z_q^{l-1}\lambda_{1,q}+(l-1)\sigma_Z^2r_{1,q}(z_q)E_{Z_1}\big(Z^{l-2}\mid Z>z_q\big)\big]\Big\}d\theta\\
&=\int_{\Omega_\theta}\varpi(\theta)\Big\{\Lambda_{3,0}+3\Lambda_{3,1}\lambda_{1,q}+3\Lambda_{3,2}\big[z_q\lambda_{1,q}+\sigma_Z^2r_{1,q}(z_q)\big]+\Lambda_{3,3}\big[c_1z_q^2\lambda_{1,q}+2\sigma_Z^2r_{1,q}(z_q)E_{Z_1}\big(Z\mid Z>z_q\big)\big]\Big\}d\theta,
\end{align*}
and we have
\[ 2\sigma_Z^2r_{1,q}(z_q)E_{Z_1}\big(Z\mid Z>z_q\big)=\frac{2}{1-q}\int_{z_q}^{\infty}c_1z\overline{G}_1\big(\tfrac12z^2\big)dz=\frac{2c_1\overline{G}_2\big(\tfrac12z_q^2\big)}{1-q}=2c_1\lambda_{2,q}, \]
where
\[ \lambda_{2,q}=\frac{\overline{G}_2\big(\tfrac12z_q^2\big)}{1-q}. \]
Thus, we get
\[ \mu_3=\int_{\Omega_\theta}\varpi(\theta)\Big\{\Lambda_{3,0}+3\Lambda_{3,1}\lambda_{1,q}+3\Lambda_{3,2}\big[z_q\lambda_{1,q}+\sigma_Z^2r_{1,q}(z_q)\big]+\Lambda_{3,3}c_1\big[z_q^2\lambda_{1,q}+2\lambda_{2,q}\big]\Big\}d\theta.\ \square \]
Corollary 3.
The TCK of $Y$ takes the form
\[ \mathrm{TCK}_q(Y)=\frac{\sum_{i=0}^{4}C_4^i(-1)^{4-i}\gamma^{4-i}\mu_i}{\big(\gamma^2-2\gamma\mu_1+\mu_2\big)^{2}}-3, \]
where
\[ \mu_4=\int_{\Omega_\theta}\varpi(\theta)\Big\{\Lambda_{4,0}+\big(4\Lambda_{4,1}+6\Lambda_{4,2}c_1z_q+4\Lambda_{4,3}c_1z_q^2+\Lambda_{4,4}c_1z_q^3\big)\lambda_{1,q}+\big(3\Lambda_{4,4}c_1z_q+8\Lambda_{4,3}\big)\lambda_{2,q}+6\Lambda_{4,2}\sigma_Z^2r_{1,q}(z_q)+3\frac{c_1}{c_2}\Lambda_{4,4}\sigma_Z^2r_{2,q}(z_q)\Big\}d\theta. \]
Proof. 
Letting $i=4$ in (21), we get
\begin{align*}
\mu_4=E\big[(Y-\mu)^4\mid Y>y_q\big]&=\int_{\Omega_\theta}\varpi(\theta)\Big\{\Lambda_{4,0}+4\Lambda_{4,1}\lambda_{1,q}+\sum_{l=2}^{4}C_4^l\Lambda_{4,l}\big[c_1z_q^{l-1}\lambda_{1,q}+(l-1)\sigma_Z^2r_{1,q}(z_q)E_{Z_1}\big(Z^{l-2}\mid Z>z_q\big)\big]\Big\}d\theta\\
&=\int_{\Omega_\theta}\varpi(\theta)\Big\{\Lambda_{4,0}+4\Lambda_{4,1}\lambda_{1,q}+6\Lambda_{4,2}\big[c_1z_q\lambda_{1,q}+\sigma_Z^2r_{1,q}(z_q)\big]+C_4^3\Lambda_{4,3}\big[c_1z_q^2\lambda_{1,q}+2c_1\lambda_{2,q}\big]\\
&\qquad+\Lambda_{4,4}\big[c_1z_q^3\lambda_{1,q}+3\sigma_Z^2r_{1,q}(z_q)E_{Z_1}\big(Z^2\mid Z>z_q\big)\big]\Big\}d\theta\\
&=\int_{\Omega_\theta}\varpi(\theta)\Big\{\Lambda_{4,0}+\big(4\Lambda_{4,1}+6\Lambda_{4,2}c_1z_q+4\Lambda_{4,3}c_1z_q^2+\Lambda_{4,4}c_1z_q^3\big)\lambda_{1,q}\\
&\qquad+\big(3\Lambda_{4,4}c_1z_q+8\Lambda_{4,3}\big)\lambda_{2,q}+6\Lambda_{4,2}\sigma_Z^2r_{1,q}(z_q)+3\frac{c_1}{c_2}\Lambda_{4,4}\sigma_Z^2r_{2,q}(z_q)\Big\}d\theta,
\end{align*}
where
\[ r_{2,q}(z_q)=\frac{\overline{F}_{Z_2}(z_q)}{1-q}.\ \square \]
Corollary 4.
The $n$th moment of $Y$ takes the form
\[ E_{Y|\Theta}[Y^n]=(\mu+\theta\beta)^n+\sum_{k=1}^{n}C_n^k\big(\sqrt{\theta}\sigma\big)^k(\mu+\theta\beta)^{n-k}(k-1)\sigma_Z^2E_{Z^*}\big[Z^{k-2}\big]. \]
Proof. 
\begin{align*}
E_{Y|\Theta}[Y^n]&=\lim_{q\to0}E_{Y|\Theta}[Y^n\mid Y>y_q]\\
&=(\mu+\theta\beta)^n+\sum_{k=1}^{n}C_n^k\big(\sqrt{\theta}\sigma\big)^k(\mu+\theta\beta)^{n-k}\lim_{q\to0}\Big[c_1\lambda_{1,q}z_q^{k-1}+(k-1)r_{1,q}(z_q)\sigma_Z^2E_{Z^*}\big[Z^{k-2}\mid Z>z_q\big]\Big]\\
&=(\mu+\theta\beta)^n+\sum_{k=1}^{n}C_n^k\big(\sqrt{\theta}\sigma\big)^k(\mu+\theta\beta)^{n-k}(k-1)\sigma_Z^2E_{Z^*}\big[Z^{k-2}\big].\ \square
\end{align*}
Consider an $n\times1$ random vector $\mathbf{Y}=(Y_1,Y_2,\ldots,Y_n)^{\top}$ with a location-scale mixture of elliptical distribution $\mathbf{Y}\sim LSME_n(\boldsymbol{\mu},\Sigma,\boldsymbol{\beta},\Theta,g_n)$. Then, using Landsman and Valdez [3], the distribution of the portfolio return $R=\mathbf{y}^{\top}\mathbf{Y}$, where $\mathbf{y}\in\mathbb{R}^n$, is as follows:
\[ R=\mathbf{y}^{\top}\mathbf{Y}\sim LSME_1\big(\mu_R=\mathbf{y}^{\top}\boldsymbol{\mu},\ \sigma_R^2=\mathbf{y}^{\top}\Sigma\mathbf{y},\ \beta_R,\ \Theta,\ g_1\big). \]
Using (27) and (19), (22), (23) and (25), we obtain the TCE, TV, TCS and TCK, respectively:
\[ \mathrm{TCE}_q(R)=\mathbf{y}^{\top}\boldsymbol{\mu}+\frac{\beta_R}{1-q}\delta_{1,R}+\frac{c_1\mathbf{y}^{\top}\Sigma\mathbf{y}}{c_2(1-q)}\delta_{2,R}, \]
where
\[ \delta_{1,R}=\int_{\Omega_\theta}\overline{F}_{Y|\Theta}(y_{q,R})\,\theta\varpi(\theta)\,d\theta,\qquad \delta_{2,R}=\int_{\Omega_\theta}f_{Y_1|\Theta}(y_{q,R})\,\theta\varpi(\theta)\,d\theta, \]
\[ y_{q,R}=\frac{\mathrm{VaR}_q(R)-\mu_R}{\sigma_R}. \]
Writing $\gamma_R=\frac{\beta_R}{1-q}\delta_{1,R}+\frac{c_1\mathbf{y}^{\top}\Sigma\mathbf{y}}{c_2(1-q)}\delta_{2,R}$, we have
\[ \mathrm{TV}_q(R)=\gamma_R^2-2\gamma_R\mu_1+\mu_2, \]
\[ \mathrm{TCS}_q(R)=\frac{\sum_{i=0}^{3}C_3^i(-1)^{3-i}\gamma_R^{3-i}\mu_i}{\big(\gamma_R^2-2\gamma_R\mu_1+\mu_2\big)^{\frac32}}, \]
and
\[ \mathrm{TCK}_q(R)=\frac{\sum_{i=0}^{4}C_4^i(-1)^{4-i}\gamma_R^{4-i}\mu_i}{\big(\gamma_R^2-2\gamma_R\mu_1+\mu_2\big)^{2}}-3. \]
Remark 4.
Now we compare the form of the TCM across different distribution families. The TCM for elliptical distributions (see Landsman et al. [10]) and the TCM for generalized skew-elliptical distributions (see Eini et al. [11]) are, respectively,
\[ \mathrm{TCM}_q(X^n)=\sum_{i=0}^{n}(-1)^{n-i}C_n^i\lambda_{1,q}^{n-i}\mu_i, \]
where $X\sim E_1(\mu,\sigma^2,g_1)$, and
\[ \mathrm{TCM}_q(Y^n)=\sum_{i=0}^{n}(-1)^{n-i}C_n^i\big(\Lambda_{1,q}\sigma\big)^{n-i}\mu_i, \]
where $Y\sim SE_1(\mu,\sigma^2,\gamma,H)$. The mixture of elliptical distributions has a similar form; however, the three families have different expressions for $\mu_i$. Because the expectation and TCE of the mixture of elliptical distributions lead to differences in the calculation, the final forms differ.

4. Some Special Cases

In this section, we discuss the risk measures for several mixtures of elliptical distributions.
Example 4.
Let $Y\sim LSMN_1(\mu,\sigma^2,\beta,\Theta)$ be a univariate location-scale mixture of normal random variable, defined as in (14). In this case, we notice that $\overline{G}_1(u)=\overline{G}_2(u)=\cdots=g_1(u)=e^{-u}$ and $c=c_1=(2\pi)^{-\frac12}$, and we have $\sigma_Z^2=1$. Thus, we obtain the TCE for the location-scale mixture of normal distributions:
\[ \mathrm{TCE}_Y(y_q)=\mu+\frac{\beta}{1-q}\delta_1+\frac{\sigma^2}{1-q}\delta_2, \]
where
\[ \delta_1=\int_{\Omega_\theta}\overline{F}_{Y|\Theta}(y_q)\,\theta\varpi(\theta)\,d\theta,\qquad \delta_2=\int_{\Omega_\theta}f_{Y|\Theta}(y_q)\,\theta\varpi(\theta)\,d\theta, \]
and $Y\mid\Theta\sim N_1(\mu+\theta\beta,\theta\sigma^2)$.
Meanwhile, we have
\[ c_1\overline{G}_j\big(\tfrac12u^2\big)=\phi(u),\quad j=1,2,3,\ldots, \]
\[ f_{Z_1}(z)=f_{Z_2}(z),\qquad \lambda_{1,q}=\lambda_{2,q}=\frac{\phi(z_q)}{1-q}, \]
\[ r_{1,q}(z_q)=r_{2,q}(z_q)=\frac{\overline{\Phi}(z_q)}{1-q}. \]
Accordingly, we have
\[ \mu_1=\int_{\Omega_\theta}\varpi(\theta)\Big(\theta\beta+\theta^{\frac12}\sigma\frac{\phi(z_q)}{1-q}\Big)d\theta, \]
\[ \mu_2=\int_{\Omega_\theta}\varpi(\theta)\Big\{\Lambda_{2,0}+2\Lambda_{2,1}\frac{\phi(z_q)}{1-q}+\Lambda_{2,2}\Big[(2\pi)^{-\frac12}z_q\frac{\phi(z_q)}{1-q}+\frac{\overline{\Phi}(z_q)}{1-q}\Big]\Big\}d\theta, \]
\[ \mu_3=\int_{\Omega_\theta}\varpi(\theta)\Big\{\Lambda_{3,0}+3\Lambda_{3,1}\frac{\phi(z_q)}{1-q}+3\Lambda_{3,2}\Big[z_q\frac{\phi(z_q)}{1-q}+\frac{\overline{\Phi}(z_q)}{1-q}\Big]+\Lambda_{3,3}(2\pi)^{-\frac12}\frac{\phi(z_q)}{1-q}\big(z_q^2+2\big)\Big\}d\theta, \]
\[ \mu_4=\int_{\Omega_\theta}\varpi(\theta)\Big\{\Lambda_{4,0}+\big(4\Lambda_{4,1}+6\Lambda_{4,2}(2\pi)^{-\frac12}z_q+4\Lambda_{4,3}(2\pi)^{-\frac12}z_q^2+\Lambda_{4,4}(2\pi)^{-\frac12}z_q^3\big)\frac{\phi(z_q)}{1-q}+\big(3\Lambda_{4,4}(2\pi)^{-\frac12}z_q+8\Lambda_{4,3}\big)\frac{\phi(z_q)}{1-q}+6\Lambda_{4,2}\frac{\overline{\Phi}(z_q)}{1-q}+3\Lambda_{4,4}\frac{\overline{\Phi}(z_q)}{1-q}\Big\}d\theta. \]
Then $\mathrm{TV}_q(Y)$, $\mathrm{TCS}_q(Y)$ and $\mathrm{TCK}_q(Y)$ are obtained by substituting the above formulas into (22), (23) and (25).
Example 5.
Let $Y\sim LSMSt_1(\mu,\sigma^2,\beta,\Theta)$ be a univariate location-scale mixture of Student's $t$ random variable defined as in (11). The variance of $Z$ exists for $m>2$, with $\sigma_Z^2=\frac{m}{m-2}$.
Thus, the cumulative generators of $Y$ are as follows:
\[ g_1(u)=\Big(1+\frac{2u}{m}\Big)^{-\frac{m+1}{2}},\qquad \overline{G}_1(u)=\frac{m}{m-1}\Big(1+\frac{2u}{m}\Big)^{-\frac{m-1}{2}}, \]
\[ \overline{G}_2(u)=\int_u^{\infty}\overline{G}_1(t)\,dt=\int_u^{\infty}\frac{m}{m-1}\Big(1+\frac{2t}{m}\Big)^{-\frac{m-1}{2}}dt=\frac{m}{m-1}\cdot\frac{m}{m-3}\Big(1+\frac{2u}{m}\Big)^{-\frac{m-3}{2}}, \]
and the normalizing constants are
\[ c_1=\frac{\Gamma\big(\frac{m+1}{2}\big)}{\Gamma\big(\frac m2\big)(m\pi)^{\frac12}},\qquad c_2=\frac{m-1}{m^{\frac32}B\big(\frac12,\frac{m-2}{2}\big)}\quad(m>2). \]
Thus,
\[ f_{Z_1}(z)=\frac{m-2}{(m-1)(m\pi)^{\frac12}}\,\frac{\Gamma\big(\frac{m+1}{2}\big)}{\Gamma\big(\frac m2\big)}\Big(1+\frac{z^2}{m}\Big)^{-\frac{m-1}{2}}, \]
\[ f_{Z_2}(z)=\frac{m-2}{m^{\frac12}(m-3)B\big(\frac12,\frac{m-2}{2}\big)}\Big(1+\frac{z^2}{m}\Big)^{-\frac{m-3}{2}}. \]
Accordingly, we have
\[ \lambda_{1,q}=\frac{m}{(m-1)(1-q)}\Big(1+\frac{z_q^2}{m}\Big)^{-\frac{m-1}{2}},\qquad \lambda_{2,q}=\frac{m^2}{(m-1)(m-3)(1-q)}\Big(1+\frac{z_q^2}{m}\Big)^{-\frac{m-3}{2}}, \]
\[ r_{1,q}(z_q)=\frac{\overline{F}_{Z_1}(z_q)}{1-q},\qquad r_{2,q}(z_q)=\frac{\overline{F}_{Z_2}(z_q)}{1-q}. \]
Thus, we obtain the TCE for the location-scale mixture of Student's $t$ distributions:
\[ \mathrm{TCE}_Y(y_q)=\mu+\frac{\beta}{1-q}\delta_1+\frac{m\sigma^2}{(m-2)(1-q)}\delta_2,\quad m>2, \]
where
\[ \delta_1=\int_{\Omega_\theta}\overline{F}_{Y|\Theta}(y_q)\,\theta\varpi(\theta)\,d\theta,\qquad \delta_2=\int_{\Omega_\theta}f_{Y_1|\Theta}(y_q)\,\theta\varpi(\theta)\,d\theta, \]
and $Y_1\mid\Theta\sim E_1(\mu+\theta\beta,\theta\sigma^2,\overline{G}_1)$.
Meanwhile, we have
\[ \mu_1=\int_{\Omega_\theta}\varpi(\theta)\big(\theta\beta+\theta^{\frac12}\sigma\lambda_{1,q}\big)d\theta=\int_{\Omega_\theta}\varpi(\theta)\bigg(\theta\beta+\theta^{\frac12}\sigma\frac{m}{(m-1)(1-q)}\Big(1+\frac{z_q^2}{m}\Big)^{-\frac{m-1}{2}}\bigg)d\theta, \]
\[ \mu_2=\int_{\Omega_\theta}\varpi(\theta)\Big\{\Lambda_{2,0}+2\Lambda_{2,1}\lambda_{1,q}+\Lambda_{2,2}\big[c_1z_q\lambda_{1,q}+\sigma_Z^2r_{1,q}\big]\Big\}d\theta=\int_{\Omega_\theta}\varpi(\theta)\bigg\{\Lambda_{2,0}+2\Lambda_{2,1}\frac{m}{(m-1)(1-q)}\Big(1+\frac{z_q^2}{m}\Big)^{-\frac{m-1}{2}}+\Lambda_{2,2}\bigg[\frac{\Gamma\big(\frac{m+1}{2}\big)}{\Gamma\big(\frac m2\big)(m\pi)^{\frac12}}z_q\lambda_{1,q}+\frac{m}{m-2}\,\frac{\overline{F}_{Z_1}(z_q)}{1-q}\bigg]\bigg\}d\theta. \]
The measures $\mu_i$ are calculated in the spirit of Example 4. $\mathrm{TV}_q(Y)$, $\mathrm{TCS}_q(Y)$ and $\mathrm{TCK}_q(Y)$ are obtained by substituting the above formulas into (22), (23) and (25).
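The normalizing constants $c_1$ and $c_2$ reconstructed above can be verified numerically: $c_1g_1(\frac12z^2)$ is exactly the Student's $t$ density with $m$ degrees of freedom, and $c_2\overline{G}_1(\frac12z^2)$ should also integrate to one over $\mathbb{R}$. A quick check at an illustrative value of $m$ (our choice, not the paper's):

```python
from math import gamma, pi, sqrt

import numpy as np
from scipy.integrate import quad

m = 6.0  # degrees of freedom (illustrative choice)
c1 = gamma((m + 1) / 2) / (gamma(m / 2) * sqrt(m * pi))
# B(1/2, (m-2)/2) = Gamma(1/2) Gamma((m-2)/2) / Gamma((m-1)/2)
c2 = (m - 1) / (m ** 1.5 * (gamma(0.5) * gamma((m - 2) / 2) / gamma((m - 1) / 2)))

# c1 normalizes the generator g1(u) = (1 + 2u/m)^{-(m+1)/2}
total, _ = quad(lambda z: c1 * (1 + z * z / m) ** (-(m + 1) / 2),
                -np.inf, np.inf)
# c2 normalizes the cumulative generator G1(u) = m/(m-1) (1 + 2u/m)^{-(m-1)/2}
total2, _ = quad(lambda z: c2 * (m / (m - 1)) * (1 + z * z / m) ** (-(m - 1) / 2),
                 -np.inf, np.inf)
```

Both integrals come out equal to one, which is consistent with $c_2$ being the normalizing constant of $Y_1$ as stated in Remark 2.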
Example 6.
Let $Y\sim LSMLo_1(\mu,\sigma^2,\beta,\Theta)$ be a univariate location-scale mixture of logistic random variable defined as in (15). We find that $\sigma_Z^2=\frac{\pi^2}{3}$. The cumulative generators of $Y$ are as follows:
\[ g_1(u)=\frac{e^{-u}}{(1+e^{-u})^2},\qquad \overline{G}_1(u)=\frac{e^{-u}}{1+e^{-u}}, \]
\[ \overline{G}_2(u)=\int_u^{\infty}\overline{G}_1(t)\,dt=\int_u^{\infty}\frac{e^{-t}}{1+e^{-t}}\,dt=\ln\big(1+e^{-u}\big), \]
and the normalizing constants are
\[ c_1=\frac{1}{\sqrt{2\pi}\,\Psi_2^*\big(-1,\frac12,1\big)},\qquad c_2=\frac{1}{\sqrt{2\pi}\,\Psi_1^*\big(-1,\frac12,1\big)}. \]
Meanwhile, we have
\[ \lambda_{1,q}=\frac{\exp\big(-\frac12z_q^2\big)}{(1-q)\big[1+\exp\big(-\frac12z_q^2\big)\big]},\qquad \lambda_{2,q}=\frac{\ln\big[1+\exp\big(-\frac12z_q^2\big)\big]}{1-q}, \]
\[ r_{1,q}=\frac{\overline{F}_{Z_1}(z_q)}{1-q},\qquad r_{2,q}=\frac{\overline{F}_{Z_2}(z_q)}{1-q}. \]
Accordingly, we have
\[ f_{Z_1}(z)=\frac{3}{\sqrt{2}\,\pi^{\frac52}\,\Psi_2^*\big(-1,\frac12,1\big)}\cdot\frac{\exp\big(-\frac12z^2\big)}{1+\exp\big(-\frac12z^2\big)}, \]
\[ f_{Z_2}(z)=\frac{3}{\sqrt{2}\,\pi^{\frac52}\,\Psi_1^*\big(-1,\frac12,1\big)}\,\ln\big[1+\exp\big(-\frac12z^2\big)\big]. \]
Thus, we obtain the TCE for the location-scale mixture of logistic distributions:
\[ \mathrm{TCE}_Y(y_q)=\mu+\frac{\beta}{1-q}\delta_1+\frac{\Psi_1^*\big(-1,\frac12,1\big)\sigma^2}{\Psi_2^*\big(-1,\frac12,1\big)(1-q)}\delta_2, \]
where
\[ \delta_1=\int_{\Omega_\theta}\overline{F}_{Y|\Theta}(y_q)\,\theta\varpi(\theta)\,d\theta,\qquad \delta_2=\int_{\Omega_\theta}f_{Y_1|\Theta}(y_q)\,\theta\varpi(\theta)\,d\theta, \]
and $Y_1\mid\Theta\sim E_1(\mu+\theta\beta,\theta\sigma^2,\overline{G}_1)$.
Meanwhile,
\[ \mu_1=\int_{\Omega_\theta}\varpi(\theta)\big(\theta\beta+\theta^{\frac12}\sigma\lambda_{1,q}\big)d\theta=\int_{\Omega_\theta}\varpi(\theta)\bigg(\theta\beta+\theta^{\frac12}\sigma\frac{\exp\big(-\frac12z_q^2\big)}{(1-q)\big[1+\exp\big(-\frac12z_q^2\big)\big]}\bigg)d\theta, \]
\[ \mu_2=\int_{\Omega_\theta}\varpi(\theta)\Big\{\Lambda_{2,0}+2\Lambda_{2,1}\lambda_{1,q}+\Lambda_{2,2}\big[c_1z_q\lambda_{1,q}+\sigma_Z^2r_{1,q}\big]\Big\}d\theta=\int_{\Omega_\theta}\varpi(\theta)\bigg\{\Lambda_{2,0}+2\Lambda_{2,1}\frac{\exp\big(-\frac12z_q^2\big)}{(1-q)\big[1+\exp\big(-\frac12z_q^2\big)\big]}+\Lambda_{2,2}\bigg[\frac{z_q}{\sqrt{2\pi}\,\Psi_2^*\big(-1,\frac12,1\big)}\cdot\frac{\exp\big(-\frac12z_q^2\big)}{(1-q)\big[1+\exp\big(-\frac12z_q^2\big)\big]}+\frac{\pi^2}{3}\cdot\frac{\overline{F}_{Z_1}(z_q)}{1-q}\bigg]\bigg\}d\theta. \]
The measures $\mu_i$ are calculated in the spirit of Example 4. $\mathrm{TV}_q(Y)$, $\mathrm{TCS}_q(Y)$ and $\mathrm{TCK}_q(Y)$ are obtained by substituting the above formulas into (22), (23) and (25).
Example 7.
Let $Y\sim LSMLa_1(\mu,\sigma^2,\beta,\Theta)$ be a univariate location-scale mixture of Laplace random variable, defined as in (15). We find that $\sigma_Z^2=2$, and the cumulative generators of $Y$ are as follows:
\[ g_1(u)=\exp\big(-\sqrt{2u}\big),\qquad \overline{G}_1(u)=\big(1+\sqrt{2u}\big)\exp\big(-\sqrt{2u}\big), \]
\[ \overline{G}_2(u)=\big(3+3\sqrt{2u}+2u\big)\exp\big(-\sqrt{2u}\big), \]
and the normalizing constants are
\[ c_1=\frac12,\qquad c_2=\frac14. \]
Accordingly, we have
\[ f_{Z_1}(z)=\frac{c_1}{\sigma_Z^2}\overline{G}_1\Big(\frac12z^2\Big)=\frac{\big(1+|z|\big)e^{-|z|}}{4}, \]
\[ f_{Z_2}(z)=\frac{c_2}{\sigma_Z^2}\overline{G}_2\Big(\frac12z^2\Big)=\frac{\big(3+3|z|+z^2\big)e^{-|z|}}{8}, \]
\[ \lambda_{1,q}=\frac{\big(1+|z_q|\big)e^{-|z_q|}}{1-q},\qquad \lambda_{2,q}=\frac{\big(3+3|z_q|+z_q^2\big)e^{-|z_q|}}{1-q}, \]
\[ r_{1,q}(z_q)=\frac{\overline{F}_{Z_1}(z_q)}{1-q},\qquad r_{2,q}(z_q)=\frac{\overline{F}_{Z_2}(z_q)}{1-q}. \]
Thus, we obtain the TCE for the location-scale mixture of Laplace distributions:
\[ \mathrm{TCE}_Y(y_q)=\mu+\frac{\beta}{1-q}\delta_1+\frac{2\sigma^2}{1-q}\delta_2, \]
where
\[ \delta_1=\int_{\Omega_\theta}\overline{F}_{Y|\Theta}(y_q)\,\theta\varpi(\theta)\,d\theta,\qquad \delta_2=\int_{\Omega_\theta}f_{Y_1|\Theta}(y_q)\,\theta\varpi(\theta)\,d\theta, \]
and $Y_1\mid\Theta\sim E_1(\mu+\theta\beta,\theta\sigma^2,\overline{G}_1)$.
Note that
\[ \mu_1=\int_{\Omega_\theta}\varpi(\theta)\big(\theta\beta+\theta^{\frac12}\sigma\lambda_{1,q}\big)d\theta=\int_{\Omega_\theta}\varpi(\theta)\bigg(\theta\beta+\theta^{\frac12}\sigma\frac{\big(1+|z_q|\big)e^{-|z_q|}}{1-q}\bigg)d\theta, \]
\[ \mu_2=\int_{\Omega_\theta}\varpi(\theta)\Big\{\Lambda_{2,0}+2\Lambda_{2,1}\lambda_{1,q}+\Lambda_{2,2}\big[c_1z_q\lambda_{1,q}+\sigma_Z^2r_{1,q}\big]\Big\}d\theta=\int_{\Omega_\theta}\varpi(\theta)\bigg\{\Lambda_{2,0}+2\Lambda_{2,1}\frac{\big(1+|z_q|\big)e^{-|z_q|}}{1-q}+\Lambda_{2,2}\bigg[\frac{z_q}{2}\cdot\frac{\big(1+|z_q|\big)e^{-|z_q|}}{1-q}+2\,\frac{\overline{F}_{Z_1}(z_q)}{1-q}\bigg]\bigg\}d\theta. \]
The measures $\mu_i$ are calculated in the spirit of Example 4. $\mathrm{TV}_q(Y)$, $\mathrm{TCS}_q(Y)$ and $\mathrm{TCK}_q(Y)$ are obtained by substituting the above formulas into (22), (23) and (25).

5. Numerical Analysis

In this section, two numerical examples are presented. We first consider the TCE, TCS and TCK for the mixture of normal distributions and the mixture of Student's $t$ distributions.
Example 8.
Let $X\sim N(0,1)$ be a standard normal random variable and $\Theta\sim GIG(\lambda,\chi,\psi)$. We denote $Y\sim GH(\lambda,\chi,\psi,\mu,\sigma,\beta)$, and its pdf is
\[ f_Y(x)=\frac{\psi^{\lambda}\big(\psi+\frac{\beta^2}{\sigma^2}\big)^{\frac12-\lambda}}{\big(\sqrt{\chi\psi}\big)^{\lambda}\sqrt{2\pi}\,\sigma K_{\lambda}\big(\sqrt{\chi\psi}\big)}\cdot\frac{K_{\lambda-\frac12}\bigg(\sqrt{\Big(\chi+\frac{(x-\mu)^2}{\sigma^2}\Big)\Big(\psi+\frac{\beta^2}{\sigma^2}\Big)}\bigg)\exp\Big(\frac{\beta(x-\mu)}{\sigma^2}\Big)}{\bigg(\sqrt{\Big(\chi+\frac{(x-\mu)^2}{\sigma^2}\Big)\Big(\psi+\frac{\beta^2}{\sigma^2}\Big)}\bigg)^{\frac12-\lambda}}. \]
Using Example 4, we obtain
\[ \mathrm{TV}_q(Y)=\Big(\frac{\beta}{1-q}\delta_1+\frac{\sigma^2}{1-q}\delta_2\Big)^2-2\Big(\frac{\beta}{1-q}\delta_1+\frac{\sigma^2}{1-q}\delta_2\Big)\int_{\Omega_\theta}\varpi(\theta)\Big(\theta\beta+\theta^{\frac12}\sigma\frac{\phi(z_q)}{1-q}\Big)d\theta+\int_{\Omega_\theta}\varpi(\theta)\Big\{\Lambda_{2,0}+2\Lambda_{2,1}\frac{\phi(z_q)}{1-q}+\Lambda_{2,2}\Big[(2\pi)^{-\frac12}z_q\frac{\phi(z_q)}{1-q}+\frac{\overline{\Phi}(z_q)}{1-q}\Big]\Big\}d\theta. \]
$\mathrm{TCS}_q(Y)$ and $\mathrm{TCK}_q(Y)$ are obtained by substituting the above formulas into (22), (23) and (25). Next, we show the densities of the GH distribution under different parameters in Figure 1.
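Since closed-form evaluation of the $\mu_i$ integrals requires numerical quadrature anyway, the GH tail measures can also be approximated by simulation from the GIG mixing representation. A sketch at the parameter values used in the comparison below ($\beta$ is assumed to be $0.011$, and the quantile grid follows Table 1):

```python
import numpy as np
from scipy.stats import geninvgauss

# GH sample via its normal mean-variance mixture representation
rng = np.random.default_rng(42)
n = 500_000
theta = geninvgauss.rvs(1.7, np.sqrt(2.1 * 6.7),
                        scale=np.sqrt(2.1 / 6.7), size=n, random_state=rng)
y = 0.0 + theta * 0.011 + np.sqrt(theta) * 1.5 * rng.standard_normal(n)

rows = []
for q in (0.75, 0.8, 0.9, 0.95, 0.98):
    tail = y[y > np.quantile(y, q)]
    tce = tail.mean()
    d = tail - tce                 # centre at TCE_q, as in the TCM definition
    tv = np.mean(d ** 2)
    rows.append((q, tce, tv,
                 np.mean(d ** 3) / tv ** 1.5,
                 np.mean(d ** 4) / tv ** 2 - 3))
# TCE increases with q, mirroring the monotonicity reported in Table 1
```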
By assigning different values to each parameter, we can see its influence; in practical fitting, suitable parameters can then be chosen to improve the fit.
Example 9.
(Ignatieva and Landsman (2020)). Now, consider a random variable $X$ which has a Student's $t$ distribution, and $\Theta\sim GIG(\lambda,\chi,\psi)$. Then $Y$ has the following pdf:
\[ f_Y(x)=\frac{c_p\,\chi^{-\lambda}\big(\sqrt{\chi\psi}\big)^{\lambda}}{2\sigma K_{\lambda}\big(\sqrt{\chi\psi}\big)}\int_0^{\infty}\bigg(1+\frac1m\Big(\frac{x-\mu-\gamma\theta}{\sqrt{\theta}\sigma}\Big)^2\bigg)^{-p}\theta^{\lambda-\frac32}\exp\Big(-\frac12\Big(\frac{\chi}{\theta}+\psi\theta\Big)\Big)d\theta, \]
where $c_p=\frac{\Gamma(p)}{\sqrt{\pi m}\,\Gamma\big(p-\frac12\big)}$.
This case is the same as Example 5, so $\mathrm{TV}_q(Y)$, $\mathrm{TCS}_q(Y)$ and $\mathrm{TCK}_q(Y)$ follow from Example 5. Next, we show the densities of the Student's t–GIG distribution under different parameters in Figure 2.
Next, we consider the TCE, TV, TCS and TCK for the univariate GH distribution and the mixture of Student's $t$ distribution. Let the parameters of the GH distribution be $\mu=0$, $\sigma=1.5$, $\lambda=1.7$, $\chi=2.1$, $\psi=6.7$, $\beta=11\times10^{-3}$, and the parameters of the Student's t–GIG distribution be $\mu=0$, $\sigma=1.5$, $\lambda=1.7$, $\psi=6.7$, $\chi=2.1$, $\gamma=11\times10^{-3}$. At the same time, we select the skew-normal distribution and the skew Student's t-normal distribution for comparison, with parameters $\mu=0$, $\sigma=1.5$, $\gamma=1.7$. We chose $q=0.75,0.8,0.9,0.95,0.98$, and the results are given in Table 1.
From the calculations, we found that the TCK of the Student's t–GIG mixture distribution and of the skew Student's t-normal distribution tends to infinity. By comparing the TCK of the GH and skew-normal distributions, we found that the GH is a heavy-tailed distribution while the skew-normal is light-tailed. In general, heavy-tailed distributions arise mainly in financial data, such as security returns. Comparing the Student's t–GIG with the skew Student's t-normal distribution, we found that the Student's t–GIG distribution is right-skewed and the skew Student's t-normal distribution is left-skewed. It is easy to see that the values of the risk measures increase as the $q$-quantile rises, which is not unexpected. Another observation is that the measures of the Student's t–GIG distribution are larger than those of the GH distribution.

6. Illustrative Example

We discuss the TCE, TV, TCS, and TCK of three stocks (Amazon, Google, and Apple) over the period January 2016 to January 2019, using the parameter estimates reported in Ignatieva and Landsman [4]. To fit GH distributions to the univariate data, we take the parameter estimates from the univariate fit of the GH family of distributions to the losses arising from the Amazon, Google, and Apple stocks. The fixed parameter values are μ₁ = 1.1230, μ₂ = 1.0584, μ₃ = 1.1280, σ₁ = 1.1614, σ₂ = 0.9954, σ₃ = 0.9760. The results are shown in Table 2.
As we can see, the TCE, TV, TCS, and TCK of Amazon, Google, and Apple all increase with the quantile level q, which helps investors understand extreme losses by showing the risk measures at different quantile levels.

7. Concluding Remarks

In this paper, we have considered the univariate location-scale mixture of elliptical distributions, which has received much attention in finance and insurance applications, since this family includes not only the location-scale mixtures of normal (LSMN), Student's t (LSMSt), logistic (LSMLo), and Laplace (LSMLa) distributions, but also the generalized hyperbolic distribution (GHD) and the slash distribution. We have given the general form of the TCM and explicit expressions for the TV, TCS, and TCK.

Author Contributions

Formal analysis, X.H.; Funding acquisition, C.Y.; Methodology, X.H. and C.Y.; Project administration, C.Y.; Validation, X.H.; Writing—original draft, X.H.; Writing—review & editing, X.H. and C.Y. All authors have read and agreed to the published version of the manuscript.

Funding

The research was supported by the National Natural Science Foundation of China (Nos. 12071251 and 11571198). The authors thank the anonymous referees and the Editor for their helpful comments and suggestions, which have led to the improvement of this paper.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Furman, E.; Landsman, Z. Tail variance premium with applications for elliptical portfolio of risks. ASTIN Bull. 2006, 36, 433–462.
  2. Landsman, Z. On the tail mean-variance optimal portfolio selection. Insur. Math. Econ. 2010, 46, 547–553.
  3. Landsman, Z.; Valdez, E.A. Tail conditional expectations for elliptical distributions. N. Am. Actuar. J. 2003, 7, 55–71.
  4. Ignatieva, K.; Landsman, Z. Conditional tail risk measures for the skewed generalised hyperbolic family. Insur. Math. Econ. 2019, 86, 98–114.
  5. Deng, X.; Yao, J. On the property of multivariate generalized hyperbolic distribution and the Stein-type inequality. Commun. Stat. Theory Methods 2018, 47, 5346–5356.
  6. Li, Z.; Luo, J.; Yao, J. Convex bound approximations for sums of random variables under multivariate log-generalized hyperbolic distribution and asymptotic equivalences. J. Comput. Appl. Math. 2021, 391, 113459.
  7. Ignatieva, K.; Landsman, Z. A new class of generalised hyper-elliptical distributions and their applications in computing conditional tail risk measures. Insur. Math. Econ. 2020, 101, 437–465.
  8. Zuo, B.S.; Yin, C.C. Tail conditional risk measures for location-scale mixture of elliptical distributions. J. Stat. Comput. Simul. 2021, 91, 3653–3677.
  9. Kim, J.H.T. Conditional tail moments of the exponential family and its related distributions. N. Am. Actuar. J. 2010, 14, 198–216.
  10. Landsman, Z.; Makov, U.; Shushi, T. Tail conditional moments for elliptical and log-elliptical distributions. Insur. Math. Econ. 2016, 71, 179–188.
  11. Eini, E.J.; Khaloozadeh, H. Tail conditional moment for generalized skew-elliptical distributions. J. Appl. Stat. 2021, 48, 2285–2305.
  12. Fang, K.T.; Kotz, S.; Ng, K.W. Symmetric Multivariate and Related Distributions; Chapman and Hall: New York, NY, USA, 1990.
Figure 1. GH distribution with parameters specified as above.
Figure 2. Student's t–GIG mixture distribution with m = 4 and other parameters specified as above.
Table 1. Comparison of risk measures at different q-quantile values between different distributions.

q-Quantile   x_q      TCE      TV           TCS      TCK
GH distribution
0.75         0.9006   1.7999   0.6431       1.5854   3.5278
0.8          1.1346   1.9962   0.6101       1.6346   3.7742
0.9          1.7792   2.5642   0.5410       1.7404   4.3403
0.95         2.3514   3.0915   0.4992       1.8053   4.7143
0.98         3.0481   3.7514   0.4640       1.8593   5.0427
Skew-normal distribution
0.75         1.7187   2.4685   0.4122       1.3481   −2.1265
0.8          1.9186   2.6315   0.3815       1.3836   −2.2297
0.9          2.4667   3.0939   0.3108       1.4679   −2.4515
0.95         2.9398   3.5067   0.2626       1.5292   −2.5876
0.98         3.4895   3.9978   0.2179       1.5901   −2.7008
Student's t–GIG distribution
0.75         0.9939   2.3330   2.5638       5.1605   ∞
0.8          1.2761   2.6336   2.7512       5.3499   ∞
0.9          2.1389   3.6097   3.5365       5.8106   ∞
0.95         3.0408   4.6866   4.6877       6.1435   ∞
0.98         4.3702   6.3318   6.9985       6.4558   ∞
Skew Student's t-normal distribution
0.75         1.6932   4.3558   6.3324       1.1997   ∞
0.8          1.8918   4.9971   8.9551       0.5104   ∞
0.9          2.4035   7.8632   39.5208      −0.5493  ∞
0.95         2.7661   13.1504  252.5795     −0.5746  ∞
0.98         3.0506   28.5219  3.5449 × 10³ −0.4057  ∞
Table 2. Comparison of univariate TCE, TV, TCS, and TCK computed for Amazon, Google, and Apple at different quantile levels.

q-Quantile   Amazon   Google   Apple
TCE
0.8          0.6396   0.6673   0.5643
0.9          1.1881   1.1095   0.9236
0.95         1.3240   1.2017   1.1297
TV
0.8          0.4017   0.3506   0.3815
0.9          0.4459   0.4768   0.4108
0.95         0.4561   0.4977   0.4626
TCS
0.8          1.1881   1.1174   1.1565
0.9          1.2474   1.2576   1.3365
0.95         1.3240   1.3029   1.3407
TCK
0.8          3.2895   3.7660   3.3459
0.9          3.6603   4.0037   3.8228
0.95         3.9037   4.2550   4.0485
Han, X.; Yin, C. Tail Conditional Moments for Location-Scale Mixture of Elliptical Distributions. Mathematics 2022, 10, 606. https://doi.org/10.3390/math10040606