Multivariate Tail Moments for Log-Elliptical Dependence Structures as Measures of Risks

Abstract: The class of log-elliptical distributions is widely used and studied in risk measurement and actuarial science, because risks are often skewed and positive when they describe pure risks, i.e., risks in which there is no possibility of profit. In practice, risk managers confront a system of mutually dependent risks rather than a single risk, so it is important to measure risks while capturing their dependence structure. In this short paper, we compute two multivariate risk measures, the multivariate tail conditional expectation and the multivariate tail covariance, for the family of log-elliptical distributions; these measures capture the dependence structure of the risks while focusing on the tails of their distributions, i.e., on extreme loss events. We then study our results, examine special cases, and consider optimal portfolio selection using such measures. Finally, we show how the given multivariate tail moments can also be computed for log-skew-elliptical models using approaches similar to those given for the log-elliptical case.


Introduction
The family of log-elliptical (LE) distributions is a family of continuous distributions that includes the log-normal (LN) distribution as a special case. This family is extensively used in quantitative finance to model financial returns and losses (Chriss [1], Valdez and Dhaene [2], Hamada and Valdez [3], Valdez et al. [4], Klebaner and Landsman [5], Kortschak and Hashorva [6], Landsman-Makov-Shushi [7]). Let $X \sim LE_n(\mu, \Sigma, g_n)$ be an $n \times 1$ random vector with the multivariate LE distribution. Then, its probability density function (pdf) takes the form (see, for instance, Valdez et al. [4])
$$f_X(x) = \frac{1}{\sqrt{|\Sigma|}} \prod_{j=1}^{n} x_j^{-1}\, g_n\left(\frac{1}{2}(\ln x - \mu)^T \Sigma^{-1} (\ln x - \mu)\right), \quad x > 0,$$
where $\ln x = (\ln x_1, \ldots, \ln x_n)^T$. Here $\mu$ is an $n \times 1$ vector of locations, $\Sigma$ is an $n \times n$ scale matrix, and $g_n(u)$, $u \geq 0$, is called the density generator, which satisfies the condition $0 < \int_0^\infty u^{n/2-1} g_n(u)\,du < \infty$.
The LN pdf is obtained by taking the density generator $g_n(u) = (2\pi)^{-n/2} e^{-u}$. Another important example is the log-Laplace distribution, which is obtained by taking $g_n(u) = c_n e^{-\sqrt{2u}}$. The LE family of distributions can be derived from the transformation $X = \left(e^{Y_1}, e^{Y_2}, \ldots, e^{Y_n}\right)^T$ of an elliptical random vector $Y \sim E_n(\mu, \Sigma, g_n)$ whose pdf takes the form
$$f_Y(y) = \frac{1}{\sqrt{|\Sigma|}}\, g_n\left(\frac{1}{2}(y-\mu)^T \Sigma^{-1}(y-\mu)\right), \quad y \in \mathbb{R}^n,$$
with the characteristic function
$$\varphi_Y(t) = e^{i t^T \mu}\, \psi\left(\frac{1}{2} t^T \Sigma t\right)$$
for some function $\psi(t): [0, \infty) \to \mathbb{R}$, called the characteristic generator. For the normal distribution, the characteristic generator is the exponential function $\psi(t) = e^{-t}$; for the Laplace distribution, $\psi(t) = 1/(1+t)$; and for the generalized stable laws distributions, $\psi(t) = e^{-r t^{s/2}}$, $r > 0$, $0 < s \leq 2$. Any $n \times 1$ elliptical random vector $Y$ has a unique density generator $g_n(u)$ and thus a unique characteristic generator $\psi(u)$; this can be shown from the existence and uniqueness theorem. If the expectation of $X_j = e^{Y_j}$ exists, the characteristic generator can be extended to the negative part and the expectation has the explicit form
$$E(X_j) = e^{\mu_j}\, \psi\left(-\frac{1}{2}\sigma_j^2\right),$$
where $\mu_j$ and $\sigma_j^2$ are the location and scale parameters of $Y_j$, respectively (see Klebaner and Landsman [5], formula (2.13)). Furthermore, if the covariance between two log-elliptical random variables $X_j, X_k$, $j, k = 1, 2, \ldots, n$, exists, then the covariance $\mathrm{Cov}(X_j, X_k)$ follows the form (see Valdez et al. [4])
$$\mathrm{Cov}(X_j, X_k) = e^{\mu_j + \mu_k}\left[\psi\left(-\frac{1}{2}\left(\sigma_j^2 + 2\sigma_{jk} + \sigma_k^2\right)\right) - \psi\left(-\frac{1}{2}\sigma_j^2\right)\psi\left(-\frac{1}{2}\sigma_k^2\right)\right],$$
where $\mu_k$ and $\sigma_k^2$ are the location and scale parameters of $Y_k$, respectively, and
$$\Sigma_{jk} = \begin{pmatrix} \sigma_j^2 & \sigma_{jk} \\ \sigma_{jk} & \sigma_k^2 \end{pmatrix}$$
is the scale matrix of the bivariate elliptical random vector $(Y_j, Y_k)^T$.
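As a quick sanity check, the two formulas above can be verified by Monte Carlo for the log-normal member, where $\psi(u) = e^{-u}$. The parameters below are illustrative choices, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Bivariate normal parameters for Y = (Y_1, Y_2); X_j = exp(Y_j) is log-normal.
mu = np.array([0.1, -0.2])
Sigma = np.array([[0.30, 0.12],
                  [0.12, 0.25]])

# For the normal family the characteristic generator is psi(u) = exp(-u).
psi = lambda u: np.exp(-u)

# Monte Carlo sample of X = exp(Y).
Y = rng.multivariate_normal(mu, Sigma, size=1_000_000)
X = np.exp(Y)

# E[X_j] = exp(mu_j) * psi(-sigma_j^2 / 2)
mean_formula = np.exp(mu) * psi(-np.diag(Sigma) / 2)

# Cov(X_j, X_k) = exp(mu_j + mu_k) * [psi(-(s_jj + 2 s_jk + s_kk)/2)
#                                     - psi(-s_jj/2) * psi(-s_kk/2)]
s11, s12, s22 = Sigma[0, 0], Sigma[0, 1], Sigma[1, 1]
cov_formula = np.exp(mu[0] + mu[1]) * (
    psi(-(s11 + 2 * s12 + s22) / 2) - psi(-s11 / 2) * psi(-s22 / 2)
)

print(mean_formula, X.mean(axis=0))               # formula vs. sample mean
print(cov_formula, np.cov(X[:, 0], X[:, 1])[0, 1])
```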
The following are some special members of the class of log-elliptical distributions.

1.
Multivariate log-normal distribution: In the case that the density generator is $g_n(u) = (2\pi)^{-n/2} e^{-u}$, the pdf of the multivariate log-normal distribution is
$$f_X(x) = \frac{(2\pi)^{-n/2}}{\sqrt{|\Sigma|}} \prod_{j=1}^{n} x_j^{-1} \exp\left(-\frac{1}{2}(\ln x - \mu)^T \Sigma^{-1}(\ln x - \mu)\right), \quad x > 0,$$
and we write $X \sim LN_n(\mu, \Sigma)$.

2.
Multivariate log-Student-t distribution: The pdf of the log-Student-t distribution is given by
$$f_X(x) = \frac{\Gamma\left(\frac{n+m}{2}\right)}{\Gamma\left(\frac{m}{2}\right)(m\pi)^{n/2}\sqrt{|\Sigma|}} \prod_{j=1}^{n} x_j^{-1} \left(1 + \frac{(\ln x - \mu)^T \Sigma^{-1}(\ln x - \mu)}{m}\right)^{-\frac{n+m}{2}}, \quad x > 0,$$
with $m > 0$ degrees of freedom, and we write $X \sim LSt_n(\mu, \Sigma, m)$.

3.
Multivariate log-logistic distribution: A random vector $X$ is log-logistic distributed if its pdf takes the form
$$f_X(x) = \frac{c_n}{\sqrt{|\Sigma|}} \prod_{j=1}^{n} x_j^{-1}\, \frac{\exp\left(-\frac{1}{2}(\ln x - \mu)^T \Sigma^{-1}(\ln x - \mu)\right)}{\left[1 + \exp\left(-\frac{1}{2}(\ln x - \mu)^T \Sigma^{-1}(\ln x - \mu)\right)\right]^2}, \quad x > 0,$$
and we write $X \sim LLo_n(\mu, \Sigma)$.

4.
Multivariate log-Laplace distribution: We say that $X$ is a multivariate log-Laplace random vector if its pdf is
$$f_X(x) = \frac{1}{\sqrt{|\Sigma|}} \prod_{j=1}^{n} x_j^{-1}\, g_n\left(\frac{1}{2}(\ln x - \mu)^T \Sigma^{-1}(\ln x - \mu)\right), \quad x > 0,$$
with the density generator $g_n(u) = \frac{2}{(2\pi)^{n/2}}\, u^{(2-n)/4} K_{n/2-1}(2\sqrt{u})$, where $K_\nu$ is the modified Bessel function of the third kind.

A well-known property of the LE family of distributions is that the moments of this family can be computed explicitly using the elliptical characteristic generator $\psi$.
The moments of LE distributions can be computed explicitly, by the following celebrated lemma.

Lemma 1. Let $X \sim LE_n(\mu, \Sigma, g_n)$. If the moment of $X_1^{\alpha_1} \cdots X_n^{\alpha_n}$ exists, then the characteristic generator $\psi$ can be extended to the negative part and the mentioned moment takes the form
$$E\left(X_1^{\alpha_1} \cdots X_n^{\alpha_n}\right) = e^{\alpha^T \mu}\, \psi\left(-\frac{1}{2}\alpha^T \Sigma \alpha\right),$$
where $\alpha \in \mathbb{R}^n$, $\alpha_1 + \cdots + \alpha_n = k$, and $\psi$ is the characteristic generator of the associated elliptical random vector $Y \sim E_n(\mu, \Sigma, g_n)$.
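A minimal numerical check of the lemma for the log-normal member ($\psi(u) = e^{-u}$), with illustrative parameters and a mixed moment that has both positive and negative exponents:

```python
import numpy as np

rng = np.random.default_rng(1)

mu = np.array([0.0, 0.2, -0.1])
A = np.array([[0.5, 0.0, 0.0],
              [0.1, 0.4, 0.0],
              [0.0, 0.1, 0.3]])
Sigma = A @ A.T                      # a valid 3x3 positive-definite scale matrix
alpha = np.array([1.0, 2.0, -1.0])   # mixed moment E[X1 * X2^2 / X3]

# Lemma: E[prod X_j^{alpha_j}] = exp(alpha'mu) * psi(-alpha' Sigma alpha / 2),
# with psi(u) = exp(-u) for the log-normal member.
moment_formula = np.exp(alpha @ mu) * np.exp(alpha @ Sigma @ alpha / 2)

# Monte Carlo estimate of the same mixed moment.
Y = rng.multivariate_normal(mu, Sigma, size=1_000_000)
moment_mc = np.exp(Y @ alpha).mean()
print(moment_formula, moment_mc)
```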
In this paper, we focus on the following projection of a random vector of risks $X$:
$$X \mid X > \mathrm{VaR}_q(X),$$
where $\mathrm{VaR}_q(X) = (\mathrm{VaR}_{q_1}(X_1), \mathrm{VaR}_{q_2}(X_2), \ldots, \mathrm{VaR}_{q_n}(X_n))^T$ is an $n \times 1$ vector, and $\mathrm{VaR}_{q_j}(X_j) = x_{q_j}$ is the value at risk of $X_j$ at the $q_j$-th quantile, $q_j \in [0, 1)$, $j = 1, \ldots, n$, $q = (q_1, \ldots, q_n)$. For simplicity, we also write $\mathrm{VaR}_q(X) = x_q$, keeping in mind that this is a vector of $q_j$-th quantiles.
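The componentwise VaR vector is straightforward to compute. The sketch below, with hypothetical log-normal marginals, compares the analytic quantiles against empirical ones (using SciPy's normal quantile function):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

mu = np.array([0.0, 0.1, -0.2])       # illustrative location parameters
sig = np.array([0.5, 0.3, 0.4])       # marginal scale parameters sigma_j
q = np.array([0.90, 0.70, 0.95])      # a different level q_j per risk

# For a log-normal marginal, VaR_{q_j}(X_j) = exp(mu_j + sigma_j * Phi^{-1}(q_j)).
var_analytic = np.exp(mu + sig * norm.ppf(q))

# Empirical counterpart: componentwise quantiles of a sample of X.
X = np.exp(mu + sig * rng.standard_normal((500_000, 3)))
var_empirical = np.array([np.quantile(X[:, j], q[j]) for j in range(3)])
print(var_analytic, var_empirical)
```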
We consider the conditional characteristic function $\varphi_q(t) := E\left(e^{i t^T X} \mid X > x_q\right)$ and the tail moments generated by it. The first measure, $E\left(X \mid X > x_q\right)$, was introduced in Landsman-Makov-Shushi [8] and is called the multivariate tail conditional expectation (MTCE) measure; the second measure, centralized around the MTCE, was introduced in Landsman-Makov-Shushi [9] in order to capture the dispersion of the random vector of risks when focusing on extreme losses. The moments and tail moments of random variables have been studied and well investigated in the literature, and research in this area is still active, with applications in different fields, from data analysis to actuarial science (Loperfido [10], Loperfido-Mazur-Podgórski [11], Ogasawara [12]).

Multivariate Tail Conditional Expectation for Log-Elliptical Models
The multivariate tail conditional expectation (MTCE) measure is a risk measure that naturally extends the tail conditional expectation (TCE) from a univariate risk to a multivariate system of mutually dependent risks. This multivariate risk measure was introduced in Landsman-Makov-Shushi [8]. For additional literature about the MTCE measure we refer to Cai-Wang-Mao [13], Hashorva [14], Mousavi et al. [14], Frei [15], Ling [16], Shushi and Yao [17], among others. Let $X = (X_1, X_2, \ldots, X_n)^T$ be an $n \times 1$ vector of mutually dependent random risks with cumulative distribution function $F_X(x)$. For two $n$-variate vectors $x$ and $y$, we write $x > y$ when $x_j > y_j$ for all $j = 1, \ldots, n$. The MTCE measure is then defined by
$$\mathrm{MTCE}_q(X) = E\left(X \mid X > \mathrm{VaR}_q(X)\right).$$
This definition is essentially more realistic than the one introduced in Landsman-Makov-Shushi [8], where all the quantile levels $q_j$ were the same: now each risk or loss may exceed its own VaR, which can be large, small, or even equal to 0, implying total flexibility as to the degree of riskiness of any of the underlying risks. In the univariate case the MTCE reduces to the tail conditional expectation measure, $\mathrm{TCE}_q(X) = E\left(X \mid X > \mathrm{VaR}_q(X)\right)$.

Proposition 1. The $\mathrm{MTCE}_q(X)$ measure satisfies the following expression:
$$\mathrm{MTCE}_q(X) = \frac{\int_{x_q}^{\infty} x\, f_X(x)\, dx}{\overline{F}_X(x_q)},$$
where $\overline{F}_X(x_q) = P(X > x_q)$ is the multivariate tail function of $X$.

Proof. Please see Landsman-Makov-Shushi [9].
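The conditional expectation defining the MTCE can be estimated directly by Monte Carlo: keep only the scenarios in which every component exceeds its own VaR. A minimal sketch with illustrative bivariate log-normal parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

# Bivariate log-normal risks; each component has its own quantile level q_j.
mu = np.array([0.0, 0.1])
Sigma = np.array([[0.25, 0.10],
                  [0.10, 0.16]])
q = np.array([0.90, 0.80])

X = np.exp(rng.multivariate_normal(mu, Sigma, size=2_000_000))

# Componentwise VaR vector (empirical q_j-quantiles).
var_q = np.array([np.quantile(X[:, j], q[j]) for j in range(2)])

# MTCE_q(X) = E[X | X > VaR_q(X)], where ">" holds componentwise.
tail = (X > var_q).all(axis=1)
mtce = X[tail].mean(axis=0)
print("P(X > VaR_q(X)) =", tail.mean())
print("MTCE_q(X)       =", mtce)
```

Each MTCE component necessarily exceeds the corresponding VaR, since the conditioning event forces every coordinate above its own threshold.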
We introduce a new random vector $Z_i^*$ associated with the element $Z_i$ of the vector $Z$, with the pdf
$$f_{Z_i^*}(z) = \frac{e^{\Sigma_i^{1/2} z}\, g_n\left(\frac{1}{2} z^T z\right)}{\psi\left(-\frac{1}{2}\sigma_{ii}\right)}, \quad z \in \mathbb{R}^n,\ i = 1, \ldots, n,$$
where $\Sigma_i^{1/2}$ is the $i$-th row of $\Sigma^{1/2}$.

Theorem 1. Let $X \sim LE_n(\mu, \Sigma, g_n)$ be an $n \times 1$ random vector of risks with characteristic generator $\psi$, which can be extended to the negative part. Then, the MTCE measure is given by
$$\mathrm{MTCE}_q(X) = \frac{e^{\mu} \circ \overline{\psi} \circ \delta}{\overline{F}_X(x_q)}.$$
Here $e^{\mu} = (e^{\mu_1}, \ldots, e^{\mu_n})^T$, $\overline{\psi} = \left(\psi\left(-\frac{1}{2}\sigma_{11}\right), \ldots, \psi\left(-\frac{1}{2}\sigma_{nn}\right)\right)^T$, $\delta = (\delta_1, \ldots, \delta_n)^T$ with $\delta_i = \overline{F}_{Z_i^*}(z_q)$, where $\overline{F}_{Z_i^*}$ is the tail function of the random vector $Z_i^*$ associated with $Z_i$, $z_q = \Sigma^{-1/2}(\ln x_q - \mu)$, and the symbol $\circ$ is the Hadamard product.
Proof. From the definition of MTCE, and using the transformation $x = e^{\mu + \Sigma^{1/2} z}$, where $e^{\mu + \Sigma^{1/2} z} = \left(e^{\mu_1 + \Sigma_1^{1/2} z}, \ldots, e^{\mu_n + \Sigma_n^{1/2} z}\right)^T$, we obtain
$$E\left(X_i\, \mathbb{1}\{X > x_q\}\right) = e^{\mu_i} \int_{\{x > x_q\}} e^{\Sigma_i^{1/2} z}\, g_n\left(\frac{1}{2} z^T z\right) dz.$$
Now, notice that the integral $\int_{\mathbb{R}^n} e^{\Sigma_i^{1/2} z}\, g_n\left(\frac{1}{2} z^T z\right) dz = \psi\left(-\frac{1}{2}\sigma_{ii}\right)$ is, in fact, the moment generating function of the associated elliptical distribution, allowing us to introduce the random vector $Z_i^*$ with the pdf given above. Collecting the terms for $i = 1, \ldots, n$ and dividing by $\overline{F}_X(x_q)$ completes the proof, where $z_q = \Sigma^{-1/2}(\ln x_q - \mu)$. □

Example 1. Log-normal distribution. Let $X \sim LN_n(\mu, \Sigma)$ be a log-normal random vector of risks. Then, the density generator and the characteristic generator are given by $g_n(u) = (2\pi)^{-n/2} e^{-u}$ and $\psi(u) = e^{-u}$, respectively. In this case $Z_i^* \sim N_n\left(\left(\Sigma_i^{1/2}\right)^T, I_n\right)$, where $I_n$ is the $n \times n$ identity matrix, and
$$\mathrm{MTCE}_q(X) = \frac{e^{\mu + \frac{1}{2}\mathrm{diag}(\Sigma)} \circ \delta}{\overline{F}_X(x_q)},$$
where $\delta_i = \overline{\Phi}_n\left(z_q - \left(\Sigma_i^{1/2}\right)^T\right)$ and $\overline{\Phi}_n(x)$ is the tail function of the $n$-variate standard normal random vector. This result conforms well with Valdez et al. [4], Equation (39).
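In the univariate log-normal case, Theorem 1 reduces to the classical TCE formula for a log-normal risk. The sketch below (illustrative parameters) checks this reduction against a Monte Carlo estimate:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
mu, sig, q = 0.0, 0.5, 0.95

# Theorem 1 in the univariate log-normal case (Example 1):
# TCE_q(X) = e^{mu} psi(-sig^2/2) * tail_{Z*}(z_q) / (1 - q),
# with psi(u) = e^{-u}, Z* ~ N(sig, 1) and z_q = Phi^{-1}(q).
z_q = norm.ppf(q)
tce_formula = np.exp(mu + sig**2 / 2) * norm.sf(z_q - sig) / (1 - q)

# Monte Carlo check on X = exp(mu + sig * Z).
X = np.exp(mu + sig * rng.standard_normal(2_000_000))
tce_mc = X[X > np.exp(mu + sig * z_q)].mean()
print(tce_formula, tce_mc)
```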

Example 2.
Log-Laplace distribution. We say that $X \sim LL_n(\mu, \Sigma)$ is a multivariate log-Laplace random vector if its characteristic generator is $\psi(u) = 1/(1+u)$. The density generator equals $g_n(u) = \frac{2}{(2\pi)^{n/2}}\, u^{(2-n)/4} K_{n/2-1}(2\sqrt{u})$, where $K_\nu$ is the modified Bessel function of the third kind. Then the density of the multivariate log-Laplace distribution equals
$$f_X(x) = \frac{2}{(2\pi)^{n/2}\sqrt{|\Sigma|}} \prod_{j=1}^{n} x_j^{-1} \left(\frac{Q(x)}{2}\right)^{\frac{2-n}{4}} K_{n/2-1}\left(\sqrt{2\, Q(x)}\right), \quad Q(x) = (\ln x - \mu)^T \Sigma^{-1}(\ln x - \mu),$$
which for $n = 1$ coincides with the univariate log-Laplace distribution (Eltoft et al. [18], Equation (9)). The MTCE of the multivariate log-Laplace distribution can then be calculated as
$$\mathrm{MTCE}_q(X) = \frac{e^{\mu} \circ \overline{\psi} \circ \delta}{\overline{F}_X(x_q)}, \quad \overline{\psi} = \left(\frac{1}{1 - \sigma_{11}/2}, \ldots, \frac{1}{1 - \sigma_{nn}/2}\right)^T, \quad \sigma_{ii} < 2.$$
Here the random vector $Z_i^*$ has the pdf $f_{Z_i^*}(z) = \left(1 - \frac{1}{2}\sigma_{ii}\right) e^{\Sigma_i^{1/2} z}\, g_n\left(\frac{1}{2} z^T z\right)$, $i = 1, \ldots, n$.
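For the log-Laplace example, the elliptical Laplace vector can be simulated through a normal scale-mixture representation consistent with $\psi(u) = 1/(1+u)$: $Y = \mu + \sqrt{W}\, A Z$ with $W \sim \mathrm{Exp}(1)$, $Z$ standard normal, and $\Sigma = A A^T$. This representation is an assumption used for illustration, not a construction from the paper. The sketch checks the mean formula $E(X_j) = e^{\mu_j}/(1 - \sigma_{jj}/2)$:

```python
import numpy as np

rng = np.random.default_rng(5)

mu = np.array([0.0, 0.1])
A = np.array([[0.4, 0.0],
              [0.1, 0.3]])
Sigma = A @ A.T

# Scale mixture of normals: an Exp(1) mixing variable reproduces the Laplace
# characteristic generator psi(u) = 1/(1 + u).
n = 2_000_000
W = rng.exponential(1.0, size=n)
Z = rng.standard_normal((n, 2))
Y = mu + np.sqrt(W)[:, None] * (Z @ A.T)
X = np.exp(Y)                                   # log-Laplace risks

# E[X_j] = e^{mu_j} psi(-sigma_jj/2) = e^{mu_j} / (1 - sigma_jj/2),
# which requires sigma_jj < 2.
mean_formula = np.exp(mu) / (1 - np.diag(Sigma) / 2)
print(mean_formula, X.mean(axis=0))
```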

Numerical Illustration
Let us now examine a random vector of five risks $X = (X_1, X_2, X_3, X_4, X_5)^T$ with the multivariate log-normal distribution, and compute the MTCE vector at the quantile levels $q = (0.9, 0.7, 0.8, 0.95, 0.91)$. As expected, the value of each component of the MTCE vector increases as the level $q_j$ of the corresponding risk increases, but the increase is not proportional. We also computed the MTCE for $q = (0.9, 0.7, 0.98, 0.95, 0.91)$, i.e., changing only the level $q_3$ from 0.8 to 0.98, and observed that the whole system of components of the MTCE vector stretches out, mostly in the direction of increase.
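The experiment described above can be reproduced in outline as follows; the location and scale parameters are illustrative stand-ins, since the paper's exact values are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(6)

# Five dependent log-normal risks (illustrative parameters).
mu = np.zeros(5)
A = 0.3 * np.eye(5) + 0.1          # nonsingular factor => positive-definite Sigma
Sigma = A @ A.T

X = np.exp(rng.multivariate_normal(mu, Sigma, size=2_000_000))

def mtce(X, q):
    """Monte Carlo MTCE: mean of X over the joint tail event X > VaR_q(X)."""
    var_q = np.array([np.quantile(X[:, j], q[j]) for j in range(X.shape[1])])
    tail = (X > var_q).all(axis=1)
    return X[tail].mean(axis=0)

q1 = np.array([0.90, 0.70, 0.80, 0.95, 0.91])
q2 = np.array([0.90, 0.70, 0.98, 0.95, 0.91])   # only q_3 raised, 0.80 -> 0.98
m1, m2 = mtce(X, q1), mtce(X, q2)
print(m1)
print(m2)   # with positive dependence, the whole vector tends to move up
```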

Multivariate Tail Covariance for Log-Elliptical Models
The multivariate tail covariance (MTCov) measure provides a variation of the MTCE measure for the dispersion of the random vector of risks. This measure was introduced in Landsman-Makov-Shushi [9].
While we have established that $\mathrm{MTCE}_q(X) = \arg\inf_{c \in \mathbb{R}^n} E\left((X - c)^T(X - c) \mid X > \mathrm{VaR}_q(X)\right)$, the proposed multivariate tail covariance (MTCov) measure is given by
$$\mathrm{MTCov}_q(X) = E\left(\left(X - \mathrm{MTCE}_q(X)\right)\left(X - \mathrm{MTCE}_q(X)\right)^T \,\middle|\, X > \mathrm{VaR}_q(X)\right).$$

Theorem 2. Let $X \sim LE_n(\mu, \Sigma, g_n)$ be an $n \times 1$ random vector of risks with characteristic generator $\psi$, which can be extended to the negative part. Then, the $(i, j)$-th component of the MTCov measure is given by
$$\mathrm{MTCov}_q(X)_{ij} = \frac{e^{\mu_i + \mu_j}\, \psi\left(-\frac{1}{2}\left(\sigma_{ii} + 2\sigma_{ij} + \sigma_{jj}\right)\right) \overline{F}_{Z_{ij}^{**}}(z_q)}{\overline{F}_X(x_q)} - \mathrm{MTCE}_q(X)_i\, \mathrm{MTCE}_q(X)_j.$$
Here $\overline{F}_{Z_{ij}^{**}}(z)$ is the tail function of the vector $Z_{ij}^{**}$ associated with the vector $Z$ and having density
$$f_{Z_{ij}^{**}}(z) = \frac{e^{\left(\Sigma_i^{1/2} + \Sigma_j^{1/2}\right) z}\, g_n\left(\frac{1}{2} z^T z\right)}{\psi\left(-\frac{1}{2}\left(\sigma_{ii} + 2\sigma_{ij} + \sigma_{jj}\right)\right)}, \quad z \in \mathbb{R}^n.$$

Proof. From the definition of MTCov, we have $\mathrm{MTCov}_q(X) = E\left(X X^T \mid X > x_q\right) - \mathrm{MTCE}_q(X)\, \mathrm{MTCE}_q(X)^T$, and using the transformation $x = e^{\mu + \Sigma^{1/2} z}$,
$$E\left(X_i X_j\, \mathbb{1}\{X > x_q\}\right) = e^{\mu_i + \mu_j} \int_{\{x > x_q\}} e^{\left(\Sigma_i^{1/2} + \Sigma_j^{1/2}\right) z}\, g_n\left(\frac{1}{2} z^T z\right) dz.$$
The proof of the Theorem then follows along the same lines as Theorem 1. □
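The MTCov can likewise be estimated by Monte Carlo as the covariance matrix of the risks restricted to the joint tail event; a minimal sketch with illustrative bivariate log-normal parameters:

```python
import numpy as np

rng = np.random.default_rng(7)

mu = np.array([0.0, 0.1])
Sigma = np.array([[0.25, 0.10],
                  [0.10, 0.16]])
q = np.array([0.90, 0.80])

X = np.exp(rng.multivariate_normal(mu, Sigma, size=2_000_000))
var_q = np.array([np.quantile(X[:, j], q[j]) for j in range(2)])
tail = (X > var_q).all(axis=1)

# MTCov_q(X) = E[(X - MTCE)(X - MTCE)^T | X > VaR_q(X)]:
# the covariance matrix of X restricted to the joint tail event.
Xt = X[tail]
mtce = Xt.mean(axis=0)
mtcov = (Xt - mtce).T @ (Xt - mtce) / Xt.shape[0]

print(mtcov)
print("positive semidefinite:", (np.linalg.eigvalsh(mtcov) >= 0).all())
```

By construction the estimate is a symmetric positive-semidefinite matrix, as a covariance matrix must be.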

Extensions to the Class of Log-Elliptical Models
We note that the MTCE and MTCov measures can naturally be extended to the class of log-skew-elliptical distributions, whose generating elliptical pdf is multiplied by a skewing function $\pi$. Here $\pi: \mathbb{R}^n \to [0, 1]$ is such that $\pi(-y) = 1 - \pi(y)$ for all $y \in \mathbb{R}^n$. In particular, after some calculations one finds that the formulas remain the same, but with new random vectors $X^*$, $X_{ij}^{**}$ whose pdfs are those of the log-elliptical case multiplied by the associated skewness term $2\pi(\cdot)$.

Optimal Portfolio Selection with Log-Elliptical Distributions
In risk management, the optimal portfolio selection problem is one of the most profound and well-studied areas, in both the theoretical and the practical aspects of portfolio decision making (Castellano and Cerqueti [19], Li and Hoi [20], Shen et al. [21], and Fletcher [22]).
Modern portfolio theory (MPT), developed by Markowitz in 1952, provided the foundations of optimal portfolio theory based on the moments of the portfolio risk (see Markowitz, Elton, and Gruber [23], and Francis and Kim [24]). In MPT, the classical mean-variance (MV) functional is given by
$$\mathrm{MV}_\lambda(L) = E(L) + \lambda\, \mathrm{Var}(L),$$
where $L = \pi^T X$ is the portfolio loss, $\pi$ and $X$ are $n \times 1$ vectors of weights and losses (equal to minus the portfolio returns), respectively, and $\mathbf{1}^T \pi = 1$, where $\mathbf{1}$ is a vector of $n$ ones. Here $E(L) = \pi^T \mu$, where $\mu = (\mu_1, \mu_2, \ldots, \mu_n)^T$ is the vector of expected losses, and $\mathrm{Var}(L) = \pi^T \Sigma \pi$, where $\Sigma$ is the $n \times n$ covariance matrix of $X$, the vector of portfolio losses. The parameter $\lambda$ is the risk aversion parameter, which can be interpreted as the parameter of a trade-off between the expected loss $E(L)$ and the variance $\mathrm{Var}(L)$, or the standard deviation $\sqrt{\mathrm{Var}(L)}$, of the portfolio. Suppose that we have a system of $n$ loss risks such that $X \sim LE_n(\mu, \Sigma, g_n)$. Risks with LE distributions describe pure risks, i.e., risks that can only bring losses without the possibility of profit. We then aim to find the optimal weights that minimize such a portfolio of risks. Since the classical MV model only focuses on the mean and variance of the risk, it does not capture the skewness appearing in the LE distribution. To avoid this problem, we suggest taking the MTCE and MTCov measures instead of the expectation and covariance matrix of the portfolio risk. Thus, we focus on the tails of the risks, leading to the projection of the portfolio risk $L \to L \mid X > \mathrm{VaR}_q(X)$, so that $E\left(L \mid X > \mathrm{VaR}_q(X)\right) = \pi^T \mathrm{MTCE}_q(X)$ and $\mathrm{Var}\left(L \mid X > \mathrm{VaR}_q(X)\right) = \pi^T \mathrm{MTCov}_q(X)\, \pi$.
In that case, instead of the classical MV functional, we have the measure
$$g(\pi) = \pi^T \mathrm{MTCE}_q(X) + \lambda\, \pi^T \mathrm{MTCov}_q(X)\, \pi,$$
which both captures the skewness of the distribution and focuses on extreme loss events, much like the optimal portfolio problem with the value-at-risk $\mathrm{VaR}_q(L)$ and the expected shortfall $\mathrm{ES}_q(L)$. Our goal is to minimize the measure $g(\pi)$ of the losses $X_1, \ldots, X_n$ subject to $\mathbf{1}^T \pi = 1$.
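Minimizing $g(\pi)$ under the budget constraint $\mathbf{1}^T \pi = 1$ has a closed-form solution via the Lagrange first-order condition. The sketch below uses illustrative stand-ins for $\mathrm{MTCE}_q(X)$ and $\mathrm{MTCov}_q(X)$ (in practice these would come from the tail estimates of the previous sections):

```python
import numpy as np

def optimal_weights(m, C, lam):
    """Minimize g(pi) = pi'm + lam * pi'C pi  subject to  1'pi = 1.
    First-order condition: m + 2*lam*C*pi = nu*1 for a Lagrange multiplier nu,
    so pi = C^{-1}(nu*1 - m) / (2*lam), with nu fixed by the budget constraint."""
    n = len(m)
    ones = np.ones(n)
    Cinv_1 = np.linalg.solve(C, ones)
    Cinv_m = np.linalg.solve(C, m)
    nu = (2 * lam + ones @ Cinv_m) / (ones @ Cinv_1)
    return (nu * Cinv_1 - Cinv_m) / (2 * lam)

# Illustrative inputs standing in for MTCE_q(X) and MTCov_q(X).
m = np.array([1.8, 2.1, 1.5])                 # tail expectations of the losses
C = np.array([[0.40, 0.05, 0.02],
              [0.05, 0.55, 0.03],
              [0.02, 0.03, 0.30]])            # tail covariance matrix
lam = 2.0

pi = optimal_weights(m, C, lam)
g = lambda p: p @ m + lam * p @ C @ p
print(pi, pi.sum(), g(pi))
```

Since $g$ is a convex quadratic in $\pi$, the stationary point of the Lagrangian is the global constrained minimizer; any feasible perturbation of $\pi$ can only increase $g$.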
To conclude this section, we point out that the optimality results listed in Table 3 of [25] can be reproduced here and extended to the context of $\mathrm{MTCE}_q(X)$ and $\mathrm{MTCov}_q(X)$.

Conclusions
In this paper we derived the MTCE and MTCov measures for the class of log-elliptical distributions. The importance of the results stems from the fact that, in risk management, risks are often skewed and non-negative, while their logarithms can still preserve symmetry. We found analytic forms for the MTCE and MTCov multivariate functionals and also used these measures in formulating and solving the optimal portfolio selection problem.