*J. Risk Financial Manag.*
**2019**,
*12*(3),
107;
https://doi.org/10.3390/jrfm12030107

Article

CVaR Regression Based on the Relation between CVaR and Mixed-Quantile Quadrangles

^{1} V.M. Glushkov Institute of Cybernetics, 40, pr. Akademika Glushkova, 03187 Kyiv, Ukraine

^{2} Applied Mathematics & Statistics, Stony Brook University, B-148 Math Tower, Stony Brook, NY 11794, USA

^{*} Author to whom correspondence should be addressed.

Received: 16 May 2019 / Accepted: 20 June 2019 / Published: 26 June 2019

## Abstract

A popular risk measure, conditional value-at-risk (CVaR), is called expected shortfall (ES) in financial applications. The research presented involved developing algorithms for the implementation of linear regression for estimating CVaR as a function of some factors. Such regression is called CVaR (superquantile) regression. The main statement of this paper is: CVaR linear regression can be reduced to minimizing the Rockafellar error function with linear programming. The theoretical basis for the analysis is established with the quadrangle theory of risk functions. We derived relationships between elements of CVaR quadrangle and mixed-quantile quadrangle for discrete distributions with equally probable atoms. The deviation in the CVaR quadrangle is an integral. We present two equivalent variants of discretization of this integral, which resulted in two sets of parameters for the mixed-quantile quadrangle. For the first set of parameters, the minimization of error from the CVaR quadrangle is equivalent to the minimization of the Rockafellar error from the mixed-quantile quadrangle. Alternatively, a two-stage procedure based on the decomposition theorem can be used for CVaR linear regression with both sets of parameters. This procedure is valid because the deviation in the mixed-quantile quadrangle (called mixed CVaR deviation) coincides with the deviation in the CVaR quadrangle for both sets of parameters. We illustrated theoretical results with a case study demonstrating the numerical efficiency of the suggested approach. The case study codes, data, and results are posted on the website. The case study was done with the Portfolio Safeguard (PSG) optimization package, which has precoded risk, deviation, and error functions for the considered quadrangles.

Keywords:

quantile; VaR; quadrangle; CVaR; conditional value-at-risk; expected shortfall; ES; superquantile; deviation; risk; error; regret; minimization; CVaR estimation; regression; linear regression; linear programming; portfolio safeguard; PSG

## 1. Introduction

We start the introduction with a quick outline of the main result of this paper. The conditional value-at-risk (CVaR) is a popular risk measure. It is called expected shortfall (ES) in financial applications and it is included in financial regulations. This paper provides algorithms for the estimation of CVaR with linear regression as a function of factors. This task is of critical importance in practical applications involving low probability events.

By definition, CVaR is an integral of the value-at-risk (VaR) in the tail of a distribution. VaR can be estimated with the quantile regression by minimizing the Koenker–Bassett error function. This paper shows that CVaR can be estimated by minimizing a mixture of the Koenker–Bassett errors with an additional constraint. This mixture is called the Rockafellar error and it has been earlier used for CVaR estimation without a rigorous mathematical justification. One more equivalent variant of CVaR regression can be done by minimizing a mixture of CVaR deviations for finding all coefficients, except the intercept. In this case, the intercept is calculated using an analytical expression, which is the CVaR of the optimal residual without an intercept. The new mathematical result links quantile and CVaR regressions and shows that convex and linear programming methods can be straightforwardly used for CVaR estimation. Mathematical justification of the results involves a risk quadrangle concept combining regret, error, risk, deviation, and statistic notions.

Quantiles evaluating different parts of a distribution of a random value are quite popular in various applications. In particular, quantiles are used to estimate the tail of a distribution (e.g., 90%, 95%, and 99% quantiles). This paper is motivated by finance applications, where a quantile is called VaR. The risk measure VaR is included in finance regulations for the estimation of market risk. VaR has several attractive properties, such as the simplicity of calculation, stability of estimation, and availability of quantile regression for the estimation of VaR as a function of explanatory factors. Quantile regression (see Koenker and Bassett (1978), Koenker (2005)) is an important factor supporting the popularity of VaR. For instance, a quantile regression was used by Adrian and Brunnermeier (2016) to estimate an institution’s contribution to systemic risk.

However, VaR also has some undesirable properties:

- Lack of convexity: portfolio diversification may increase VaR.
- VaR is not sensitive to outcomes exceeding VaR, which allows for stretching of the distribution without increasing the risk measured by VaR.
- VaR has poor mathematical properties, such as discontinuity with respect to (w.r.t.) portfolio positions for discrete distributions based on historical data.

Shortcomings of VaR led financial regulators to use an alternative measure of risk, which is called conditional value-at-risk (CVaR) in this paper. This risk measure was introduced in Rockafellar and Uryasev (2000) and further studied in Rockafellar and Uryasev (2002) and many other papers. CVaR for continuous distributions equals the conditional expectation of losses exceeding VaR. An important mathematical fact is that CVaR is a coherent risk measure (see Acerbi and Tasche (2002), Rockafellar and Uryasev (2002)). Ziegel (2014) showed that CVaR is elicitable in a weak sense. Fissler and Ziegel (2015) proved that (VaR, CVaR) is jointly elicitable, meaning elic(CVaR) ≤ 2, and more generally, that spectral risk measures have a low elicitation complexity. These results clarify the regression procedure of Rockafellar et al. (2014); their algorithm implicitly tracks the quantiles suggested by elicitation complexity.

Rockafellar and Uryasev (2000, 2002) have shown that CVaR of a convex function of variables is also a convex function. Due to this property, CVaR optimization problems can be reduced to convex and linear optimization problems.

This paper is based on the risk quadrangle theory of Rockafellar and Uryasev (2013), which defines quadrangles (i.e., groups) of stochastic functionals. Every quadrangle contains risk, deviation, error, and regret (negative utility). These elements of the quadrangle are linked by the statistic function.

The relation of quantile regression and CVaR optimization was explained using a quantile quadrangle (see Rockafellar and Uryasev (2013)). It was shown that the Koenker–Bassett error function and CVaR belong to the same quantile quadrangle. By minimizing the Koenker–Bassett error function with respect to one parameter, we obtain the CVaR deviation (which is the CVaR for the centered random value). The optimal value of the parameter, which is called the statistic, equals VaR. Therefore, the linear regression with the Koenker–Bassett error estimates VaR as a function of factors. The fact that the statistic equals VaR is also used for building the optimization approach for CVaR (see Rockafellar and Uryasev (2000, 2002)).

Another important contribution that takes advantage of quadrangle theory is the regression decomposition theorem proved in Rockafellar et al. (2008). With this decomposition theorem, the regression problem is decomposed in two steps: (1) minimization of deviation from the corresponding quadrangle, and (2) calculation of the intercept by using statistic from this quadrangle. For instance, by applying the decomposition theorem to the quantile quadrangle, we can do quantile regression by minimizing CVaR deviation for finding all regression coefficients, except the intercept. Then, the intercept is calculated by using VaR statistic.

CVaR can be approximated using the weighted average of VaRs with different confidence levels, which is called the mixed VaR method. Rockafellar and Uryasev (2013) demonstrated that mixed VaR is a statistic in the mixed-quantile quadrangle. The error function, corresponding to this quadrangle (called the Rockafellar error) can be minimized for the estimation of the mixed VaR with linear regression. The Rockafellar error is a solution of a minimization problem with one linear constraint. Linear regression for estimating mixed VaR can be done by minimizing the Rockafellar error with convex and linear programming (Appendix A contains these formulations). Alternatively, this regression can be done in two steps with the decomposition theorem. The deviation in the mixed-quantile quadrangle is the mixed CVaR deviation, therefore all regression coefficients, except the intercept, can be found by minimizing this deviation. Further, the intercept can be found by using statistic, which is the mixed VaR.

Rockafellar et al. (2014) developed the CVaR quadrangle with the statistic equal to CVaR. Risk envelopes and identifiers for this quadrangle were calculated in Rockafellar and Royset (2018). This CVaR quadrangle is a theoretical basis for constructing the regression for estimating CVaR. Rockafellar et al. (2014) called the linear regression for the estimation of CVaR the superquantile (CVaR) regression. Superquantile is an equivalent term for CVaR. Here we use the term “CVaR regression”. CVaR regression plays a major role in various engineering areas, especially in financial applications. For instance, Huang and Uryasev (2018) used CVaR regression for the estimation of risk contributions of financial institutions, and Beraldi et al. (2019) used CVaR for solving portfolio optimization problems with transaction costs.

This paper considers only discrete random values with a finite number of equally probable atoms. This special case is considered because it is needed for the implementation of the linear regression for the CVaR estimation. We have explained with an example how parameters of the optimization problems are calculated.

The equal probabilities property was used for calculating parameters of optimization problems. It is possible to calculate parameters with non-equal probabilities of atoms, but this is beyond the scope of the paper, which is focused on the linear regression.

We suggested two sets (Sets 1 and 2) of parameters for the mixed-quantile quadrangle. Set 1 corresponds to the two-step implementation of the CVaR regression in Rockafellar et al. (2014), and Set 2 is a new set of parameters. We proved that with Set 1, the statistic, risk, and deviation of the mixed-quantile and CVaR quadrangles coincide. Therefore, CVaR regression can be done by minimizing the Rockafellar error with convex and linear programming. For Set 2, the mixed-quantile and CVaR quadrangles have the same risk and deviation. Also, the statistic of this mixed-quantile quadrangle (which may not be unique) includes the statistic of the CVaR quadrangle. Therefore, minimizing the Rockafellar error correctly calculates all regression coefficients, but may provide an incorrect intercept. This is not a big concern because we know that the intercept is equal to the CVaR of the optimal residual without an intercept.

Also, we demonstrated that the CVaR regression can be done in two steps with the decomposition theorem by using parameters from Sets 1 and 2 in the mixed-quantile deviation. A similar two-step procedure was used for CVaR regression in Rockafellar et al. (2014). Here we justify this two-step procedure through the equivalence of deviations in CVaR and mixed-quantile quadrangles with parameters from Sets 1 and 2.

This paper is organized as follows. Section 2 provides general results about quadrangles; in particular, we consider quantile, mixed-quantile, and CVaR quadrangles. Section 3 and Section 4 introduce and investigate the parameters from Sets 1 and 2, respectively. Section 5 provides optimization problem statements based on the CVaR and mixed-quantile quadrangles and describes the linear regression for CVaR estimation. Section 6 presents a case study applying CVaR regression to the financial style classification problem. The case study is posted on the web with codes, data, and solutions. Appendix A provides convex and linear programming problems for minimization of the Rockafellar error; Appendix B provides Portfolio Safeguard (PSG) codes implementing regression optimization problems.

## 2. Quantile, Mixed-Quantile, and CVaR Quadrangles

Rockafellar and Uryasev (2013) developed a new paradigm called the risk quadrangle, which linked risk management, reliability, statistics, and stochastic optimization theories. The risk quadrangle methodology united risk functions for a random value $X$ in groups (quadrangles) consisting of five elements:

- Risk $\mathcal{R}\left(X\right)$, which provides a numerical surrogate for the overall hazard in $X$.
- Deviation $\mathcal{D}\left(X\right)$, which measures the “nonconstancy” in $X$ as its uncertainty.
- Error $\mathcal{E}\left(X\right)$, which measures the “nonzeroness” in $X$.
- Regret $\mathcal{V}\left(X\right)$, which measures the “regret” in facing the mix of outcomes of $X$.
- Statistic $\mathcal{S}\left(X\right)$ associated with $X$ through $\mathcal{E}$ and $\mathcal{V}$.

These elements of a risk quadrangle are related as follows, where $E\left(X\right)$ denotes the mean of $X$, and the statistic $\mathcal{S}\left(X\right)$ can be a set if the minimum is achieved at multiple points:

$$\mathcal{V}\left(X\right)=\mathcal{E}\left(X\right)+E\left(X\right)$$

$$\mathcal{R}\left(X\right)=\mathcal{D}\left(X\right)+E\left(X\right)$$

$$\mathcal{R}\left(X\right)=\underset{C}{\mathrm{min}}\left\{C+\mathcal{V}\left(X-C\right)\right\}$$

$$\mathcal{D}\left(X\right)=\underset{C}{\mathrm{min}}\left\{\mathcal{E}\left(X-C\right)\right\}$$

$$\underset{C}{\mathrm{argmin}}\left\{C+\mathcal{V}\left(X-C\right)\right\}=\mathcal{S}\left(X\right)=\mathrm{arg}\underset{C}{\mathrm{min}}\left\{\mathcal{E}\left(X-C\right)\right\}$$

Further, we use the following notations. The cumulative distribution function is denoted by ${F}_{X}\left(x\right)=prob\left\{X\le x\right\}$. The positive and negative part of a number are denoted using:

$${\left[t\right]}^{+}=\begin{cases} t, & \text{for } t>0\\ 0, & \text{for } t\le 0\end{cases}\qquad \mathrm{and}\qquad {\left[t\right]}^{-}=\begin{cases} -t, & \text{for } t<0\\ 0, & \text{for } t\ge 0\end{cases}$$

The lower and upper VaR (quantile) are defined as follows:

lower VaR:

$$Va{R}_{\alpha}^{-}\left(X\right)=\begin{cases} \sup\left\{x:{F}_{X}\left(x\right)<\alpha \right\} & \text{for } 0<\alpha \le 1\\ \inf\left\{x:{F}_{X}\left(x\right)\ge \alpha \right\} & \text{for } \alpha =0\end{cases}$$

upper VaR:

$$Va{R}_{\alpha}^{+}\left(X\right)=\begin{cases} \inf\left\{x:{F}_{X}\left(x\right)>\alpha \right\} & \text{for } 0\le \alpha <1\\ \sup\left\{x:{F}_{X}\left(x\right)\le \alpha \right\} & \text{for } \alpha =1\end{cases}$$

VaR (quantile) is a set if the lower and upper quantiles do not coincide:

$$Va{R}_{\alpha}\left(X\right)=\left[Va{R}_{\alpha}^{-}\left(X\right),Va{R}_{\alpha}^{+}\left(X\right)\right];$$

otherwise VaR is a singleton $Va{R}_{\alpha}\left(X\right)=Va{R}_{\alpha}^{-}\left(X\right)=Va{R}_{\alpha}^{+}\left(X\right)$.

Conditional value-at-risk (CVaR) with the confidence level $\alpha \in \left(0,1\right)$ can be defined in many ways. We prefer the following constructive definition:

$$CVa{R}_{\alpha}\left(X\right)=\underset{C}{\mathrm{min}}\left\{C+\frac{1}{1-\alpha}E{[X-C]}^{+}\right\}$$

In financial applications, however, the most popular definition of CVaR is

$$CVa{R}_{\alpha}\left(X\right)=\frac{1}{1-\alpha}{\int}_{\alpha}^{1}Va{R}_{\beta}^{-}\left(X\right)d\beta .$$

For $\alpha =0,$ $CVa{R}_{0}\left(X\right)$ is defined as $CVa{R}_{0}\left(X\right)=\underset{\epsilon \to 0}{\mathrm{lim}}CVa{R}_{\epsilon}\left(X\right)=E\left(X\right)$.

For $\alpha =1,CVa{R}_{1}\left(X\right)$ is defined as $CVa{R}_{1}\left(X\right)=Va{R}_{1}^{-}\left(X\right)$ if a finite value of $Va{R}_{1}^{-}\left(X\right)$ exists.
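For a discrete distribution with equally probable atoms, the constructive definition and the tail-integral definition can be cross-checked in a few lines of Python (the sample values below are made up for illustration):

```python
# Cross-check the two CVaR definitions on a discrete distribution with
# equally probable atoms (illustrative sample, probability 1/10 each).
xs = [12, -7, 3, 25, -15, 8, 40, -2, 17, 5]
n, alpha = len(xs), 0.6
srt = sorted(xs)

# Tail-integral definition: with alpha = 6/10, VaR_beta^- is piecewise
# constant, so CVaR is the average of the (1 - alpha)*n = 4 largest atoms.
k = round(n * (1 - alpha))
cvar_tail = sum(srt[-k:]) / k

# Constructive definition: min over C of C + E[X - C]^+ / (1 - alpha).
# The objective is piecewise linear in C, so the minimum is attained at
# an atom and scanning the atoms is exact.
def objective(c):
    return c + sum(max(x - c, 0.0) for x in xs) / n / (1 - alpha)

cvar_min = min(objective(c) for c in xs)
print(cvar_tail, cvar_min)  # both equal 23.5 for this sample
```

For this sample the minimum is attained on the whole interval $[Va{R}_{0.6}^{-}, Va{R}_{0.6}^{+}] = [8, 12]$, illustrating that the minimizer (the statistic) can be a set while the minimal value (CVaR) is unique.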

Quadrangles are named after statistic functions. The most famous quadrangle is the quantile quadrangle (see Rockafellar and Uryasev (2013)), named after the VaR (quantile) statistic. This quadrangle establishes relations between the CVaR optimization technique described in Rockafellar and Uryasev (2000, 2002) and quantile regression (see Koenker and Bassett (1978), Koenker (2005)). In particular, it was shown that CVaR minimization and the quantile regression are similar procedures based on the VaR statistic in the regret and error representation of risk and deviation.

Here is the definition of the quantile quadrangle for $\alpha \in \left(0,1\right)$:

- Statistic: ${\mathcal{S}}_{\alpha}\left(X\right)=Va{R}_{\alpha}\left(X\right)$ = VaR (quantile) statistic.
- Risk: ${\mathcal{R}}_{\alpha}\left(X\right)=CVa{R}_{\alpha}\left(X\right)=\underset{C}{\mathrm{min}}\left\{C+{\mathcal{V}}_{\alpha}\left(X-C\right)\right\}=$ CVaR risk.
- Deviation: ${\mathcal{D}}_{\alpha}\left(X\right)=CVa{R}_{\alpha}\left(X\right)-E\left[X\right]=\underset{C}{\mathrm{min}}\left\{{\mathcal{E}}_{\alpha}\left(X-C\right)\right\}=$ CVaR deviation.
- Regret: ${\mathcal{V}}_{\alpha}\left(X\right)=\frac{1}{1-\alpha}E{\left[X\right]}^{+}=$ average absolute loss, scaled.
- Error: ${\mathcal{E}}_{\alpha}\left(X\right)=E\left[\frac{\alpha}{1-\alpha}{\left[X\right]}^{+}+{\left[X\right]}^{-}\right]={\mathcal{V}}_{\alpha}\left(X\right)-E\left[X\right]=$ normalized Koenker–Bassett error.
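These relations can be verified numerically. The sketch below (Python, illustrative data) checks that the minimizer of the normalized Koenker–Bassett error is the lower quantile $Va{R}_{\alpha}^{-}\left(X\right)$ and that the minimal error equals the CVaR deviation $CVa{R}_{\alpha}\left(X\right)-E\left[X\right]$:

```python
import math

# Illustrative discrete sample with equal probabilities 1/10.
xs = [12, -7, 3, 25, -15, 8, 40, -2, 17, 5]
n, alpha = len(xs), 0.75
srt = sorted(xs)

def kb_error(c):
    """Normalized Koenker-Bassett error E_alpha(X - c)."""
    return sum(alpha / (1 - alpha) * max(x - c, 0.0) + max(c - x, 0.0)
               for x in xs) / n

def cvar(a):
    """CVaR_a(X) = (1/(1-a)) * integral_a^1 VaR_beta^-(X) dbeta."""
    j = math.ceil(n * a)                     # first tail atom (1-based)
    part = (j / n - a) * srt[j - 1]          # partial piece adjacent to a
    return (part + sum(srt[j:]) / n) / (1 - a)

# The error is piecewise linear in c, so scanning atoms is exact.
c_star = min(xs, key=kb_error)
var_lower = srt[math.ceil(n * alpha) - 1]    # VaR_alpha^-(X)

print(c_star == var_lower)                                         # True
print(abs(kb_error(c_star) - (cvar(alpha) - sum(xs) / n)) < 1e-9)  # True
```

Since $\alpha n=7.5$ is not an integer here, the quantile is unique and the minimizer is a single atom; with $\alpha n$ integer, the argmin would be the whole interval $[Va{R}_{\alpha}^{-}, Va{R}_{\alpha}^{+}]$.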

The quantile quadrangle sets an example for the development of more advanced quadrangles. The following mixed-quantile quadrangle includes a statistic equal to the weighted average of VaRs (quantiles) with specified positive weights. Therefore, the error in this quadrangle can be used to build a regression for the weighted average of VaRs (quantiles). Since CVaR can be approximated by a weighted average of VaRs, the error function in this quadrangle can be used to build a linear regression for the estimation of CVaR.

The mixed-quantile quadrangle is defined by confidence levels ${\alpha}_{k}\in \left(0,1\right)$, $k=1,\dots ,r$, and weights ${\lambda}_{k}>0$ with ${\sum}_{k=1}^{r}{\lambda}_{k}=1$. The error in this quadrangle is called the Rockafellar error.

- Statistic: $\mathcal{S}\left(X\right)={\sum}_{k=1}^{r}{\lambda}_{k}Va{R}_{{\alpha}_{k}}\left(X\right)=$ mixed VaR (quantile).
- Risk: $\mathcal{R}\left(X\right)={\sum}_{k=1}^{r}{\lambda}_{k}CVa{R}_{{\alpha}_{k}}\left(X\right)=$ mixed CVaR.
- Deviation: $\mathcal{D}\left(X\right)={\sum}_{k=1}^{r}{\lambda}_{k}CVa{R}_{{\alpha}_{k}}\left(X-E\left[X\right]\right)=$ mixed CVaR deviation.
- Regret: $\mathcal{V}\left(X\right)=\underset{{B}_{1},\dots ,{B}_{r}}{\mathrm{min}}\left\{{\sum}_{k=1}^{r}{\lambda}_{k}{\mathcal{V}}_{{\alpha}_{k}}\left(X-{B}_{k}\right)|{\sum}_{k=1}^{r}{\lambda}_{k}{B}_{k}=0\right\}=$ the minimal weighted average of regrets ${\mathcal{V}}_{{\alpha}_{k}}\left(X-{B}_{k}\right)=\frac{1}{1-{\alpha}_{k}}E{\left[X-{B}_{k}\right]}^{+}$ satisfying the linear constraint on ${B}_{1},\dots ,{B}_{r}$.
- Error: $\mathcal{E}\left(X\right)=\underset{{B}_{1},\dots ,{B}_{r}}{\mathrm{min}}\left\{{\sum}_{k=1}^{r}{\lambda}_{k}{\mathcal{E}}_{{\alpha}_{k}}\left(X-{B}_{k}\right)|{\sum}_{k=1}^{r}{\lambda}_{k}{B}_{k}=0\right\}=$ Rockafellar error $=$ the minimal weighted average of errors ${\mathcal{E}}_{{\alpha}_{k}}\left(X-{B}_{k}\right)=E\left[\frac{{\alpha}_{k}}{1-{\alpha}_{k}}{\left[X-{B}_{k}\right]}^{+}+{\left[X-{B}_{k}\right]}^{-}\right]$ satisfying the linear constraint on ${B}_{1},\dots ,{B}_{r}.$
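To see why the deviation of this quadrangle is the mixed CVaR deviation, note that minimizing the Rockafellar error of $X-C$ jointly over $C$ and ${B}_{1},\dots ,{B}_{r}$ removes the coupling constraint (substitute ${B}_{k}^{\prime}=C+{B}_{k}$), so the minimization splits into independent Koenker–Bassett problems. A Python sketch with made-up weights and confidence levels:

```python
import math

# Illustrative sample with equal probabilities 1/10 and a three-term mixture.
xs = [12, -7, 3, 25, -15, 8, 40, -2, 17, 5]
n = len(xs)
lambdas = [0.2, 0.5, 0.3]
alphas = [0.65, 0.75, 0.85]
srt, mean = sorted(xs), sum(xs) / len(xs)

def kb_error(a, c):
    """Normalized Koenker-Bassett error E_a(X - c)."""
    return sum(a / (1 - a) * max(x - c, 0.0) + max(c - x, 0.0)
               for x in xs) / n

def cvar(a):
    """Tail-integral CVaR for the discrete sample."""
    j = math.ceil(n * a)
    return ((j / n - a) * srt[j - 1] + sum(srt[j:]) / n) / (1 - a)

# min over C and B_k (with sum lambda_k B_k = 0) of the mixed error
# = sum_k lambda_k * min_B kb_error(alpha_k, B): the problems decouple,
# and each inner minimum is attained at an atom.
dev_via_error = sum(lam * min(kb_error(a, c) for c in xs)
                    for lam, a in zip(lambdas, alphas))
mixed_cvar_dev = sum(lam * cvar(a) for lam, a in zip(lambdas, alphas)) - mean
print(abs(dev_via_error - mixed_cvar_dev) < 1e-9)   # True
```

Each inner minimum equals $CVa{R}_{{\alpha}_{k}}\left(X\right)-E\left[X\right]$, so their weighted sum is the mixed CVaR deviation, confirming the quadrangle relation $\mathcal{D}\left(X\right)=\underset{C}{\mathrm{min}}\mathcal{E}\left(X-C\right)$ for this example.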

The following CVaR quadrangle can be considered as the limiting case of the mixed-quantile quadrangle when the number of terms in this quadrangle tends to infinity. The statistic in this quadrangle is CVaR; therefore, the error in this quadrangle can be used for the estimation of CVaR with linear regression.

- Statistic: ${\overline{\mathcal{S}}}_{\alpha}\left(X\right)=CVa{R}_{\alpha}\left(X\right)$ = CVaR.
- Risk: ${\overline{\mathcal{R}}}_{\alpha}\left(X\right)=\frac{1}{1-\alpha}{\int}_{\alpha}^{1}CVa{R}_{\beta}\left(X\right)d\beta =$ CVaR2 risk.
- Deviation: ${\overline{\mathcal{D}}}_{\alpha}\left(X\right)={\overline{\mathcal{R}}}_{\alpha}\left(X\right)-E\left[X\right]=\frac{1}{1-\alpha}{\int}_{\alpha}^{1}CVa{R}_{\beta}\left(X\right)d\beta -E\left[X\right]=$ CVaR2 deviation.
- Regret: ${\overline{\mathcal{V}}}_{\alpha}\left(X\right)=\frac{1}{1-\alpha}{\int}_{0}^{1}{\left[CVa{R}_{\beta}\left(X\right)\right]}^{+}d\beta $ = CVaR2 regret.
- Error: ${\overline{\mathcal{E}}}_{\alpha}\left(X\right)={\overline{\mathcal{V}}}_{\alpha}\left(X\right)-E\left[X\right]$ = CVaR2 error.

The following section proves that for a discretely distributed random value with equally probable atoms, the CVaR quadrangle is “equivalent” to a mixed-quantile quadrangle with some parameters in the sense that statistic, risk, and deviation in these quadrangles coincide. This fact was proved for a set of random values with equal probabilities and variable locations of atoms.

The set of parameters considered in the following section is used in two-step CVaR regression in Rockafellar et al. (2014).

## 3. Set 1 of Parameters for Mixed-Quantile Quadrangle

Set 1 of parameters for the mixed-quantile quadrangle for a discrete uniformly distributed random value $X$ consists of confidence levels ${\alpha}_{k}\in \left(0,1\right),k=1,\dots ,r$ and weights ${\lambda}_{k}>0$ such that ${\sum}_{k=1}^{r}{\lambda}_{k}=1$. Parameter $r$ depends only on the number of atoms in $X$ and the confidence level $\alpha $ of the CVaR quadrangle. We proved that statistic, risk, and deviation of the mixed-quantile quadrangle with the Set 1 of parameters coincide with the statistic, risk, and deviation of the CVaR quadrangle.

Let $X$ be a discrete random value with support ${x}^{i}$ and $\mathrm{Prob}\left(X={x}^{i}\right)=1/\nu $ for $i=1,2,\dots ,\nu $, where $\nu $ is the number of atoms. Denote ${x}^{max}=\underset{i=1,\dots ,\nu}{\mathrm{max}}{x}^{i}$. For this random value, $CVa{R}_{1}\left(X\right)=Va{R}_{1}^{-}\left(X\right)={x}^{max}$.

**Set 1 of parameters:**

- partition of the interval $\left[\alpha ,1\right]$: ${\beta}_{{\nu}_{\alpha}-1}=\alpha $, and ${\beta}_{i}=i\delta $, for $i={\nu}_{\alpha},{\nu}_{\alpha}+1,\dots ,\nu $, where $\delta =1/\nu $, ${\nu}_{\alpha}=\lfloor \nu \alpha \rfloor +1$, with $\lfloor z\rfloor $ being the largest integer less than or equal to $z$; ${\delta}_{\alpha}={\beta}_{{\nu}_{\alpha}}-\alpha $.
- weights: ${p}_{{\nu}_{\alpha}}=\frac{{\delta}_{\alpha}}{1-\alpha},{p}_{i}=\frac{\delta}{1-\alpha},i={\nu}_{\alpha}+1,\dots ,\nu $.
- confidence levels: ${\gamma}_{i}=1-\frac{{\beta}_{i}-{\beta}_{i-1}}{\mathrm{ln}\left(\frac{1-{\beta}_{i-1}}{1-{\beta}_{i}}\right)},i={\nu}_{\alpha},\dots ,\nu -1$; ${\gamma}_{\nu}=1$.

**Lemma 1.** Let $X$ be a discrete random value with $\nu $ equally probable atoms. Then, statistic, risk, and deviation of the CVaR quadrangle for $X$ are given by the following expressions with parameters specified by Set 1:

1. CVaR statistic: $${\overline{\mathcal{S}}}_{\alpha}\left(X\right)=CVa{R}_{\alpha}\left(X\right)={\sum}_{i={\nu}_{\alpha}}^{\nu}{p}_{i}Va{R}_{{\gamma}_{i}}\left(X\right)$$
2. CVaR2 risk: $${\overline{\mathcal{R}}}_{\alpha}\left(X\right)=\frac{1}{1-\alpha}{\int}_{\alpha}^{1}CVa{R}_{\beta}\left(X\right)d\beta ={\sum}_{i={\nu}_{\alpha}}^{\nu}{p}_{i}CVa{R}_{{\gamma}_{i}}\left(X\right)$$
3. CVaR2 deviation: $${\overline{\mathcal{D}}}_{\alpha}\left(X\right)=\frac{1}{1-\alpha}{\int}_{\alpha}^{1}CVa{R}_{\beta}\left(X\right)d\beta -E\left[X\right]={\sum}_{i={\nu}_{\alpha}}^{\nu}{p}_{i}CVa{R}_{{\gamma}_{i}}\left(X\right)-E\left[X\right]$$

**Proof.** Appendix C contains proof of the lemma. □

**Note.** Expression (1) is valid for arbitrary ${\gamma}_{i}\in \left({\beta}_{i-1},{\beta}_{i}\right)$, $i={\nu}_{\alpha},\dots ,\nu $. Equations (2) and (3) are valid for arbitrary ${\gamma}_{\nu}\in [{\beta}_{\nu -1},1].$

We want to emphasize that the statement of Lemma 1 is valid for any discrete random value with equally probable atoms. The statement does not depend upon atom locations.

**Corollary 1.** For the random value $X$ defined in Lemma 1, statistic, risk, and deviation of the CVaR quadrangle coincide with statistic, risk, and deviation of the mixed-quantile quadrangle with $r=\nu -{\nu}_{\alpha}+1$, ${\lambda}_{k}={p}_{{\nu}_{\alpha}-1+k}$, ${\alpha}_{k}={\gamma}_{{\nu}_{\alpha}-1+k}$, $k=1,\dots ,r$.

**Proof.** The right-hand sides in Equations (1)–(3) define statistic, risk, and deviation of the mixed-quantile quadrangle because ${p}_{i}>0$, $i={\nu}_{\alpha},\dots ,\nu $, ${\sum}_{i={\nu}_{\alpha}}^{\nu}{p}_{i}=1$, and $Va{R}_{{\gamma}_{i}}\left(X\right)$, $i={\nu}_{\alpha},\dots ,\nu $, are singletons. □

**Example 1.** Let $X$ be a discrete random value with five atoms (−40; −10; 20; 60; 100) and equal probabilities of 0.2.

Figure 1 explains how to calculate statistic in the CVaR quadrangle and mixed-quantile quadrangle with Set 1 of parameters for $\alpha =0.5$. Bold lines show $Va{R}_{\alpha}^{-}\left(X\right)$ as a function of $\alpha .$ $CVa{R}_{0.5}\left(X\right)$ equals the dark area under the $Va{R}_{\alpha}\left(X\right)$ divided by $1-\alpha $. CVaR can be calculated as integral of $Va{R}_{\alpha}\left(X\right)$ or as the sum of areas of rectangles. Figure 2 explains how to calculate risk in the CVaR quadrangle and mixed-quantile quadrangle with Set 1 of parameters for $\alpha =0.5$. The bold continuous curve shows $CVa{R}_{\alpha}\left(X\right)$ as a function of $\alpha $. Risk ${\overline{\mathcal{R}}}_{0.5}\left(X\right)$ is equal to the area under the CVaR curve divided by $1-\alpha $. This area can be calculated as the integral of CVaR or as the sum of areas of rectangles. The area of every rectangle is equal to the area under CVaR in the appropriate range of $\alpha $. The equality of areas defines values of ${\gamma}_{i}$. Parameters ${p}_{i},{\gamma}_{i}$ do not depend on the values of atoms.
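The construction in Example 1 can be reproduced in a few lines of Python; the sketch below builds Set 1 for $\nu =5$, $\alpha =0.5$ and confirms Equation (1) of Lemma 1:

```python
import math

# Example 1: five equally probable atoms, alpha = 0.5.
atoms = sorted([-40, -10, 20, 60, 100])
nu, alpha = 5, 0.5
delta = 1 / nu
nu_a = math.floor(nu * alpha) + 1                  # = 3

# Partition of [alpha, 1] and Set 1 weights / confidence levels.
beta = {nu_a - 1: alpha, **{i: i * delta for i in range(nu_a, nu + 1)}}
p = {nu_a: (beta[nu_a] - alpha) / (1 - alpha),
     **{i: delta / (1 - alpha) for i in range(nu_a + 1, nu + 1)}}
gamma = {i: 1 - (beta[i] - beta[i - 1]) /
             math.log((1 - beta[i - 1]) / (1 - beta[i]))
         for i in range(nu_a, nu)}
gamma[nu] = 1.0

def var_lower(b):
    """VaR_b^-(X) for the discrete distribution."""
    return atoms[min(math.ceil(nu * b), nu) - 1]

mixed = sum(p[i] * var_lower(gamma[i]) for i in range(nu_a, nu + 1))
# Direct tail integral: VaR_beta^- is 20 on (0.4, 0.6], 60 on (0.6, 0.8],
# 100 on (0.8, 1], so CVaR_0.5 = (0.1*20 + 0.2*60 + 0.2*100)/0.5 = 68.
cvar_direct = (0.1 * 20 + 0.2 * 60 + 0.2 * 100) / (1 - alpha)
print(round(mixed, 9), cvar_direct)    # 68.0 68.0
```

Here the weights are $p_3=0.2$, $p_4=p_5=0.4$ and the confidence levels $\gamma_3\approx 0.552$, $\gamma_4\approx 0.711$, $\gamma_5=1$ fall inside the partition intervals, so the corresponding quantiles are 20, 60, and 100, matching the rectangle areas in Figure 2.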

## 4. Set 2 of Parameters for the Mixed-Quantile Quadrangle

This section gives an alternative expression for the risk ${\overline{\mathcal{R}}}_{\alpha}\left(X\right)$ and deviation ${\overline{\mathcal{D}}}_{\alpha}\left(X\right)$ in the CVaR quadrangle for a discrete uniformly distributed random variable. This expression is based on the following Set 2 of parameters. This set of parameters has the same number of parameters as Set 1 but different values of weights and confidence levels. Similar to Section 3, let $X$ be a discrete random value with support, ${x}^{i}$, $i=1,2,\dots ,\nu $, and $\mathrm{Prob}\left(X={x}^{i}\right)=1/\nu $ for $i=1,2,\dots ,\nu $. Denote ${x}^{max}=\underset{i=1,\dots ,\nu}{\mathrm{max}}{x}^{i}$. For this random value $CVa{R}_{1}\left(X\right)=Va{R}_{1}^{-}\left(X\right)={x}^{max}$.

**Set 2 of parameters:**

- partition of the interval $\left[{\beta}_{{\nu}_{\alpha}-1},1\right]$: $\delta =1/\nu $, ${\beta}_{i}=i\delta $, for $i={\nu}_{\alpha}-1,{\nu}_{\alpha},\dots ,\nu $, where ${\nu}_{\alpha}=\lfloor \nu \alpha \rfloor +1$, with $\lfloor z\rfloor $ being the largest integer less than or equal to $z$; ${\delta}_{\alpha}={\beta}_{{\nu}_{\alpha}}-\alpha $.
- confidence levels: ${\beta}_{i}$, $i={\nu}_{\alpha}-1,{\nu}_{\alpha},\dots ,\nu $.
- weights:
  - ${q}_{\nu}=0$;
  - ${q}_{\nu -1}=\frac{\delta}{1-\alpha}\times \left[2\mathrm{ln}\left(2\right)\right]\approx \frac{\delta}{1-\alpha}\times 1.386294361$ (if $\nu -1>{\nu}_{\alpha}$);
  - ${q}_{\nu -2}=\frac{\delta}{1-\alpha}\times 2\left[3\mathrm{ln}\left(\frac{3}{2}\right)+\mathrm{ln}\left(\frac{1}{2}\right)\right]\approx \frac{\delta}{1-\alpha}\times 1.046496288$ (if $\nu -2>{\nu}_{\alpha}$);
  - ${q}_{\nu -j}=\frac{\delta}{1-\alpha}\times j\left[\left(j+1\right)\mathrm{ln}\left(\frac{j+1}{j}\right)+\left(j-1\right)\mathrm{ln}\left(\frac{j-1}{j}\right)\right]$ (if $j>2$, $\nu -j>{\nu}_{\alpha}$);
  - ${q}_{{\nu}_{\alpha}}=\frac{\delta}{1-\alpha}\times j\left[\delta -{\delta}_{\alpha}+\left(j+1\right)\mathrm{ln}\left(\frac{1-\alpha}{\delta j}\right)+\left(j-1\right)\mathrm{ln}\left(\frac{j-1}{j}\right)\right]$ (if ${\nu}_{\alpha}<\nu -1$, $j=\nu -{\nu}_{\alpha}$);
  - ${q}_{{\nu}_{\alpha}-1}=\frac{\delta}{1-\alpha}\times j\left[{\delta}_{\alpha}+\left(j-1\right)\mathrm{ln}\left(\frac{\delta \left(j-1\right)}{1-\alpha}\right)\right]$ (if ${\nu}_{\alpha}-1<\nu -1$, $j=\nu -{\nu}_{\alpha}+1$);
  - if ${\nu}_{\alpha}=\nu -1$, then ${q}_{{\nu}_{\alpha}}=\frac{\delta}{1-\alpha}\times 2\left[1+\mathrm{ln}\left(\frac{1-\alpha}{\delta}\right)\right]-1$ and ${q}_{{\nu}_{\alpha}-1}=\frac{\delta}{1-\alpha}\times 2\left[\mathrm{ln}\left(\frac{\delta}{1-\alpha}\right)-1\right]+2$;
  - if ${\nu}_{\alpha}=\nu $, then ${q}_{{\nu}_{\alpha}-1}=1.$

**Lemma 2.** Let $X$ be a discrete random value with $\nu $ equally probable atoms. Then, risk and deviation of the CVaR quadrangle for $X$ are given by the following expressions with parameters from Set 2.

1. CVaR2 risk: $${\overline{\mathcal{R}}}_{\alpha}\left(X\right)=\frac{1}{1-\alpha}{\int}_{\alpha}^{1}CVa{R}_{\beta}\left(X\right)d\beta ={\sum}_{i={\nu}_{\alpha}-1}^{\nu -1}{q}_{i}CVa{R}_{{\beta}_{i}}\left(X\right)$$
2. CVaR2 deviation: $${\overline{\mathcal{D}}}_{\alpha}\left(X\right)=\frac{1}{1-\alpha}{\int}_{\alpha}^{1}CVa{R}_{\beta}\left(X\right)d\beta -E\left[X\right]={\sum}_{i={\nu}_{\alpha}-1}^{\nu -1}{q}_{i}CVa{R}_{{\beta}_{i}}\left(X\right)-E\left[X\right]$$

**Proof.** Appendix D contains proof of the lemma. □

**Note.** Equations (4) and (5) are valid if $CVa{R}_{{\beta}_{\nu -1}}$ is replaced by $CVa{R}_{\gamma}$ with an arbitrary $\gamma \in [{\beta}_{\nu -1},1].$

**Corollary 2.** For the random value $X$ defined in Lemma 2, risk and deviation of the CVaR quadrangle coincide with risk and deviation of the mixed-quantile quadrangle with $r=\nu -{\nu}_{\alpha}+1$, ${\lambda}_{k}={q}_{{\nu}_{\alpha}-2+k}$, ${\alpha}_{k}={\beta}_{{\nu}_{\alpha}-2+k}$, $k=1,\dots ,r$.

**Proof.** The right-hand sides in Equations (4) and (5) define risk and deviation of the mixed-quantile quadrangle because ${q}_{i}>0$, $i={\nu}_{\alpha}-1,\dots ,\nu -1$, and ${\sum}_{i={\nu}_{\alpha}-1}^{\nu -1}{q}_{i}=1$. □

**Lemma 3.** Let $X$ be a discrete random value with equally probable atoms ${x}^{i}$, $i=1,2,\dots ,\nu $, $Prob\left(X={x}^{i}\right)=1/\nu $. Then, the statistic of the mixed-quantile quadrangle defined by Set 2 of parameters is a range containing the statistic of the CVaR quadrangle.

**Proof.** Appendix E contains proof of the lemma. □

## 5. On the Estimation of CVaR with Mixed-Quantile Linear Regression

This section formulates regression problems using the CVaR quadrangle and mixed-quantile quadrangle. For discrete distributions with a finite number of equally probable atoms, we prove some equivalence statements for the CVaR and mixed-quantile quadrangles. Further, we demonstrate how to estimate CVaR by using the linear regression with error and deviation from the mixed-quantile quadrangle.

We want to estimate a variable $V$ using a linear function $f\left(\mathit{Y}\right)={C}_{0}+{\mathit{C}}^{T}\mathit{Y}$ of the explanatory factors $\mathit{Y}=\left({Y}_{1},\dots ,{Y}_{n}\right)$. Let $\tilde{\mathcal{E}}$ be an error from some quadrangle (further, we consider the mixed-quantile and CVaR quadrangles), and let $\tilde{\mathcal{D}}$ and $\tilde{\mathcal{S}}$ be the deviation and statistic, respectively, corresponding to this quadrangle. Below we consider optimization statements for solving regression problems.

**General Optimization Problem 1**

Minimize the error $\tilde{\mathcal{E}}$ and find optimal ${C}_{0}^{*},{\mathit{C}}^{*}$:

$$\underset{{C}_{0}\in \mathbb{R},\mathit{C}\in {\mathbb{R}}^{m}}{min}\tilde{\mathcal{E}}\left(Z\left({C}_{0},\mathit{C}\right)\right)$$

where $Z\left({C}_{0},\mathit{C}\right)=V-{C}_{0}-{\mathit{C}}^{T}\mathit{Y}$.

**General Optimization Problem 2**

- Step 1. Find an optimal vector ${\mathit{C}}^{*}$ by minimizing deviation: $$\underset{\mathit{C}\in {\mathbb{R}}^{m}}{min}\tilde{\mathcal{D}}\left({Z}_{0}\left(\mathit{C}\right)\right)$$
- Step 2. Assign ${C}_{0}^{*}$: $${C}_{0}^{*}\in \tilde{\mathcal{S}}\left({Z}_{0}\left({\mathit{C}}^{*}\right)\right)$$

Error $\tilde{\mathcal{E}}\left(X\right)$ is called nondegenerate if:

$$\underset{X:EX=D}{\mathrm{inf}}\tilde{\mathcal{E}}\left(X\right)>0\text{ for constants }D\ne 0.$$

Rockafellar et al. (2008), p. 722, proved the following decomposition theorem.

**Theorem 1.**

(Error-Shaping Decomposition of Regression). Let $\tilde{\mathcal{E}}$ be a nondegenerate error, let $\tilde{\mathcal{D}}\left(X\right)=\underset{C}{\mathrm{min}}\left\{\tilde{\mathcal{E}}\left(X-C\right)\right\}$ be the corresponding deviation, and let $\tilde{\mathcal{S}}$ be the associated statistic. A point (${C}_{0}^{*}$, ${\mathit{C}}^{*}$) is a solution of General Optimization Problem 1 if and only if ${\mathit{C}}^{*}$ is a solution of General Optimization Problem 2, Step 1, and ${C}_{0}^{*}\in \tilde{\mathcal{S}}\left({Z}_{0}\left({\mathit{C}}^{*}\right)\right)$ in Step 2.
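As a quick numerical illustration of the theorem (a sketch for the mean quadrangle, where the error is the mean squared error, the deviation is the variance, and the statistic is the mean; the data are synthetic), joint minimization and the two-step procedure yield the same coefficients:

```python
# Decomposition theorem illustrated on the mean quadrangle:
# error = MSE, deviation = variance, statistic = mean. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
Y = rng.normal(size=(200, 3))                       # factor scenarios
V = Y @ np.array([1.5, -2.0, 0.7]) + 0.3 + rng.normal(size=200)

# Problem 1: minimize the error E[(V - C0 - C^T Y)^2] jointly over (C0, C).
A = np.column_stack([np.ones(len(V)), Y])
C0_full, *C_full = np.linalg.lstsq(A, V, rcond=None)[0]

# Problem 2, Step 1: minimize the deviation (variance of Z0 = V - C^T Y) over C,
# which amounts to regressing the centered V on the centered Y.
Yc, Vc = Y - Y.mean(axis=0), V - V.mean()
C_two = np.linalg.lstsq(Yc, Vc, rcond=None)[0]
# Step 2: intercept = statistic (here, the mean) of Z0(C*).
C0_two = (V - Y @ C_two).mean()

assert np.allclose(C_full, C_two) and np.isclose(C0_full, C0_two)
```

The same split applies to any nondegenerate error, with the deviation and statistic taken from the corresponding quadrangle.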

According to the decomposition theorem, when $\tilde{\mathcal{E}}$, $\tilde{\mathcal{D}}$, and $\tilde{\mathcal{S}}$ are elements of the CVaR quadrangle, the following Optimization Problems 1 and 2 are equivalent.

**Optimization Problem 1**

Minimize the error from the CVaR quadrangle:

$$\underset{{C}_{0}\in \mathbb{R},\mathit{C}\in {\mathbb{R}}^{m}}{min}{\overline{\mathcal{E}}}_{\alpha}\left(Z\left({C}_{0},\mathit{C}\right)\right)$$

where $Z\left({C}_{0},\mathit{C}\right)=V-{C}_{0}-{\mathit{C}}^{T}\mathit{Y}$.

**Optimization Problem 2**

- Step 1. Find an optimal vector ${\mathit{C}}^{*}$ by minimizing deviation from the CVaR quadrangle: $$\underset{\mathit{C}\in {\mathbb{R}}^{m}}{min}{\overline{\mathcal{D}}}_{\alpha}\left({Z}_{0}\left(\mathit{C}\right)\right)$$
- Step 2. Calculate: $${C}_{0}^{*}=CVa{R}_{\alpha}\left({Z}_{0}\left({\mathit{C}}^{*}\right)\right)$$

In Step 2 of Optimization Problem 2, the statistic equals ${\overline{S}}_{\alpha}\left({Z}_{0}\left({\mathit{C}}^{*}\right)\right)=CVa{R}_{\alpha}\left({Z}_{0}\left({\mathit{C}}^{*}\right)\right)$, which specializes the inclusion operation in Step 2 of General Optimization Problem 2.
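Step 2 only requires the value of $CVa{R}_{\alpha}$ at the optimal residuals. For $\nu $ equally probable atoms, this statistic can be computed directly from the scenario-based CVaR formula of Rockafellar and Uryasev (2002) (a minimal sketch; the function name and data are illustrative):

```python
# CVaR of a discrete random value with equally probable atoms (upper-tail
# convention), computed as (1/(1-alpha)) * integral_alpha^1 VaR_beta dbeta.
import math
import numpy as np

def cvar_discrete(atoms, alpha):
    x = np.sort(np.asarray(atoms, dtype=float))
    nu = len(x)
    k = max(1, math.ceil(alpha * nu))          # VaR_alpha = x[k-1]
    tail = (k / nu - alpha) * x[k - 1] + x[k:].sum() / nu
    return tail / (1.0 - alpha)

print(cvar_discrete([1, 2, 3, 4], 0.75))  # prints 4.0: average of the worst 25%
```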

Optimization Problem 2 is used in Rockafellar et al. (2014) for constructing linear regression algorithms for estimating CVaR.

According to the decomposition theorem, when $\tilde{\mathcal{E}}$, $\tilde{\mathcal{D}}$, and $\tilde{\mathcal{S}}$ are elements of a mixed-quantile quadrangle, the following Optimization Problems 3 and 4 are equivalent.

**Optimization Problem 3**

Minimize error from the mixed-quantile quadrangle:

$$\underset{{C}_{0}\in \mathbb{R},\mathit{C}\in {\mathbb{R}}^{m}}{min}\mathcal{E}\left(Z\left({C}_{0},\mathit{C}\right)\right)$$

**Optimization Problem 4**

- Step 1. Find an optimal vector ${\mathit{C}}^{*}$ by minimizing deviation from the mixed-quantile quadrangle: $$\underset{\mathit{C}\in {\mathbb{R}}^{m}}{min}\mathcal{D}\left({Z}_{0}\left(\mathit{C}\right)\right)$$
- Step 2. Assign: $${C}_{0}^{*}\in \mathcal{S}\left({Z}_{0}\left({\mathit{C}}^{*}\right)\right)$$

Corollaries 1 and 2 can be used for constructing the linear regression for estimating CVaR. Let $\mathit{Y}$ be a random vector of factors for estimating the random value $V$. We consider that the linear regression function $f\left(\mathit{Y}\right)={C}_{0}+{\mathit{C}}^{T}\mathit{Y}$ approximates CVaR of $V$, where ${C}_{0}\in \mathbb{R},\mathit{C}\in {\mathbb{R}}^{m}$ are variables in the linear regression. The residual is denoted by $Z\left({C}_{0},\mathit{C}\right)=V-\left({C}_{0}+{\mathit{C}}^{T}\mathit{Y}\right)$ and ${Z}_{0}\left(\mathit{C}\right)=V-{\mathit{C}}^{T}\mathit{Y}$.

Further, we provide a lemma about linear regression problems based on Corollary 1 with the Set 1 of parameters. The main statement here is that Optimization Problems 3 and 4 for the mixed-quantile quadrangle can be used to solve linear regression problems for estimating CVaR. This is the case because the CVaR and mixed-quantile quadrangles have the same statistic and deviation.

**Lemma 4.**

Let the residual random value $Z\left({C}_{0},\mathit{C}\right)=V-\left({C}_{0}+{\mathit{C}}^{T}\mathit{Y}\right)$ be discretely distributed with $\nu $ equally probable atoms. Let us consider the CVaR quadrangle with error ${\overline{\mathcal{E}}}_{\alpha}\left(Z\left({C}_{0},\mathit{C}\right)\right)$, deviation ${\overline{\mathcal{D}}}_{\alpha}\left({Z}_{0}\left(\mathit{C}\right)\right)$, and statistic ${\overline{S}}_{\alpha}\left({Z}_{0}\left({\mathit{C}}^{*}\right)\right)$. Let us also consider the mixed-quantile quadrangle with the error $\mathcal{E}\left(Z\left({C}_{0},\mathit{C}\right)\right)$, deviation $\mathcal{D}\left({Z}_{0}\left(\mathit{C}\right)\right)$, and statistic $\mathcal{S}\left({Z}_{0}\left({\mathit{C}}^{*}\right)\right)$ with parameters defined by Set 1: $r=\nu -{\nu}_{\alpha}+1$, ${\lambda}_{k}={p}_{{\nu}_{\alpha}-1+k}$, ${\alpha}_{k}={\gamma}_{{\nu}_{\alpha}-1+k}$, $k=1,\dots ,r$. Then, Optimization Problems 1–4 are equivalent, i.e., the sets of optimal vectors of these optimization problems coincide. Moreover, let $\left({C}_{0}^{*},{\mathit{C}}^{*}\right)$ be a solution vector of the equivalent Optimization Problems 1–4. Then:

$${C}_{0}^{*}=CVa{R}_{\alpha}\left({Z}_{0}\left({\mathit{C}}^{*}\right)\right)$$

$${\overline{\mathcal{E}}}_{\alpha}\left({C}_{0}^{*},{\mathit{C}}^{*}\right)=\mathcal{E}\left({C}_{0}^{*},{\mathit{C}}^{*}\right)=\mathcal{D}\left({Z}_{0}\left({\mathit{C}}^{*}\right)\right)={\overline{\mathcal{D}}}_{\alpha}\left({Z}_{0}\left({\mathit{C}}^{*}\right)\right)$$

**Proof.**

This lemma is a direct corollary of the decomposition Theorem 1 and Corollary 1 of Lemma 1. Indeed, Corollary 1 implies that the Optimization Problems 2 and 4 are equivalent. Further, the decomposition theorem implies that Optimization Problems 1 and 2 and the Optimization Problems 3 and 4 are equivalent. □

Further, we provide a lemma about linear regression problems based on Corollary 2 with the Set 2 of parameters. The main statement is that Step 1 of Optimization Problem 4 for the mixed-quantile quadrangle can be used to solve the linear regression problem for estimating CVaR. This is the case because the CVaR and mixed-quantile quadrangles have the same deviation. After obtaining the vector of coefficients ${\mathit{C}}^{*}$, the intercept is calculated as ${C}_{0}^{*}=CVa{R}_{\alpha}\left({Z}_{0}\left({\mathit{C}}^{*}\right)\right)$.

**Lemma 5.**

Let the residual random value $Z\left({C}_{0},\mathit{C}\right)=V-\left({C}_{0}+{\mathit{C}}^{T}\mathit{Y}\right)$ be discretely distributed with $\nu $ equally probable atoms. Let the mixed-quantile quadrangle with deviation $\mathcal{D}\left({Z}_{0}\left(\mathit{C}\right)\right)$ be defined by the parameters of Set 2: $r=\nu -{\nu}_{\alpha}+1$, ${\lambda}_{k}={q}_{{\nu}_{\alpha}-2+k}$, ${\alpha}_{k}={\beta}_{{\nu}_{\alpha}-2+k}$, $k=1,\dots ,r$. Then, $\left({C}_{0}^{*},{\mathit{C}}^{*}\right)$ is a solution of Optimization Problem 1 if and only if $\left({C}_{0}^{*},{\mathit{C}}^{*}\right)$ is a solution of the following two-step procedure:

- Step 1. Find an optimal vector ${\mathit{C}}^{*}$ by minimizing deviation from the mixed-quantile quadrangle: $$\underset{\mathit{C}\in {\mathbb{R}}^{m}}{min}\mathcal{D}\left({Z}_{0}\left(\mathit{C}\right)\right)$$
- Step 2. Calculate ${C}_{0}^{*}=CVa{R}_{\alpha}\left({Z}_{0}\left({\mathit{C}}^{*}\right)\right)$.

**Proof.**

This lemma is a direct corollary of the decomposition Theorem 1 and Corollary 2 of Lemma 2. Indeed, Corollary 2 implies that the Optimization Problems 2 and 4 are equivalent. Further, since the deviations of the CVaR and mixed-quantile quadrangles coincide, we can use Step 1 to calculate the optimal coefficients ${\mathit{C}}^{*}$. The intercept is then calculated as ${C}_{0}^{*}=CVa{R}_{\alpha}\left({Z}_{0}\left({\mathit{C}}^{*}\right)\right)$ because CVaR is the statistic in the CVaR quadrangle. □

For the Set 2 of parameters, the deviations in the CVaR and mixed-quantile quadrangles coincide. Therefore, the two-step procedure in Optimization Problem 4 can be used to solve linear regression problems with the Set 2 parameters for the mixed CVaR deviation. Also, the minimization of the Rockafellar error with the Set 2 of parameters may result in a correct ${\mathit{C}}^{*}$. However, the statistic of the CVaR quadrangle is only contained in the statistic of the mixed-quantile quadrangle; therefore, the optimization of the Rockafellar error with the Set 2 of parameters may lead to a wrong value of the intercept ${C}_{0}^{*}$. This potential incorrectness can be fixed by assigning ${C}_{0}^{*}=CVa{R}_{\alpha}\left({Z}_{0}\left({\mathit{C}}^{*}\right)\right)$.
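The two-step procedure of this section can be sketched as follows (an illustrative sketch only, not the PSG implementation: the mixed CVaR deviation is minimized with a derivative-free method, since the objective is convex piecewise linear, and the intercept is then assigned as $CVa{R}_{\alpha}$ of the optimal residuals; the data, weights, and confidence levels are placeholders):

```python
# Two-step CVaR regression sketch: Step 1 minimizes the mixed CVaR deviation of
# Z0(C) = V - C^T Y over C; Step 2 sets the intercept to CVaR_alpha(Z0(C*)).
import math
import numpy as np
from scipy.optimize import minimize

def cvar(atoms, alpha):
    # Scenario-based CVaR for equally probable atoms (upper tail).
    x = np.sort(np.asarray(atoms, dtype=float))
    nu = len(x)
    k = max(1, math.ceil(alpha * nu))
    return ((k / nu - alpha) * x[k - 1] + x[k:].sum() / nu) / (1.0 - alpha)

def mixed_cvar_dev(z, lambdas, alphas):
    # D(X) = sum_k lambda_k * (CVaR_{alpha_k}(X) - E[X])
    return sum(l * (cvar(z, a) - z.mean()) for l, a in zip(lambdas, alphas))

rng = np.random.default_rng(1)
Y = rng.normal(size=(300, 2))
V = Y @ np.array([0.8, -0.5]) + rng.normal(size=300)
lambdas, alphas = [0.5, 0.5], [0.8, 0.9]   # placeholder mixing parameters
alpha = 0.75                               # confidence level for the intercept

# Step 1: C* = argmin_C D(Z0(C)); Nelder-Mead handles the nonsmooth objective.
res = minimize(lambda C: mixed_cvar_dev(V - Y @ C, lambdas, alphas),
               x0=np.zeros(2), method="Nelder-Mead")
C_star = res.x
# Step 2: C0* = CVaR_alpha(Z0(C*))
C0_star = cvar(V - Y @ C_star, alpha)
```

A production implementation would instead solve Step 1 as a linear program, as in Appendix A.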

## 6. Case Study: Estimation of CVaR with Linear Regression and Style Classification of Funds

The case study described in this section is posted online (see Case Study (2016)). The codes and data are available for downloading and verification. Every optimization problem is presented in three formats: Text, MATLAB, and R. Calculations were done on a PC with a 3.14 GHz processor.

We have applied CVaR regression to the return-based style classification of a mutual fund. We regress the fund return on several indices serving as explanatory factors. The estimated coefficients represent the fund’s style with respect to each of the indices.

A similar problem with a standard regression based on the mean squared error was considered by Carhart (1997) and Sharpe (1992). They estimated the conditional expectation of a fund return distribution (under the condition that a realization of explanatory factors is observed). Bassett and Chen (2001) extended this approach and conducted style analyses of quantiles of the return distribution. This extension is based on the quantile regression suggested by Koenker and Bassett (1978). The Case Study (2014), “Style Classification with Quantile Regression,” implemented this approach and applied quantile regression to the return-based style classification of a mutual fund.

For the numerical implementation of CVaR linear regression, we used the Portfolio Safeguard (2018) package. Portfolio Safeguard (PSG) can solve nonlinear and mixed-integer nonlinear optimization problems. A special feature of PSG is that it includes precoded nonlinear functions: CVaR2 error (`cvar2_err`) and CVaR2 deviation (`cvar2_dev`) from the CVaR quadrangle, Rockafellar error (`ro_err`) from the mixed-quantile quadrangle, and CVaR deviation (`cvar_dev`) from the quantile quadrangle.

We implemented the following equivalent variants of CVaR regression:
- Minimization of the CVaR2 error (PSG function `cvar2_err`).
- Two-step procedure with the CVaR2 deviation (PSG function `cvar2_dev`).
- Minimization of the Rockafellar error (PSG function `ro_err`) with the Set 1 of parameters.
- Two-step procedure using the mixed CVaR deviation with the Set 1 and Set 2 of parameters. This deviation is calculated as a weighted sum of CVaR deviations (PSG function `cvar_dev`) from the quantile quadrangle.

PSG automatically converts the analytic problem formulations to mathematical programming codes and solves them. We included in Appendix A convex and linear programming problems for the minimization of the Rockafellar error with the Set 1 of parameters. These formulations are provided for verification purposes; they can be implemented with standard commercial software. For instance, the linear programming formulation can be implemented with the Gurobi optimization package. If Gurobi is installed on the computer, PSG can use the Gurobi code as a subsolver: with the CARGRB solver in PSG and the linearize option set to 1, the linear programming problem is solved with Gurobi. However, this conversion degrades performance compared to the default PSG solver VAN. For small problems the difference is not noticeable, but for problems with a large number of scenarios (e.g., with 10^{8} observations), the standard PSG solver VAN dramatically outperforms the Gurobi linear programming implementation. In this case, Gurobi may not even start on a small PC because of a shortage of memory. Nevertheless, if the number of observations is small (e.g., 10^{3}) and the number of factors is very large (e.g., 10^{7}), it is recommended to use the linear programming formulation.

We regressed the CVaR of the return distribution of the Fidelity Magellan Fund on the explanatory variables: Russell 1000 Growth Index (RLG), Russell 1000 Value Index (RLV), Russell 2000 Value Index (RUJ), and Russell 2000 Growth Index (RUO). The dataset includes 1264 historical daily returns of the Magellan Fund and the indices, which were downloaded from the Yahoo Finance website. The data (design matrix for the regression) are posted on the Case Study (2016) website.

The CVaR regression was done with the confidence levels $\alpha =$ 0.75 and $\alpha =$ 0.9. Calculation results are in Table 1 and Table 2, respectively. Here is the description of the columns of the tables:

- Optimization Problem #: Optimization Problem number, as denoted in Section 5; it is also the problem number in the case study posted online (see Case Study (2016)).
- Set #: Set of parameters for the mixed-quantile quadrangle.
- Objective: Optimal value of the objective function.
- RLG: coefficient for the Russell 1000 Growth Index.
- RLV: coefficient for the Russell 1000 Value Index.
- RUJ: coefficient for the Russell 2000 Value Index.
- RUO: coefficient for the Russell 2000 Growth Index.
- Intercept: regression intercept.
- Solving Time: solver optimization time.

Table 1 and Table 2 show calculation results for the considered equivalent problems. We observe that the regression coefficients coincide for all problems in Table 1 and Table 2, which confirms the correctness of the theoretical results and of the numerical implementation. Also, we point out that the regression coefficients are quite similar for $\alpha =$ 0.75 (Table 1) and $\alpha =$ 0.9 (Table 2).

The calculation time in the majority of cases was around 0.02–0.04 s, except for the case with the mixed CVaR deviation for Set 1, which took 0.11 s. The PSG calculation times were quite low because the solver “knows” analytical expressions for the functions and can take advantage of this knowledge.

## 7. Conclusions

The quadrangle risk theory (Rockafellar and Uryasev (2013)) and the decomposition theorem (Rockafellar et al. (2008)) provide a framework for building a regression with relevant deviations. The solution of a regression problem is split into two steps: (1) minimization of the deviation from the corresponding quadrangle, and (2) determination of the intercept by using the statistic from this quadrangle. For CVaR regression, Rockafellar et al. (2014) reduced the optimization problem at Step 1 to a high-dimension linear programming problem. We suggested two sets of parameters for the mixed-quantile quadrangle and investigated its relationship with the CVaR quadrangle. The Set 1 of parameters corresponds to the CVaR regression in Rockafellar et al. (2014), while the Set 2 is a new set of parameters.

For the Set 1 of parameters, the minimization of the error from the CVaR quadrangle was reduced to the minimization of the Rockafellar error from the mixed-quantile quadrangle. For both sets of parameters, the minimization of the deviation in the CVaR quadrangle is equivalent to the minimization of the deviation in the mixed-quantile quadrangle.

We presented optimization problem statements for CVaR regression using the CVaR and mixed-quantile quadrangles. Linear regression problems for estimating CVaR were efficiently implemented in Portfolio Safeguard (2018) with convex and linear programming. We have done a case study for the return-based style classification of a mutual fund with CVaR regression: the fund return was regressed on several indices serving as explanatory factors. Numerical results validating the theoretical statements are posted on the web (see Case Study (2016)).

## Supplementary Materials

Data and codes used in the case study can be downloaded: 1. Case Study (2016): Estimation of CVaR through Explanatory Factors with CVaR (Superquantile) Regression. http://www.ise.ufl.edu/uryasev/research/testproblems/financial_engineering/on-implementation-of-cvar-regression/. 2. Case Study (2014): Style Classification with Quantile Regression. http://www.ise.ufl.edu/uryasev/research/testproblems/financial_engineering/style-classification-with-quantile-regression/.

## Author Contributions

Conceptualization, S.U.; Formal analysis, V.K.; Investigation, A.G.; Methodology, S.U.; Software, V.K.; Supervision, S.U.; Writing—original draft, A.G.

## Funding

Research of Stan Uryasev was partially funded by the AFOSR grant FA9550-18-1-0391 on Massively Parallel Approaches for Buffered Probability Optimization and Applications.

## Conflicts of Interest

The authors declare no conflict of interest.

## Appendix A. CVaR Regression with Rockafellar Error: Convex and Linear Programming

The value of the Rockafellar error with a given set of parameters ${\lambda}_{k},{\alpha}_{k}$ (${\alpha}_{k}\in \left(0,1\right)$, $k=1,\dots ,r$, ${\sum}_{k=1}^{r}{\lambda}_{k}=1$) for a random value $X$ is a minimum w.r.t. a set of variables ${B}_{1},\dots ,{B}_{r}$ of a mixture of Koenker–Bassett error functions with one linear constraint on these variables:

$$Rockafellar\_Error{\left(X\right)}_{{\lambda}_{1},{\alpha}_{1},\dots ,{\lambda}_{r},{\alpha}_{r}}=\underset{{B}_{1},\dots ,{B}_{r}}{\mathrm{min}}\left\{{\sum}_{k=1}^{r}{\lambda}_{k}{\mathcal{E}}_{{\alpha}_{k}}\left(X-{B}_{k}\right)|{\sum}_{k=1}^{r}{\lambda}_{k}{B}_{k}=0\right\}$$

where ${\mathcal{E}}_{{\alpha}_{k}}\left(X-{B}_{k}\right)=E\left[\frac{{\alpha}_{k}}{1-{\alpha}_{k}}{\left[X-{B}_{k}\right]}^{+}+{\left[X-{B}_{k}\right]}^{-}\right]$ is the normalized Koenker–Bassett error.

By using regret from the mixed-quantile quadrangle, we express the Rockafellar error as follows:

$$Rockafellar\_Error{\left(X\right)}_{{\lambda}_{1},{\alpha}_{1},\dots ,{\lambda}_{r},{\alpha}_{r}}=\underset{{B}_{1},\dots ,{B}_{r}}{\mathrm{min}}\left\{{\sum}_{k=1}^{r}\frac{{\lambda}_{k}}{1-{\alpha}_{k}}E{\left[X-{B}_{k}\right]}^{+}|{\sum}_{k=1}^{r}{\lambda}_{k}{B}_{k}=0\right\}-E\left[X\right].$$
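In the regret form above, the inner minimization over ${B}_{1},\dots ,{B}_{r}$ is itself a small linear program. As a hedged sketch (not the PSG implementation; equal scenario probabilities are assumed and the function name is illustrative), it can be evaluated with `scipy.optimize.linprog`:

```python
# Evaluate the Rockafellar error of a scenario vector x via its regret form:
# min over B of sum_k lambda_k/(1-alpha_k) * E[x - B_k]^+  s.t. sum_k lambda_k B_k = 0,
# minus E[x]. Auxiliary variables A_ki >= (x_i - B_k)^+ linearize the positive parts.
import numpy as np
from scipy.optimize import linprog

def rockafellar_error(x, lambdas, alphas):
    x = np.asarray(x, dtype=float)
    nu, r = len(x), len(lambdas)
    n = r * nu + r                       # variables: [A_11..A_r_nu, B_1..B_r]
    c = np.zeros(n)
    for k in range(r):
        c[k * nu:(k + 1) * nu] = lambdas[k] / ((1.0 - alphas[k]) * nu)
    # -A_ki - B_k <= -x_i  (i.e., A_ki >= x_i - B_k)
    A_ub = np.zeros((r * nu, n))
    b_ub = np.zeros(r * nu)
    for k in range(r):
        for i in range(nu):
            row = k * nu + i
            A_ub[row, row] = -1.0
            A_ub[row, r * nu + k] = -1.0
            b_ub[row] = -x[i]
    A_eq = np.zeros((1, n))
    A_eq[0, r * nu:] = lambdas           # sum_k lambda_k B_k = 0
    bounds = [(0, None)] * (r * nu) + [(None, None)] * r
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[0.0], bounds=bounds)
    return res.fun - x.mean()
```

For $r=1$ and ${\lambda}_{1}=1$, the constraint forces ${B}_{1}=0$ and the value reduces to the normalized Koenker–Bassett error at $B=0$.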

For the linear regression problem, the random variable $X$ is defined by a set of differences between observed values ${V}_{i}$ and linear functions ${C}_{0}+{\mathit{C}}^{T}{\mathit{Y}}_{\mathit{i}}$, where ${\mathit{Y}}_{\mathit{i}}$ is a vector of explanatory factors, $i=1,2,\dots ,\nu $. Vectors $\mathit{C}$ and $\mathit{Y}$ have $m$ components, $\mathit{C}=\left({C}_{1},\dots ,{C}_{m}\right)$, $\mathit{Y}=$ $\left({Y}_{1},\dots ,{Y}_{m}\right)$, and ${C}_{0}$ is a scalar. Residuals ${X}_{i}={V}_{i}-{C}_{0}-{\mathit{C}}^{T}{\mathit{Y}}_{\mathit{i}}$ are values (scenarios) of atoms of the random value $X$. We consider that all atoms have equal probabilities. The estimation of $V$ with factors $\mathit{Y}$ is done by minimizing the error w.r.t. variables $\mathit{C}$, ${C}_{0}$. Further we use the Set 1 of parameters. Let us denote:

$$E\left[V\right]=\frac{1}{v}{\sum}_{i=1}^{v}{V}_{i},E\left[\mathit{Y}\right]=\frac{1}{v}{\sum}_{i=1}^{v}{\mathit{Y}}_{\mathit{i}}.$$

#### Appendix A.1. Convex Programming Formulation for CVaR Regression

Minimize the Rockafellar error:

$$\underset{{B}_{1},\dots ,{B}_{r},{C}_{0},{C}_{1},\dots ,{C}_{m}}{\mathrm{min}}\left\{{\sum}_{k=1}^{r}\frac{{\lambda}_{k}}{\left(1-{\alpha}_{k}\right)v}{\sum}_{i=1}^{v}{\left[{V}_{i}-{C}_{0}-{\mathit{C}}^{T}{\mathit{Y}}_{\mathit{i}}-{B}_{k}\right]}^{+}-E\left[V\right]+{C}_{0}+{\mathit{C}}^{T}E\left[\mathit{Y}\right]\right\}$$

subject to the constraint:

$${\sum}_{k=1}^{r}{\lambda}_{k}{B}_{k}=0.$$

This optimization problem has a convex objective and one linear constraint.

#### Appendix A.2. Linear Programming Formulation for CVaR Regression

Equations (A1) and (A2) are reduced to linear programming with additional variables and constraints:

$$\underset{\begin{array}{c}{A}_{11},\dots ,{A}_{rv}\\ {B}_{1},\dots ,{B}_{r},{C}_{0},{C}_{1},\dots ,{C}_{m}\end{array}}{\mathrm{min}}\left\{{\sum}_{k=1}^{r}\frac{{\lambda}_{k}}{\left(1-{\alpha}_{k}\right)v}{\sum}_{i=1}^{v}{A}_{ki}-E\left[V\right]+{C}_{0}+{\mathit{C}}^{T}E\left[\mathit{Y}\right]\right\}$$

subject to the constraints:

$${\sum}_{k=1}^{r}{\lambda}_{k}{B}_{k}=0$$

$${A}_{ki}\ge {V}_{i}-{C}_{0}-{\mathit{C}}^{T}{\mathit{Y}}_{\mathit{i}}-{B}_{k},k=1,\dots ,r,i=1,\dots ,v$$

$${A}_{ki}\ge 0,k=1,\dots ,r,i=1,\dots ,v$$

The linear function ${C}_{0}^{*}+{\mathit{C}}^{*T}\mathit{Y}$ estimates $CVa{R}_{\alpha}\left(V\right)$ as a function of explanatory factors $\mathit{Y}$, where ${C}_{0}^{*}$ and ${\mathit{C}}^{*}$ are optimal values of variables for Equations (A1) and (A2) or (A3)–(A6).
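As a minimal sketch of the linear program (A3)–(A6) with `scipy.optimize.linprog` (assumptions: equally probable scenarios; the constant $-E\left[V\right]$ is dropped from the objective since it does not affect the minimizer; the function and variable names are illustrative, not the PSG implementation):

```python
# CVaR regression via the LP (A3)-(A6), solved with scipy's HiGHS-based linprog.
import numpy as np
from scipy.optimize import linprog

def cvar_regression_lp(V, Y, lambdas, alphas):
    """Return (C0*, C*) minimizing the Rockafellar error of V - C0 - C^T Y.
    Variable layout: [A_11..A_r_nu, B_1..B_r, C0, C_1..C_m]."""
    V, Y = np.asarray(V, dtype=float), np.asarray(Y, dtype=float)
    nu, m = Y.shape
    r = len(lambdas)
    nA = r * nu
    n = nA + r + 1 + m

    c = np.zeros(n)
    for k in range(r):
        c[k * nu:(k + 1) * nu] = lambdas[k] / ((1.0 - alphas[k]) * nu)
    c[nA + r] = 1.0                      # coefficient of C0
    c[nA + r + 1:] = Y.mean(axis=0)      # coefficients of C: E[Y]

    # (A4): sum_k lambda_k B_k = 0
    A_eq = np.zeros((1, n))
    A_eq[0, nA:nA + r] = lambdas

    # (A5): A_ki >= V_i - C0 - C^T Y_i - B_k, rewritten for linprog as
    #       -A_ki - B_k - C0 - C^T Y_i <= -V_i
    A_ub = np.zeros((nA, n))
    b_ub = np.zeros(nA)
    for k in range(r):
        for i in range(nu):
            row = k * nu + i
            A_ub[row, row] = -1.0
            A_ub[row, nA + k] = -1.0
            A_ub[row, nA + r] = -1.0
            A_ub[row, nA + r + 1:] = -Y[i]
            b_ub[row] = -V[i]

    # (A6): A_ki >= 0; B, C0, C are free
    bounds = [(0, None)] * nA + [(None, None)] * (r + 1 + m)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[0.0], bounds=bounds)
    return res.x[nA + r], res.x[nA + r + 1:]
```

For $r=1$ and ${\lambda}_{1}=1$, the constraint forces ${B}_{1}=0$ and the LP reduces to quantile (Koenker–Bassett) regression at confidence level ${\alpha}_{1}$.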

## Appendix B. Codes Implementing Regression Optimization Problems

This appendix contains codes implementing Optimization Problems 1–4 described in Section 5. Codes and solution results are posted online at Case Study (2016). Codes are written in the Portfolio Safeguard (PSG) Text, MATLAB, and R environments. Below are the codes in the Text environment.

**Optimization Problem 1**

Code in PSG Text format:

```
minimize
cvar2_err(0.75,matrix_s)
```

The keyword “`minimize`” indicates that the objective function is minimized. The objective function `cvar2_err(0.75,matrix_s)` calculates the error ${\overline{\mathcal{E}}}_{\alpha}\left(Z\left({C}_{0},\mathit{C}\right)\right)$ in the CVaR quadrangle with confidence level $\alpha $ = 0.75. The `matrix_s` contains scenarios of the residual of the regression $Z\left({C}_{0},\mathit{C}\right)=V-{C}_{0}-{\mathit{C}}^{T}\mathit{Y}$.

**Optimization Problem 2**

Code in PSG Text format:

```
minimize
cvar2_dev(0.75,matrix_s)
value:
cvar_risk(0.75,matrix_s)
```

The code includes two parts. The first part begins with the keyword “`minimize`” indicating that the objective function is minimized. It implements Step 1 of Optimization Problem 2, which minimizes deviation from the CVaR quadrangle (for determining the optimal vector ${\mathit{C}}^{*}$ of regression coefficients without an intercept). The PSG function `cvar2_dev(0.75,matrix_s)` calculates deviation ${\overline{\mathcal{D}}}_{\alpha}\left({Z}_{0}\left(\mathit{C}\right)\right)$ with $\alpha $ = 0.75. The `matrix_s` contains scenarios of the residual of the regression without an intercept ${Z}_{0}\left(\mathit{C}\right)=V-{\mathit{C}}^{T}\mathit{Y}$.

The second part of the code begins with the keyword “`value`”. This part implements Step 2 of Optimization Problem 2 for calculating the optimal value of the intercept ${C}_{0}^{*}$. The PSG function `cvar_risk(0.75,matrix_s)` calculates $CVa{R}_{\alpha}\left({Z}_{0}\left({\mathit{C}}^{*}\right)\right)$ with $\alpha $ = 0.75 at the optimal point ${\mathit{C}}^{*}$.

**Optimization Problem 3**

Code in PSG Text format with parameters from Set 1.

```
minimize
ro_err(matrix_s, matrix_coeff)
```

The keyword “`minimize`” indicates that the objective function is minimized. The objective function `ro_err(matrix_s, matrix_coeff)` calculates the Rockafellar error $\mathcal{E}\left(Z\left({C}_{0},\mathit{C}\right)\right)$ in the mixed-quantile quadrangle. The `matrix_s` contains scenarios of the residual of the regression $Z\left({C}_{0},\mathit{C}\right)=V-{C}_{0}-{\mathit{C}}^{T}\mathit{Y}$. The `matrix_coeff` includes vectors of weights and confidence levels for Set 1 with $\alpha $ = 0.75.

**Optimization Problem 4**

Code in PSG Text format with parameters from Set 1 for $\alpha $ = 0.75.

```
minimize
vector_c*cvar_dev(vector_a, matrix_s)
value:
cvar_risk(0.75,matrix_s)
```

The code includes two parts. The first part begins with the keyword “`minimize`” indicating that the objective function is minimized. It implements Step 1 of Optimization Problem 4, which minimizes deviation from the mixed-quantile quadrangle (for determining the optimal vector ${\mathit{C}}^{*}$ of regression coefficients without an intercept). The inner product `vector_c*cvar_dev(vector_a, matrix_s)` calculates the mixed CVaR deviation $\mathcal{D}\left({Z}_{0}\left(\mathit{C}\right)\right)$. The function `cvar_dev` corresponds to the CVaR deviation from the quantile quadrangle. Vector `vector_c` contains weights for the CVaR deviation mix corresponding to Set 1. Vector `vector_a` contains confidence levels defined by Set 1. The `matrix_s` contains scenarios of the residual of the regression $Z\left({C}_{0},\mathit{C}\right)=V-{C}_{0}-{\mathit{C}}^{T}\mathit{Y}$.

The second part of the code begins with the keyword “`value`”. This part implements Step 2 of Optimization Problem 4, calculating the optimal value of the intercept ${C}_{0}^{*}$. The PSG function `cvar_risk(0.75,matrix_s)` calculates $CVa{R}_{\alpha}\left({Z}_{0}\left({\mathit{C}}^{*}\right)\right)$ with $\alpha $ = 0.75 at the optimal point ${\mathit{C}}^{*}$.

## Appendix C. Proof of Lemma 1

**Proof.**

According to the definition, ${\beta}_{v}=1.$ However, while proving this lemma, we consider ${\beta}_{v}$ a bit smaller than 1, i.e., ${\beta}_{v}=1-\epsilon >{\beta}_{v-1},\epsilon >0,$ to avoid division by 0. Then, we consider the limit $\epsilon \to 0$ to finish the proof. Note that for $\alpha <1$ and for the considered partition, ${\beta}_{i-1}<{\beta}_{i}$ for all $i={\nu}_{\alpha},\dots ,\nu $.

First, let us prove that ${\gamma}_{i}\in \left({\beta}_{i-1},{\beta}_{i}\right)$ for ${\beta}_{i}<1$. Consider three functions of $\delta $ for $\sigma >0,\delta \ge 0$:

$${f}_{1}\left(\delta \right)=\frac{\delta}{\sigma +\delta},{f}_{2}\left(\delta \right)=\mathrm{ln}\left(1+\frac{\delta}{\sigma}\right),{f}_{3}\left(\delta \right)=\frac{\delta}{\sigma}$$

When $\delta =0$, all three functions equal 0 and have equal derivatives. When $\delta >0$, the derivatives satisfy:

$${f}_{1}^{\prime}\left(\delta \right)<{f}_{2}^{\prime}\left(\delta \right)<{f}_{3}^{\prime}\left(\delta \right)$$

Hence, when $\delta >0$, similar inequalities are valid for the functions:

$$\frac{\delta}{\sigma +\delta}<\mathrm{ln}\left(1+\frac{\delta}{\sigma}\right)<\frac{\delta}{\sigma}.$$

There exists ${\delta}_{\gamma}$ such that $0<{\delta}_{\gamma}<\delta $ and $\frac{\delta}{\sigma +{\delta}_{\gamma}}=\mathrm{ln}\left(1+\frac{\delta}{\sigma}\right)$. If $\delta ={\beta}_{i}-{\beta}_{i-1}$, $\sigma =1-{\beta}_{i}$, $\gamma =1-\sigma -{\delta}_{\gamma}$, then $\gamma ={\gamma}_{i}$ and ${\beta}_{i-1}<\gamma <{\beta}_{i}$. Therefore, ${\gamma}_{i}\in \left({\beta}_{i-1},{\beta}_{i}\right)$.
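Solving the displayed equality for ${\delta}_{\gamma}$ gives ${\delta}_{\gamma}=\delta /\mathrm{ln}\left(1+\delta /\sigma \right)-\sigma $, and a quick numeric check (with illustrative values of $\sigma ,\delta $) confirms $0<{\delta}_{\gamma}<\delta $:

```python
# Numeric check that delta_gamma from delta/(sigma + delta_gamma) = ln(1 + delta/sigma)
# lies strictly between 0 and delta, so gamma_i = 1 - sigma - delta_gamma
# falls strictly inside (beta_{i-1}, beta_i). Values are illustrative.
import math

def delta_gamma(sigma, delta):
    return delta / math.log(1.0 + delta / sigma) - sigma

for sigma, delta in [(0.25, 0.25), (0.1, 0.3), (0.5, 0.05)]:
    dg = delta_gamma(sigma, delta)
    assert 0.0 < dg < delta
```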

Further, we show how to calculate the integral ${\int}_{\alpha}^{1-\epsilon}CVa{R}_{\beta}\left(X\right)d\beta $ as a sum of integrals over the partition:

$${\overline{\mathcal{R}}}_{\alpha}\left(X\right)=\underset{\epsilon \to 0}{\mathrm{lim}}\frac{1}{1-\alpha}{\int}_{\alpha}^{1-\epsilon}CVa{R}_{\beta}\left(X\right)d\beta =\underset{{\beta}_{v}\to 1}{\mathrm{lim}}\frac{1}{1-\alpha}{\sum}_{i={\nu}_{\alpha}}^{\nu}{\int}_{{\beta}_{i-1}}^{{\beta}_{i}}CVa{R}_{\beta}\left(X\right)d\beta .$$

Let us denote ${C}_{{\beta}_{i}}=CVa{R}_{{\beta}_{i}}\left(X\right)$ and ${V}_{i}=Va{R}_{{\gamma}_{i}}\left(X\right)$. Note that $Va{R}_{\gamma}\left(X\right)$ is a singleton for every $\gamma \in \left({\beta}_{i-1},{\beta}_{i}\right)$ and it equals ${V}_{i}$, because ${\gamma}_{i}\in \left({\beta}_{i-1},{\beta}_{i}\right)$.

Below, we use value ${V}_{i}$ for the closed interval $\left[{\beta}_{i-1},{\beta}_{i}\right]$ while calculating the integral over this interval because the value of the integral does not depend on the finite values of $Va{R}_{\gamma}\left(X\right)$ at the boundary points ${\beta}_{i-1},{\beta}_{i}$.

Using the definition of CVaR (Rockafellar and Uryasev (2002), Proposition 8 CVaR for scenario models) we write:

$$\begin{array}{cc}\hfill {\int}_{{\beta}_{i-1}}^{{\beta}_{i}}CVa{R}_{\beta}\left(X\right)d\beta & ={\int}_{{\beta}_{i-1}}^{{\beta}_{i}}\frac{1}{1-\beta}\left({C}_{{\beta}_{i}}\left(1-{\beta}_{i}\right)+{V}_{i}\left({\beta}_{i}-\beta \right)\right)d\beta \hfill \\ & ={C}_{{\beta}_{i}}\left(1-{\beta}_{i}\right){\int}_{{\beta}_{i-1}}^{{\beta}_{i}}\frac{1}{1-\beta}d\beta +{V}_{i}{\int}_{{\beta}_{i-1}}^{{\beta}_{i}}\frac{{\beta}_{i}-\beta}{1-\beta}d\beta \hfill \\ & ={C}_{{\beta}_{i}}\left(1-{\beta}_{i}\right)\mathrm{ln}\left(\frac{1-{\beta}_{i-1}}{1-{\beta}_{i}}\right)+{V}_{i}\left[\left({\beta}_{i}-{\beta}_{i-1}\right)-\left(1-{\beta}_{i}\right)\mathrm{ln}\left(\frac{1-{\beta}_{i-1}}{1-{\beta}_{i}}\right)\right]\hfill \end{array}$$

Let us make the transformation of Equation (A7) using the expression for ${\gamma}_{i}$ in the Set 1 definition:

$$\begin{array}{cc}\hfill {\int}_{{\beta}_{i-1}}^{{\beta}_{i}}CVa{R}_{\beta}\left(X\right)d\beta & =\left({\beta}_{i}-{\beta}_{i-1}\right)\left[{V}_{i}+\frac{1}{{\beta}_{i}-{\beta}_{i-1}}\mathrm{ln}\left(\frac{1-{\beta}_{i-1}}{1-{\beta}_{i}}\right)\left({C}_{{\beta}_{i}}-{V}_{i}\right)\left(1-{\beta}_{i}\right)\right]\hfill \\ & =\left({\beta}_{i}-{\beta}_{i-1}\right)\left[{V}_{i}+\frac{1}{1-{\gamma}_{i}}\left({C}_{{\beta}_{i}}-{V}_{i}\right)\left(1-{\beta}_{i}\right)\right]=\left({\beta}_{i}-{\beta}_{i-1}\right)\frac{1}{1-{\gamma}_{i}}\left[{C}_{{\beta}_{i}}\left(1-{\beta}_{i}\right)+{V}_{i}\left({\beta}_{i}-{\gamma}_{i}\right)\right]\hfill \\ & =\left({\beta}_{i}-{\beta}_{i-1}\right)CVa{R}_{{\gamma}_{i}}\left(X\right)\hfill \end{array}$$

The last equality is valid because ${\gamma}_{i}\in \left({\beta}_{i-1},{\beta}_{i}\right)$.

Taking into account that $\underset{\epsilon \to 0}{\mathrm{lim}}{\gamma}_{\nu}=1$, $CVa{R}_{1-\epsilon}\left(X\right)=CVa{R}_{1}\left(X\right)$, and $Va{R}_{1-\epsilon}\left(X\right)=Va{R}_{1}\left(X\right)$, we obtain:

$${\overline{\mathcal{R}}}_{\alpha}\left(X\right)=\underset{\epsilon \to 0}{\mathrm{lim}}{\sum}_{i={\nu}_{\alpha}}^{\nu}\frac{{\beta}_{i}-{\beta}_{i-1}}{1-\alpha}CVa{R}_{{\gamma}_{i}}\left(X\right)={\sum}_{i={\nu}_{\alpha}}^{\nu}{p}_{i}CVa{R}_{{\gamma}_{i}}\left(X\right)$$
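The identity ${\overline{\mathcal{R}}}_{\alpha}\left(X\right)={\sum}_{i}{p}_{i}CVa{R}_{{\gamma}_{i}}\left(X\right)$ can be verified numerically on a small example (a sketch with $\nu =4$ equally probable atoms and $\alpha =0.5$; ${\gamma}_{i}=1-{\delta}_{i}/\mathrm{ln}\left(\left(1-{\beta}_{i-1}\right)/\left(1-{\beta}_{i}\right)\right)$ per the Set 1 definition, with the last term taken at the limit ${\gamma}_{\nu}=1$):

```python
# Check that (1/(1-alpha)) * int_alpha^1 CVaR_beta(X) dbeta equals
# sum_i p_i * CVaR_{gamma_i}(X) for the partition beta_i = i/nu. Illustrative data.
import math
import numpy as np

def cvar(atoms, beta):
    # Scenario-based CVaR for equally probable atoms (upper tail).
    x = np.sort(np.asarray(atoms, dtype=float))
    nu = len(x)
    if beta >= 1.0:
        return x[-1]                          # CVaR_1 = max
    k = max(1, math.ceil(beta * nu))          # VaR_beta = x[k-1]
    return ((k / nu - beta) * x[k - 1] + x[k:].sum() / nu) / (1.0 - beta)

x, nu, alpha = [1.0, 2.0, 3.0, 4.0], 4, 0.5
nu_alpha = 3                                  # alpha = beta_{nu_alpha - 1} = 0.5

# Right-hand side: sum over the partition of p_i * CVaR_{gamma_i}(X)
rhs = 0.0
for i in range(nu_alpha, nu + 1):
    b0, b1 = (i - 1) / nu, i / nu
    p_i = (b1 - b0) / (1.0 - alpha)
    gamma_i = 1.0 if b1 == 1.0 else 1.0 - (b1 - b0) / math.log((1.0 - b0) / (1.0 - b1))
    rhs += p_i * cvar(x, gamma_i)

# Left-hand side: midpoint-rule approximation of the averaged integral
grid = np.linspace(alpha, 1.0, 20001)
mids = (grid[:-1] + grid[1:]) / 2.0
lhs = float(np.mean([cvar(x, b) for b in mids]))
```

For this example the exact value is $\left(1.75+0.25\mathrm{ln}2\right)/0.5$, and both sides agree with it to numerical precision.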

Deviation is calculated as follows:

$${\overline{\mathcal{D}}}_{\alpha}\left(X\right)={\sum}_{i={\nu}_{\alpha}}^{\nu}{p}_{i}CVa{R}_{{\gamma}_{i}}\left(X\right)-E\left[X\right]$$

By the definition of CVaR (Rockafellar and Uryasev (2002)):

$$CVa{R}_{\alpha}\left(X\right)={\sum}_{i={\nu}_{\alpha}}^{\nu}{p}_{i}Va{R}_{{\gamma}_{i}}\left(X\right)={\overline{\mathcal{S}}}_{\alpha}\left(X\right)$$

Lemma 1 is proved. □

## Appendix D. Proof of Lemma 2

**Proof.**

Similar to the proof of Lemma 1, we consider ${\beta}_{v}$ a bit smaller than 1, i.e., ${\beta}_{v}=1-\epsilon >{\beta}_{v-1},\epsilon >0,$ to avoid division by 0. Then, we consider the limit $\epsilon \to 0$ to finish the proof. We denote ${C}_{{\beta}_{i}}=CVa{R}_{{\beta}_{i}}\left(X\right)$ and ${V}_{i}=Va{R}_{\gamma}\left(X\right)$ for any $\gamma \in \left({\beta}_{i-1},{\beta}_{i}\right)$ because $Va{R}_{\gamma}\left(X\right)$ does not change on this interval.

Additionally we denote ${\delta}_{i}={\beta}_{i}-{\beta}_{i-1}$, $i={\nu}_{\alpha},\dots ,\nu $, and ${\sigma}_{i}=1-{\beta}_{i}$, $i={\nu}_{\alpha}-1,\dots ,\nu $. Note that all ${\delta}_{i}>0$, all ${\sigma}_{i}>0,$ and ${\delta}_{i}={\delta}_{j}=\delta $ for all $i,j<\nu $.
$${\overline{\mathcal{R}}}_{\alpha}\left(X\right)=\underset{\epsilon \to 0}{\mathrm{lim}}\frac{1}{1-\alpha}{\int}_{\alpha}^{1-\epsilon}CVa{R}_{\beta}\left(X\right)d\beta =\underset{{\beta}_{v}\to 1}{\mathrm{lim}}\frac{1}{1-\alpha}{\sum}_{i={\nu}_{\alpha}}^{\nu}{\int}_{{\beta}_{i-1}}^{{\beta}_{i}}CVa{R}_{\beta}\left(X\right)d\beta .$$

Equation (A7) in Lemma 1 is valid for any interval $\left[\theta ,{\beta}_{i}\right]$ such that ${\beta}_{i-1}\le \theta \le {\beta}_{i}$, therefore:

$${\int}_{\theta}^{{\beta}_{i}}CVa{R}_{\beta}\left(X\right)d\beta ={C}_{{\beta}_{i}}\left(1-{\beta}_{i}\right)\mathrm{ln}\left(\frac{1-\theta}{1-{\beta}_{i}}\right)+{V}_{i}\left[\left({\beta}_{i}-\theta \right)-\left(1-{\beta}_{i}\right)\mathrm{ln}\left(\frac{1-\theta}{1-{\beta}_{i}}\right)\right]$$

Let us express ${V}_{i}$ in terms of $CVa{R}_{{\beta}_{i-1}}\left(X\right)$ and $CVa{R}_{{\beta}_{i}}\left(X\right)$ using the definition of CVaR from Rockafellar and Uryasev (2002) (Equation (25)), and then insert this expression into Equation (A9):

$${C}_{{\beta}_{i-1}}=\frac{1}{1-{\beta}_{i-1}}\left({C}_{{\beta}_{i}}\left(1-{\beta}_{i}\right)+{V}_{i}\left({\beta}_{i}-{\beta}_{i-1}\right)\right)=\frac{1}{{\sigma}_{i-1}}\left({C}_{{\beta}_{i}}{\sigma}_{i}+{V}_{i}{\delta}_{i}\right),\phantom{\rule{0ex}{0ex}}{V}_{i}=\frac{1}{{\delta}_{i}}\left({C}_{{\beta}_{i-1}}{\sigma}_{i-1}-{C}_{{\beta}_{i}}{\sigma}_{i}\right)$$
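This recursion between consecutive CVaRs is easy to verify numerically. A minimal sketch (our own illustration, assuming $\nu $ equally probable atoms on the grid ${\beta}_{i}=i/\nu $; the helper name `grid_cvar` is ours):

```python
import numpy as np

xs = np.sort(np.array([1.0, 3.0, 5.0, 7.0]))   # equally probable atoms
n = len(xs)

def grid_cvar(i):
    # CVaR at beta_i = i/n: average of the atoms strictly above the i-th one
    return xs[i:].mean() if i < n else xs[-1]

for i in range(1, n):                          # i = nu excluded (sigma_nu = 0)
    lhs = (1.0 - (i - 1) / n) * grid_cvar(i - 1)                 # sigma_{i-1} C_{beta_{i-1}}
    rhs = (1.0 - i / n) * grid_cvar(i) + (1.0 / n) * xs[i - 1]   # sigma_i C_{beta_i} + delta_i V_i
    assert np.isclose(lhs, rhs)
    # V_i recovered from consecutive CVaRs, as in the display above
    v = n * ((1.0 - (i - 1) / n) * grid_cvar(i - 1) - (1.0 - i / n) * grid_cvar(i))
    assert np.isclose(v, xs[i - 1])
```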

By substituting ${V}_{i}$ into Equation (A9), we obtain:

$${\int}_{\theta}^{{\beta}_{i}}CVa{R}_{\beta}\left(X\right)d\beta ={C}_{{\beta}_{i}}{\sigma}_{i}\mathrm{ln}\left(\frac{1-\theta}{{\sigma}_{i}}\right)+\frac{1}{{\delta}_{i}}\left({C}_{{\beta}_{i-1}}{\sigma}_{i-1}-{C}_{{\beta}_{i}}{\sigma}_{i}\right)\left[\left({\beta}_{i}-\theta \right)-{\sigma}_{i}\mathrm{ln}\left(\frac{1-\theta}{{\sigma}_{i}}\right)\right]$$

The last equation contains two CVaRs, ${C}_{{\beta}_{i}}$ and ${C}_{{\beta}_{i-1}}$. Let us express the coefficients of these CVaRs.

${C}_{{\beta}_{i}}$ in Equation (A10) has the following coefficient:

$${q}_{1i}={\sigma}_{i}\left[\mathrm{ln}\left(\frac{1-\theta}{{\sigma}_{i}}\right)\left(1+\frac{{\sigma}_{i}}{{\delta}_{i}}\right)-\frac{{\beta}_{i}-\theta}{{\delta}_{i}}\right]$$

and ${C}_{{\beta}_{i-1}}$ has the coefficient:

$${q}_{2i-1}=\frac{{\sigma}_{i-1}}{{\delta}_{i}}\left[\left({\beta}_{i}-\theta \right)-{\sigma}_{i}\mathrm{ln}\left(\frac{1-\theta}{{\sigma}_{i}}\right)\right]$$

Coefficient ${q}_{2i-1}\ge 0$ because the value in square brackets is the same as in Equations (A7) and (A9), and is a result of the integration of a non-negative function over the interval $\left[\theta ,{\beta}_{i}\right]$. The coefficient ${q}_{1i}\ge 0$ because it is an integral of the non-negative function $\frac{{\sigma}_{i}}{{\delta}_{i}}\left[\frac{{\delta}_{i}-{\beta}_{i}+\beta}{1-\beta}\right]$ over the same interval.

When summing up the integrals to obtain $\frac{1}{1-\alpha}{\int}_{\alpha}^{1-\epsilon}CVa{R}_{\beta}\left(X\right)d\beta $, every ${C}_{{\beta}_{i}}$, aside from ${C}_{{\beta}_{{\nu}_{\alpha}-1}}$ and ${C}_{{\beta}_{\nu}}$, enters the sum twice, with coefficients depending on $\nu ,\alpha ,{\beta}_{i-1},{\beta}_{i}$, and ${\beta}_{i+1}$: once in Equation (A11) for $i$ and once in Equation (A12) for $i+1$. All coefficients are non-negative.

Let us explain this in more detail.

If $i$ is such that ${\nu}_{\alpha}<i<\nu $, then ${\beta}_{{\nu}_{\alpha}}<{\beta}_{i}<{\beta}_{\nu}$ and $\theta ={\beta}_{i-1}$ for $i$ in Equation (A11) and $\theta ={\beta}_{i}$ for $i+1$ in Equation (A12). Then, the coefficient for ${C}_{{\beta}_{i}}$ in Equation (A8) equals:

$${q}_{i}=\frac{1}{1-\alpha}\left({q}_{1i}+{q}_{2i}\right)=\frac{1}{1-\alpha}\left({\sigma}_{i}\left[\mathrm{ln}\left(\frac{{\sigma}_{i-1}}{{\sigma}_{i}}\right)\left(\frac{{\sigma}_{i-1}}{{\delta}_{i}}\right)-1\right]+\frac{{\sigma}_{i}}{{\delta}_{i+1}}\left[{\delta}_{i+1}-{\sigma}_{i+1}\mathrm{ln}\left(\frac{{\sigma}_{i}}{{\sigma}_{i+1}}\right)\right]\right)\phantom{\rule{0ex}{0ex}}=\frac{{\sigma}_{i}}{1-\alpha}\left[\frac{{\sigma}_{i-1}}{{\delta}_{i}}\mathrm{ln}\left(\frac{{\sigma}_{i-1}}{{\sigma}_{i}}\right)-\frac{{\sigma}_{i+1}}{{\delta}_{i+1}}\mathrm{ln}\left(\frac{{\sigma}_{i}}{{\sigma}_{i+1}}\right)\right]$$

If $i={\nu}_{\alpha}<\nu $, then $\theta =\alpha $ in Equation (A11) and $\theta ={\beta}_{i}$ in Equation (A12). Then, the coefficient for ${C}_{{\beta}_{i}}$ in Equation (A8) equals:

$${q}_{i}=\frac{1}{1-\alpha}\left({q}_{1i}+{q}_{2i}\right)=\frac{1}{1-\alpha}\left({\sigma}_{i}\left[\mathrm{ln}\left(\frac{1-\alpha}{{\sigma}_{i}}\right)\left(\frac{{\sigma}_{i-1}}{{\delta}_{i}}\right)-\frac{{\delta}_{\alpha}}{{\delta}_{i}}\right]+\frac{{\sigma}_{i}}{{\delta}_{i+1}}\left[{\delta}_{i+1}-{\sigma}_{i+1}\mathrm{ln}\left(\frac{{\sigma}_{i}}{{\sigma}_{i+1}}\right)\right]\right)\phantom{\rule{0ex}{0ex}}=\frac{{\sigma}_{i}}{1-\alpha}\left[1-\frac{{\delta}_{\alpha}}{{\delta}_{i}}+\frac{{\sigma}_{i-1}}{{\delta}_{i}}\mathrm{ln}\left(\frac{1-\alpha}{{\sigma}_{i}}\right)-\frac{{\sigma}_{i+1}}{{\delta}_{i+1}}\mathrm{ln}\left(\frac{{\sigma}_{i}}{{\sigma}_{i+1}}\right)\right]$$

If $i={\nu}_{\alpha}-1$, then ${C}_{{\beta}_{i}}$ enters the sum only in Equation (A12) with $\theta =\alpha $. Then:

$${q}_{i}=\frac{1}{1-\alpha}{q}_{2i}=\frac{{\sigma}_{i}}{1-\alpha}\left[\frac{{\delta}_{\alpha}}{{\delta}_{i+1}}-\frac{{\sigma}_{i+1}}{{\delta}_{i+1}}\mathrm{ln}\left(\frac{1-\alpha}{{\sigma}_{i+1}}\right)\right]$$

If $i=\nu >{\nu}_{\alpha},$ then ${C}_{{\beta}_{i}}$ enters the sum only in Equation (A11) with $\theta ={\beta}_{i-1}$. Then:

$${q}_{i}=\frac{1}{1-\alpha}{q}_{1i}=\frac{{\sigma}_{i}}{1-\alpha}\left[\mathrm{ln}\left(\frac{{\sigma}_{i-1}}{{\sigma}_{i}}\right)\left(\frac{{\sigma}_{i-1}}{{\delta}_{i}}\right)-1\right]$$

Also, in the case when $i=\nu ={\nu}_{\alpha},$ ${C}_{{\beta}_{i}}$ enters the sum only in Equation (A11) with $\theta =\alpha $. Then:

$${q}_{i}=\frac{1}{1-\alpha}{q}_{1i}=\frac{{\sigma}_{i}}{1-\alpha}\left[\mathrm{ln}\left(\frac{1-\alpha}{{\sigma}_{i}}\right)\left(1+\frac{{\sigma}_{i}}{{\delta}_{i}}\right)-\frac{{\delta}_{\alpha}}{{\delta}_{i}}\right]$$

Then, the risk in the CVaR quadrangle equals:

$${\overline{\mathcal{R}}}_{\alpha}\left(X\right)=\underset{\epsilon \to 0}{\mathrm{lim}}{\sum}_{i={\nu}_{\alpha}-1}^{\nu}{q}_{i}CVa{R}_{{\beta}_{i}}\left(X\right).$$

Deviation is calculated as follows: ${\overline{\mathcal{D}}}_{\alpha}\left(X\right)={\overline{\mathcal{R}}}_{\alpha}\left(X\right)-E\left[X\right]$.

Taking into account that all ${\beta}_{i}$ are fixed for $i<\nu $, the limit operation affects only coefficients for $i=\nu $. Namely, $\underset{\epsilon \to 0}{\mathrm{lim}}{\beta}_{\nu}=1$, $\underset{\epsilon \to 0}{\mathrm{lim}}{\sigma}_{\nu}=0$, $\underset{\epsilon \to 0}{\mathrm{lim}}{\sigma}_{\nu}\mathrm{ln}\left({\sigma}_{\nu}\right)=0$, $\underset{\epsilon \to 0}{\mathrm{lim}}{\delta}_{\nu}=\delta $. Then, the limit values of coefficients ${q}_{i}$ are equal to:

- ${q}_{\nu}=0$ in both cases, when $\nu >{\nu}_{\alpha}$ and $\nu ={\nu}_{\alpha}$.
- ${q}_{i}=\frac{1}{1-\alpha}\times \frac{{\sigma}_{i}}{\delta}\left[{\sigma}_{i-1}\mathrm{ln}\left(\frac{{\sigma}_{i-1}}{{\sigma}_{i}}\right)\right]$ for $i$ such that ${\nu}_{\alpha}<i=\nu -1$.
- ${q}_{i}=\frac{1}{1-\alpha}\times \frac{{\sigma}_{i}}{\delta}\left[{\sigma}_{i-1}\mathrm{ln}\left(\frac{{\sigma}_{i-1}}{{\sigma}_{i}}\right)+{\sigma}_{i+1}\mathrm{ln}\left(\frac{{\sigma}_{i+1}}{{\sigma}_{i}}\right)\right]$ for $i$ such that ${\nu}_{\alpha}<i<\nu -1$.
- ${q}_{i}=\frac{1}{1-\alpha}\times \frac{{\sigma}_{i}}{\delta}\left[\delta -{\delta}_{\alpha}+{\sigma}_{i-1}\mathrm{ln}\left(\frac{1-\alpha}{{\sigma}_{i}}\right)+{\sigma}_{i+1}\mathrm{ln}\left(\frac{{\sigma}_{i+1}}{{\sigma}_{i}}\right)\right]$ for $i={\nu}_{\alpha}<\nu -1$.
- ${q}_{i}=\frac{1}{1-\alpha}\times \frac{{\sigma}_{i}}{\delta}\left[{\delta}_{\alpha}+{\sigma}_{i+1}\mathrm{ln}\left(\frac{{\sigma}_{i+1}}{1-\alpha}\right)\right]$ for $i={\nu}_{\alpha}-1<\nu -1$.

If the number of atoms used in calculating ${\overline{\mathcal{R}}}_{\alpha}\left(X\right)$ is three or two, then the coefficients have the values:

- ${q}_{i}=\frac{1}{1-\alpha}\times \frac{{\sigma}_{i}}{\delta}\left[\delta -{\delta}_{\alpha}+{\sigma}_{i-1}\mathrm{ln}\left(\frac{1-\alpha}{{\sigma}_{i}}\right)\right]$ for $i={\nu}_{\alpha}=\nu -1$.
- ${q}_{i}=\frac{1}{1-\alpha}\times \frac{{\sigma}_{i}}{\delta}\left[{\delta}_{\alpha}\right]=1$ for $i={\nu}_{\alpha}-1=\nu -1$.

It can be shown that ${\sum}_{i={\nu}_{\alpha}-1}^{\nu}{q}_{i}=1$ by sequentially summing up the coefficients.
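Both the representation ${\overline{\mathcal{R}}}_{\alpha}\left(X\right)={\sum}_{i}{q}_{i}CVa{R}_{{\beta}_{i}}\left(X\right)$ and the claim that the coefficients sum to one can be checked numerically. A minimal sketch (our own illustration, assuming an equally spaced grid and ${\nu}_{\alpha}<\nu -1$, so only the generic cases of the list above occur; variable names are ours):

```python
import numpy as np

xs = np.array([1.0, 3.0, 5.0, 7.0])     # sorted, equally probable atoms
nu, alpha = len(xs), 0.3
delta = 1.0 / nu
beta = delta * np.arange(nu + 1)        # beta_0, ..., beta_nu
sigma = 1.0 - beta                      # sigma_i = 1 - beta_i
nu_a = int(np.searchsorted(beta, alpha, side="right"))  # beta_{nu_a-1} <= alpha < beta_{nu_a}
d_a = beta[nu_a] - alpha                # delta_alpha

q = np.zeros(nu + 1)                    # q_nu = 0 in the limit
for i in range(nu_a - 1, nu):           # limit coefficients from the bullet list
    if nu_a < i < nu - 1:
        br = sigma[i-1] * np.log(sigma[i-1] / sigma[i]) \
             + sigma[i+1] * np.log(sigma[i+1] / sigma[i])
    elif i == nu - 1:
        br = sigma[i-1] * np.log(sigma[i-1] / sigma[i])
    elif i == nu_a:
        br = delta - d_a + sigma[i-1] * np.log((1 - alpha) / sigma[i]) \
             + sigma[i+1] * np.log(sigma[i+1] / sigma[i])
    else:                               # i == nu_a - 1
        br = d_a + sigma[i+1] * np.log(sigma[i+1] / (1 - alpha))
    q[i] = sigma[i] * br / ((1 - alpha) * delta)

assert np.isclose(q.sum(), 1.0)         # coefficients sum to one

# Compare sum_i q_i * CVaR_{beta_i}(X) with direct numerical integration.
cvar_grid = np.array([xs[i:].mean() if i < nu else xs[-1] for i in range(nu + 1)])
risk_from_q = q @ cvar_grid

exceed = np.array([np.maximum(xs - v, 0.0).mean() for v in xs])  # E[(X - x_k)_+]
b = np.linspace(alpha, 1.0 - 1e-9, 100001)
j = np.minimum((b * nu).astype(int), nu - 1)                     # grid cell of level b
vals = xs[j] + exceed[j] / (1.0 - b)                             # CVaR_b(X)
risk_numeric = np.sum((vals[1:] + vals[:-1]) * 0.5 * np.diff(b)) / (1.0 - alpha)

assert np.isclose(risk_from_q, risk_numeric, atol=1e-5)
```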

By recalling that ${\sigma}_{i}=\delta \left(\nu -i\right)$, we can rewrite the equations for the coefficients ${q}_{i}$ using $\delta $ and $j=\nu -i$.

Lemma 2 is proved. □

## Appendix E. Proof of Lemma 3

**Proof.**

Distribution of random value $X$ defines the partition of the interval $\left[0,1\right]$: $\delta =1/\nu $, ${\beta}_{i}=i\delta $, for $i=0,1,\dots ,\nu .$

CDF ${F}_{X}\left(x\right)=prob\left\{X\le x\right\}$ is a non-decreasing, right-continuous step function whose values lie on the grid $\left\{{\beta}_{0},{\beta}_{1},\dots ,{\beta}_{\nu}\right\}$, jumping by $\delta $ at each atom. Therefore, from the definitions of $Va{R}_{\gamma}^{-}\left(X\right)$ and $Va{R}_{\gamma}^{+}\left(X\right)$, we have:

$$Va{R}_{\gamma}^{-}\left(X\right)=Va{R}_{\gamma}^{+}\left(X\right)=Va{R}_{\gamma}\left(X\right)=const\text{ for }\gamma \in \left({\beta}_{i-1},{\beta}_{i}\right).$$

$$Va{R}_{{\beta}_{i}}^{+}\left(X\right)=Va{R}_{\gamma}\left(X\right)\text{ for }\gamma \in \left({\beta}_{i},{\beta}_{i+1}\right)\text{ and }i=0,\dots ,\nu -1.$$

$$Va{R}_{{\beta}_{i}}^{-}\left(X\right)=Va{R}_{\gamma}\left(X\right)\text{ for }\gamma \in \left({\beta}_{i-1},{\beta}_{i}\right)\text{ and }i=1,\dots ,\nu .$$

For $i=0$ we have ${\beta}_{i}=0$ and $Va{R}_{0}^{-}\left(X\right)=-\infty $.

Thus, according to the definition of the statistic for the mixed-quantile quadrangle, for Set 2:

$${\mathcal{S}}_{\alpha}^{II}\left(X\right)={\sum}_{i={\nu}_{\alpha}-1}^{\nu -1}{q}_{i}Va{R}_{{\beta}_{i}}\left(X\right)$$

where $Va{R}_{{\beta}_{i}}\left(X\right)$ are intervals (which may have zero length for some $i$), and ${q}_{i}$ and ${\nu}_{\alpha}$ are defined above for Set 2. Therefore, ${\mathcal{S}}_{\alpha}^{II}\left(X\right)$ is also an interval, and it has non-zero length if not all $Va{R}_{{\beta}_{i}}^{-}\left(X\right)$, $i={\nu}_{\alpha}-1,\dots ,\nu -1$, are equal.

If ${\nu}_{\alpha}-1=0,$ then ${\mathcal{S}}_{\alpha}^{II}\left(X\right)$ is a left-open interval with the lower bound $-\infty $.

Let ${\nu}_{\alpha}-1>0$. For Set 1, we defined the confidence levels as interior points of the intervals, ${\gamma}_{i}\in \left({\beta}_{i-1},{\beta}_{i}\right)$. Using these definitions of ${\gamma}_{i}$, we can express ${\mathcal{S}}_{\alpha}^{II}\left(X\right)$ as the interval:

$${\mathcal{S}}_{\alpha}^{II}\left(X\right)=\left[{\sum}_{i={\nu}_{\alpha}-1}^{\nu -1}{q}_{i}Va{R}_{{\gamma}_{i}}\left(X\right),{\sum}_{i={\nu}_{\alpha}-1}^{\nu -1}{q}_{i}Va{R}_{{\gamma}_{i+1}}\left(X\right)\right]$$

To simplify notation, we denote ${V}_{i}=Va{R}_{{\gamma}_{i}}\left(X\right)$ and ${C}_{\gamma}=CVa{R}_{\gamma}\left(X\right)$, and we let $L$ and $U$ be the bounds of ${\mathcal{S}}_{\alpha}^{II}\left(X\right)$, i.e., ${\mathcal{S}}_{\alpha}^{II}\left(X\right)=\left[L,U\right]$.

Statistic ${\overline{\mathcal{S}}}_{\alpha}\left(X\right)$ of the CVaR quadrangle is defined in Lemma 1 (see Equation (1)) as ${\overline{\mathcal{S}}}_{\alpha}\left(X\right)={\sum}_{i={\nu}_{\alpha}}^{\nu}{p}_{i}{V}_{i}$.

Therefore, we wish to prove that:

$$L={\sum}_{i={\nu}_{\alpha}-1}^{\nu -1}{q}_{i}{V}_{i}\le {\overline{\mathcal{S}}}_{\alpha}\left(X\right)\le {\sum}_{i={\nu}_{\alpha}-1}^{\nu -1}{q}_{i}{V}_{i+1}=U$$

Let us prove the right inequality for the upper bound.

According to Lemmas 1 and 2, it is valid that:

$${\overline{\mathcal{R}}}_{\alpha}\left(X\right)={\sum}_{i={\nu}_{\alpha}}^{\nu}{p}_{i}{C}_{{\gamma}_{i}}={\sum}_{i={\nu}_{\alpha}-1}^{\nu -1}{q}_{i}{C}_{{\beta}_{i}}$$

Let ${d}_{i}={q}_{i-1}-{p}_{i}$. Then, from the last equality:

$${\sum}_{i={\nu}_{\alpha}}^{\nu}{p}_{i}{C}_{{\gamma}_{i}}={\sum}_{i={\nu}_{\alpha}}^{\nu}{q}_{i-1}{C}_{{\beta}_{i-1}}={\sum}_{i={\nu}_{\alpha}}^{\nu}{p}_{i}{C}_{{\beta}_{i-1}}+{\sum}_{i={\nu}_{\alpha}}^{\nu}{d}_{i}{C}_{{\beta}_{i-1}}$$

$${\sum}_{i={\nu}_{\alpha}}^{\nu}{d}_{i}{C}_{{\beta}_{i-1}}={\sum}_{i={\nu}_{\alpha}}^{\nu}{p}_{i}\left({C}_{{\gamma}_{i}}-{C}_{{\beta}_{i-1}}\right)\ge 0$$

The last inequality is valid because ${p}_{i}>0$, ${\gamma}_{i}>{\beta}_{i-1}$, and ${C}_{{\gamma}_{i}}\ge {C}_{{\beta}_{i-1}}$.

Because ${C}_{{\beta}_{i-1}}$ may have arbitrary but ordered values (${C}_{{\beta}_{i-1}}\le {C}_{{\beta}_{i}}$ due to the ordered ${\beta}_{i}$), $i={\nu}_{\alpha},\dots ,\nu $, the last inequality is valid for any ordered sequence of values ${z}_{i-1}\le {z}_{i}$. Therefore, it is valid for the VaRs ${V}_{i}$ that:

$${{\displaystyle \sum}}_{i={\nu}_{\alpha}}^{\nu}{d}_{i}{\mathrm{V}}_{i}={{\displaystyle \sum}}_{i={\nu}_{\alpha}}^{\nu}\left({q}_{i-1}-{p}_{i}\right){\mathrm{V}}_{i}\ge 0$$

This leads to the inequality for the upper bound:

$$U={{\displaystyle \sum}}_{i={\nu}_{\alpha}}^{\nu}{q}_{i-1}{V}_{i}\ge {{\displaystyle \sum}}_{i={\nu}_{\alpha}}^{\nu}{p}_{i}{V}_{i}={\overline{\mathcal{S}}}_{\alpha}\left(X\right).$$

Let us prove the inequality for the lower bound $L$:

$$L={{\displaystyle \sum}}_{i={\nu}_{\alpha}}^{\nu}{q}_{i-1}{V}_{i-1}={{\displaystyle \sum}}_{i={\nu}_{\alpha}}^{\nu}\left({p}_{i}+{d}_{i}\right)\left({V}_{i}-\Delta {V}_{i}\right)={\overline{\mathcal{S}}}_{\alpha}\left(X\right)-{{\displaystyle \sum}}_{i={\nu}_{\alpha}}^{\nu}{p}_{i}\Delta {V}_{i}+{{\displaystyle \sum}}_{i={\nu}_{\alpha}}^{\nu}{d}_{i}{V}_{i-1}$$

where $\Delta {V}_{i}={V}_{i}-{V}_{i-1}$.

Let us calculate an upper estimate of $L$. Let us set ${V}_{{\nu}_{\alpha}-1}={V}_{{\nu}_{\alpha}}$, so that $\Delta {V}_{{\nu}_{\alpha}}=0$; this increases the right-hand side of Equation (A13). By recalling that ${p}_{i}=\delta /\left(1-\alpha \right)=\frac{1}{\nu \left(1-\alpha \right)}$, $i={\nu}_{\alpha}+1,\dots ,\nu $, for Set 1, we have:

$${{\displaystyle \sum}}_{i={\nu}_{\alpha}}^{\nu}{p}_{i}\Delta {V}_{i}=\delta \left({V}_{\nu}-{V}_{{\nu}_{\alpha}}\right)/\left(1-\alpha \right)$$

Because ${\sum}_{i={\nu}_{\alpha}}^{\nu}{p}_{i}={\sum}_{i={\nu}_{\alpha}}^{\nu}{q}_{i-1}$, then ${\sum}_{i={\nu}_{\alpha}}^{\nu}{d}_{i}=0$ and ${\sum}_{i={\nu}_{\alpha}}^{\nu}{d}_{i}{V}_{i-1}={\sum}_{i={\nu}_{\alpha}}^{\nu}{d}_{i}\left({V}_{i-1}-D\right)$ for any $D$.

Because ${\sum}_{i={\nu}_{\alpha}}^{\nu}{d}_{i}{z}_{i}\ge 0$ for any increasing ordered ${z}_{i}$, we can set ${z}_{{\nu}_{\alpha}}=0$ and ${z}_{i}=1$, $i={\nu}_{\alpha}+1,\dots ,\nu $; then:

$${{\displaystyle \sum}}_{i={\nu}_{\alpha}}^{\nu}{d}_{i}{z}_{i}={{\displaystyle \sum}}_{i={\nu}_{\alpha}+1}^{\nu}{d}_{i}=-{d}_{{\nu}_{\alpha}}={p}_{{\nu}_{\alpha}}-{q}_{{\nu}_{\alpha}-1}\ge 0$$

Taking into account that ${p}_{{\nu}_{\alpha}},{q}_{{\nu}_{\alpha}-1}\ge 0$ and ${p}_{{\nu}_{\alpha}}\le \frac{\delta}{1-\alpha}$, we have ${\sum}_{i={\nu}_{\alpha}+1}^{\nu}{d}_{i}\le \frac{\delta}{1-\alpha}.$ Then:

$${{\displaystyle \sum}}_{i={\nu}_{\alpha}}^{\nu}{d}_{i}{V}_{i-1}={{\displaystyle \sum}}_{i={\nu}_{\alpha}}^{\nu}{d}_{i}\left({V}_{i-1}-{V}_{{\nu}_{\alpha}-1}\right)\le {{\displaystyle \sum}}_{i={\nu}_{\alpha}+1}^{\nu}{d}_{i}\left({V}_{\nu -1}-{V}_{{\nu}_{\alpha}-1}\right)\le \left({V}_{\nu -1}-{V}_{{\nu}_{\alpha}-1}\right)\frac{\delta}{1-\alpha}$$

Let us return to estimating $L$, taking into account that ${V}_{{\nu}_{\alpha}-1}={V}_{{\nu}_{\alpha}}$:

$$L\le {\overline{\mathcal{S}}}_{\alpha}\left(X\right)-{{\displaystyle \sum}}_{i={\nu}_{\alpha}}^{\nu}{p}_{i}\Delta {V}_{i}+{{\displaystyle \sum}}_{i={\nu}_{\alpha}}^{\nu}{d}_{i}{V}_{i-1}\le {\overline{\mathcal{S}}}_{\alpha}\left(X\right)-\frac{\delta \left({V}_{\nu}-{V}_{{\nu}_{\alpha}}\right)}{1-\alpha}+\frac{\delta \left({V}_{\nu -1}-{V}_{{\nu}_{\alpha}-1}\right)}{1-\alpha}\phantom{\rule{0ex}{0ex}}={\overline{\mathcal{S}}}_{\alpha}\left(X\right)-\frac{\delta \left({V}_{\nu}-{V}_{\nu -1}\right)}{1-\alpha}\le {\overline{\mathcal{S}}}_{\alpha}\left(X\right)$$

Therefore, we have proved that $L\le {\overline{\mathcal{S}}}_{\alpha}\left(X\right)\le U$.

Lemma 3 is proved. □
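The bracketing $L\le {\overline{\mathcal{S}}}_{\alpha}\left(X\right)\le U$ of Lemma 3 can also be illustrated numerically. A minimal sketch (our own illustration, assuming ${\nu}_{\alpha}-1>0$ and ${\nu}_{\alpha}<\nu -1$, so the generic Set 2 coefficient formulas of Lemma 2 apply; variable names are ours):

```python
import numpy as np

xs = np.array([1.0, 3.0, 5.0, 7.0])     # sorted, equally probable atoms
nu, alpha = len(xs), 0.3
delta = 1.0 / nu
beta = delta * np.arange(nu + 1)
sigma = 1.0 - beta
nu_a = int(np.searchsorted(beta, alpha, side="right"))   # here nu_a = 2 > 1
d_a = beta[nu_a] - alpha

# statistic of the CVaR quadrangle: S = sum_i p_i V_i = CVaR_alpha(X)
S = (d_a * xs[nu_a - 1] + delta * xs[nu_a:].sum()) / (1.0 - alpha)

# Set 2 limit coefficients q_i from Lemma 2 (generic cases; requires nu_a < nu - 1)
q = np.zeros(nu + 1)
for i in range(nu_a - 1, nu):
    if nu_a < i < nu - 1:
        br = sigma[i-1] * np.log(sigma[i-1] / sigma[i]) \
             + sigma[i+1] * np.log(sigma[i+1] / sigma[i])
    elif i == nu - 1:
        br = sigma[i-1] * np.log(sigma[i-1] / sigma[i])
    elif i == nu_a:
        br = delta - d_a + sigma[i-1] * np.log((1 - alpha) / sigma[i]) \
             + sigma[i+1] * np.log(sigma[i+1] / sigma[i])
    else:                                # i == nu_a - 1
        br = d_a + sigma[i+1] * np.log(sigma[i+1] / (1 - alpha))
    q[i] = sigma[i] * br / ((1 - alpha) * delta)

# V_i = xs[i-1]; the lower bound uses V_i, the upper bound uses V_{i+1}
L = sum(q[i] * xs[i - 1] for i in range(nu_a - 1, nu))
U = sum(q[i] * xs[i] for i in range(nu_a - 1, nu))
assert L <= S <= U
```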

## References

- Acerbi, Carlo, and Dirk Tasche. 2002. On the Coherence of Expected Shortfall. Journal of Banking and Finance 26: 1487–503. [Google Scholar] [CrossRef]
- Adrian, Tobias, and Markus K. Brunnermeier. 2016. CoVaR. American Economic Review 106: 1705–41. [Google Scholar] [CrossRef]
- Bassett, Gilbert W., and Hsiu-Lang Chen. 2001. Portfolio Style: Return-based Attribution Using Quantile Regression. Empirical Economics 26: 293–305. [Google Scholar] [CrossRef]
- Beraldi, Patrizia, Antonio Violi, Massimiliano Ferrara, Claudio Ciancio, and Bruno Antonio Pansera. 2019. Dealing with complex transaction costs in portfolio management. Annals of Operations Research, 1–16. [Google Scholar] [CrossRef]
- Carhart, Mark M. 1997. On Persistence in Mutual Fund Performance. Journal of Finance 52: 57–82. [Google Scholar] [CrossRef]
- Case Study. 2014. Style Classification with Quantile Regression. Available online: http://www.ise.ufl.edu/uryasev/research/testproblems/financial_engineering/style-classification-with-quantile-regression/ (accessed on 24 June 2019).
- Case Study. 2016. Estimation of CVaR through Explanatory Factors with CVaR (Superquantile) Regression. Available online: http://www.ise.ufl.edu/uryasev/research/testproblems/financial_engineering/on-implementation-of-cvar-regression/ (accessed on 24 June 2019).
- Fissler, Tobias, and Johanna F. Ziegel. 2015. Higher order elicitability and Osband’s principle. arXiv. [Google Scholar] [CrossRef]
- Huang, Wei-Qiang, and Stan Uryasev. 2018. The CoCVaR Approach: Systemic Risk Contribution Measurement. Journal of Risk 20: 75–93. [Google Scholar] [CrossRef]
- Koenker, Roger. 2005. Quantile Regression. Cambridge: Cambridge University Press. [Google Scholar]
- Koenker, Roger, and Gilbert Bassett. 1978. Regression Quantiles. Econometrica 46: 33–50. [Google Scholar] [CrossRef]
- Portfolio Safeguard. 2018. American Optimal Decisions, USA. Available online: http://www.aorda.com (accessed on 24 June 2019).
- Rockafellar, R. Tyrrell, and Johannes O. Royset. 2018. Superquantile/CVaR Risk Measures: Second-order Theory. Annals of Operations Research 262: 3–29. [Google Scholar] [CrossRef]
- Rockafellar, R. Tyrrell, and Stan Uryasev. 2000. Optimization of Conditional Value-At-Risk. Journal of Risk 2: 21–41. [Google Scholar] [CrossRef]
- Rockafellar, R. Tyrrell, and Stan Uryasev. 2002. Conditional Value-at-Risk for General Loss Distributions. Journal of Banking and Finance 26: 1443–71. [Google Scholar] [CrossRef]
- Rockafellar, R. Tyrrell, and Stan Uryasev. 2013. The Fundamental Risk Quadrangle in Risk Management, Optimization and Statistical Estimation. Surveys in Operations Research and Management Science 18: 33–53. [Google Scholar] [CrossRef]
- Rockafellar, R. Tyrrell, Stan Uryasev, and Michael Zabarankin. 2008. Risk Tuning with Generalized Linear Regression. Mathematics of Operations Research 33: 712–29. [Google Scholar] [CrossRef]
- Rockafellar, R. Tyrrell, Johannes O. Royset, and Sofia I. Miranda. 2014. Superquantile Regression with Applications to Buffered Reliability, Uncertainty Quantification and Conditional Value-at-Risk. European Journal of Operational Research 234: 140–54. [Google Scholar] [CrossRef]
- Sharpe, William F. 1992. Asset Allocation: Management Style and Performance Measurement. Journal of Portfolio Management (Winter) 18: 7–19. [Google Scholar] [CrossRef]
- Ziegel, Johanna F. 2014. Coherence and elicitability. arXiv. [Google Scholar] [CrossRef]

**Figure 2.**Five equally probable atoms. Risk in the CVaR quadrangle and mixed-quantile quadrangle with Set 1 of parameters for $\alpha =0.5.$

| Optimization Problem # | Set # | Objective | RLG | RLV | RUJ | RUO | Intercept | Solving Time (s) |
|---|---|---|---|---|---|---|---|---|
| 1 | N/A | 0.01248 | 0.486 | 0.581 | −0.0753 | −6.22 × 10^{−3} | 6.98 × 10^{−3} | 0.02 |
| 2 | N/A | 0.01248 | 0.486 | 0.582 | −0.0753 | −6.22 × 10^{−3} | 6.98 × 10^{−3} | 0.02 |
| 3 | Set 1 | 0.01247 | 0.486 | 0.582 | −0.0753 | −6.22 × 10^{−3} | 6.96 × 10^{−3} | 0.03 |
| 4 | Set 1 | 0.01251 | 0.486 | 0.582 | −0.0752 | −6.22 × 10^{−3} | 6.98 × 10^{−3} | 0.11 |
| 4 | Set 2 | 0.01248 | 0.486 | 0.582 | −0.0753 | −6.23 × 10^{−3} | 6.98 × 10^{−3} | 0.03 |

| Optimization Problem # | Set # | Objective | RLG | RLV | RUJ | RUO | Intercept | Solving Time (s) |
|---|---|---|---|---|---|---|---|---|
| 1 | N/A | 0.016656 | 0.472 | 0.606 | −0.078 | −7.052 × 10^{−3} | 1.05 × 10^{−2} | 0.02 |
| 2 | N/A | 0.016656 | 0.472 | 0.606 | −0.078 | −7.052 × 10^{−3} | 1.05 × 10^{−2} | 0.02 |
| 3 | Set 1 | 0.016656 | 0.472 | 0.606 | −0.078 | −7.052 × 10^{−3} | 1.05 × 10^{−2} | 0.02 |
| 4 | Set 1 | 0.016656 | 0.472 | 0.606 | −0.078 | −7.052 × 10^{−3} | 1.05 × 10^{−2} | 0.11 |
| 4 | Set 2 | 0.016656 | 0.472 | 0.606 | −0.078 | −7.052 × 10^{−3} | 1.05 × 10^{−2} | 0.04 |

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).