
# Reverse Sensitivity Analysis for Risk Modelling

by
Silvana M. Pesenti
Department of Statistical Sciences, University of Toronto, Toronto, ON M5S 3G3, Canada
Risks 2022, 10(7), 141; https://doi.org/10.3390/risks10070141
Submission received: 26 May 2022 / Revised: 29 June 2022 / Accepted: 7 July 2022 / Published: 18 July 2022
(This article belongs to the Special Issue Actuarial Mathematics and Risk Management)

## Abstract

We consider the problem where a modeller conducts sensitivity analysis of a model consisting of random input factors, a corresponding random output of interest, and a baseline probability measure. The modeller seeks to understand how the model (the distribution of the input factors as well as the output) changes under a stress on the output’s distribution. Specifically, for a stress on the output random variable, we derive the unique stressed distribution of the output that is closest in the Wasserstein distance to the baseline output’s distribution and satisfies the stress. We further derive the stressed model, including the stressed distribution of the inputs, which can be calculated in a numerically efficient way from a set of baseline Monte Carlo samples and which is implemented in the R package SWIM on CRAN. The proposed reverse sensitivity analysis framework is model-free and allows for stresses on the output such as (a) the mean and variance, (b) any distortion risk measure including the Value-at-Risk and Expected-Shortfall, and (c) expected utility type constraints, thus making the reverse sensitivity analysis framework suitable for risk models.

## 1. Introduction

Sensitivity analysis is indispensable for model building, model interpretation, and model validation, as it provides insight into the relationship between model inputs and outputs. A key tool for sensitivity analysis are sensitivity measures, which assign to each model input a score representing that input factor’s ability to explain the variability of a model output’s summary statistic; see Saltelli et al. (2008) and Borgonovo and Plischke (2016) for an in-depth review. One of the most widely used output summary statistics is the variance, which gives rise to sensitivity measures, e.g., the Sobol indices, that apportion the uncertainty in the output’s variance to the input factors. In many applications, such as reliability management and financial and insurance risk management, however, the variance is not the output statistic of concern and quantile-based measures are used instead; indicatively, see Asimit et al. (2019); Fissler and Pesenti (2022); Maume-Deschamps and Niang (2018); Tsanakas and Millossovich (2016). Furthermore, it is typical in financial risk management applications that model inputs are subject to distributional uncertainty. Probabilistic (or global) sensitivity measures, however, tacitly assume that the model’s distributional assumptions are correctly specified; indeed, sensitivity measures based on the difference between conditional (on a model input) and unconditional densities (of the output) are termed the “common rationale” Borgonovo et al. (2016). Examples include indices such as Borgonovo’s sensitivity measures Borgonovo (2007), the f-sensitivity index Rahman (2016), and sensitivity indices based on the Cramér–von Mises distance Gamboa et al. (2018); we also refer to Plischke and Borgonovo (2019) for a detailed overview and to Gamboa et al. (2020) for estimation of these sensitivity measures. Recently, Plischke and Borgonovo (2019) defined sensitivity measures that depend only on the copula between input factors, whereas Pesenti et al. (2021) propose a sensitivity measure based on directional derivatives that takes dependence between input factors into account. Estimating these sensitivities, however, may prove difficult in applications where joint observations are scarce, e.g., insurance portfolios, and their interpretation may be limited as dependence structures are commonly specified by expert opinions Denuit et al. (2006).
We consider an alternative sensitivity analysis framework proposed in Pesenti et al. (2019) that (a) considers statistical summaries relevant to risk management; (b) applies to models subject to distributional uncertainty, in that instead of relying on correctly specified distributions from which to calculate sensitivity measures, we derive alternative distributions that fulfil a specific probabilistic stress and are “closest” to the baseline distribution; and (c) studies reverse sensitivity measures. In contrast to the framework proposed in Pesenti et al. (2019), who use the Kullback–Leibler divergence to quantify the closeness of probability measures, in this work we consider the Wasserstein distance of order two to measure the distance between distribution functions. The Wasserstein distance allows for more flexibility in the choice of stresses, including survival probabilities (via quantiles) used in reliability analysis, risk measures employed in finance and insurance, and utility functions relevant for decisions under ambiguity.
Central to the reverse sensitivity analysis framework is a baseline model, the 3-tuple $(X, g, P)$, consisting of random input factors $X = (X_1, \ldots, X_n)$, an aggregation function $g : \mathbb{R}^n \to \mathbb{R}$ mapping input factors to a univariate output $Y = g(X)$, and a probability measure $P$. The methodology has been termed reverse sensitivity analysis by Pesenti et al. (2019) since it proceeds in a reverse fashion to classical sensitivity analysis, where input factors are perturbed and the corresponding altered output is studied. Indeed, in the reverse sensitivity analysis proposed by Pesenti et al. (2019), a stress on the output’s distribution is defined and the resulting changes in the input factors are monitored. The quintessence of the sensitivity analysis methodology is, however, not confined to stressing the output’s distribution; it is also applicable to stressing an input factor and observing the changes in the model output and in the other inputs. Throughout the exposition, we focus on the reverse sensitivity analysis that proceeds via the following steps:
(i)
Specify a stress on the baseline distribution of the output;
(ii)
Derive the unique stressed distribution of the output that is closest in the Wasserstein distance and fulfils the stress;
(iii)
The stressed distribution induces a canonical Radon–Nikodym derivative $\frac{\mathrm{d}Q^*}{\mathrm{d}P}$, i.e., a change of measure from the baseline $P$ to the stressed probability measure $Q^*$;
(iv)
Calculate sensitivity measures that reflect an input factor’s change in distribution from the baseline to the stressed model.
Sensitivity testing using divergence measures, in the spirit of the reverse sensitivity methodology, has been studied by Cambou and Filipović (2017) using f-divergences on a finite probability space; by Pesenti et al. (2019) and Pesenti et al. (2021) using the Kullback–Leibler divergence; and by Makam et al. (2021), who consider a discrete sample space combined with the $\chi^2$-divergence. It is, however, known that the set of distribution functions with finite f-divergence (e.g., the Kullback–Leibler and $\chi^2$ divergences) around a baseline distribution function depends on the baseline’s tail behaviour; thus, the f-divergence should be chosen dependent on the baseline distribution Kruse et al. (2019). The Wasserstein distance, on the contrary, automatically adapts to the baseline distribution function in that the Wasserstein distance penalises dissimilar distributional features such as different tail behaviour Bernard et al. (2020). The Wasserstein distance has enjoyed numerous applications to quantify distributional uncertainty; see, e.g., Blanchet and Murthy (2019) and Bernard et al. (2020) for applications to financial risk management. In the context of uncertainty quantification, Moosmüller et al. (2020) utilise the Wasserstein distance to elicit the (uncertain) aggregation map g from distributional knowledge of the inputs and outputs. Fort et al. (2021) utilise the Wasserstein distance to introduce global sensitivity indices for computer codes whose output is a distribution function. In this manuscript we use the Wasserstein distance as it allows for different stresses compared to the Kullback–Leibler divergence. Indeed, the Wasserstein distance allows for stresses on any distortion risk measure, while the Kullback–Leibler divergence only allows for stresses on the risk measures Value-at-Risk (VaR), and on VaR and Expected Shortfall jointly; see Pesenti et al. (2019).
This paper is structured as follows: In Section 2, we state the notation and definitions necessary for the exposition. In Section 3, we introduce the optimisation problems and derive the unique stressed distribution function of the output which has minimal Wasserstein distance to the baseline output’s distribution and satisfies a stress. The considered stresses include constraints on risk measures, quantiles, expected utilities, and combinations thereof. In Section 4, we characterise the canonical Radon–Nikodym derivative induced by the stressed distribution function and study how input factors’ distributions change when moving from the baseline to the stressed model. An application of the reverse sensitivity analysis is demonstrated on a mixture model in Section 5.
All proofs are relegated to Appendix A.

## 2. Preliminaries

Throughout we work on a measurable space $(\Omega, \mathcal{A})$ and denote the set of distribution functions with finite second moment by
$\mathcal{M} = \big\{ G : \mathbb{R} \to [0,1] \,\big|\, G \text{ non-decreasing and right-continuous}, \ \lim_{x \searrow -\infty} G(x) = 0, \ \lim_{x \nearrow +\infty} G(x) = 1, \text{ and } \int x^2 \, \mathrm{d}G(x) < +\infty \big\},$
and the corresponding set of square-integrable (left-continuous) quantile functions by
$\breve{\mathcal{M}} = \big\{ \breve{G} \in L^2([0,1]) \,\big|\, \breve{G} \text{ non-decreasing and left-continuous} \big\}.$
For any distribution function $G \in \mathcal{M}$, we denote its corresponding (left-continuous) quantile function by $\breve{G} \in \breve{\mathcal{M}}$, that is, $\breve{G}(u) = \inf\{ y \in \mathbb{R} \,|\, G(y) \ge u \}$, $u \in [0,1]$, with the convention that $\inf \emptyset = +\infty$. We measure the discrepancy between distribution functions on the real line using the Wasserstein distance of order 2, defined as follows.
Definition 1
(Wasserstein Distance). The Wasserstein distance (of order 2) between two distribution functions $F 1$ and $F 2$ is defined as Villani (2008)
$W_2(F_1, F_2) = \Big( \inf_{\pi \in \Pi(F_1, F_2)} \int_{\mathbb{R}^2} |z_1 - z_2|^2 \, \pi(\mathrm{d}z_1, \mathrm{d}z_2) \Big)^{\frac{1}{2}},$
where $Π ( F 1 , F 2 )$ denotes the set of all bivariate probability measures with marginal distributions $F 1$ and $F 2$, respectively.
The Wasserstein distance is the minimal quadratic cost associated with transporting the distribution $F 1$ to $F 2$ using all possible couplings (bivariate distributions) with fixed marginals $F 1$ and $F 2$. The Wasserstein distance admits desirable properties to quantify model uncertainty such as the comparison of distributions with differing support, e.g., with the empirical distribution function. Moreover it is symmetric and forms a metric on the space of probability measures; we refer to Villani (2008) for an overview and properties of the Wasserstein distance. It is well known (Dall’Aglio 1956) that for distributions on the real line, the Wasserstein distance admits the representation
$W_2(F_1, F_2) = \Big( \int_0^1 \big( \breve{F}_1(u) - \breve{F}_2(u) \big)^2 \, \mathrm{d}u \Big)^{\frac{1}{2}}.$
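For samples rather than analytic distributions, the quantile representation above suggests a direct estimator. The following is a minimal sketch (function and variable names are ours, not part of the paper or the SWIM package):

```python
import numpy as np

def wasserstein2(x, y, grid_size=10_000):
    """Order-2 Wasserstein distance between two samples on the real line,
    estimated via the quantile representation above."""
    u = (np.arange(grid_size) + 0.5) / grid_size  # midpoint grid on (0, 1)
    qx = np.quantile(x, u)                        # empirical quantile of x
    qy = np.quantile(y, u)
    return np.sqrt(np.mean((qx - qy) ** 2))

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 50_000)
b = rng.normal(1.0, 1.0, 50_000)
# For two normals with equal variance, the distance is the mean shift (1 here).
print(wasserstein2(a, b))
```

Since both quantile curves differ by a constant for a pure location shift, the estimate should be close to 1 up to sampling noise.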

## 3. Deriving the Stressed Distribution

Throughout this section we assume that the modeller’s baseline model is the 3-tuple $(X, g, P)$ consisting of a random vector of input factors $X = (X_1, \ldots, X_n)$, an aggregation function $g : \mathbb{R}^n \to \mathbb{R}$ mapping input factors to a (for simplicity) univariate output $Y = g(X)$, and a probability measure $P$. The baseline probability measure $P$ reflects the modeller’s (statistical and expert) knowledge of the distribution of $X$, and we denote the distribution function of the output by $F(y) = P(Y \le y)$. The modeller then performs reverse sensitivity analysis, that is, tries to understand how prespecified stresses/constraints on the output distribution F, e.g., a joint increase in its mean and standard deviation or in a risk measure such as the Value-at-Risk (VaR) or Expected Shortfall (ES), affect the baseline model, e.g., the joint distribution of the input factors. For this, we first define the notion of a stressed distribution. Specifically, for given constraints we call a solution to the optimisation problem
$\mathop{\arg\min}_{G \in \mathcal{M}} \; W_2(G, F) \quad \text{subject to stresses/constraints on } G, \qquad (1)$
a stressed distribution. In problem (1), the baseline distribution F is fixed and we seek, over all alternative distributions $G \in \mathcal{M}$, the one that satisfies the stress(es) and has the smallest Wasserstein distance to F. The solution to problem (1), the stressed distribution, may be interpreted as the most “plausible” distribution function arising under adverse circumstances. Examples of stresses and constraints considered in this work include an increase (decrease), compared to the corresponding values under the reference probability $P$, in, e.g., the mean, the mean and standard deviation, distortion risk measures, and utility functions, as well as combinations thereof.
Next, we recall the concept of weighted isotonic projection which is intrinsically connected to the solution of optimisation problem (1); indeed the stressed quantile functions can be uniquely characterised via weighted isotonic projections.
Definition 2
(Weighted Isotonic Projection Barlow et al. (1972)). The weighted isotonic projection $\ell^{\uparrow w}$ of a function $\ell \in L^2([0,1])$ with weight function $w : [0,1] \to [0, +\infty)$, $w \in L^2([0,1])$, is its weighted projection onto the set of non-decreasing and left-continuous functions in $L^2([0,1])$. That is, the unique function satisfying
$\ell^{\uparrow w} = \mathop{\arg\min}_{h \in \breve{\mathcal{M}}} \int_0^1 \big( \ell(u) - h(u) \big)^2 w(u) \, \mathrm{d}u.$
When the weight function is constant, i.e., $w(x) \equiv c$, $c > 0$, we write $\ell^{\uparrow}(\cdot) = \ell^{\uparrow c}(\cdot)$, as in this case the isotonic projection is indeed independent of c. The weighted isotonic projection not only admits a graphical interpretation as the non-decreasing function that minimises the weighted $L^2$-distance from $\ell$, but also has a discrete counterpart: the weighted isotonic regression Barlow et al. (1972). Numerically efficient algorithms for calculating weighted isotonic regressions are available, e.g., in the R package isotone De Leeuw et al. (2010).
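For readers without R at hand, the pool-adjacent-violators idea behind weighted isotonic regression can be sketched in a few lines of Python; this is a simplified stand-in for the routines in isotone, with names of our own choosing:

```python
import numpy as np

def isotonic_weighted(ell, w):
    """Weighted isotonic regression via pool-adjacent-violators: the
    non-decreasing sequence minimising sum_i w_i * (m_i - ell_i)^2."""
    # Each block stores [weighted mean, total weight, number of points].
    blocks = []
    for v, wi in zip(ell, w):
        blocks.append([v, wi, 1])
        # Pool adjacent violators: merge blocks while means decrease.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(w1 * m1 + w2 * m2) / wt, wt, n1 + n2])
    return np.concatenate([np.full(n, m) for m, wi, n in blocks])

y = np.array([1.0, 3.0, 2.0, 4.0, 3.5])
fit = isotonic_weighted(y, np.ones_like(y))
print(fit)  # non-decreasing projection of y
```

With unit weights, violating neighbours are simply averaged, so the fit is piecewise constant with the same weighted total as the input.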
In the next sections, we solve problem (1) for different choices of constraints: risk measure constraints (Section 3.1), integral constraints (Section 3.2), Value-at-Risk constraints (Section 3.3), and expected utility constraints (Section 3.4); in Section 3.5 we consider ways to smooth stressed distributions. Using these stressed distributions, we derive the stressed probability measures in Section 4 and study how a stress on the output is reflected in the input distribution(s).

#### 3.1. Risk Measure Constraints

This section considers stresses on distortion risk measures, that is we derive the unique stressed distribution that satisfies an increase and/or decrease of distortion risk measures while minimising the Wasserstein distance to the baseline distribution F.
Definition 3
(Distortion Risk Measures). Let $\gamma \in L^2([0,1])$ be a square-integrable function with $\gamma : [0,1] \to [0, +\infty)$ and $\int_0^1 \gamma(u) \, \mathrm{d}u = 1$. Then the distortion risk measure $\rho_\gamma$ with distortion weight function γ is defined as
$\rho_\gamma(G) = \int_0^1 \breve{G}(u) \gamma(u) \, \mathrm{d}u \quad \text{for } G \in \mathcal{M}. \qquad (2)$
The above definition of distortion risk measures makes the assumption that positive realisations are undesirable (losses) while negative realisations are desirable (gains). The class of distortion risk measures includes one of the most widely used risk measures in financial risk management, the Expected Shortfall (ES) at level $\alpha \in [0,1)$ (also called Tail Value-at-Risk), with $\gamma(u) = \frac{1}{1-\alpha} \mathbb{1}_{\{u > \alpha\}}$, see, e.g., Acerbi and Tasche (2002). The often used risk measure Value-at-Risk (VaR), while admitting a representation given in (2), has a corresponding weight function $\gamma$ that is not square-integrable. We derive the solution to optimisation problem (1) with a VaR constraint in Section 3.3.
Theorem 1
(Distortion Risk Measures). Let $r_k \in \mathbb{R}$, let $\rho_{\gamma_k}$ be a distortion risk measure with weight function $\gamma_k$, and assume there exists a distribution function $\tilde{G} \in \mathcal{M}$ satisfying $\rho_{\gamma_k}(\tilde{G}) = r_k$ for all $k \in \{1, \ldots, d\}$. Then, the optimisation problem
$\mathop{\arg\min}_{G \in \mathcal{M}} \; W_2(G, F) \quad \text{subject to} \quad \rho_{\gamma_k}(G) = r_k, \quad k = 1, \ldots, d, \qquad (3)$
has a unique solution given by
$\breve{G}^*(u) = \Big( \breve{F}(u) + \sum_{k=1}^d \lambda_k \gamma_k(u) \Big)^{\uparrow},$
where the Lagrange multipliers $\lambda_k$ are such that the constraints are fulfilled, that is, $\rho_{\gamma_k}(G^*) = r_k$ for all $k = 1, \ldots, d$.
In the above theorem, and also in later results, we assume that there exists a distribution function which satisfies all constraints. This assumption is not restrictive but requires that multiple constraints, in particular, be chosen carefully: e.g., imposing that $\int_0^1 \breve{G}(u) \, \mathrm{d}u > \frac{1}{1-\alpha} \int_\alpha^1 \breve{G}(u) \, \mathrm{d}u$ for $\alpha \in (0,1)$, i.e., that the mean be larger than the $\mathrm{ES}_\alpha$, cannot be fulfilled by any distribution function; such a combination of stresses is thus not of interest to a modeller.
We observe that the optimal quantile function is the isotonic projection of a weighted linear combination of the baseline’s quantile function $\breve{F}$ and the distortion weight functions of the risk measures. A prominent group of risk measures is the class of coherent risk measures, i.e., risk measures fulfilling the properties of monotonicity, positive homogeneity, translation invariance, and sub-additivity; see Artzner et al. (1999) for a discussion and interpretation. It is well known that a distortion risk measure is coherent if and only if its distortion weight function $\gamma(\cdot)$ is non-decreasing Kusuoka (2001). For the special case of a constraint on a coherent distortion risk measure that results in a larger risk measure compared to the baseline’s, we obtain an analytical solution without the need to calculate an isotonic projection.
Proposition 1
(Coherent Distortion Risk Measure). If $\rho_\gamma$ is a coherent distortion risk measure and $r \ge \rho_\gamma(F)$, then optimisation problem (3) with $d = 1$ has a unique solution given by
$\breve{G}^*(u) = \breve{F}(u) + \frac{r - \rho_\gamma(F)}{\int_0^1 \gamma(v)^2 \, \mathrm{d}v} \, \gamma(u).$
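The closed form in Proposition 1 is easy to verify numerically. The sketch below applies it to an ES stress on a lognormal baseline, mirroring the examples in the text; the discretisation and variable names are ours:

```python
import numpy as np
from scipy.stats import lognorm

# ES_alpha is coherent with gamma(u) = 1/(1-alpha) * 1{u > alpha}, so the
# closed form above applies. Baseline: Lognormal(7/8, 0.5^2) as in the text.
alpha = 0.95
mu, sigma = 7 / 8, 0.5
u = (np.arange(100_000) + 0.5) / 100_000                 # grid on (0, 1)
F_quantile = lognorm.ppf(u, s=sigma, scale=np.exp(mu))   # baseline quantile
gamma = (u > alpha) / (1 - alpha)                        # ES distortion weight
es_F = np.mean(F_quantile * gamma)                       # baseline ES_alpha
r = 1.1 * es_F                                           # stress: +10% on ES
# Closed form: G*(u) = F(u) + (r - rho(F)) / int(gamma^2) * gamma(u)
G_quantile = F_quantile + (r - es_F) / np.mean(gamma**2) * gamma
es_G = np.mean(G_quantile * gamma)
print(es_F, es_G)  # es_G hits the target r exactly
```

Because $\gamma$ is non-decreasing, adding a non-negative multiple of it keeps the stressed quantile function non-decreasing, so no isotonic projection is needed.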
We illustrate the stressed distribution functions for constraints on distortion risk measures in the next example. Specifically, we look at the $α$-$β$ risk measures which are a parametric family of distortion risk measures.
Example 1
($α$-$β$ Risk Measure). The α-β risk measure, $0 < β ≤ α < 1$, is defined by
$\gamma(u) = \frac{1}{\eta} \Big( p \, \mathbb{1}_{\{u < \beta\}} + (1 - p) \, \mathbb{1}_{\{u \ge \alpha\}} \Big),$
where $p \in [0,1]$ and $\eta = p \beta + (1 - p)(1 - \alpha)$ is the normalising constant. This parametric family contains several notable risk measures as special cases: for $p = 0$ we obtain $\mathrm{ES}_\alpha$, and for $p = 1$ the conditional lower tail expectation (LTE) at level β.
Moreover, if $p < \frac{1}{2}$ ($p > \frac{1}{2}$) the α-β risk measure emphasises losses (gains) relative to gains (losses). For $\alpha = \beta$ and $p < \frac{1}{2}$, the risk measure is equivalent to $\kappa \, \mathrm{ES}_\alpha[Y] - \lambda \, \mathrm{E}[Y]$, where $\kappa = \frac{(1 - 2p)(1 - \alpha)}{\eta}$ and $\lambda = \frac{p}{\eta}$.
Figure 1 displays the baseline $\breve{F}_Y$ and the stressed $\breve{G}_Y^*$ quantile functions of a random variable Y under a 10% increase in the α-β risk measure with $\beta = 0.1$, $\alpha = 0.9$, and various $p \in \{0.25, 0.5, 0.75\}$. The baseline distribution $F_Y$ is chosen to be $Lognormal(\mu, \sigma^2)$ with parameters $\mu = 7/8$ and $\sigma = 0.5$. We observe in Figure 1 that the stressed quantile functions $\breve{G}_Y^*$ have, in all three plots, a flat part which straddles $\beta = 0.1$ and a jump at $\alpha = 0.9$. The length of the flat part is increasing in p, while the size of the jump is decreasing in p. This can also be seen in the stressed densities $g_Y^*$, which have, for all values of p, a much heavier right tail albeit a much lighter left tail than the density of the baseline model. Thus, under this stress, both tails of the baseline distribution are altered.
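The distortion weight of the α-β risk measure is straightforward to code. A small sketch (names are ours) that also checks the normalisation and the ES special case $p = 0$:

```python
import numpy as np

def alpha_beta_gamma(u, alpha, beta, p):
    """Distortion weight of the alpha-beta risk measure defined above."""
    eta = p * beta + (1 - p) * (1 - alpha)   # normalising constant
    return (p * (u < beta) + (1 - p) * (u >= alpha)) / eta

u = (np.arange(1_000_000) + 0.5) / 1_000_000   # midpoint grid on (0, 1)
# p = 0 recovers the ES_alpha weight 1/(1-alpha) * 1{u >= alpha}:
g = alpha_beta_gamma(u, alpha=0.9, beta=0.1, p=0.0)
print(np.mean(g))  # integrates to 1, as every distortion weight must
```

Evaluating $\rho_\gamma$ then amounts to averaging a quantile function against this weight over the grid, as in Definition 3.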

#### 3.2. Integral Constraints

The next results are generalisations of stresses on distortion risk measures to integral constraints, and include as a special case a stress jointly on the mean, the variance, and distortion risk measures.
Theorem 2
(Integral). Let $h_k, \tilde{h}_l : [0,1] \to [0, \infty)$ be square-integrable functions and assume there exists a distribution function $\tilde{G} \in \mathcal{M}$ satisfying $\int_0^1 h_k(u) \breve{\tilde{G}}(u) \, \mathrm{d}u \le c_k$ and $\int_0^1 \tilde{h}_l(u) \breve{\tilde{G}}(u)^2 \, \mathrm{d}u \le \tilde{c}_l$ for all $k = 1, \ldots, d$ and $l = 1, \ldots, \tilde{d}$. Then the optimisation problem
$\mathop{\arg\min}_{G \in \mathcal{M}} \; W_2(G, F) \quad \text{subject to} \quad \int_0^1 h_k(u) \breve{G}(u) \, \mathrm{d}u \le c_k, \; k = 1, \ldots, d, \quad \text{and} \quad \int_0^1 \tilde{h}_l(u) \breve{G}(u)^2 \, \mathrm{d}u \le \tilde{c}_l, \; l = 1, \ldots, \tilde{d},$
has a unique solution given by
$\breve{G}^*(u) = \Bigg( \frac{1}{\tilde{\Lambda}(u)} \Big( \breve{F}(u) + \sum_{k=1}^d \lambda_k h_k(u) \Big) \Bigg)^{\uparrow \tilde{\Lambda}},$
where $\tilde{\Lambda}(u) = 1 + \sum_{k=1}^{\tilde{d}} \tilde{\lambda}_k \tilde{h}_k(u)$ and the Lagrange multipliers $\lambda_1, \ldots, \lambda_d$ and $\tilde{\lambda}_1, \ldots, \tilde{\lambda}_{\tilde{d}}$ are non-negative and such that the constraints are fulfilled.
A combination of the above theorems provides stresses jointly on the mean, the variance, and on multiple distortion risk measures.
Proposition 2
(Mean, Variance, and Risk Measures). Let $m' \in \mathbb{R}$, $\sigma' > 0$, $r_k \in \mathbb{R}$, and let $\rho_{\gamma_k}$, $k = 1, \ldots, d$, be distortion risk measures. Assume there exists a distribution function $\tilde{G} \in \mathcal{M}$ with mean $m'$ and standard deviation $\sigma'$ which satisfies $\rho_{\gamma_k}(\tilde{G}) = r_k$ for all $k = 1, \ldots, d$. Then the optimisation problem
$\mathop{\arg\min}_{G \in \mathcal{M}} \; W_2(G, F) \quad \text{subject to} \quad \int x \, \mathrm{d}G(x) = m', \quad \int (x - m')^2 \, \mathrm{d}G(x) = \sigma'^2, \quad \text{and} \quad \rho_{\gamma_k}(G) = r_k, \; k = 1, \ldots, d,$
has a unique solution given by
$\breve{G}^*(u) = \Bigg( \frac{1}{1 + \lambda_2} \Big( \breve{F}(u) + \lambda_1 + \lambda_2 m' + \sum_{k=1}^d \lambda_{k+2} \gamma_k(u) \Big) \Bigg)^{\uparrow},$
and the Lagrange multipliers $\lambda_1, \ldots, \lambda_{d+2}$ with $\lambda_2 \neq -1$ are such that the constraints are fulfilled.
Example 2
(Mean, Variance, and ES). Here, we illustrate Proposition 2 with the ES risk measure and three different stresses. The top panels of Figure 2 display the baseline quantile function $\breve{F}_Y$ and the stressed quantile function $\breve{G}_Y^*$ of Y, where the baseline distribution $F_Y$ of Y is again $Lognormal(\mu, \sigma^2)$ with parameters $\mu = 7/8$ and $\sigma = 0.5$. The bottom panels display the corresponding baseline and stressed densities. The left panels correspond to a stress where, under the stressed model, the $\mathrm{ES}_{0.95}$ and the mean are kept fixed at their corresponding values under the baseline model, while the standard deviation is increased by 20%. We observe, both in the quantile and density plots, that the stressed distribution is more spread out, indicating a larger variance. Furthermore, at $y \approx 5.77$ the stressed density $g_Y^*(y)$ drops to ensure that $\mathrm{ES}_{0.95}(G_Y^*) = \mathrm{ES}_{0.95}(F_Y)$. This drop occurs because a stress composed of a 20% increase in the standard deviation while fixing the mean (i.e., without a constraint on $\mathrm{ES}$) results in an ES that is larger than the baseline’s. Indeed, under this alternative stress (without a constraint on ES) we obtain $\mathrm{ES}_{0.95}(G_Y^*) \approx 7.70$ compared to $\mathrm{ES}_{0.95}(F_Y) \approx 6.87$.
The middle panels correspond to a 10% increase in $E S 0.95$ and a 10% decrease in the mean, while keeping the standard deviation fixed at its value under the baseline model. The density plot clearly indicates a general shift of the stressed density to the left, stemming from the decrease in the mean, and a single trough which is induced by the increase in ES. The right panels correspond to a 10% increase in $E S 0.95$, a 10% increase in the mean, and a 10% decrease in the standard deviation. The stressed density still has the trough from the increase in ES; however, the density is less spread out (reduction in the standard deviation) and generally shifted to the right (increase in the mean).

#### 3.3. Value-at-Risk Constraints

In this section we study stresses on the risk measure Value-at-Risk (VaR). The VaR at level $α ∈ ( 0 , 1 )$ of a distribution function $G ∈ M$ is defined as its left-continuous quantile function evaluated at $α$, that is
$\mathrm{VaR}_\alpha(G) = \breve{G}(\alpha).$
We further define the right-continuous $\mathrm{VaR}^+$, that is, the right-continuous quantile function of $G \in \mathcal{M}$ evaluated at $\alpha$, by
$\mathrm{VaR}_\alpha^+(G) = \breve{G}^+(\alpha) = \inf\{ y \in \mathbb{R} \,|\, G(y) > \alpha \}.$
Theorem 3
($VaR$). Let $q ∈ R$ and consider the optimisation problem
$\mathop{\arg\min}_{G \in \mathcal{M}} \; W_2(G, F) \quad \text{subject to} \quad (a) \; \mathrm{VaR}_\alpha(G) = q \quad \text{or} \quad (b) \; \mathrm{VaR}_\alpha^+(G) = q,$
and define $α F$ such that $V a R α F ( F ) = q$. Then, the following holds
(i)
under constraint (a), if $q ≤ V a R α ( F )$, then the unique solution is given by
$\breve{G}^*(u) = \breve{F}(u) + \big( q - \breve{F}(u) \big) \mathbb{1}_{\{u \in (\alpha_F, \alpha]\}};$
if $q > V a R α ( F )$, then there does not exist a solution.
(ii)
under constraint (b), if $q ≥ V a R α + ( F )$, then the unique solution is given by
$\breve{G}^*(u) = \breve{F}(u) + \big( q - \breve{F}(u) \big) \mathbb{1}_{\{u \in (\alpha, \alpha_F]\}};$
if $q < V a R α + ( F )$, then there does not exist a solution.
The above theorem states that if the optimal quantile function exists it is either the baseline quantile function $F ˘$ or constant equal to q. Moreover, the stressed quantile function (if it exists) jumps at $α$ which implies that the existence of a solution hinges on the careful choice of the stress. For a stress on $VaR$ (constraint (a)) for example, a solution exists if and only if the constraint satisfies $q ≤ VaR α ( F )$; a decrease in the $VaR α$ from the baseline to the stressed model. The reason for the non-existence of a solution when stressing VaR upwards is that the unique increasing function that minimises the Wasserstein distance and satisfies the constraint is not left-continuous and thus not a quantile function.
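Case (i) of Theorem 3 can be illustrated numerically for a downward VaR stress on the lognormal baseline of the examples; the construction below is a sketch with our own names and parameter choices:

```python
import numpy as np
from scipy.stats import lognorm, norm

# Stress VaR_alpha downwards to q < VaR_alpha(F); baseline Lognormal(7/8, 0.5^2).
mu, sigma, alpha = 7 / 8, 0.5, 0.9
q = 0.9 * lognorm.ppf(alpha, s=sigma, scale=np.exp(mu))  # 10% VaR decrease
alpha_F = norm.cdf((np.log(q) - mu) / sigma)             # baseline quantile of alpha_F is q

def stressed_quantile(u):
    Fq = lognorm.ppf(u, s=sigma, scale=np.exp(mu))
    # Replace the baseline quantile by the constant q on (alpha_F, alpha].
    return np.where((u > alpha_F) & (u <= alpha), q, Fq)

u = np.linspace(0.01, 0.99, 9_999)
G = stressed_quantile(u)
print(stressed_quantile(np.array([alpha]))[0])  # the stressed VaR_alpha equals q
```

The resulting quantile function is constant on the stressed window and jumps back to the baseline just above $\alpha$, matching the description in the theorem.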
As an alternative to stressing VaR or $\mathrm{VaR}^+$, and in particular when a desired stressed solution does not exist, one may instead stress the distortion risk measure Range-Value-at-Risk (RVaR) Cont et al. (2010). The RVaR at levels $0 \le \alpha < \beta \le 1$ is defined by
$\mathrm{RVaR}_{\alpha, \beta}(G) = \frac{1}{\beta - \alpha} \int_\alpha^\beta \breve{G}(u) \, \mathrm{d}u, \quad \text{for } G \in \mathcal{M},$
and belongs to the class of distortion risk measures. The $RVaR$ attains as limiting cases the VaR and $VaR +$. Indeed, for any $G ∈ M$ it holds
$\mathrm{VaR}_\alpha(G) = \lim_{\alpha' \nearrow \alpha} \mathrm{RVaR}_{\alpha', \alpha}(G) \quad \text{and} \quad \mathrm{VaR}_\alpha^+(G) = \lim_{\beta \searrow \alpha} \mathrm{RVaR}_{\alpha, \beta}(G).$
The solution to stressing $RVaR$ is provided in Theorem 1.
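A Monte Carlo sketch of RVaR and its limiting behaviour (sample sizes and names are our choices):

```python
import numpy as np

def rvar(sample, alpha, beta, grid=10_000):
    """Empirical RVaR_{alpha,beta}: average of the quantile function on (alpha, beta)."""
    u = alpha + (beta - alpha) * (np.arange(grid) + 0.5) / grid
    return np.quantile(sample, u).mean()

rng = np.random.default_rng(1)
y = rng.lognormal(mean=7 / 8, sigma=0.5, size=200_000)
# Shrinking the window towards alpha recovers VaR_alpha, per the limits above.
print(rvar(y, 0.90, 0.95), rvar(y, 0.90, 0.9001), np.quantile(y, 0.90))
```

Since the quantile function is non-decreasing, widening the window upwards can only increase the value, and a vanishingly small window collapses to the quantile itself.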

#### 3.4. Expected Utility Constraint

This section considers the change from the baseline to the stressed distribution under an increase of an expected utility constraint. In the context of utility maximisation, the next theorem provides a way to construct stressed models with a larger utility compared to the baseline.
Theorem 4
(Expected Utility and Risk Measures). Let $u : \mathbb{R} \to \mathbb{R}$ be a differentiable concave utility function, $r_k \in \mathbb{R}$, and let $\rho_{\gamma_k}$, $k = 1, \ldots, d$, be distortion risk measures. Assume there exists a distribution function $\tilde{G}$ satisfying $\int_{\mathbb{R}} u(x) \, \mathrm{d}\tilde{G}(x) \ge c$ and $\rho_{\gamma_k}(\tilde{G}) = r_k$ for all $k = 1, \ldots, d$. Then the optimisation problem
$\mathop{\arg\min}_{G \in \mathcal{M}} \; W_2(G, F) \quad \text{subject to} \quad \int_{\mathbb{R}} u(x) \, \mathrm{d}G(x) \ge c \quad \text{and} \quad \rho_{\gamma_k}(G) = r_k, \; k = 1, \ldots, d,$
has a unique solution given by
$\breve{G}^*(u) = \breve{\nu}_{\lambda_1} \Bigg( \Big( \breve{F}(u) + \sum_{k=1}^d \lambda_{k+1} \gamma_k(u) \Big)^{\uparrow} \Bigg), \qquad (5)$
where $\breve{\nu}_{\lambda_1}$ is the left-inverse of $\nu_{\lambda_1}(x) = x - \lambda_1 u'(x)$, and $\lambda_1 \ge 0$, $(\lambda_2, \ldots, \lambda_{d+1}) \in \mathbb{R}^d$ are such that the constraints are fulfilled.
The utility function in Theorem 4 need not be monotone; indeed, the theorem applies to any differentiable concave function, without the need of a utility interpretation. Moreover, Theorem 4 also applies to differentiable convex (disutility) functions $\tilde{u}$ and the constraint $\int_{\mathbb{R}} \tilde{u}(x) \, \mathrm{d}G(x) \le c$; a situation of interest in insurance premium calculations. In this case, the solution is given by (5) with $u(x) = -\tilde{u}(x)$.
Example 3
(HARA Utility & ES). The Hyperbolic absolute risk aversion (HARA) utility function is defined by
$u(x) = \frac{1 - \eta}{\eta} \Big( \frac{a x}{1 - \eta} + b \Big)^{\eta},$
with parameters $a > 0$, $\frac{a x}{1 - \eta} + b > 0$, and where $\eta \le 1$ guarantees concavity.
We again choose the baseline distribution $F_Y$ of Y to be $Lognormal(\mu, \sigma^2)$ with $\mu = 7/8$ and $\sigma = 0.5$ and consider utility parameters $a = 1$, $b = 5$, and $\eta = 0.5$. Figure 3 displays the baseline and the stressed quantile functions $\breve{F}_Y$ and $\breve{G}_Y^*$, respectively, for a combined stress on the HARA utility and on $\mathrm{ES}$ at levels 0.8 and 0.95. Specifically, for all three stresses we decrease $\mathrm{ES}_{0.8}$ by 10% and increase $\mathrm{ES}_{0.95}$ by 10% compared to their values under the baseline model. Moreover, the HARA utility is increased by 0%, 1%, and 3%, respectively, corresponding to the panels from left to right. The flat part in the stressed quantile function $\breve{G}^*(u)$ around $u = 0.8$, visible in all top panels of Figure 3, is induced by the decrease in $\mathrm{ES}_{0.8}$, while the jump at $u = 0.95$ is due to the increase in $\mathrm{ES}_{0.95}$. From the left to the right panel in Figure 3, we observe that the larger the stress on the HARA utility, the more the stressed quantile function shifts away from the baseline quantile function $\breve{F}_Y$.

#### 3.5. Smoothing of the Stressed Distribution

We observe that the stressed quantile functions derived in Section 3 generally contain jumps and/or flat parts even if the baseline distribution is absolutely continuous. In situations where this is not desirable, one may consider a smoothed version of the stressed distributions. For this, we recall that the isotonic regression, the discrete counterpart of the weighted isotonic projection, of a function evaluated at $u_1, \ldots, u_n$ with positive weights $w_1, \ldots, w_n$ is the solution to
$\min_{m_1, \ldots, m_n} \; \sum_{i=1}^n \big( m_i - \ell(u_i) \big)^2 w_i, \quad \text{subject to} \quad m_i \le m_j, \; i \le j. \qquad (6)$
There are numerous efficient algorithms that solve (6) most notably the pool-adjacent-violators (PAV) algorithm Barlow et al. (1972). It is well-known that the solution to the isotonic regression contains flat parts and jumps. A smoothed isotonic regression algorithm, termed smooth pool-adjacent-violators (SPAV) algorithm, using an $L 2$ regularisation was recently proposed by Sysoev and Burdakov (2019). Specifically, they consider
$\min_{m_1, \ldots, m_n} \; \sum_{i=1}^n \big( m_i - \ell(u_i) \big)^2 w_i + \sum_{i=1}^{n-1} \zeta_i \big( m_{i+1} - m_i \big)^2, \quad \text{subject to} \quad m_i \le m_j, \; i \le j,$
where $\zeta_i \ge 0$, $i = 1, \ldots, n-1$, are prespecified smoothing parameters. Using a probabilistic reasoning, Sysoev and Burdakov (2019) argue that $\zeta_i$ may be chosen proportional to a (e.g., quadratic) kernel evaluated at $u_i$ and $u_{i+1}$, that is,
$\zeta_i = \zeta K(u_i, u_{i+1}) \quad \text{with} \quad K(u_i, u_{i+1}) = \frac{1}{|u_i - u_{i+1}|^2} \quad \text{and} \quad \zeta \ge 0.$
The choice of smoothing parameter $\zeta = 0$ corresponds to the original isotonic regression; larger values of $\zeta$ correspond to a greater degree of smoothness of the solution. $\zeta$ can either be prespecified or estimated using cross-validation, see, e.g., Sysoev and Burdakov (2019).
To guarantee that the smoothed quantile function still fulfils the constraints, one may replace the PAV with the SPAV algorithm in every step of the optimisation for finding the Lagrange parameters. The Lagrange parameters are then indeed found such that the constraints are fulfilled.
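As a sanity check, the penalised isotonic problem above can be handed to a generic constrained solver for small instances. The sketch below uses SciPy's SLSQP and is not the SPAV algorithm of Sysoev and Burdakov (2019); the data and smoothing weights are arbitrary choices of ours:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 30
ell = np.sort(rng.normal(size=n)) + 0.3 * rng.normal(size=n)  # noisy target values
w = np.ones(n)                                                # fit weights
zeta = np.full(n - 1, 5.0)                                    # smoothing weights

def objective(m):
    fit = np.sum(w * (m - ell) ** 2)          # isotonic-regression term
    smooth = np.sum(zeta * np.diff(m) ** 2)   # L2 penalty on the increments
    return fit + smooth

res = minimize(objective, x0=np.sort(ell), method="SLSQP",
               constraints=[{"type": "ineq", "fun": np.diff}])  # m_i <= m_{i+1}
print(np.all(np.diff(res.x) >= -1e-8))  # the solution is non-decreasing
```

For realistic grid sizes, the dedicated SPAV algorithm is far more efficient than a general-purpose solver; this sketch only illustrates the objective being minimised.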
Remark 1.
There are numerous works proposing smooth versions of isotonic regressions. Approaches include kernel smoothers, e.g., Hall and Huang (2001), and spline techniques, e.g., Meyer (2008). These algorithms, however, are computationally heavy in that their computational cost is $O(n^2)$, where n is the number of data points. Furthermore, these algorithms require a careful choice of the kernel or the spline basis, in contrast to the SPAV. We refer the reader to Sysoev and Burdakov (2019) for a detailed discussion and references to smooth isotonic regression algorithms.

## 4. Analysing the Stressed Model

Recall that a modeller is equipped with a baseline model, the 3-tuple $( X , g , P )$, consisting of a set of input factors $X = ( X 1 , … , X n )$, a univariate output random variable of interest, $Y = g ( X )$, and a probability measure $P$. For a stress on the output’s baseline distribution $F Y$, we derived in Section 3 the corresponding unique stressed distribution function, denoted here by $G Y *$. Thus, to fully specify the stressed model we next define a stressed probability measure $Q *$ that is induced by $G Y *$.

#### 4.1. The Stressed Probability Measures

A stressed distribution $G_Y^*$ induces a canonical change of measure that allows the modeller to understand how the baseline model, including the distributions of the inputs, changes under the stress. The Radon–Nikodym (RN) derivative of the stressed with respect to the baseline model is
$\frac{dQ^*}{dP} = \frac{g_Y^*(Y)}{f_Y(Y)},$
where $f_Y$ and $g_Y^*$ denote the densities of the baseline and stressed output distribution, respectively. The RN derivative is well-defined since $f_Y(Y) > 0$, $P$-a.s. The distribution functions of input factors under the stressed model – the stressed distributions – are then given, e.g., for input $X_i$, $i \in \{1, \dots, n\}$, by
$Q^*(X_i \le x_i) = E\Big[\mathbb{1}_{\{X_i \le x_i\}} \tfrac{dQ^*}{dP}\Big] = E\Big[\mathbb{1}_{\{X_i \le x_i\}} \tfrac{g_Y^*(Y)}{f_Y(Y)}\Big], \quad x_i \in \mathbb{R},$
and for the multivariate input vector $X$ by
$Q^*(X \le x) = E\Big[\mathbb{1}_{\{X \le x\}} \tfrac{g_Y^*(Y)}{f_Y(Y)}\Big], \quad x \in \mathbb{R}^n,$
where $E [ · ]$ denotes the expectation under $P$. Note that under the stressed probability measure $Q *$, the input factors’ marginal and joint distributions may be altered.
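In practice, these stressed probabilities can be estimated from a set of baseline Monte Carlo samples by weighting each sample with the RN derivative evaluated at the simulated output. The following Python sketch uses a hypothetical Gaussian toy model and an illustrative location-shift stress (neither is taken from the paper's examples); it only demonstrates the reweighting mechanics.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000

# toy baseline model: inputs X1, X2 ~ N(0,1) i.i.d., output Y = X1 + X2
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
y = x1 + x2

# baseline output density: Y ~ N(0, 2); illustrative stressed output: N(0.5, 2)
f_Y = stats.norm(0.0, np.sqrt(2.0)).pdf   # baseline density f_Y
g_Y = stats.norm(0.5, np.sqrt(2.0)).pdf   # stressed density g_Y*

w = g_Y(y) / f_Y(y)                       # RN derivative dQ*/dP at each sample

def stressed_cdf(samples, weights, x):
    """Q*(X_i <= x) = E[ 1{X_i <= x} dQ*/dP ], estimated by a sample mean."""
    return np.mean((samples <= x) * weights)

# the upward stress on Y drags the distribution of X1 upward as well:
print(stressed_cdf(x1, w, 0.0))  # ≈ 0.40, below the baseline value of 0.50
```

Note how the marginal of $X_1$ changes even though only the output was stressed, which is exactly the effect the stressed model is meant to reveal.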
Example 4
(HARA Utility & ES continued). We continue Example 3 and illustrate the RN-densities $\frac{dQ^*}{dP}$ for the following three stresses (from the left to the right panel): each stress comprises a 10% decrease in $ES_{0.8}$ and a 10% increase in $ES_{0.95}$, combined with a 0%, 1%, and 3% increase in the HARA utility, respectively.
We observe in Figure 4 that for all three stresses large realisations of Y obtain a larger weight under the stressed probability measure $Q^*$ compared to the baseline probability $P$. Indeed, for all three stresses it holds that $\frac{dQ^*}{dP}(\omega) > 1$ whenever $Y(\omega) > 6$, $\omega \in \Omega$. This is in contrast to small realisations of Y, which obtain a weight smaller than 1. The impact of the different levels of stress on the HARA utility (0%, 1%, and 3%, from the left to the right panel) can be observed in the left tail of $\frac{dQ^*}{dP}$; a larger stress on the utility induces larger weights. The length of the trough of $\frac{dQ^*}{dP}$ increases from the left panel (approx. $(4.53, 6.15)$) to the right panel (approx. $(4.43, 6.18)$), and corresponds in all cases to the constant part in $G_Y^*$ (see Figure 3, top panels), which is induced by the decrease in $ES_{0.8}$ under the stressed model.

#### 4.2. Reverse Sensitivity Measures

Comparison of the baseline and a stressed model can be conducted via different approaches, depending on the modeller's interest. While probabilistic sensitivity measures rely on the assumption of a fixed probability measure and quantify the divergence between the conditional (on a model input) and the unconditional output density (Saltelli et al. 2008), the proposed framework compares a baseline and a stressed model, i.e., distributions under different probability measures. Therefore, to quantify the distributional change in input factor $X_i$ from the baseline $P$ to the stressed probability $Q^*$, the sensitivity measure introduced by Pesenti et al. (2019), which quantifies the variability of an input factor's distribution from the baseline to the stressed model, may be suitable. A generalisation of the reverse sensitivity measure is stated here.
Definition 4
(Marginal Reverse Sensitivity Measure Pesenti et al. (2019)). For a function $s : R → R$, the reverse sensitivity measure to input $X i$ with respect to a stressed probability measure $Q *$ is defined by
$S_i^{Q^*} = \begin{cases} \dfrac{E^{Q^*}[s(X_i)] - E[s(X_i)]}{\max_{Q \in \mathcal{Q}} E^{Q}[s(X_i)] - E[s(X_i)]}, & \text{if } E^{Q^*}[s(X_i)] \ge E[s(X_i)], \\ -\dfrac{E^{Q^*}[s(X_i)] - E[s(X_i)]}{\min_{Q \in \mathcal{Q}} E^{Q}[s(X_i)] - E[s(X_i)]}, & \text{otherwise}, \end{cases}$
where $\mathcal{Q} = \big\{Q \,\big|\, Q \text{ probability measure with } \tfrac{dQ}{dP} \overset{P}{=} \tfrac{dQ^*}{dP}\big\}$ is the set of all probability measures whose RN-derivatives have the same distribution as $\tfrac{dQ^*}{dP}$ under $P$. We adopt the conventions $\pm\tfrac{\infty}{\infty} = \pm 1$ and $\tfrac{0}{0} = 0$.
The sensitivity measure is called “reverse”, as the stress is applied to the output random variable Y and the sensitivity monitors the change in input $X_i$. Definition 4 applies, however, also to stresses on input factors, in which case the RN-density $\tfrac{dQ^*}{dP}$ is a function of the stressed input factor; we refer to Pesenti et al. (2019) for a discussion. Note that the reverse sensitivity measure can be viewed as a normalised covariance measure between the input $s(X_i)$ and the Radon–Nikodym derivative $\tfrac{dQ^*}{dP}$.
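Given Monte Carlo samples, $S_i^{Q^*}$ admits a simple plug-in estimate: the maximum (minimum) over $\mathcal{Q}$ is attained by pairing the sorted values of $s(X_i)$ with the sorted (reverse-sorted) RN weights, since the extrema correspond to comonotonic (counter-monotonic) arrangements. The Python sketch below illustrates this; it is not the SWIM implementation.

```python
import numpy as np

def reverse_sensitivity(s_x, w):
    """Monte Carlo estimate of the reverse sensitivity measure S_i^{Q*}.

    s_x : samples of s(X_i) under the baseline P
    w   : corresponding samples of the RN derivative dQ*/dP
    """
    s_x, w = np.asarray(s_x, float), np.asarray(w, float)
    base = s_x.mean()                      # E[s(X_i)]
    num = np.mean(s_x * w) - base          # E^{Q*}[s(X_i)] - E[s(X_i)]
    s_sorted = np.sort(s_x)
    w_sorted = np.sort(w)
    if num >= 0:
        # max over Q: comonotonic rearrangement of s(X_i) and dQ*/dP
        denom = np.mean(s_sorted * w_sorted) - base
        return num / denom if denom else 0.0
    # min over Q: counter-monotonic rearrangement
    denom = np.mean(s_sorted * w_sorted[::-1]) - base
    return -num / denom if denom else 0.0
```

For weights that are already a non-decreasing (non-increasing) function of $s(X_i)$ the estimator returns 1 ($-1$), matching the extreme cases of Proposition 3 below.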
The next proposition provides a collection of properties that the reverse sensitivity measure possesses; we also refer to Pesenti et al. (2019) for a detailed discussion of these properties. For this, we first recall the definition of comonotonic and counter-monotonic random variables.
Definition 5.
Two random variables $Y_1$ and $Y_2$ are comonotonic under $P$ if and only if there exists a random variable W and non-decreasing functions $h_1, h_2 : \mathbb{R} \to \mathbb{R}$ such that the following equalities hold in distribution under $P$:
$Y_1 = h_1(W) \quad \text{and} \quad Y_2 = h_2(W).$
The random variables $Y 1$ and $Y 2$ are counter-monotonic under $P$, if and only if, (7) holds with one of the functions $h 1 ( · ) , h 2 ( · )$ being non-increasing, and the other non-decreasing.
If two random variables are (counter) comonotonic under one probability measure, then they are also (counter) comonotonic under any other absolutely continuous probability measure, see, e.g., Proposition 2.1 of Cuestaalbertos et al. (1993). Thus, we omit the specification of the probability measure when discussing counter- and comonotonicity.
Proposition 3
(Properties of Reverse Sensitivity Measure). The reverse sensitivity measure possesses the following properties:
(i)
$S i Q * ∈ [ − 1 , 1 ]$;
(ii)
$S i Q * = 0$ if $( s ( X i ) , d Q * d P )$ are independent under $P$;
(iii)
$S i Q * = 1$ if and only if $( s ( X i ) , d Q * d P )$ are comonotonic;
(iv)
$S i Q * = − 1$ if and only if $( s ( X i ) , d Q * d P )$ are counter-comonotonic.
The function $s(\cdot)$ provides the flexibility to create sensitivity measures that quantify changes in moments, e.g., via $s(x) = x^k$, $k \in \mathbb{N}$, or in the tail of distributions, e.g., via $s(x) = \mathbb{1}_{\{x > \operatorname{VaR}_\alpha(X_i)\}}$, for $\alpha \in (0, 1)$.
Next, we generalise Definition 4 to a sensitivity measure that accounts for multiple input factors. While $S_i^{Q^*}$ measures the change of the distribution of $X_i$ from the baseline to the stressed model, the sensitivity $S_{i,j}^{Q^*}$ introduced below quantifies how the joint distribution of $(X_i, X_j)$ changes when moving from $P$ to $Q^*$.
Definition 6
(Bivariate Reverse Sensitivity Measure). For a function $s : R 2 → R$, the reverse sensitivity measure to inputs $( X i , X j )$ with respect to a stressed probability measure $Q *$ is defined by
$S_{i,j}^{Q^*} = \begin{cases} \dfrac{E^{Q^*}[s(X_i, X_j)] - E[s(X_i, X_j)]}{\max_{Q \in \mathcal{Q}} E^{Q}[s(X_i, X_j)] - E[s(X_i, X_j)]}, & \text{if } E^{Q^*}[s(X_i, X_j)] \ge E[s(X_i, X_j)], \\ -\dfrac{E^{Q^*}[s(X_i, X_j)] - E[s(X_i, X_j)]}{\min_{Q \in \mathcal{Q}} E^{Q}[s(X_i, X_j)] - E[s(X_i, X_j)]}, & \text{otherwise}, \end{cases}$
where $Q$ is given in Definition 4.
The bivariate sensitivity measure satisfies all the properties in Proposition 3 when $s(X_i)$ is replaced by $s(X_i, X_j)$. The bivariate sensitivity $S_{i,j}^{Q^*}$ can also be generalised to k input factors by choosing a function $s : \mathbb{R}^k \to \mathbb{R}$.
Remark 2.
Probabilistic sensitivity measures are typically used for importance measurement and take values in $[0, 1]$, with 1 indicating the most important input factor and 0 an input that is (desirably) independent of the output (Borgonovo et al. 2021). This is in contrast to our framework, where $S_i^{Q^*}$ lives in $[-1, 1]$ and, e.g., negative quadrant dependence between $s(X_i)$ and $\tfrac{dQ^*}{dP}$ implies that $S_i^{Q^*} < 0$; see Pesenti et al. (2019, Proposition 4.3). Thus, the proposed sensitivity measure is different in that it allows for negative sensitivities, where the sign of $S_i^{Q^*}$ indicates the direction of the distributional change.

## 5. Application to a Spatial Model

We consider a spatial model for insurance portfolio losses, where the individual losses occur at different locations and the dependence between individual losses is a function of the distance between the locations. Mathematically, denote the locations of the insurance losses by $z_1, \dots, z_{10}$, where $z_m = (z_{m1}, z_{m2})$ are the coordinates of location $z_m$, $m = 1, \dots, 10$. The insurance loss at location m, denoted by $L_m$, follows a $Gamma(5, 0.2m)$ distribution with location parameter 25. Thus, the minimum loss at each location is 25, and locations with larger mean also exhibit larger standard deviations. The losses $L_1, \dots, L_{10}$ have, conditionally on $\Theta = \theta$, a Gaussian copula with correlation matrix given by $\rho_{i,j} = \operatorname{Cor}(L_i, L_j) = e^{-\theta \|z_i - z_j\|}$, where $\|\cdot\|$ denotes the Euclidean distance. Thus, the further apart the locations $z_i$ and $z_j$ are, the smaller the correlation between $L_i$ and $L_j$. The parameter $\Theta$ takes the values $(0, 0.4, 5)$ with probabilities $(0.05, 0.6, 0.35)$, which represent different regimes. Indeed, $\Theta = 0$ corresponds to a correlation of 1 between all losses, independently of their location. Larger values of $\Theta$ correspond to smaller, albeit still positive, correlation. Thus, the regime $\Theta = 0$ can be viewed as, e.g., circumstances conducive to natural disasters. We further define the total loss of the insurance company by $Y = \sum_{m=1}^{10} L_m$.
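A simulation of this spatial model can be sketched as follows. The location coordinates below are hypothetical (the paper's grid is not reproduced here), the second Gamma parameter is read as a scale, and a small diagonal jitter is added so that the Cholesky factorisation also works in the comonotonic regime $\Theta = 0$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sim, n_loc = 20_000, 10

# hypothetical coordinates for the 10 locations
z = rng.uniform(0.0, 4.0, size=(n_loc, 2))
dist = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)

# regime variable Theta and the conditional Gaussian copula
theta = rng.choice([0.0, 0.4, 5.0], size=n_sim, p=[0.05, 0.6, 0.35])
L = np.empty((n_sim, n_loc))
for t in np.unique(theta):
    idx = theta == t
    corr = np.exp(-t * dist)              # rho_ij = exp(-theta ||z_i - z_j||)
    chol = np.linalg.cholesky(corr + 1e-9 * np.eye(n_loc))
    normals = rng.standard_normal((idx.sum(), n_loc)) @ chol.T
    u = stats.norm.cdf(normals)           # Gaussian copula samples
    for m in range(n_loc):                # L_m = 25 + Gamma(5, scale 0.2*(m+1))
        L[idx, m] = 25 + stats.gamma.ppf(u[:, m], a=5, scale=0.2 * (m + 1))

Y = L.sum(axis=1)                         # total portfolio loss
print(Y.mean())                           # ≈ 305 under the scale reading: 250 + sum of Gamma means
```

Under this parameter reading, $E[L_m] = 25 + m$, so $E[Y] = 250 + 55 = 305$ regardless of the copula, which provides a convenient sanity check on the simulation.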
We perform two different stresses on the total loss Y, detailed in Table 1. Specifically, we consider as a first stress a 0% change in HARA utility, a 0% change in $ES_{0.8}(Y)$, and a 1% increase in $ES_{0.95}(Y)$ from the baseline to the stressed model. The second stress is composed of a 1% increase in HARA utility, a 1% increase in $ES_{0.8}(Y)$, and a 3% increase in $ES_{0.95}(Y)$ compared to the baseline model. As the second stress increases all three metrics, it may be viewed as a more severe distortion of the baseline model.
Next, we calculate reverse sensitivity measures for the losses $L_1, \dots, L_{10}$ for both stresses $Q_1^*$ and $Q_2^*$. Figure 5 displays the reverse sensitivity measures for the functions $s(x) = x$, $s(x) = \mathbb{1}_{\{x > \breve{F}_i(0.8)\}}$, and $s(x) = \mathbb{1}_{\{x > \breve{F}_i(0.95)\}}$, from the left to the right panel, where $\breve{F}_i$ denotes the $P$-quantile function of $L_i$, $i = 1, \dots, 10$.
We observe that for stress 2, the reverse sensitivities to all losses $L_i$ and all choices of function $s(\cdot)$ are positive. This contrasts with the reverse sensitivities for stress 1. Indeed, for stress 1 the reverse sensitivities with both $s(x) = x$ and $s(x) = \mathbb{1}_{\{x > \breve{F}_i(0.8)\}}$ are negative, with the former values being smaller, indicating a smaller change in the distributions of the $L_i$'s. By definition of the reverse sensitivity, the left panel corresponds to the (normalised) difference between the expectations under the stressed and baseline models. The middle and right panels correspond to the (normalised) change in the probability of exceeding $\breve{F}_i(0.8)$ and $\breve{F}_i(0.95)$, respectively. Thus, as seen in the plots, while the expectations and the probabilities of exceeding the 80% $P$-quantile are smaller under the stressed model, the probabilities of exceeding the 95% $P$-quantile increase substantially. The first stress increases the $ES$ at level 0.95 while simultaneously fixing the utility and the $ES$ at level 0.8 to their values under the baseline model. This induces under the stressed probability measure a reduction of the mean and of the probability of exceeding the 80% $P$-quantile, while the probability of exceeding the 95% $P$-quantile increases. Thus, the reverse sensitivity measures provide a spectrum of measures for analysing the distributional change of the losses $L_i$ from the baseline to the stressed model.
Next, for a comparison, we calculate the delta sensitivity measure introduced by Borgonovo (2007). For a probability measure $Q$, the delta measure of $L_i$ is defined by
$\xi^{Q}(L_i) = \frac{1}{2} \int\!\!\int \big| f_Y^{Q}(y) - f_{Y|i}^{Q}(y \mid z) \big| \, f_i^{Q}(z) \, dy \, dz,$
where $f Y Q ( · )$ and $f i Q ( · )$ are the densities of Y and $L i$ under $Q$, respectively, and where $f Y | i Q ( · | · )$ is the conditional density of the total portfolio loss Y given $L i$ under $Q$.
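For illustration, a crude “given-data” style estimate of $\xi^{Q}$ can be obtained by partitioning $L_i$ into equiprobable bins and comparing histogram approximations of the conditional and unconditional densities of Y; the bin counts below are ad hoc choices and this sketch is not the estimator used to produce Table 2. Passing RN-derivative samples as `weights` evaluates the delta measure under a stressed measure.

```python
import numpy as np

def delta_measure(y, li, weights=None, n_part=20, n_grid=100):
    """Crude given-data estimator of Borgonovo's delta importance measure.

    Partitions `li` into `n_part` equiprobable bins and accumulates half the
    L1 distance between the conditional and unconditional densities of `y`,
    both approximated by histograms on a common quantile grid."""
    y, li = np.asarray(y, float), np.asarray(li, float)
    w = np.ones_like(y) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    edges = np.quantile(y, np.linspace(0.0, 1.0, n_grid + 1))
    widths = np.diff(edges)
    f_y, _ = np.histogram(y, bins=edges, weights=w)
    f_y = f_y / widths                    # unconditional density of Y
    parts = np.quantile(li, np.linspace(0.0, 1.0, n_part + 1))
    delta = 0.0
    for j in range(n_part):
        lo, hi = parts[j], parts[j + 1]
        idx = (li >= lo) & ((li <= hi) if j == n_part - 1 else (li < hi))
        p_bin = w[idx].sum()              # probability of the conditioning bin
        if p_bin <= 0:
            continue
        f_c, _ = np.histogram(y[idx], bins=edges, weights=w[idx] / p_bin)
        f_c = f_c / widths                # conditional density of Y given the bin
        delta += 0.5 * p_bin * np.sum(np.abs(f_y - f_c) * widths)
    return delta
```

An independent input yields a value near 0 (up to histogram noise), while a fully determining input yields a value near 1, matching the interpretation of the delta measure as an importance score on $[0, 1]$.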
Table 2 reports the delta measures under the baseline model, $\xi^{P}$, and under the two stresses, i.e., $\xi^{Q_1^*}$ and $\xi^{Q_2^*}$. We observe that the delta measures are similar for all losses $L_i$ and do not change significantly under the different probability measures. As the delta sensitivity measure quantifies the importance of input factors under a probability measure, having similar values for $\xi^{P}$, $\xi^{Q_1^*}$, and $\xi^{Q_2^*}$ means that the importance ranking of the $L_i$'s does not change under the different stresses. We also report, in the first two columns of Table 2, the reverse sensitivity measures with $s(x) = \mathbb{1}_{\{x > \breve{F}(0.95)\}}$. The reverse sensitivity measures provide, in contrast to the delta measure, insight into the change in the distributions of the $L_i$'s from $P$ to $Q^*$.
As an alternative to considering the change in the marginal distributions of the $L_i$ from the baseline to the stressed model, we can study the change in the dependence between the losses when moving from the baseline to a stressed model. For this, we consider the bivariate reverse sensitivity measures for the pairs $(L_5, L_{10})$ and $(L_9, L_{10})$ for the second stress $Q_2^*$, that is a 1% increase in HARA utility and $ES_{0.8}$, and a 3% increase in $ES_{0.95}$. Specifically, we look at the function $s(L_i, L_j) = \mathbb{1}_{\{L_i > \breve{F}_i(0.95)\}} \mathbb{1}_{\{L_j > \breve{F}_j(0.95)\}}$, where $\breve{F}_i(\cdot)$ and $\breve{F}_j(\cdot)$ are the $P$-quantile functions of $L_i$ and $L_j$, respectively. This bivariate sensitivity measure quantifies the impact a stress has on the probability of joint exceedances, with the values $S_{5,10}^{Q_2^*} = 0.78$ and $S_{9,10}^{Q_2^*} = 0.81$ indicating that the probabilities of joint exceedances increase substantially under the second stress. This can also be seen in Figure 6, which shows the bivariate copula contours of $(L_5, L_{10})$ (top panels) and $(L_9, L_{10})$ (bottom panels). The left contour plots correspond to the baseline model $P$, whereas the right panels display the contours under the stressed model $Q_2^*$ (solid lines) together with the baseline contours (reported using partially transparent lines). The red dots are the simulated realisations of the losses $(L_5, L_{10})$ and $(L_9, L_{10})$, respectively (which are the same for the baseline and stressed model). We observe that for both pairs $(L_5, L_{10})$ and $(L_9, L_{10})$ the copula under the stressed model admits larger probabilities of joint large events, which is captured by the bivariate reverse sensitivity measure admitting positive values close to 1.

## 6. Concluding Remarks

We extend the reverse sensitivity analysis proposed by Pesenti et al. (2019), which proceeds as follows. Equipped with a baseline model, which comprises input and output random variables and a baseline probability measure, one derives a unique stressed model such that the output (or input) under the stressed model satisfies a prespecified stress and is closest to the baseline distribution. While Pesenti et al. (2019) consider the Kullback–Leibler divergence to measure the difference between the baseline and stressed models, we utilise the Wasserstein distance of order two. Compared to Pesenti et al. (2019), the Wasserstein distance allows for additional and different stresses on the output, including the mean and variance, any distortion risk measure including the Value-at-Risk and Expected-Shortfall, and expected utility type constraints, thus making the reverse sensitivity analysis framework suitable for models used in financial and insurance risk management. We further discuss reverse sensitivity measures, which quantify the change in the inputs' distributions when moving from the baseline to a stressed model, and illustrate our results on a spatial insurance portfolio application. The reverse sensitivity analysis framework (including the results from this work and from Pesenti et al. (2019)) is implemented in the R package SWIM, which is available on CRAN.

## Funding

This research was funded by the Connaught Fund, the Canadian Statistical Sciences Institute (CANSSI), and the Natural Sciences and Engineering Research Council of Canada (NSERC) with funding reference numbers DGECR-2020-00333 and RGPIN-2020-04289.


## Acknowledgments

S.M.P. would like to thank Judy Mao for her help in implementing the numerical examples.

## Conflicts of Interest

The author declares no conflict of interest.

## Appendix A. Proofs

Proof of Theorem 1.
We solve the optimisation on the set of quantile functions $M ˘$ and define the Lagrangian with Lagrange multipliers $λ = ( λ 1 , … , λ d ) ∈ R d$
$\mathcal{L}(\breve{G}, \lambda) = \int_0^1 \big(\breve{G}(u) - \breve{F}(u)\big)^2 - 2 \sum_{k=1}^{d} \lambda_k \big(\breve{G}(u)\gamma_k(u) - r_k\big) \, du = \int_0^1 \Big(\breve{G}(u) - \Big[\breve{F}(u) + \sum_{k=1}^{d} \lambda_k \gamma_k(u)\Big]\Big)^2 - 2\sum_{k=1}^{d} \lambda_k\big(\breve{F}(u)\gamma_k(u) - r_k\big) - \Big(\sum_{k=1}^{d} \lambda_k \gamma_k(u)\Big)^2 \, du.$
Thus, the optimisation problem (3) is equivalent to first solving, for fixed $λ$, the optimisation problem
$\operatorname*{arg\,min}_{\breve{G} \in \breve{\mathcal{M}}} \; \mathcal{L}(\breve{G}, \lambda)$
and then finding $λ$ such that the constraints are fulfilled. For fixed $λ$, the solution to (A1) is equal to the solution to
$\operatorname*{arg\,min}_{\breve{G} \in \breve{\mathcal{M}}} \; \int_0^1 \Big(\breve{G}(u) - \Big[\breve{F}(u) + \sum_{k=1}^{d} \lambda_k \gamma_k(u)\Big]\Big)^2 du,$
which is given by the isotonic projection of $\breve{F}(u) + \sum_{k=1}^{d} \lambda_k \gamma_k(u)$ onto the set $\breve{\mathcal{M}}$, and the Lagrange multipliers are such that the constraints are satisfied. Existence of the Lagrange multipliers follows since the set $\mathcal{M}$ is non-empty. Uniqueness follows by convexity of the Wasserstein distance and by convexity of the constraints on the set of quantile functions. □
Proof of Proposition 1.
For coherent distortion risk measures the corresponding weight function $\gamma$ is non-decreasing. Moreover, the optimal quantile function is given by Theorem 1 and is of the form $\breve{G}_\lambda(u) = \big[\breve{F}(u) + \lambda \gamma(u)\big]^{\uparrow}$ for some $\lambda$ such that $\breve{G}_\lambda$ fulfils the constraint. The choice
$\lambda^* = \frac{r - \rho_\gamma(F)}{\int_0^1 \gamma(u)^2 \, du} \ge 0$
implies that $\breve{G}_{\lambda^*}(u) = \breve{F}(u) + \lambda^* \gamma(u)$ is a quantile function of the form (4) that fulfils the constraint. By the uniqueness of Theorem 1, $\breve{G}_{\lambda^*}$ is indeed the unique solution. □
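For completeness, one can verify this choice of $\lambda^*$ directly: since $\gamma$ is non-decreasing and $\lambda^* \ge 0$, the function $\breve{F} + \lambda^* \gamma$ is already non-decreasing, so the isotonic projection is the identity, and plugging into the constraint (using $\rho_\gamma(G) = \int_0^1 \breve{G}(u)\gamma(u)\, du$) gives

```latex
\rho_\gamma\big(G_{\lambda^*}\big)
  = \int_0^1 \big(\breve{F}(u) + \lambda^* \gamma(u)\big)\gamma(u)\, du
  = \rho_\gamma(F) + \lambda^* \int_0^1 \gamma(u)^2\, du
  = \rho_\gamma(F) + \big(r - \rho_\gamma(F)\big)
  = r.
```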
Proof of Theorem 2.
Since both constraints are convex in $\breve{G}$, the Lagrangian with parameters $\lambda = (\lambda_1, \dots, \lambda_d)$ and $\tilde{\lambda} = (\tilde{\lambda}_1, \dots, \tilde{\lambda}_{\tilde{d}}) \ge 0$ becomes
$\mathcal{L}(\breve{G}, \lambda, \tilde{\lambda}) = \int_0^1 \big(\breve{G}(u) - \breve{F}(u)\big)^2 + 2\sum_{k=1}^{d} \lambda_k \big(h_k(u)\breve{G}(u) - c_k\big) + \sum_{k=1}^{\tilde{d}} \tilde{\lambda}_k \big(\tilde{h}_k(u)\breve{G}(u)^2 - \tilde{c}_k\big) \, du = \int_0^1 \tilde{\Lambda}(u)\Big(\breve{G}(u) - \tfrac{1}{\tilde{\Lambda}(u)}\Big[\breve{F}(u) - \sum_{k=1}^{d} \lambda_k h_k(u)\Big]\Big)^2 - \tfrac{1}{\tilde{\Lambda}(u)}\Big(\breve{F}(u) - \sum_{k=1}^{d} \lambda_k h_k(u)\Big)^2 + \breve{F}(u)^2 \, du - 2\sum_{k=1}^{d} \lambda_k c_k - \sum_{k=1}^{\tilde{d}} \tilde{\lambda}_k \tilde{c}_k,$
where $\tilde{\Lambda}(u) = 1 + \sum_{k=1}^{\tilde{d}} \tilde{\lambda}_k \tilde{h}_k(u)$. Since $\tilde{\lambda} \ge 0$ by the KKT condition, we obtain that $\tilde{\Lambda}(u) \ge 0$ for all $u \in [0, 1]$. Therefore, for fixed $\lambda, \tilde{\lambda}$, using an argument similar to the proof of Theorem 1, the solution (as a function of $\lambda, \tilde{\lambda}$) is given by the weighted isotonic projection of $\tfrac{1}{\tilde{\Lambda}(u)}\big[\breve{F}(u) - \sum_{k=1}^{d} \lambda_k h_k(u)\big]$ with weight function $\tilde{\Lambda}(\cdot)$. □
Proof of Proposition 2.
The mean and variance constraint can be rewritten as
$m' = \int x \, dG(x) = \int_0^1 \breve{G}(u) \, du \quad \text{and} \quad \sigma'^2 = \int \big(x - m'\big)^2 dG(x) = \int_0^1 \big(\breve{G}(u) - m'\big)^2 du.$
Thus, the Lagrangian with Lagrange multipliers $\lambda = (\lambda_1, \dots, \lambda_{d+2})$ is, if $\lambda_2 \ne -1$,
$\mathcal{L}(\breve{G}, \lambda) = \int_0^1 \big(\breve{G}(u) - \breve{F}(u)\big)^2 du - 2\lambda_1 \Big(\int_0^1 \breve{G}(u) \, du - m'\Big) + \lambda_2 \Big(\int_0^1 \big(\breve{G}(u) - m'\big)^2 du - \sigma'^2\Big) - 2\sum_{k=1}^{d} \lambda_{k+2} \Big(\int_0^1 \breve{G}(u)\gamma_k(u) \, du - r_k\Big) = \int_0^1 (1+\lambda_2)\Big(\breve{G}(u) - \tfrac{1}{1+\lambda_2}\Big[\breve{F}(u) + \lambda_1 + \lambda_2 m' + \sum_{k=1}^{d} \lambda_{k+2}\gamma_k(u)\Big]\Big)^2 - \tfrac{1}{1+\lambda_2}\Big(\breve{F}(u) + \lambda_1 + \lambda_2 m' + \sum_{k=1}^{d} \lambda_{k+2}\gamma_k(u)\Big)^2 + \breve{F}(u)^2 \, du + 2\lambda_1 m' + \lambda_2\big(m'^2 - \sigma'^2\big) + 2\sum_{k=1}^{d} \lambda_{k+2} r_k.$
For fixed Lagrange multipliers $λ$ with $λ 2 ≠ − 1$, the optimal quantile function is characterised by the isotonic projection and given by (using an analogous argument to the proof of Theorem 1)
$\breve{G}^*(u) = \Big[\tfrac{1}{1+\lambda_2}\Big(\breve{F}(u) + \lambda_1 + \lambda_2 m' + \sum_{k=1}^{d} \lambda_{k+2}\gamma_k(u)\Big)\Big]^{\uparrow} = \tfrac{1}{|1+\lambda_2|}\Big[\operatorname{sgn}(1+\lambda_2)\Big(\breve{F}(u) + \lambda_1 + \lambda_2 m' + \sum_{k=1}^{d} \lambda_{k+2}\gamma_k(u)\Big)\Big]^{\uparrow} = \tfrac{1}{|1+\lambda_2|}\breve{H}(u),$
where we define $\breve{H}(u) = \Big[\operatorname{sgn}(1+\lambda_2)\Big(\breve{F}(u) + \lambda_1 + \lambda_2 m' + \sum_{k=1}^{d} \lambda_{k+2}\gamma_k(u)\Big)\Big]^{\uparrow} \in \breve{\mathcal{M}}$, and $\operatorname{sgn}(\cdot)$ denotes the sign function. Next, we show that $\lambda_2$ cannot be in a neighbourhood of $-1$. It holds for $\lambda_2 \ne -1$ that
$\int_0^1 \breve{G}^*(u)^2 \, du = \frac{1}{(1+\lambda_2)^2} \int_0^1 \breve{H}(u)^2 \, du.$
Since the rhs of (A3) is increasing as $|\lambda_2 + 1| \searrow 0$, there exists an $\epsilon_0 > 0$ such that for all $\epsilon < \epsilon_0$ and $\lambda_2 \in (-1-\epsilon, -1+\epsilon)$, it holds that
$\frac{1}{(1+\lambda_2)^2} \int_0^1 \breve{H}(u)^2 \, du > \sigma'^2 + m'^2,$
which is a contradiction to the optimality of $G ˘ *$. Thus, $λ 2$ is indeed bounded away from $− 1$ and the unique solution is given in (A2). □
Proof of Theorem 3.
We split the proof into two cases: (i), i.e., constraint (a), and (ii), i.e., constraint (b).
Case (i): For constraint (a), i.e., $\operatorname{VaR}_\alpha(G) = q$, we first assume that $q \le \operatorname{VaR}_\alpha(F)$, which implies $\breve{F}(\alpha_F) = q \le \breve{F}(\alpha)$ and thus $\alpha_F \le \alpha$. Therefore, $\breve{G}^*(u) = \breve{F}(u) + \big(q - \breve{F}(u)\big)\mathbb{1}_{\{u \in (\alpha_F, \alpha]\}}$ is a quantile function which satisfies the constraint. Next, we show that $G^*$ has a smaller Wasserstein distance to F than any other distribution function satisfying the constraint. For this, let $\breve{H}$ be a quantile function satisfying the constraint with $\breve{H}(u) \ne \breve{G}^*(u)$ on a measurable set of non-zero measure. Then
$W_2(H, F) = \int_0^{\alpha_F} \big(\breve{H}(u) - \breve{F}(u)\big)^2 du + \int_{\alpha_F}^{\alpha} \big(\breve{H}(u) - \breve{F}(u)\big)^2 du + \int_{\alpha}^{1} \big(\breve{H}(u) - \breve{F}(u)\big)^2 du \ge \int_{\alpha_F}^{\alpha} \big(\breve{H}(u) - \breve{F}(u)\big)^2 du.$
By the non-decreasingness of $\breve{H}$ and $\breve{F}$ and by the constraint, it holds for all $u \in [\alpha_F, \alpha]$ that $\breve{H}(u) \le \breve{H}(\alpha) = q = \breve{F}(\alpha_F) \le \breve{F}(u)$. Thus, on the interval $[\alpha_F, \alpha]$, we obtain $\big(\breve{H}(u) - \breve{F}(u)\big)^2 \ge \big(q - \breve{F}(u)\big)^2$ and therefore
$W_2(H, F) \ge \int_{\alpha_F}^{\alpha} \big(\breve{H}(u) - \breve{F}(u)\big)^2 du \ge \int_{\alpha_F}^{\alpha} \big(q - \breve{F}(u)\big)^2 du = W_2(G^*, F),$
where at least one inequality is strict since $H ˘ ( u ) ≠ G ˘ ( u )$ on a measurable set of non-zero measure. Uniqueness follows by the strict convexity of the Wasserstein distance and since the constraint is convex on the set of quantile functions.
Second, we assume that $q > \operatorname{VaR}_\alpha(F)$ and show that there does not exist a solution. Assume by contradiction that $\breve{G}$ is an optimal quantile function satisfying the constraint. By definition of $\alpha_F$, we have that $q = \breve{F}(\alpha_F) > \breve{F}(\alpha)$ and thus $\alpha_F \ge \alpha$. We apply a similar argument to the first part of the proof, using the non-decreasingness of $\breve{G}$, $\breve{G}(\alpha) = q$, and the optimality of $\breve{G}$, to obtain that $\breve{G}$ is constant and equal to q on $[\alpha, \alpha_F]$ and equal to $\breve{F}(u)$ for $u > \alpha_F$. Specifically, it holds that
$\breve{G}(u) = \breve{F}(u) + \big(q - \breve{F}(u)\big)\mathbb{1}_{\{u \in (\alpha, \alpha_F]\}}, \quad \text{for all } u > \alpha.$
Moreover, since the optimal quantile function minimises the Wasserstein distance to F, it holds that, for all $ϵ > 0$, $G ˘$ satisfies
$\breve{G}(u) = \breve{F}(u), \quad \text{for all } u \le \alpha - \epsilon.$
Thus, we can define for all $ϵ ∈ ( 0 , α )$ the family of quantile functions
$\breve{H}_\epsilon(u) = \breve{F}(u) + \big(q - \breve{F}(u)\big)\mathbb{1}_{\{u \in (\alpha - \epsilon, \alpha_F]\}},$
which satisfies $W_2(H_{\epsilon_1}, F) < W_2(H_{\epsilon_2}, F)$ for all $0 \le \epsilon_1 < \epsilon_2$, and $\breve{H}_\epsilon(\alpha) = q$ for all $\epsilon > 0$. However, $\lim_{\epsilon \searrow 0} \breve{H}_\epsilon(\alpha) = \breve{F}(\alpha) < q$ and thus the quantile function $\lim_{\epsilon \searrow 0} \breve{H}_\epsilon(u)$ does not fulfil the constraint. Hence, we obtain a contradiction to the optimality of $\breve{G}$.
Case (ii): First, we assume that $q \ge \operatorname{VaR}_\alpha^+(F)$, which implies that $\breve{F}(\alpha_F) = q \ge \breve{F}^+(\alpha) \ge \breve{F}(\alpha)$ and thus $\alpha_F \ge \alpha$. Therefore, $\breve{G}^*(u) = \breve{F}(u) + \big(q - \breve{F}(u)\big)\mathbb{1}_{\{u \in (\alpha, \alpha_F]\}}$ is a quantile function. Moreover, $\breve{G}^*$ satisfies the constraint since, by right-continuity of $\breve{G}^{*+}$, we have that
$\breve{G}^{*+}(\alpha) = \lim_{\epsilon \searrow 0} \breve{G}^{*}(\alpha + \epsilon) = q.$
The proof that $G ˘ *$ has the smallest Wasserstein distance to F compared to any other distribution function satisfying the constraint is analogous to the one in case (i).
For the case when $q < \operatorname{VaR}_\alpha^+(F)$, the non-existence of a solution follows using similar arguments as those in case (i). □
Proof of Theorem 4.
By concavity of the utility function, the constraint is convex and can be written as $-\int_0^1 u\big(\breve{G}(v)\big) dv + c \le 0$. Thus, we can define the Lagrangian with $\lambda_1 \ge 0$ and $(\lambda_2, \dots, \lambda_{d+1}) \in \mathbb{R}^d$ by
$\mathcal{L}(\breve{G}, \lambda) = \int_0^1 \tfrac{1}{2}\big(\breve{G}(v) - \breve{F}(v)\big)^2 - \lambda_1\big(u(\breve{G}(v)) - c\big) - \sum_{k=1}^{d} \lambda_{k+1}\big(\breve{G}(v)\gamma_k(v) - r_k\big) \, dv = \int_0^1 T\big(\breve{G}(v)\big) - \breve{G}(v)\Big[\breve{F}(v) + \sum_{k=1}^{d} \lambda_{k+1}\gamma_k(v)\Big] + \tfrac{1}{2}\breve{F}(v)^2 + \lambda_1 c + \sum_{k=1}^{d} \lambda_{k+1} r_k \, dv,$
where $T(x) = \tfrac{1}{2}x^2 - \lambda_1 u(x)$. Therefore, for fixed $\lambda_1, \dots, \lambda_{d+1}$, we apply Theorem 3.1 of Barlow and Brunk (1972) and obtain the unique optimal quantile function (as a function of $\lambda_1, \dots, \lambda_{d+1}$), that is $\breve{G}^*(v) = \breve{\nu}_{\lambda_1}\Big(\big[\breve{F}(v) + \sum_{k=1}^{d} \lambda_{k+1}\gamma_k(v)\big]^{\uparrow}\Big)$, where $\breve{\nu}_{\lambda_1}$ is the left-inverse of $\nu_{\lambda_1}(x) = x - \lambda_1 u'(x)$.
Next, we show that if $d = 0$, the utility constraint is binding, that is $\lambda_1 > 0$. For this, assume by contradiction that $\lambda_1 = 0$; then the optimal quantile function becomes $\breve{G}^*(u) = \breve{\nu}_0\big(\breve{F}(u)\big)$. Since $\nu_0(x) = x$, we obtain that $\breve{G}^*(u) = \breve{F}(u)$. $\breve{F}$, however, does not fulfil the constraint, which is a contradiction to the optimality of $\breve{G}^*$. □
Proof of Proposition 3.
We prove the properties one-by-one:
(i)
We first define, for a random variable Z with $P$-distribution $F_Z$, the random variable $U_Z := F_Z(Z)$. Then, $U_Z$ and Z are comonotonic and $U_Z$ has a uniform distribution under $P$. Next, recall that for any random variables $Y_1, Y_2$ it holds that (Rüschendorf 1983)
$E\big[Y_1 F_{Y_2}^{-1}(1 - U_{Y_1})\big] \le E[Y_1 Y_2] \le E\big[Y_1 F_{Y_2}^{-1}(U_{Y_1})\big],$
where $F_{Y_2}^{-1}(U_{Y_1})$ is the random variable that is comonotonic to $Y_1$ and has the same $P$-distribution as $Y_2$. Similarly, $F_{Y_2}^{-1}(1 - U_{Y_1})$ is the random variable that is counter-monotonic to $Y_1$ and has the same $P$-distribution as $Y_2$. The left (right) inequality in (A4) becomes an equality if and only if the random variables $Y_1$ and $Y_2$ are counter-monotonic (comonotonic).
Thus, we can rewrite the maximum in the normalising constant of the reverse sensitivity measure as follows
$\max_{Q \in \mathcal{Q}} E^{Q}[s(X_i)] = \max_{Z \overset{P}{=} \frac{dQ^*}{dP}} E[s(X_i) Z] = E\Big[s(X_i) \, F_{\frac{dQ^*}{dP}}^{-1}\big(U_{s(X_i)}\big)\Big],$
and the minimum in the normalising constant is
$\min_{Q \in \mathcal{Q}} E^{Q}[s(X_i)] = \min_{Z \overset{P}{=} \frac{dQ^*}{dP}} E[s(X_i) Z] = E\Big[s(X_i) \, F_{\frac{dQ^*}{dP}}^{-1}\big(1 - U_{s(X_i)}\big)\Big].$
The reverse sensitivity for the case $E Q * [ s ( X i ) ] ≥ E [ s ( X i ) ]$ then becomes
$S_i^{Q^*} = \frac{E\big[s(X_i)\frac{dQ^*}{dP}\big] - E[s(X_i)]}{E\big[s(X_i) \, F_{\frac{dQ^*}{dP}}^{-1}(U_{s(X_i)})\big] - E[s(X_i)]},$
which satisfies $0 ≤ S i Q * ≤ 1$ using again (A4). For the case $E Q * [ s ( X i ) ] ≤ E [ s ( X i ) ]$, it holds that
$S_i^{Q^*} = -\frac{E\big[s(X_i)\frac{dQ^*}{dP}\big] - E[s(X_i)]}{E\big[s(X_i) \, F_{\frac{dQ^*}{dP}}^{-1}(1 - U_{s(X_i)})\big] - E[s(X_i)]},$
which satisfies $− 1 ≤ S i Q * ≤ 0$.
(ii)
Assume that $s ( X i )$ and $d Q * d P$ are independent under $P$, then
$E\Big[s(X_i)\frac{dQ^*}{dP}\Big] = E[s(X_i)] \, E\Big[\frac{dQ^*}{dP}\Big] = E[s(X_i)],$
and the reverse sensitivity measure is indeed zero.
(iii)
From property (i) we observe that $s(X_i)$ and $\frac{dQ^*}{dP}$ are comonotonic if and only if $S_i^{Q^*} = 1$, since in this case the right inequality in Equation (A4) becomes an equality.
(iv)
From property (i) we observe that $s(X_i)$ and $\frac{dQ^*}{dP}$ are counter-monotonic if and only if $S_i^{Q^*} = -1$, as in this case the left inequality in Equation (A4) becomes an equality.
The proof that the joint reverse sensitivity $S i , j Q *$ also fulfils the above properties follows using analogous arguments and replacing $s ( X i )$ with $s ( X i , X j )$. □

## References

1. Acerbi, Carlo, and Dirk Tasche. 2002. On the coherence of Expected Shortfall. Journal of Banking & Finance 26: 1487–503.
2. Artzner, Philippe, Freddy Delbaen, Jean-Marc Eber, and David Heath. 1999. Coherent measures of risk. Mathematical Finance 9: 203–28.
3. Asimit, Vali, Liang Peng, Ruodu Wang, and Alex Yu. 2019. An efficient approach to quantile capital allocation and sensitivity analysis. Mathematical Finance 29: 1131–56.
4. Barlow, Richard E., and Hugh D. Brunk. 1972. The isotonic regression problem and its dual. Journal of the American Statistical Association 67: 140–47.
5. Barlow, Richard E., David J. Bartholomew, John M. Bremner, and Hugh D. Brunk. 1972. Statistical Inference under Order Restrictions: The Theory and Application of Isotonic Regression. Hoboken: Wiley.
6. Bernard, Carole, Silvana M. Pesenti, and Steven Vanduffel. 2020. Robust distortion risk measures. arXiv:2205.08850.
7. Blanchet, Jose, and Karthyek Murthy. 2019. Quantifying distributional model risk via optimal transport. Mathematics of Operations Research 44: 565–600.
8. Borgonovo, Emanuele. 2007. A new uncertainty importance measure. Reliability Engineering & System Safety 92: 771–84.
9. Borgonovo, Emanuele, and Elmar Plischke. 2016. Sensitivity analysis: A review of recent advances. European Journal of Operational Research 48: 869–87.
10. Borgonovo, Emanuele, Gordon B. Hazen, and Elmar Plischke. 2016. A common rationale for global sensitivity measures and their estimation. Risk Analysis 36: 1871–95.
11. Borgonovo, Emanuele, Gordon B. Hazen, Victor Richmond R. Jose, and Elmar Plischke. 2021. Probabilistic sensitivity measures as information value. European Journal of Operational Research 289: 595–610.
12. Cambou, Mathieu, and Damir Filipović. 2017. Model uncertainty and scenario aggregation. Mathematical Finance 27: 534–67.
13. Cont, Rama, Romain Deguest, and Giacomo Scandolo. 2010. Robustness and sensitivity analysis of risk measurement procedures. Quantitative Finance 10: 593–606.
14. Cuestaalbertos, Juan-Alberto, Ludger Rüschendorf, and Araceli Tuerodiaz. 1993. Optimal coupling of multivariate distributions and stochastic processes. Journal of Multivariate Analysis 46: 335–61.
15. Dall'Aglio, Giorgio. 1956. Sugli estremi dei momenti delle funzioni di ripartizione doppia. Annali della Scuola Normale Superiore di Pisa-Classe di Scienze 10: 35–74.
16. De Leeuw, Jan, Kurt Hornik, and Patrick Mair. 2010. Isotone optimization in R: Pool-adjacent-violators algorithm (PAVA) and active set methods. Journal of Statistical Software 32: 1–24.
17. Denuit, Michel, Jan Dhaene, Marc Goovaerts, and Rob Kaas. 2006. Actuarial Theory for Dependent Risks: Measures, Orders and Models. Hoboken: John Wiley & Sons.
18. Fissler, Tobias, and Silvana M. Pesenti. 2022. Sensitivity measures based on scoring functions. arXiv:2203.00460.
19. Fort, Jean-Claude, Thierry Klein, and Agnès Lagnoux. 2021. Global sensitivity analysis and Wasserstein spaces. SIAM/ASA Journal on Uncertainty Quantification 9: 880–921.
20. Gamboa, Fabrice, Pierre Gremaud, Thierry Klein, and Agnès Lagnoux. 2020. Global sensitivity analysis: A new generation of mighty estimators based on rank statistics. arXiv:2003.01772.
21. Gamboa, Fabrice, Thierry Klein, and Agnès Lagnoux. 2018. Sensitivity analysis based on Cramér–von Mises distance. SIAM/ASA Journal on Uncertainty Quantification 6: 522–48.
22. Hall, Peter, and Li-Shan Huang. 2001. Nonparametric kernel regression subject to monotonicity constraints. The Annals of Statistics 29: 624–47.
23. Kruse, Thomas, Judith C. Schneider, and Nikolaus Schweizer. 2019. The joint impact of f-divergences and reference models on the contents of uncertainty sets. Operations Research 67: 428–35.
24. Kusuoka, Shigeo. 2001. On law invariant coherent risk measures. In Advances in Mathematical Economics. Berlin and Heidelberg: Springer, pp. 83–95.
25. Makam, Vaishno Devi, Pietro Millossovich, and Andreas Tsanakas. 2021. Sensitivity analysis with χ2-divergences. Insurance: Mathematics and Economics 100: 372–83.
26. Maume-Deschamps, Véronique, and Ibrahima Niang. 2018. Estimation of quantile oriented sensitivity indices. Statistics & Probability Letters 134: 122–27.
27. Meyer, Mary C. 2008. Inference using shape-restricted regression splines. The Annals of Applied Statistics 2: 1013–33.
28. Moosmüeller, Caroline, Felix Dietrich, and Ioannis G. Kevrekidis. 2020. A geometric approach to the transport of discontinuous densities. SIAM/ASA Journal on Uncertainty Quantification 8: 1012–35.
29. Pesenti, Silvana M., Alberto Bettini, Pietro Millossovich, and Andreas Tsanakas. 2021. Scenario weights for importance measurement (SWIM)—An R package for sensitivity analysis. Annals of Actuarial Science 15: 458–83. [Google Scholar] [CrossRef]
30. Pesenti, Silvana M., Pietro Millossovich, and Andreas Tsanakas. 2019. Reverse sensitivity testing: What does it take to break the model? European Journal of Operational Research 274: 654–70. [Google Scholar] [CrossRef] [Green Version]
31. Pesenti, Silvana M., Pietro Millossovich, and Andreas Tsanakas. 2021. Cascade sensitivity measures. Risk Analysis 31: 2392–414. [Google Scholar] [CrossRef]
32. Plischke, Elmar, and Emanuele Borgonovo. 2019. Copula theory and probabilistic sensitivity analysis: Is there a connection? European Journal of Operational Research 277: 1046–59. [Google Scholar] [CrossRef]
33. Rahman, Sharif. 2016. The f-sensitivity index. SIAM/ASA Journal on Uncertainty Quantification 4: 130–62. [Google Scholar] [CrossRef]
34. Rüschendorf, Ludger. 1983. Solution of a statistical optimization problem by rearrangement methods. Metrika 30: 55–61. [Google Scholar] [CrossRef]
35. Saltelli, Andrea, Marco Ratto, Terry Andres, Francesca Campolongo, Jessica Cariboni, Debora Gatelli, Michaela Saisana, and Stefano Tarantola. 2008. Global Sensitivity Analysis: The Primer. Hoboken: John Wiley & Sons. [Google Scholar]
36. Sysoev, Oleg, and Oleg Burdakov. 2019. A smoothed monotonic regression via l2 regularization. Knowledge and Information Systems 59: 197–218. [Google Scholar] [CrossRef] [Green Version]
37. Tsanakas, Andreas, and Pietro Millossovich. 2016. Sensitivity analysis using risk measures. Risk Analysis 36: 30–48. [Google Scholar] [CrossRef] [PubMed] [Green Version]
38. Villani, Cédric. 2008. Optimal transport: Old and New. Berlin and Heidelberg: Springer Science & Business Media, vol. 338. [Google Scholar]
Figure 1. Top panels: Baseline quantile function $\breve{F}_Y$ (blue dashed) compared to the stressed quantile function $\breve{G}^*_Y$ (red solid) for a 10% increase in the $\alpha$-$\beta$ risk measure with $\beta = 0.1$, $\alpha = 0.9$, and various values of $p$. The green line $\ell(\cdot)$ is the function whose isotonic projection equals $\breve{G}_Y(\cdot)$. Bottom panels: corresponding baseline density $f_Y$ and stressed density $g^*_Y$.
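The isotonic projection appearing in the Figure 1 and Figure 3 captions — projecting a function $\ell(\cdot)$ onto the set of non-decreasing functions — can be computed with the pool-adjacent-violators algorithm (PAVA; De Leeuw et al. 2010, reference 16). Below is a minimal, self-contained sketch of PAVA on a discrete grid; it is illustrative only and is not the implementation used in the SWIM package.

```python
import numpy as np

def pava(y, weights=None):
    """Weighted isotonic (non-decreasing) L2 projection of y via the
    pool-adjacent-violators algorithm."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if weights is None else np.asarray(weights, dtype=float)
    # Each block stores (fitted value, total weight, number of grid points).
    values, wts, lens = [], [], []
    for yi, wi in zip(y, w):
        values.append(yi); wts.append(wi); lens.append(1)
        # Merge adjacent blocks while the monotonicity constraint is violated.
        while len(values) > 1 and values[-2] > values[-1]:
            total_w = wts[-2] + wts[-1]
            values[-2] = (wts[-2] * values[-2] + wts[-1] * values[-1]) / total_w
            wts[-2] = total_w
            lens[-2] += lens[-1]
            values.pop(); wts.pop(); lens.pop()
    # Expand the blocks back to the original grid.
    return np.concatenate([np.full(n, v) for v, n in zip(values, lens)])

# pava([1., 3., 2., 4.]) returns [1., 2.5, 2.5, 4.]: the violating pair (3, 2)
# is pooled to its average 2.5, leaving a non-decreasing fit.
```

Pooling adjacent violators to their weighted mean is exactly the L2 projection onto the isotone cone, which is why the stressed quantile function in the figures is piecewise flat where $\ell(\cdot)$ decreases.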
Figure 2. Top: Baseline quantile function $\breve{F}_Y$ compared to the stressed quantile function $\breve{G}^*_Y$. Bottom: corresponding baseline density $f_Y$ and stressed density $g^*_Y$. Left: $\mathrm{ES}_{0.95}$ and the mean held fixed, with a 20% increase in the standard deviation. Middle: 10% increase in $\mathrm{ES}_{0.95}$, 10% decrease in the mean, and fixed standard deviation. Right: 10% increase in $\mathrm{ES}_{0.95}$, 10% increase in the mean, and 10% decrease in the standard deviation. Note that in the middle and right panels the green lines coincide with the red lines.
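The Expected Shortfall values stressed in Figure 2 can be estimated directly from baseline Monte Carlo samples. A common empirical estimator — a sketch, not necessarily the estimator used in the paper — averages the losses at or above the empirical $\alpha$-quantile (the Value-at-Risk):

```python
import numpy as np

def expected_shortfall(losses, alpha=0.95):
    """Empirical ES_alpha: mean of the losses at or above the
    empirical alpha-quantile (VaR_alpha) of the sample."""
    losses = np.asarray(losses, dtype=float)
    var_alpha = np.quantile(losses, alpha)  # empirical VaR at level alpha
    return losses[losses >= var_alpha].mean()
```

A 10% stress on $\mathrm{ES}_{0.95}$ as in the figure then amounts to requiring the stressed model to satisfy `expected_shortfall(stressed_losses, 0.95) == 1.1 * expected_shortfall(baseline_losses, 0.95)`.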
Figure 3. Top panels: Baseline quantile function $\breve{F}_Y$ compared to the stressed quantile function $\breve{G}^*_Y$, for a 10% decrease in $\mathrm{ES}_{0.8}$, a 10% increase in $\mathrm{ES}_{0.95}$, and, from left to right, a 0%, 1%, and 3% increase in the HARA utility, respectively. The function $\ell(\cdot)$ (solid green) is the function whose isotonic projection equals $\breve{G}^*(\cdot)$. Bottom panels: corresponding baseline density $f_Y$ and stressed density $g^*_Y$.
Figure 4. RN-densities for the following stresses: a 10% decrease in both $\mathrm{ES}_{0.8}$ and $\mathrm{ES}_{0.95}$, and an increase in the HARA utility. The change in HARA utility is 0%, 1%, and 3%, respectively, from left to right.
Figure 5. Reverse sensitivity measures with $s(x) = x$, $s(x) = \mathbb{1}\{x > \breve{F}_i(0.8)\}$, and $s(x) = \mathbb{1}\{x > \breve{F}_i(0.95)\}$ (left to right), for two different stresses on the output $Y$. The first stress (salmon) keeps the HARA utility and $\mathrm{ES}_{0.8}(Y)$ fixed and increases $\mathrm{ES}_{0.95}(Y)$ by 1%. The second stress (violet) is an increase of 1% in the HARA utility, 1% in $\mathrm{ES}_{0.8}(Y)$, and 3% in $\mathrm{ES}_{0.95}(Y)$.
Figure 6. Contour plots of the bivariate copulae of $(L_5, L_{10})$ (top panels) and $(L_9, L_{10})$ (bottom panels) under different models. The left contour plots correspond to the baseline model and the right panels to the stress $Q_2^*$ (solid lines), with the baseline contours shown as partially transparent lines. Red points are simulated realisations.
Table 1. Summary of the stresses applied to the portfolio loss $Y$, expressed as relative increases of the stressed model over the baseline model.
| | HARA Utility | $\mathrm{ES}_{0.8}(Y)$ | $\mathrm{ES}_{0.95}(Y)$ |
|---|---|---|---|
| Stress 1: $Q_1^*$ | 0% | 0% | 1% |
| Stress 2: $Q_2^*$ | 1% | 1% | 3% |
Table 2. Comparison of different sensitivity measures: the first two columns report the reverse sensitivity measures with $s(x) = \mathbb{1}\{x > \breve{F}(0.95)\}$ under the stressed models $Q_1^*$ and $Q_2^*$, respectively. The last three columns report the delta measure under $P$, $Q_1^*$, and $Q_2^*$, respectively.
| | $S_i^{Q_1^*}$ | $S_i^{Q_2^*}$ | $\xi^P$ | $\xi^{Q_1^*}$ | $\xi^{Q_2^*}$ |
|---|---|---|---|---|---|
| $L_1$ | 0.45 | 0.68 | 0.38 | 0.38 | 0.38 |
| $L_2$ | 0.47 | 0.62 | 0.29 | 0.29 | 0.29 |
| $L_3$ | 0.51 | 0.57 | 0.30 | 0.30 | 0.29 |
| $L_4$ | 0.52 | 0.63 | 0.30 | 0.30 | 0.29 |
| $L_5$ | 0.34 | 0.58 | 0.33 | 0.34 | 0.33 |
| $L_6$ | 0.41 | 0.62 | 0.34 | 0.34 | 0.32 |
| $L_7$ | 0.54 | 0.72 | 0.40 | 0.40 | 0.38 |
| $L_8$ | 0.60 | 0.69 | 0.38 | 0.39 | 0.39 |
| $L_9$ | 0.24 | 0.66 | 0.40 | 0.40 | 0.38 |
| $L_{10}$ | 0.41 | 0.73 | 0.39 | 0.38 | 0.37 |
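Reverse sensitivity measures of the kind reported in the first two columns can be estimated from a set of baseline samples of an input together with scenario weights (the RN-density $dQ^*/dP$ evaluated at the simulated scenarios). The sketch below follows the normalisation-by-comonotonic-rearrangement construction of Pesenti et al. (2019); it is a hedged illustration under that assumed definition, not the SWIM implementation, and details may differ from the exact formula in the paper.

```python
import numpy as np

def reverse_sensitivity(x, w, s=lambda v: v):
    """Sketch of a reverse sensitivity measure for input samples x with
    scenario weights w (Monte Carlo estimate of the RN-density dQ*/dP,
    averaging to 1). The change E_Q*[s(X)] - E_P[s(X)] is normalised by the
    largest change achievable by rearranging the same weights (the
    comonotonic rearrangement), so the measure lies in [-1, 1]."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    sx = s(x)
    base = sx.mean()              # E_P[s(X)]: equally weighted samples
    stressed = np.mean(w * sx)    # E_Q*[s(X)]: weighted by the RN-density
    # Sorting both vectors pairs large weights with large s-values
    # (comonotonic); reversing one gives the antimonotonic minimum.
    w_sorted, s_sorted = np.sort(w), np.sort(sx)
    max_change = np.mean(w_sorted * s_sorted)
    min_change = np.mean(w_sorted[::-1] * s_sorted)
    if stressed >= base:
        return (stressed - base) / (max_change - base)
    return (stressed - base) / (base - min_change)
```

A weight vector already comonotone with the samples attains the measure's upper bound of 1, and an antimonotone one attains −1, matching the interpretation of the measure as the degree to which a stress "uses up" an input's capacity to move the output.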