Without suitable acceleration techniques, the above CVA compression approach is not workable in real time on realistic banking portfolios: in the examples of
Section 4, a naive (desktop) implementation requires about 20 h of computations. This becomes even more problematic for hyperparameter tuning (such as the population size P, the crossover rate, etc.). Hyperparameters are generally chosen by grid search, random search (see Bergstra et al. 2011), Bayesian optimization (see Snoek et al. 2012), or even evolutionary algorithms again (see Young et al. 2015). In any case, their calibration is costly in terms of the number of overall genetic algorithm executions.
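To give a concrete sense of this cost, the following minimal sketch (with a hypothetical run_compression routine standing in for one full genetic algorithm execution) shows a random search over two such hyperparameters; every trial requires a complete compression run, which is what makes hyperparameter calibration so expensive.

```python
import random

def random_search(run_compression, n_trials=20, seed=0):
    """Random search over two GA hyperparameters (illustrative only).

    run_compression(pop_size, crossover_rate) is assumed to execute one full
    XVA compression run and return the achieved fitness (lower is better).
    """
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        pop_size = rng.choice([16, 32, 64, 128])
        crossover_rate = rng.uniform(0.5, 0.95)
        score = run_compression(pop_size, crossover_rate)  # one full GA execution per trial
        if best is None or score < best[0]:
            best = (score, pop_size, crossover_rate)
    return best
```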
In this section, we deal with two acceleration techniques, which may be used simultaneously: an MtM store-and-reuse approach for trade incremental XVA computations (Section 3.1) and a parallelization of the genetic algorithm (Section 3.2).
3.1. MtM Store-and-Reuse Approach for Trade Incremental XVA Computations
Most of the time in portfolio-wide XVA calculations is spent in clean valuation (i.e., mark-to-market, MtM) computations: by comparison, the simulation of the risk factors or of the collateral is typically negligible.
Our case study is based on the CVA metric. As observed after Equation (6), due to the lack of trade-additivity of the (portfolio-wide) CVA, trade incremental XVA computations require two portfolio-wide calculations: one without the new trade and another one including it. However, it is possible to store the simulated paths (including the MtM paths) of the initial portfolio and reuse them each time we want to compute a new trade incremental XVA. Then, each trade incremental XVA computation only requires the forward simulation of the mark-to-market process of the new deal.
The corresponding MtM store-and-reuse approach to trade incremental XVA computations circumvents repeated valuations at the cost of disk memory. It exploits the trade additivity of clean valuation by recording the MtM paths of the initial portfolio on a disk. For every new deal, the augmented portfolio exposure is obtained by adding, along the paths of the risk factors, the mark-to-market of the initial portfolio and of the new deal. This augmented portfolio exposure is then plugged into the XVA engine.
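A minimal sketch of this aggregation step, assuming the initial portfolio MtM has been simulated once and saved as a (scenarios × exposure dates) NumPy array, and that a hypothetical price_new_trade function values the new deal along the same stored risk-factor paths:

```python
import numpy as np

def save_initial_exposure(mtm_init, store_path="mtm_init.npy"):
    # mtm_init: array of shape (n_paths, n_dates) holding the clean value of the
    # initial portfolio at every exposure date and simulated scenario.
    np.save(store_path, mtm_init)

def augmented_exposure(price_new_trade, risk_factor_paths, store_path="mtm_init.npy"):
    """Augmented exposure = stored initial MtM + new trade MtM on the same paths."""
    mtm_init = np.load(store_path)                 # reuse: no portfolio revaluation
    mtm_incr = price_new_trade(risk_factor_paths)  # same (n_paths, n_dates) shape
    return mtm_init + mtm_incr                     # trade additivity of clean values
```

The resulting array can then be plugged into the XVA engine exactly as the initial exposure was, so that only the valuation of the new deal is paid for at each trade incremental computation.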
An optimally implemented MtM store-and-reuse approach brings down trade incremental XVA computations to the time of generating the clean price process of the trade itself, instead of the one of the augmented portfolio as a whole. Another advantage of this approach is its compliance with desk segregation: As far as clean valuation is concerned, the XVA desks just use the pricers of the clean desks. Hence, the MtM process plugged into the XVA computations is consistent with the one used for producing the market risk hedging sensitivities.
However, such an approach comes at the cost of disk memory (obviously), but also of data slippage, as, for consistency, it requires anchoring all the trade incremental XVA computations at the market data and parameters corresponding to the generation of the initial portfolio exposure. In practice, an MtM process at the overall portfolio level can only be generated during night runs, between two market sessions.
Moreover, we have to distinguish between first-order (or first-generation) XVAs, which are options on the MtM process, and higher-order (or second-generation) XVAs (see Crépey et al. 2019), which can be viewed as compound options of order two or more on the MtM process. Second-generation XVAs may also involve conditional risk measures, e.g., conditional value-at-risk for the dynamic initial margin calculations that are required for MVA computations, as opposed to conditional expectations only in the case of first-generation XVAs.
A Monte Carlo simulation diffuses risk factors $X$ (such as interest rates, credit spreads, etc.) along drivers $Z$ (such as Brownian motions, Poisson processes, etc.), according to a model formulated as a Markovian system of stochastic differential equations, starting from some given initial condition $X_0$ for all risk factors, suitably discretized in time and space. Modulo calibration, $X_0$ can be identified with the time-0 market data. We denote by $\widehat{Y}$ a suitable estimate of a process $Y$ at all (outer) nodes of a Monte Carlo XVA engine. In particular, $\widehat{\mathrm{MtM}}^{\mathrm{init}}$ is the fully discrete counterpart of the MtM process of the initial portfolio, namely the clean value of the portfolio at future exposure dates in a time grid and for different scenario paths.
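For concreteness, a forward diffusion of this kind might be sketched as follows, assuming a single risk factor driven by a Brownian driver and an Euler time discretization (the drift and vol functions are placeholders, not the models used in the paper):

```python
import numpy as np

def simulate_risk_factor(x0, drift, vol, times, n_paths, seed=0):
    """Euler scheme for dX_t = drift(t, X_t) dt + vol(t, X_t) dZ_t, with X_0 = x0.

    times is the exposure time grid starting at 0; the output has shape
    (n_paths, len(times)), one row per simulated scenario.
    """
    rng = np.random.default_rng(seed)   # fixing the seed allows reusing the drivers
    x = np.empty((n_paths, len(times)))
    x[:, 0] = x0
    for k in range(1, len(times)):
        dt = times[k] - times[k - 1]
        dz = rng.standard_normal(n_paths) * np.sqrt(dt)          # driver increments
        x[:, k] = (x[:, k - 1]
                   + drift(times[k - 1], x[:, k - 1]) * dt
                   + vol(times[k - 1], x[:, k - 1]) * dz)
    return x
```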
At first sight, an MtM store-and-reuse approach is unsuitable for second-order XVAs, such as the MVA and the KVA (but also the CVA in the case of a CSA where the bank receives so-called initial margin). Indeed, in their case, the principle of swapping computations against storage would require storing not one portfolio exposure $\widehat{\mathrm{MtM}}^{\mathrm{init}}$, but a whole family of resimulated, future conditional portfolio exposures (at least over a certain time horizon), which seems hardly feasible in practice. However, even in the case of second-order XVA metrics, an MtM store-and-reuse approach can be implemented with the help of appropriate regression techniques (at the cost of an additional regression error; see Crépey et al. 2019).
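As a rough illustration of the kind of regression involved (and not of the specific schemes of Crépey et al. 2019), a least-squares Monte Carlo proxy of a conditional expectation given the risk factor at a future exposure date can be sketched as:

```python
import numpy as np

def regress_conditional(x_t, payoff, degree=3):
    """Polynomial least-squares proxy of E[payoff | X_t = x].

    x_t: simulated risk factor values at the exposure date, one per path.
    payoff: the path-wise quantity whose conditional expectation is needed
            (e.g., a future exposure entering a second-order XVA metric).
    Returns a callable x -> estimated conditional expectation.
    """
    coeffs = np.polyfit(x_t, payoff, degree)  # regress on the basis 1, x, ..., x^degree
    return np.poly1d(coeffs)
```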
Formalizing the above discussion, the conditions for a straightforward and satisfactory application of the MtM store-and-reuse approach to a given XVA metric are as follows, referring by indices init, incr, and augm to the initial portfolio, the new deal, and the augmented portfolio:
(No nested resimulation of the portfolio exposure required) The formula for the corresponding (portfolio-wide, time-0) XVA metric should be estimable without nested resimulation, only based on the portfolio exposure rooted at $X_0$. A priori, an additional simulation level makes the MtM store-and-reuse idea of swapping execution time against storage impractical.
(Common random numbers) $\widehat{\mathrm{MtM}}^{\mathrm{incr}}$ should be based on the same paths of the drivers $Z$ as $\widehat{\mathrm{MtM}}^{\mathrm{init}}$. Otherwise, numerical noise (or variance) would arise during aggregation.
(Lagged market data) $\widehat{\mathrm{MtM}}^{\mathrm{incr}}$ should be based on the same time, say 0, and initial condition $X_0$ (including, modulo calibration, market data) as $\widehat{\mathrm{MtM}}^{\mathrm{init}}$. This condition ensures a consistent aggregation of $\widehat{\mathrm{MtM}}^{\mathrm{init}}$ and $\widehat{\mathrm{MtM}}^{\mathrm{incr}}$ into $\widehat{\mathrm{MtM}}^{\mathrm{augm}}$.
These conditions have the following implications:
The first seems to ban second-generation XVAs, such as the CVA in the presence of initial margin, but these can in fact be included with the help of appropriate regression techniques.
The second implies storing the driver paths that were simulated for the purpose of obtaining $\widehat{\mathrm{MtM}}^{\mathrm{init}}$; it also puts a bound on the accuracy of the estimation of $\widehat{\mathrm{MtM}}^{\mathrm{incr}}$, since the number of Monte Carlo paths is imposed by the initial run. Furthermore, the XVA desks may want to account for some wrong-way risk dependency between the portfolio exposure and counterparty credit risk (see Section 2.1); approaches based on correlating the default intensity and the market exposure in Equation (5) are readily doable in the present framework, provided the trajectories of the drivers and/or risk factors are shared between the clean and XVA desks.
The third induces a lag between the market data (of the preceding night) that are used in the computation of $\widehat{\mathrm{MtM}}^{\mathrm{incr}}$ and the exact MtM process of the new deal; when the lag on market data becomes unacceptably high (because of time flow and/or volatility on the market), a full reevaluation of the portfolio exposure is required.
Figure 2 depicts the embedding of an MtM store-and-reuse approach into the trade incremental XVA engine of a bank.
3.2. Parallelization of the Genetic Algorithm
Most of the XVA compression computational time is spent in the evaluation of the incremental XVA metric involved in the fitness criterion visible in Equation (
7). The MtM store-and-reuse approach allows reducing the complexity of such trade incremental XVA computations to trade (as opposed to portfolio) size. However, to achieve XVA compression in real time, this is not enough; another key step is the parallelization of the genetic algorithm that is used for solving Equation (
7).
The genetic algorithm is a population-based method, which requires maintaining a population of individuals (tentative new deals) through each iteration of the algorithm. The calculation of the objective function for a given individual does not depend on the fitness values of the other individuals. Therefore, we can vectorize the computation of the fitness values within the population. Provided a suitable parallel architecture is available, a perfectly distributed genetic algorithm makes the execution time independent of the population size
P (see Algorithm 1 and
Figure 1).
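A minimal sketch of this per-individual parallelization, using Python's standard process pool (the fitness function and the encoding of individuals are placeholders; with one worker per individual, the wall-clock time of this step no longer grows with P):

```python
from concurrent.futures import ProcessPoolExecutor

def evaluate_population(fitness, population, max_workers=None):
    """Evaluate all individuals (candidate new deals) of one generation in parallel.

    fitness(individual) is assumed to return the trade incremental XVA based
    fitness of Equation (7); evaluations are independent across individuals,
    so they can be dispatched to separate worker processes.
    """
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fitness, population))
```

The same pattern carries over to a compute grid; the only requirement is that the fitness function be self-contained (picklable, in this Python sketch) so that each worker can evaluate its individual independently.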
This makes an important difference with other metaheuristic optimization algorithms, such as simulated annealing or stochastic hill climbing, which only evaluate one or very few solutions per iteration, but need many more iterations to converge toward a good minimum (see Adler 1993; Janaki Ram et al. 1996). As discussed in Pardalos et al. (1995), the above parallelization of the fitness function evaluation, for a given population, should not be confused with a parallel genetic algorithm in the sense of an independent evolution of several smaller populations.
In our context, where individuals only represent incremental trades, a parallelization of the population fitness evaluation is compatible with an MtM store-and-reuse approach for the trade incremental XVA computations. Combining the two techniques results in an XVA compression time independent of the sizes of the initial portfolio of the bank and of the population of the genetic algorithm used for the optimization, which represents an XVA compression computation time gain factor of the order of the product of these two sizes.