QUANTIFYING MODEL RISK IN CREDIT DERIVATIVES PRICING

We propose a methodology for the quantification of model risk in the context of credit derivatives pricing and CVA, where the uncertain or unmodelled parameter is often the correlation between rates and credit. We take the rates model to be Hull-White (normal) and the credit model to be Black-Karasinski (lognormal). We show how highly accurate analytic pricing formulae, hitherto unpublished, can be derived for CDS and extended to address instruments with defaultable Libor flows which may in addition be capped and/or floored. We also consider the pricing of a contingent CDS with an interest rate swap underlying. We derive explicit expressions showing how, to good accuracy, the dependence of model prices on the uncertain parameter(s) can be captured in analytic formulae which are readily amenable to computation without recourse to Monte Carlo or lattice-based methods. In so doing, we take into account the impact on model calibration of the uncertain (or unmodelled) parameter.


Model risk management
Much effort is currently being invested into managing the risk faced by financial institutions as a consequence of model uncertainty. One strand of this effort is an increased level of regulatory scrutiny of the performance of the model validation function, both in terms of ensuring that adequate testing is performed of all models used for pricing and risk management purposes and of enforcing a governance policy that only models so tested are so used. As is stated in the Supervision and Regulation Letter of the US Federal Reserve (2011): "An integral part of model development is testing, in which the various components of a model and its overall functioning are evaluated to show the model is performing as intended; to demonstrate that it is accurate, robust, and stable; and to evaluate its limitations and assumptions."
Another concern is model risk monitoring and management. Here the idea is that, having validated models and examined the associated uncertainty, the risk department should monitor and report on the risk faced by a financial institution, ideally so that senior management can, based on "risk appetite", make informed decisions about model usage policy. According to the US Federal Reserve (2011): "Validation activities should continue on an ongoing basis after a model goes into use to track known model limitations and to identify any new ones. Validation is an important check during periods of benign economic and financial conditions, when estimates of risk and potential loss can become overly optimistic and the data at hand may not fully reflect more stressed conditions. ... Generally, senior management should ensure that appropriate mitigating steps are taken in light of identified model limitations, which can include adjustments to model output, restrictions on model use, reliance on other models or approaches, or other compensating controls."
Here the notion of best practice is less well established, in particular because different institutions adopt different approaches to measuring and reporting model risk. It is not therefore possible to enforce specific regulatory standards in this area, although regulators do take an interest in how banks perform the model risk governance function.
Central to the task of monitoring and managing model risk or uncertainty is the challenge of how to measure it. Current practice tends to be a mix of qualitative and quantitative metrics. While the former are easier to implement, the latter are preferable in terms of the level of control which can be exercised, particularly if the model risk can be quantified in monetary terms. However, the fact that no commonly agreed approach exists means that it is not easy to make progress in this area.
The present paper represents some of the author's thoughts on this topic, based on ten years of experience examining and quantifying model risk in relation to credit derivatives pricing. We include within this scope credit hybrid derivatives pricing and the calculation of the cost of counterparty risk protection on other types of derivative.

Layout of the paper
We begin in section 2 by reviewing previous methodologies which have been proposed for the quantification of model risk, before formally outlining our own proposed methodology. We go on in section 3 to describe our modelling approach for pricing credit derivatives in the context of stochastic interest rates and credit intensity. Under the assumption that both of these rates are small, we construct perturbation expansions representing solutions to the assumed governing equations, expressed in practice as a partial differential equation (PDE). Rather than looking to obtain particular solutions directly, a Green's function for the full homogeneous PDE is sought as a perturbation expansion up to second order in the small parameters. It is shown in section 4 how this Green's function can be used to calculate CDS prices or, conversely, to facilitate calibration of the model to market-observed CDS prices.

In the process, explicit analytic expressions are obtained for the PV of both the protection leg and the coupon leg of the CDS (under an assumed rates-credit correlation). Our Green's function is then used in section 5 to derive expressions for the PV of other credit derivatives, specifically credit-contingent interest rate swaps (including with capped or floored Libor) and contingent CDS with an interest rate swap underlying. These formulae can then be used in conjunction with those developed in section 2 to assess the level of model risk associated with the uncertain parameter(s). Finally, in section 6, we present some concluding remarks and a number of directions for possible future work.

Previous work
A number of authors have previously visited the question of how to define a methodology for the quantification of model risk. In his pioneering work on the subject, Cont (2004) proposes two approaches. In the first, a family of plausible models is envisaged, each calibrated to all relevant market instruments, then used to price a given portfolio of exotic derivatives. The degree of variation in the prices which are observed provides a measure of the intrinsic uncertainty associated with modelling the price of the portfolio. A second approach, taking account of the fact that not all models are amenable to calibration to market instruments, compares the models by penalising them for the pricing error associated with calibration instruments. The pricing errors for multiple instruments can be combined using various choices of norm, giving rise to a number of possible measures of model risk.
While intuitively attractive, neither of these approaches appears to have been adopted by practitioners. This is likely a consequence of the cost of implementing multiple models and re-pricing under them. Financial institutions usually have only a very few models implemented, often just one, capable of pricing a given exotic option. Furthermore, regulatory pressure is towards standardising pricing on as small a set of models as possible, which militates against the adoption of the kind of approach envisaged by Cont (2004).
More recently, Glasserman and Xu (2014) have proposed an alternative approach based on maximising the model error subject to a constraint on the level of plausibility. The approach starts from a baseline model and finds the worst-case error that would be incurred through a deviation from the baseline model, given a precise constraint on the plausibility of the deviation. Using relative entropy to constrain model distance leads to an explicit characterisation of worst-case model errors. In this way they are able to calculate upper bounds on model error. They show how their approach can be applied to the problems of portfolio risk measurement, credit risk, delta hedging and counterparty risk measured through credit valuation adjustment (CVA).
Although this approach has the attraction of a rigorous definition and, according to the authors, is amenable to convenient Monte Carlo implementation, it has the disadvantage that an entropy constraint specified a priori is not the sort of concept with which risk managers are likely to be comfortable in defining or expressing risk appetite. Yet it is central to the whole approach. Furthermore, the approach probably offers too much laxity in allowing the joint probability distribution function governing risk factors to vary freely, subject only to the entropy constraint.
Many of the perturbed distributions, including those giving rise to worst-case errors, would likely be deemed "unrealistic" by practitioners for reasons which cannot easily be encoded through entropy considerations. An approach which allows the user to be more specific about what is believed to be "known", and with what degree of certainty, using a parametrisation more closely related to market variables, would probably be preferred.
For example, the consensus among practitioners might be that the "best" interest rate model would lie somewhere between a normal and a lognormal process. But under the proposal of Glasserman and Xu (2014), if a Hull-White (normal) model were chosen as the baseline, deviations towards lognormal and away from it would be penalised equally. Yet we are really only interested in assessing the impact of the former.
We look to build to some extent on the basic philosophy of Cont (2004), but simplify the methodology so as to avoid the prohibitive cost of implementing multiple models. As we shall see, the key to making progress is the ability to assess, at least to a good approximation, the impact of more advanced model features without implementing them explicitly in a fully working model. To this end, asymptotic analysis, which has in the author's view been under-used in risk management, turns out to offer a fruitful way forward, certainly in the context of credit derivatives pricing with which we shall mainly be concerned here.

Proposed framework
We formally state the problem we are looking to address as follows. Consider a model M(s; ρ) which we wish to use as the basis for pricing a portfolio Φ containing derivatives D_k, k = 1, 2, ..., m.
Here s = (s_1, s_2, ..., s_n), with s_i the values of market data-determined model parameters, typically nodes on a curve associated with maturities T_1, T_2, ..., T_n, and ρ an additional model parameter, the appropriate value of which is unknown and furthermore not readily ascertainable from market data; or alternatively a parameter representing a risk factor which is not in practice modelled. We wish to consider and indeed quantify the dependence of the portfolio price on this model parameter. We suppose the model to be calibrated to market instruments with prices p_i, the price of the ith instrument being dependent on the set {s_j | j = 1, 2, ..., i}, with the dependence very weak for j < i, so it is a reasonable approximation to suppose dependence only on s_i. We further suppose that the pricing of these market instruments under M(s; ρ) is sensitive to the chosen value of ρ. There may be other market instruments to which the model is calibrated but, if the generated prices of these are not sensitive to ρ, we do not need to consider them explicitly in our analysis here.
Let us denote the price calculated for derivative D_k using model M(s; ρ), calibrated to market prices p, in self-evident shorthand notation by

    V_k(ρ) := V(D_k; s, ρ).    (1)

We propose that, ρ being an uncertain parameter, the model will in general be only weakly dependent thereon. (It would otherwise not be particularly usable.) An appropriate measure of the model risk associated with pricing the derivative portfolio is on this basis obtained by use of the linear approximation

    ∆V_k ≈ |dV_k/dρ| ∆ρ,    (2)

with ∆ρ an estimate of the level of uncertainty or inaccuracy associated with the representation of the parameter ρ. However, we need to be clear just what we mean by the derivative here, since the choice of s may depend on the value of ρ used. Let us suppose that the model is initially calibrated with a value of ρ = ρ_0, with the result s_0 = f(ρ_0), say. Then the impact of recalibration requires that we capture the ρ-dependence through the total derivative

    dV_k/dρ = ∂V_k/∂ρ + Σ_i (∂V_k/∂s_i) f_i'(ρ).    (3)

We require a means of determining f_i'(ρ). We note that, writing P_i(s_i; ρ) for the model price of the ith calibration instrument, the ith calibration condition can be expressed as

    P_i(f_i(ρ); ρ) = p_i,    (4)

leading under our previous assumptions to

    f_i'(ρ) = −(∂P_i/∂ρ) / (∂P_i/∂s_i).    (5)

Substituting in (3) for f_i'(ρ) from (5) and further substituting into (2) gives our representation of the model risk, contingent on our being able to compute satisfactorily the requisite partial derivatives w.r.t. s_i and ρ. We note in this regard that, while the partial derivatives in (3) all need to be calculated for each instrument in the portfolio, (5) needs to be solved only once for each calibration instrument.
Clearly the usefulness of the above formulae will depend on the degree of convenience with which the relevant partial derivatives can be computed. We are helped here by the fact that, given our intrinsic uncertainty about the magnitude of ∆ρ, we are necessarily looking to provide an estimate rather than an exact computation of the model risk. Further, we have already made the assumption that the dependence of prices on ρ will be weak. So an estimate of the partial derivatives should be good enough for our purposes if this can be provided. In the case that our model M is specified (with or without ρ-dependence) in a form requiring solution by Monte Carlo simulation or finite difference solution of a PDE, even the ability to do approximate calculations still leaves a lengthy and tedious calculation to perform. This will be compounded if we are concerned to understand how the level of model risk might change under different, perhaps more stressed, market environments, as is usually the case in a risk management context.

[Footnote 1: We use the word 'price' loosely here to encompass other quoted rates or indices from which market prices can be implied, such as CDS fair spreads or implied volatilities.]
Our suggestion here is that, if we can derive analytic approximations to instrument prices taking into account the uncertain model parameters, this opens the way to obtaining analytic representations of the partial derivatives in (3) and so to obtaining an estimate of the model risk more conveniently than otherwise. We will illustrate our approach with some examples from the credit derivatives area, with which the author is most familiar.
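As a concrete illustration of the mechanics of (2), (3) and (5), the following sketch estimates the model risk for a toy model by finite differences, including the recalibration term f_i'(ρ) = −(∂P_i/∂ρ)/(∂P_i/∂s_i). The pricing functions `price_portfolio` and `price_calib` are hypothetical stand-ins with arbitrary toy forms, not the credit models of this paper.

```python
import numpy as np

# Hypothetical stand-in pricing functions (toy forms for illustration only):
# price_portfolio(s, rho) prices the derivative portfolio under model M(s; rho);
# price_calib(s_i, rho) prices the ith calibration instrument.
def price_portfolio(s, rho):
    return np.sum(s) * (1.0 + 0.1 * rho)

def price_calib(s_i, rho):
    return s_i * (1.0 + 0.05 * rho)

def model_risk(s0, rho0, d_rho, eps=1e-6):
    """Estimate |dV/drho| * d_rho, including the impact of recalibration.

    Total derivative: dV/drho = dV/drho|_s + sum_i (dV/ds_i) * f_i'(rho),
    with f_i'(rho) = -(dP_i/drho) / (dP_i/ds_i) holding the ith calibration
    condition fixed.
    """
    # direct sensitivity at fixed calibration parameters s0
    total = (price_portfolio(s0, rho0 + eps)
             - price_portfolio(s0, rho0 - eps)) / (2 * eps)
    for i in range(len(s0)):
        s_up, s_dn = s0.copy(), s0.copy()
        s_up[i] += eps
        s_dn[i] -= eps
        dV_dsi = (price_portfolio(s_up, rho0)
                  - price_portfolio(s_dn, rho0)) / (2 * eps)
        dPi_drho = (price_calib(s0[i], rho0 + eps)
                    - price_calib(s0[i], rho0 - eps)) / (2 * eps)
        dPi_dsi = (price_calib(s0[i] + eps, rho0)
                   - price_calib(s0[i] - eps, rho0)) / (2 * eps)
        total += dV_dsi * (-dPi_drho / dPi_dsi)  # recalibration contribution
    return abs(total) * d_rho
```

Note that the recalibration ratio is computed once per calibration instrument, mirroring the observation that (5) needs to be solved only once per instrument.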

Underlying Processes
Our modelling approach will be to represent the interest rate r_t and the credit default intensity λ_t (of a named debt issuer) as correlated mean-reverting short rate processes. In this respect our approach is similar to that pioneered by Schönbucher (1999), who took both processes to be normal mean-reverting diffusions, in other words governed by the gaussian short rate model of Hull and White (1990). Solutions were in his case found by constructing a two-dimensional tree. As was pointed out by Schönbucher (1999), it is a straightforward matter to extend his model to non-gaussian processes.
A number of authors have followed this suggestion, taking the credit process to be lognormal, governed by a Black and Karasinski (1991) short rate model which, although less tractable than a gaussian model, ensures that credit spreads stay positive (and thus that survival probabilities are decreasing functions of time). Jobst and Zenios (2001) sought to price portfolios of bonds, modelling the credit spread for securities in a given rating class in this way, coupled with a Hull-White interest rate model, but also allowing rating class migrations to take place. A similar approach with only rates and credit default risk was used by Cortina (2007) to provide analytic solutions for the prices of defaultable bonds in the assumed absence of correlation, and by Pan and Singleton (2007), who considered the joint distribution of credit spreads and default loss rates implied by CDS market data.
We will follow the latter authors in taking the interest rate process to be normal, as proposed by Hull and White (1990), and the credit intensity process to be lognormal, so ensuring positive intensities, following Black and Karasinski (1991). The correlation ρ_rλ between these two processes will often be the uncertain model parameter of interest, although we could equally within our framework consider the credit mean reversion rate, or even its volatility, as an uncertain model parameter. We shall find it convenient to work with auxiliary variables x_t and y_t satisfying the following Ornstein-Uhlenbeck processes:

    dx_t = −α_r(t) x_t dt + σ_r(t) dW_t^1,    (6)
    dy_t = −α_λ(t) y_t dt + σ_λ(t) dW_t^2,    (7)

where dW_t^1 and dW_t^2 are correlated Brownian motions under the risk-neutral measure with dW_t^1 dW_t^2 = ρ_rλ dt. These auxiliary variables are related to the interest short rate r_t and the credit default intensity λ_t, respectively, by

    r_t = r*(t) + x_t,    (8)
    λ_t = λ*(t) E(y)_t,    (9)

where E(X)_t := exp(X_t − ½[X]_t) is a stochastic exponential, with [X]_t the quadratic variation of a process X_t. The required form of the configurable functions r*(t) and λ*(t) is determined by calibration of the model to satisfy the no-arbitrage conditions set out below, so rendering our model risk-neutral.
The interest rate model obtained in this way is of Hull-White type and the credit intensity model Black-Karasinski.
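A minimal Monte Carlo sketch of these dynamics is given below: an Euler simulation of the correlated auxiliary pair (x_t, y_t) with constant mean reversions and volatilities, all parameter values being illustrative. For simplicity the intensity is normalised here by the variance of y_T so that its mean stays near a constant lam_star; in the model proper, λ*(t) is instead fixed by the no-arbitrage calibration.

```python
import numpy as np

# Euler simulation of the correlated OU processes x_t, y_t (constant
# parameters assumed for this sketch), mapped to a Hull-White short rate
# and a Black-Karasinski (lognormal, hence positive) intensity.
def simulate_short_rates(n_paths=20000, n_steps=250, T=5.0,
                         a_r=0.25, sig_r=0.005, a_l=0.3, sig_l=0.6,
                         rho=-0.4, r_star=0.03, lam_star=0.04, seed=42):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.zeros(n_paths)
    y = np.zeros(n_paths)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        x += -a_r * x * dt + sig_r * np.sqrt(dt) * z1
        y += -a_l * y * dt + sig_l * np.sqrt(dt) * z2
    # exact OU variance of y_T, used to normalise the lognormal intensity
    var_y = sig_l**2 * (1.0 - np.exp(-2.0 * a_l * T)) / (2.0 * a_l)
    r = r_star + x                             # normal (Hull-White) short rate
    lam = lam_star * np.exp(y - 0.5 * var_y)   # lognormal (BK-type) intensity
    return r, lam
```

Negative short rates occur on some paths (the normal model permits them), while the intensity stays strictly positive, as the lognormal specification guarantees.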

The no-arbitrage condition
The formal no-arbitrage constraints which determine the functions r*(t) and λ*(t) are as follows: under the martingale measure, for 0 < t ≤ T_m, where T_m is the longest maturity date for which the model is calibrated,

    E[exp(−∫_0^t r_s ds)] = B(0, t),    (10)
    E[exp(−∫_0^t (r_s + λ_s) ds)] = B̃(0, t),    (11)

where B(t_1, t_2) is the t_1-forward price of the t_2-maturity zero coupon bond and B̃(t_1, t_2) the corresponding risky bond price. We shall assume the bond prices can be ascertained at the initial time t = 0 from the market, whence we can view

    B(0, t) = exp(−∫_0^t r(s) ds),    (12)
    B̃(0, t) = B(0, t) exp(−∫_0^t λ(s) ds),    (13)

as defining the forward rate r(t) and associated credit spread λ(t), respectively.

Derivation of governing PDE
We consider the general problem of pricing a cash security with maturity T whose payoff depends on x_T. We will also look below at protection instruments whose payoff may depend on τ and x_τ, where τ is a stopping time in (0, T]. We introduce a convenient shorthand notation whereby, for a process X_t and real-valued function f(·), we can re-write (8) and (9) as r_t = r(x_t, t) and λ_t = λ(y_t, t). Writing the price of the security at time t ∈ [0, T] as f_t^T = f(x_t, y_t, t), we can infer by application of the Feynman-Kac theorem to (6) and (7) in the standard manner that the function f(x, y, t) satisfies a backward diffusion equation, (16). Following the approach of (2017), we propose a perturbation expansion as follows.
For both short rate models we apply a 'low rates' assumption. To this end we define, taking T_m to be the longest time to maturity for which the model is calibrated, small parameters ε_r and ε_λ. We assume that r_t, r(t) and σ_r(t) are O(ε_r), while λ_t and λ(t) are O(ε_λ). The scaling of r*(t) and λ*(t) is inferred as part of the calculation. We presage our conclusions by writing expansions (20) and (21) with γ*_{i,j}(t) = O(ε_r^i ε_λ^j). We rewrite (16) as (22), where

    φ(x, y, t) := h(x, t) + g(y, t),    (23)
    h(x, t) := r(x, t) − r(t),    (24)
    g(y, t) := λ(y, t) − λ(t).    (25)

We take advantage of the assumed smallness of φ(·) to seek a Green's function solution for (22) as a joint power series in ε_r and ε_λ, asymptotically valid in the limit as these two parameters tend to zero.

Green's function expansion
From the analysis of Turfus (2017a), we infer that the Green's function solution of (22) can be expanded as a joint power series in ε_r and ε_λ. We will for the present purposes be interested only in terms up to second order. We will in all cases be interested in 'free-boundary' Green's function solutions which tend to zero as x, y → ±∞. The leading order Green's function solution subject to these conditions is straightforwardly deduced; it is given by (27). Following Turfus (2017a), we deduce the first order contributions (35) and (36). The extension to second order terms is similar. The details are presented in Appendix A. There it is also shown how our model can be calibrated consistent with the no-arbitrage conditions (10) and (11); in the process expressions are obtained for the unknown γ*_{i,j}(·) in (20) and (21).
Use of the first order expressions will prove adequate for the most part for present purposes. Equations (27), (35) and (36) can therefore be taken as the key results used in deriving the results below.

CDS Pricing
We next consider how we can use our Green's function to price a credit default swap (CDS) analytically under an assumed rates-credit correlation. Although this is a vanilla instrument, its use in calibration means that it is nonetheless important to have analytic formulae.

Fixed coupon leg
If, as proposed, the risky discount factors B̃(t_1, t_2) are assumed known, a coupon payment made for a payment period [t_{i−1}, t_i] with coupon c and t_i > 0 can be straightforwardly priced as c ∆_i B̃(0, t_i), with ∆_i the relevant year fraction.

Protection leg
Turfus (2017a) shows how we can derive the price of a protection leg by solving a non-homogeneous version of (22) with the forcing function −(1 − R)λ(y, t) on the r.h.s., where R is the assumed recovery level of the referenced debt. The result obtained per unit notional has O(ε_r² ε_λ) error; in it, the term ∆λ(·), with γ*_{1,1}(·) given by (A9), provides an O(ε_r ε_λ) correction to the leading order result. Here, the first term in the expression for ∆λ(·) comes from the application of G_{0,0} to λ*(·) and the second from the application of G_{1,0} to λ(·). Note that, in the absence of correlation, ∆λ(·) = 0 and the value of protection is as given under the assumption of deterministic rates.
A comparison of CDS prices based on the above against the results of a finite difference solution of the underlying PDE is reproduced from Turfus (2017a) in Fig. 1 to illustrate a typical parameter dependence structure and to indicate the level of accuracy which is furnished by our asymptotic method. The CDS had a 5y maturity and quarterly coupon payments. The CDS rate was taken to be 400 bp with an assumed recovery of 40%, with a local vol of 60% and a mean reversion rate of 0.25. The 5y swap rate was taken to be 300 bp with a short rate local volatility of 50 bp and a mean reversion rate of 0.25. The notional is taken here and in subsequent numerical comparisons to be 100. As can be seen, the agreement is excellent, with the discrepancy between the two modelling approaches considerably less than 1 bp of notional.
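For reference, the zero-correlation (deterministic rates) limit noted above can be computed in a few lines. The sketch below prices both CDS legs with a flat rate and intensity and a quarterly premium schedule; all parameter values are illustrative assumptions, and the protection leg is evaluated in closed form.

```python
import numpy as np

# Deterministic-rates baseline for the CDS legs (the zero-correlation limit):
# flat short rate r, flat intensity lam, recovery R, quarterly coupons.
def cds_legs(spread, r=0.03, lam=0.0667, R=0.4, T=5.0, freq=4):
    dt = 1.0 / freq
    times = np.arange(dt, T + 1e-12, dt)
    df = np.exp(-r * times)          # risk-free discount factors
    surv = np.exp(-lam * times)      # survival probabilities
    premium = spread * dt * np.sum(df * surv)
    # protection leg: (1-R) * integral of lam * exp(-(r+lam)t) dt, closed form
    protection = (1 - R) * lam / (r + lam) * (1.0 - np.exp(-(r + lam) * T))
    return premium, protection
```

At the "credit triangle" spread s ≈ (1 − R)λ the two legs nearly balance, which provides a quick sanity check on any correlated-model implementation.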

Calibration to CDS market
If we consider our model to be calibrated to risky bond prices, the calibration is at this stage completely specified, at least to second order accuracy. In particular, taking the uncertain model parameter to be ρ = ρ_rλ, we see that f_i'(ρ) = 0 in (3), simplifying our task.
Alternatively if, as is often the case, the calibration is to a term structure of CDS rates, we can take the market prices p_i to be CDS fair premia associated with maturities T_i. Let us further suppose that the function λ(t) can be taken as piecewise constant between the T_i, given say by λ(t) = λ_i for T_{i−1} < t ≤ T_i, with T_0 ≡ 0. We can then take the s_i introduced in section 2.2 above to be given by these λ_i, which can be inferred successively from the p_i by a standard bootstrapping process. Precise inference of the f_i(ρ) is then a somewhat intricate process, but our task is simplified if we are willing to consider only the leading order impact of calibration, whence we can neglect the O(ε_λ) indirect impact of the λ_i on (risky) discount factors in favour of their direct impact in the context of default-driven payoffs. A straightforward calculation then yields the f_i(ρ) with expected O(ε_λ) relative errors.² Equipped with this additional information, we are in a position to assess the model uncertainty associated with other derivative types priceable by our model.
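The bootstrapping step can be sketched as follows, under the simplifying assumptions of deterministic rates and a protection leg discretised on the premium grid; the flat-rate inputs are illustrative. For flat quoted spreads the recovered λ_i obey the credit triangle λ_i ≈ p_i/(1 − R) up to discretisation error.

```python
import numpy as np
from scipy.optimize import brentq

def survival(t, seg):
    # seg: list of (T_i, lam_i) pairs defining a piecewise-constant
    # intensity lam_i on (T_{i-1}, T_i]
    out, prev = 0.0, 0.0
    for T_i, lam_i in seg:
        out += lam_i * (min(t, T_i) - prev)
        if t <= T_i:
            break
        prev = T_i
    return np.exp(-out)

def bootstrap_intensities(maturities, spreads, r=0.03, R=0.4, freq=4):
    """Successively solve for lam_i so that each CDS reprices at its quote."""
    lams = []
    for k, (T, s) in enumerate(zip(maturities, spreads)):
        def mispricing(lam_k):
            seg = list(zip(maturities[:k], lams)) + [(T, lam_k)]
            dt = 1.0 / freq
            times = np.arange(dt, T + 1e-12, dt)
            df = np.exp(-r * times)
            q = np.array([survival(t, seg) for t in times])
            q_prev = np.concatenate([[1.0], q[:-1]])
            premium = s * dt * np.sum(df * q)
            protection = (1 - R) * np.sum(df * (q_prev - q))  # grid approximation
            return premium - protection
        lams.append(brentq(mispricing, 1e-8, 5.0))
    return lams
```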

Interest rate swap extinguisher
An interest rate swap extinguisher is an interest rate swap whose cash flows are contingent on survival of a named debt issuer. We have already considered fixed flows in section 4.1 above. We now look to price credit-contingent Libor flows. The payoff at time t_i for a payment period [t_{i−1}, t_i] is given in previously defined notation, with errors O(ε_r²). The calculation for the PV of this Libor flow contingent on no default was performed by Turfus (2017a); it was found to be given by (49), with O(ε_r(ε_r² + ε_λ²)) error.

[Footnote 2: The errors can in addition be expected to approximate to near zero since the calibration swaps are assumed to be at the money, whence the (risky) discounting affects both legs almost equally.]
We here use the binary operators ∧ and ∨ to represent min and max respectively. In conclusion, the fair price of a payer extinguisher will be the PV of the (Libor) floating leg less that of the fixed coupon leg. As can be seen from the graph, the use of our linear approximation approach to the model risk is a good one, with the discrepancy between the two modelling approaches in all cases less than 0.1 bp of notional.
It is from here a straightforward matter of differentiation to quantify the model uncertainty associated with the parameter ρ_rλ. For the coupon flows there is no such dependency to leading order.
For the Libor flows, we have the corresponding expression obtained by differentiation.

[Peer-reviewed version available at Int. J. Financial Stud. 2018, 6, 39; doi:10.3390/ijfs6020039]

Contingent CDS on an interest rate swap

Solution of (54) is by standard application of the Green's function expansion: only the leading order term G_{0,0}(·) is needed for our purposes. We conclude, following Turfus (2017b), that, with relative error O(ε_r + ε_λ), the cost of protection purchased at t = 0 on a payer swap is given by (59), where the latter expression need only be calculated to leading order, to which end x_{t_{i−1}} can be replaced by x in (58).
A comparison of (59) against the results of a Monte Carlo simulation is reproduced from Turfus (2017b) in Fig. 3 to illustrate a typical parameter dependence structure and to indicate the level of accuracy which is furnished by our asymptotic method. The credit intensity in this case is 640 bp, so not particularly 'small'; the local volatility was taken to be 70% with a mean reversion rate of 0.3. The interest rate market was as in section 5.1. The contract provided protection on the full value of the swap (assuming no recovery) for 6 years in return for semi-annual coupon payments of 400 bp. The notional was again taken to be 100. As can be seen, the use of our linear approximation approach to the model risk remains good, with the discrepancy between the two modelling approaches unlikely to exceed a few basis points of notional. It is again a matter of straightforward differentiation to derive an expression for the correlation risk associated with this modelling approach. To that end we note that the impact of correlation on discount factors is again weak and likely to cancel between legs. The impact through d_1(·) will likewise be weak and again will cancel between legs. So we ignore these two effects to leading order and focus on the direct impact on f_L^(i) through the term explicitly containing I_rλ(·). We obtain in this way the expression (64). Again the impact of correlation on calibration can be taken into account, but this will invariably be small compared to the above, so we propose that (64) will capture the uncertainty well. It may be suggested that the computational effort required here could become burdensome if N were large.
However, the greatest computational effort will be involved in computing ξ*(v) and the associated cumulative normal. The values of the latter can be tabulated in advance for a range of v ∈ [0, T], with interpolation then used in the integration. Further, for T ≡ t_k, we can re-express and factor the integrand into the product of a v-dependent term and an i-dependent term, the latter of which can be taken outside the integral. This means we must integrate numerically from 0 to T only once, which is comparatively little effort. This approach was used to good effect by the author in the computations described in Turfus (2017b).
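The tabulate-then-interpolate device can be sketched as below. Here `xi_star` is an arbitrary stand-in for the ξ*(v) of the text, and the grid size is illustrative; the returned function is cheap to evaluate inside the pricing quadrature.

```python
import numpy as np
from math import erf, sqrt

def make_tabulated_cdf(xi_star, T, n_grid=200):
    """Precompute Phi(xi_star(v)) on a grid of v in [0, T]; return a
    piecewise-linear interpolating function for reuse in the quadrature."""
    vs = np.linspace(0.0, T, n_grid)
    vals = np.array([0.5 * (1.0 + erf(xi_star(v) / sqrt(2.0))) for v in vs])
    return lambda v: np.interp(v, vs, vals)
```

The one-off cost is n_grid cumulative-normal evaluations; each subsequent quadrature node costs only a table lookup and a linear interpolation.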

Capped Libor flows
We consider the impact of capping a Libor flow, such as was considered in section 5.1 above, at some level K > 0.
with errors O(ε_r²). Because of the appearance of x_{t_{i−1}} in the above expression, we must first compute the PV as of t_{i−1}, which we obtain by straightforward application of our leading order Green's function G_{0,0}. To proceed, we define a critical value as the (asymptotic) representation of the value of x_{t_{i−1}} at which the cap K is hit. Applying our (leading order) Green's function G_{0,0}(0, 0, 0; ξ, η, t_{i−1}) again to the payoff at t_{i−1} to obtain the PV at t = 0, and carrying out the required integrations, we conclude with errors O(ε_r(ε_r + ε_λ)). In a similar vein, an analogous result is obtained for a Libor flow floored at K. On this occasion the terms involving d_1(·) should not be neglected, since they constitute the leading order impact of correlation. They furthermore impact only one leg, not both, so there will be no cancellation between legs as in the previous case. Differentiating with respect to ρ_rλ, taking the absolute magnitude of the result and multiplying by the uncertainty in ρ_rλ gives us the model uncertainty, assuming the calibration impact can again be ignored. We suggest it can be, provided the embedded caps are not too far in or out of the money; otherwise the capped flow can be viewed as a fixed flow or a Libor flow, respectively, both of which cases we have already considered above. In particular, in the latter case, we may wish to separate the capped flow into a Libor flow and a cap, modifying the former in accordance with (45) to take account of the O(ε_r ε_λ) contribution from ∆L^(i).
An identical result pertains for the floored case.

Conclusions
We have proposed a framework for the quantification of model risk in circumstances where a parameter of the model is either uncertain in its value or not included in a calculation. We have detailed how our proposed approach would work when the model parameter is the correlation between interest rates and credit default intensity in pricing credit derivatives. We considered in particular the cases of (a) an interest rate swap extinguisher, (b) a contingent CDS on an interest rate swap underlying, and (c) an extinguisher with capped or floored Libor flows, deriving explicit analytic expressions which we propose provide very accurate assessments of the model risk as a function of the degree of uncertainty associated with the correlation, under an asymptotic assumption of the interest rate and the credit default intensity being small. Although the cases considered here involve rather simple modelling considerations, we propose that the technique has much wider application. In particular it is possible to look at modelling involving also the price of a spot underlying such as an equity, an FX rate or an inflation rate. These quantities can further be assumed to jump in value contingent on default. Modelling then requires a three-dimensional diffusion process (possibly four, since two interest rates may appear, either or both of which may be assumed stochastic). Pricing of defaultable FX swaps or contingent CDS on FX or equity options can be handled, and analytic expressions for model uncertainty obtained, in the manner specified above. For some examples, see Turfus (2017c) and Turfus (2017d). In cases where a possible jump at default is assumed, the model uncertainty due to uncertainty in the expected jump size can also easily be obtained as an analytic expression.
Much work has also been done using perturbation approaches to obtain analytic approximations for prices of numerous option types under local-stochastic volatility modelling assumptions. See for example Pagliarani and Pascucci (2011), who considered equity option pricing under a local volatility assumption, obtaining a perturbation expansion for the relevant Green's function much as we did here, and using it to derive asymptotic expressions for option prices. Their approach was applied also to Asian option pricing in Foschi et al. (2013) and extended, with the use of some Fourier analysis, to incorporate Lévy jumps in the dynamics of the spot underlying in Pagliarani and Pascucci (2013).
A review of a number of other papers which have presented asymptotic pricing formulae in recent years has been given by Turfus and Schubert (2017). Our model uncertainty methodology is equally applicable to the results of such work. An interesting prospect for future work would be to combine asymptotic modelling of stochastic rates and local-stochastic volatility, as was done by Funahashi (2015), and look at the resultant model uncertainty in option pricing.

Conflicts of Interest:
The author declares no conflict of interest.
Appendix A

Finally, determination of the unknown γ*_{i,j}(·) functions is achieved by calibration of our model consistent with the no-arbitrage conditions (10) and (11). We must consider the consistent pricing in the former case of a risk-free cash flow, and in the latter case of a risky cash flow, as we now show.

Pricing of risk-free cash flow
The calculation for a risk-free cash flow in our model is very similar to that performed by Horvath et al. (2017) and essentially corresponds to taking the distinguished limit as ε_λ → 0 then ε_r → 0.
The same result is naturally obtained, namely that f_t^T = X_T(x, t), where we adopt the convention that payment dates are synchronised. (The extension of the calculation if payments are not synchronised is trivial.) A comparison of analytic calculations based on (49) against the results of a finite difference solution of the underlying PDE is reproduced from Turfus (2017a) in Fig. 2 to illustrate a typical parameter dependence structure and to indicate the level of accuracy which is furnished by our asymptotic method. The swap extinguisher paid quarterly Libor + 100 bp spread and received a quarterly 400 bp fixed coupon against a swap notional of 100. The credit default intensity was taken to be 770 bp, with a local vol of 60% and a mean reversion rate of 0.3. The 10y swap rate was taken to be 80 bp with a short rate local volatility increasing from 20 bp to 70 bp and a mean reversion rate of 0.25.
and, again ignoring the indirect impact of the λ_j on (risky) discount factors, we obtain the corresponding result.

[Preprints (www.preprints.org) | NOT PEER-REVIEWED | Posted: 4 January 2018, doi:10.20944/preprints201801.0023.v1]
