Large deviations for a class of multivariate heavy-tailed risk processes used in insurance and finance

Modern risk modelling approaches deal with vectors of multiple components. The components could be, for example, returns of financial instruments or losses within an insurance portfolio concerning different lines of business. One of the main problems is to decide if there is any type of dependence between the components of the vector and, if so, what type of dependence structure should be used for accurate modelling. We study a class of heavy-tailed multivariate random vectors under a non-parametric shape constraint on the tail decay rate. This class contains, for instance, elliptical distributions whose tail is in the intermediate heavy-tailed regime, which includes Weibull and lognormal type tails. The study derives asymptotic approximations for tail events of random walks. Consequently, a full large deviations principle is obtained under, essentially, minimal assumptions. As an application, an optimisation method for a large class of Quota Share (QS) risk sharing schemes used in insurance and finance is obtained.


Introduction
Applications in finance and insurance require multivariate models with heavy-tailed distributions to accurately describe multivariate risks. This includes understanding the possible dependence types of large observations. The case where such observations are restricted to a subset, say an orthant, of the d-dimensional space R^d is studied in the setting of multivariate regular variation in [18]. Many studies on multivariate heavy-tailed distributions are built on the assumption of extremely heavy tails, assuming e.g. regular variation [13,14,20,21]. In this paper, we concentrate on the less studied situation where the large observations can be found in any direction and where the tails are not as heavy as regularly varying tails. Such situations appear naturally for financial returns of portfolios, since the tails are often observed to have a lognormal type distribution [11,15,24] and the observations can be present in all orthants [18].
We study asymptotic approximations of random walks, i.e. multivariate processes (S_n) := (S_n)_{n=1}^∞ in R^d, where S_n = X_1 + · · · + X_n and the increments X, X_1, X_2, . . . are independent and identically distributed (i.i.d.) random vectors. The class of studied increments is closely related to the class of multivariate subexponential vectors. Our class concerns tails lighter than polynomial, where the variables have finite moments of all orders. There exist at least three different approaches in the literature to defining multivariate subexponentiality. The definitions in [3,22] require, in addition to subexponentiality of the marginal distributions, a multivariate version of long-tailedness. The approach in [23] uses an alternative definition via fixed ruin sets in order to define a one-dimensional distribution function with respect to each set. The distribution class considered in this paper is consistent with the definition of [23]. For the one-dimensional case, [5] provides an overview of large deviations results for subexponential distributions.
We write X in product form as X = RU. The one-dimensional radius variable R controls the heaviness of the increments and U indicates which directions (defined by unit vectors) are possible. The variable R can have, for example, a Weibull or lognormal type distribution. This definition can be extended to include the class of elliptical distributions, which frequently appear in the literature on applications in finance, see, for instance, [12,16]. Notably, the tail decay speed of R is not restricted to a narrowly defined parametric class. The proof methods are based on earlier results concerning one-dimensional random walks such as the ones presented in [17]. A full large deviations principle with non-trivial rate function under, essentially, minimal assumptions on the distribution is also derived. This study complements [19], which considers lognormal distributions, and the result presented in [1], which focuses on Weibull distributions in the one-dimensional setting. As an application, we obtain an optimisation method for Quota Share (QS) risk sharing schemes, which are widely used in the field of reinsurance. In a QS-contract, there are two participants called the ceding company and the reinsurance company. They agree to share a random risk Y so that one pays qY and the other pays (1 − q)Y. Our aim is to optimise the portions q when a company buys reinsurance for all lines of business, i.e. each component of X is shared with a reinsurance company. The optimisation is carried out from the viewpoint of both the ceding and the reinsurance company.

Notation
We denote vectors by bold symbols and their components by upper indices, e.g. for x ∈ R^d we write x = (x^1, . . . , x^d)^T. The inner product of the vectors x and y is denoted by ⟨x, y⟩ = Σ_{j=1}^d x^j y^j and ||x||_2 is the L_2-norm. Here, ||x||_2 is called the length of x and x/||x||_2 the direction of x. S^{d−1} is the d-dimensional unit sphere, the subset of R^d including all vectors with L_2-norm equal to one. The notation B° stands for the interior of the set B, B̄ for its closure and B^c for its complement. For r ∈ R_+ and S ⊂ S^{d−1}, we set

V_{r,S} := {x ∈ R^d : ||x||_2 > r, x/||x||_2 ∈ S},    (1.1)

where the expression A := B means A is defined by B. B(x, a) denotes a ball centred at x with radius a and we denote the inverse function of f by f^{−1}.
The asymptotic relation f(x) ∼ g(x), as x → ∞, means lim_{x→∞} f(x)/g(x) = 1 and the little-o notation g(x) = o(f(x)) means lim_{x→∞} g(x)/f(x) = 0. We take the limit x → ∞ or n → ∞, where x denotes real and n natural numbers. The symbol 1(C) denotes the indicator function of the event C, P(C) its probability and E(X) stands for the expectation of X. By A we denote a symmetric d × d matrix and Ω ⊂ R^d is the ellipsoid generated by the linear transformation Λ : R^d → R^d, Λ(x) = Ax, of the unit sphere, Ω = Λ(S^{d−1}).

Model assumptions
The aim is to derive a large deviations principle for elliptical multivariate distributions with moderately heavy tails. Therefore, we study the asymptotic behaviour of the random walk (S_n), where S_n = X_1 + · · · + X_n, and X, X_1, X_2, . . . are i.i.d. increments. Here, X is the product of a heavy-tailed random variable R and a random vector U or Θ, similarly to the setting in [10]. We assume that U is distributed on the d-dimensional unit sphere S^{d−1} and that Θ is distributed on a d-dimensional ellipsoid Ω.
We make the following, essentially minimal, assumptions on R, U and Θ.
(A1) The tail function of the random variable R satisfies P(R > x) = e^{−h(x)}, where h is an increasing and concave function such that log(x) = o(h(x)), as x → ∞.
(A2) The random vector U ∈ S^{d−1} has a distribution on the d-dimensional unit sphere S^{d−1}. Let S ⊂ S^{d−1} be a subset with positive Lebesgue measure. We assume that P(U ∈ S) > 0. In addition, U is assumed to be asymptotically independent of the random variable R in the sense that lim_{x→∞} P(U ∈ S | R > x) = P(U ∈ S), and E(RU) = 0.
We can then define the random vector Θ by a linear transformation from the unit sphere S d−1 to the d-dimensional ellipsoid Ω centred at the origin. The linear transformation Λ : R d → R d with Λ(x) = Ax, where A is a symmetric, positive definite matrix generates the ellipsoid Ω = Λ(S d−1 ). The random vector Θ can then be written as a transformed vector, Θ = Λ(U). If A is a diagonal matrix, the ellipsoid is orientated along the axes.
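As an illustration, the model X = RΘ = R · Λ(U) can be simulated directly. The sketch below (Python with NumPy, hypothetical parameter choices) uses a Weibull-type radius and a direction vector uniform on the unit sphere and independent of R, which is one particular choice satisfying assumptions (A1) and (A2):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_increments(n, A, beta=0.5):
    """Sample X_i = R_i * Theta_i with Theta = Lambda(U) = A U.

    Illustrative special case: R is Weibull(beta) via inverse transform,
    P(R > x) = exp(-x**beta), and U is uniform on the unit sphere S^{d-1}
    and independent of R, so E(RU) = E(R) E(U) = 0.
    """
    d = A.shape[0]
    # uniform directions on S^{d-1}: normalised Gaussian vectors
    g = rng.standard_normal((n, d))
    u = g / np.linalg.norm(g, axis=1, keepdims=True)
    # Weibull radius: if E ~ Exp(1) then E**(1/beta) has tail exp(-x**beta)
    r = rng.exponential(size=n) ** (1.0 / beta)
    theta = u @ A.T          # Theta = A U lies on the ellipsoid Omega
    return r[:, None] * theta

A = np.diag([1.0, 2.0])      # diagonal A: ellipse orientated along the axes
X = sample_increments(10_000, A)
S = X.cumsum(axis=0)         # random walk S_n = X_1 + ... + X_n
print(X.shape, S.shape)
```

The diagonal matrix A here is only an example; any symmetric, positive definite A generates a valid ellipsoid Ω.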
Instead of defining Θ through the linear transformation Λ, we can write its definition in a similar way as for the random vector U.
(A2') The d-dimensional random vector Θ is distributed on an ellipse or ellipsoid Ω centred at the origin with E(RΘ) = 0. It holds for every set S ⊂ Ω with positive Lebesgue measure that P(Θ ∈ S) > 0 and Θ is asymptotically independent of the random variable R in the sense that lim x→∞ P(Θ ∈ S|R > x) = P(Θ ∈ S).
Remark 1.1. Assumption (A1) implies that the random variable R is heavy-tailed in the sense that E(e^{sR}) = ∞ for all s > 0. The fact that log(x) = o(h(x)) implies E(R^s) < ∞ for all s > 0, so the random variable R has finite moments of all orders. Furthermore, it follows from assumptions (A2) and (A2') that the support of the random vector U or Θ is the entire set S^{d−1} or Ω, respectively.
Assumption (A1) is closely related to the class of subexponential distributions that is introduced, for instance, in [6,7].
Lemma 1.2. Suppose the random variable R satisfies Assumption (A1). Then the distribution of R is subexponential.
Proof. The statement follows directly from Theorem 2 of [25], which gives a sufficient condition for subexponentiality of a distribution based on tail functions. The condition has three requirements, two of which are immediately true by our definition. To check the remaining condition, we define an auxiliary function g(x) := (x/h(x))^{1/2}. Then g(x) → ∞ and x − g(x) → ∞, as x → ∞. Without loss of generality, we can assume h(0) ≥ 0, see Remark 2.8. The required lower bound of the limit then follows from the concavity of h. The corresponding upper bound of the limit is immediately valid by definition.
Simple examples of distributions of R that fulfil Assumption (A1) include Weibull distributions with parameter β ∈ (0, 1) and lognormal type distributions, which are defined by the relation P(R > x) ∼ e^{−(log(x))^p} for x > x_0, where p > 1.
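Both examples can be checked numerically: for h(x) = x^β and h(x) = (log x)^p the function is concave and dominates log(x) in the required sense. A minimal sketch (Python, hypothetical parameter values β = 1/2 and p = 2):

```python
import numpy as np

# Numerical check of Assumption (A1) for the two example tail exponents:
# Weibull:        h(x) = x**beta,        beta in (0, 1)
# lognormal type: h(x) = (log x)**p,     p > 1
beta, p = 0.5, 2.0
xs = np.logspace(1, 12, 12)  # x = 10, 100, ..., 1e12

for name, h in [("Weibull", lambda x: x**beta),
                ("lognormal type", lambda x: np.log(x)**p)]:
    # log(x) = o(h(x)): the ratio should decrease towards zero
    ratio = np.log(xs) / h(xs)
    assert np.all(np.diff(ratio) < 0)
    # discrete concavity check on a grid: second differences non-positive
    grid = np.linspace(10, 1e6, 1001)
    assert np.all(np.diff(h(grid), 2) <= 1e-9)
    print(name, "ratio at largest x:", ratio[-1])
```

The discrete checks are of course no proof; for these two families, concavity and log(x) = o(h(x)) also follow by elementary calculus for large enough x.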
Assumption (A1) can be used to obtain bounds even if it does not hold immediately for a given tail function P(R > x). For example, if R can be stochastically bounded by, say, R′ and R′′ in the sense that P(R′′ > x) ≤ P(R > x) ≤ P(R′ > x) for all large enough x, and the variables R′ and R′′ satisfy (A1) (possibly with different concave functions), a result can be obtained if the asymptotic behaviour concerning the upper and lower bounds coincides in a suitable sense. For a concrete example of this, recall that a random variable R belongs to the class of stretched exponential distributions if, for large enough x, the inequalities e^{−l_1(x)x^β} ≤ P(R > x) = e^{−l(x)x^β} ≤ e^{−l_2(x)x^β} hold, where β ∈ (0, 1) and l, l_1, l_2 are slowly varying functions. This class is studied in particular in [8,9]. Here, Assumption (A1) is valid if l(x)x^β is a concave function for large enough x. If it is not, we can still find, based on Theorem 1 of [17], a function h(x) which satisfies Assumption (A1) and the inequality P(R > x) ≤ e^{−h(x)} for large enough x, with h asymptotically equivalent to the original tail exponent. This fact can be used in the proofs by replacing P(R > x) by e^{−h(x)} in suitable places in order to obtain results also for the stretched exponential class.

Large deviations principle
Throughout this section, we study the random walk (S_n) generated by random vectors of the form X = RU, where the random variable R fulfils Assumption (A1) and the random vector U fulfils Assumption (A2). We examine the probability of the asymptotic event that the random walk exceeds a threshold in a selected norm in order to prove a large deviations theorem. In this study, we choose the L_2-norm because it is, in our view, a natural choice when dealing with ellipses. We start by considering a spherical distribution. The result is later extended to the setting of asymptotically elliptical heavy-tailed distributions. The proofs of the theorems stated below can be found in Subsection 2.3. The first result concerns the logarithmic asymptotics of the norm of the random walk.
Theorem 2.1. Let X = RU, where R fulfils Assumption (A1) and U fulfils Assumption (A2), and let a > 0. Then, lim_{n→∞} log(P(||S_n||_2 > na))/h(na) = −1.
The asymptotic relation derived in Theorem 2.1 yields a full large deviations principle with non-trivial rate function for asymptotically spherical heavy-tailed distributions under an additional technical assumption.
Theorem 2.2. Let X = RU, where R and U fulfil assumptions (A1) and (A2). Additionally, assume that, for every a > 0, the limit lim_{n→∞} h(na)/h(n) exists. Then, the process {S_n/n} satisfies the large deviations principle with speed h(n) and a non-trivial rate function I.
The large deviations principle in Theorem 2.2 is a multivariate equivalent of the large deviations principle in [17] for d-dimensional spherical random vectors.
Proof. The claim follows directly from the assumed continuity of the rate function.
The rate function is symmetric with respect to the origin. Furthermore, the one-dimensional rate function along any line segment with endpoint in the origin is concave like the rate function in the one-dimensional case examined in [17].
The following example examines the rate function for typical distributions that fulfil Assumption (A1).
Example 2.5. (i) Suppose R follows a Weibull distribution with parameter β ∈ (0, 1). Then, h(x) = cx^β with some constant c > 0, so the index α from Remark 2.4 is equal to β. Furthermore, I(x) = ||x||_2^β for all x ∈ R^d and I is a good rate function. The rate function I(x) is continuous, so Corollary 2.3 holds.
(ii) If R has a lognormal type distribution of the form P(R > x) ∼ e^{−(log(x))^p} for x > x_0 with some parameter p > 1, the index α from Remark 2.4 is 0 and I(x) = 1 for all x ∈ R^d \ {0}, so the rate function I jumps at the origin. The rate function is non-trivial, but not good, according to the terminology used in the context of large deviations.
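Assuming the index α is obtained through the limit of h(na)/h(n) (as suggested by the limit assumption in Theorem 2.2), both index values can be verified by a direct computation:

```latex
% Hedged index computation, assuming a^{\alpha} = \lim_{n\to\infty} h(na)/h(n)
% (i) Weibull tail: h(x) = c x^{\beta}, \beta \in (0,1)
\lim_{n\to\infty}\frac{h(na)}{h(n)}
  = \lim_{n\to\infty}\frac{c\,(na)^{\beta}}{c\,n^{\beta}}
  = a^{\beta}
  \qquad\Rightarrow\qquad \alpha = \beta .
% (ii) lognormal-type tail: h(x) = (\log x)^{p}, p > 1
\lim_{n\to\infty}\frac{h(na)}{h(n)}
  = \lim_{n\to\infty}\left(\frac{\log n + \log a}{\log n}\right)^{p}
  = 1 = a^{0}
  \qquad\Rightarrow\qquad \alpha = 0 .
```

In the second case the limit does not depend on a at all, which is exactly why the rate function is constant away from the origin.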

Auxiliary results
In order to prove Theorem 2.1 and Theorem 2.2, we need some auxiliary lemmas. The auxiliary results study the projection of the random walk to a one-dimensional setting and its asymptotics. The orthogonal projection P_v(·) onto the subspace spanned by the vector v ∈ S^{d−1} is defined as P_v(x) := ⟨v, x⟩v, where the inner product p_v(x) := ⟨v, x⟩ indicates the length of the projected vector in the subspace according to the L_2-norm. The next result shows that the projection of the random vector X has the same asymptotic behaviour as ||X||_2.
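For concreteness, the projection and its signed length can be written as a small sketch (Python; the particular vectors are illustrative):

```python
import numpy as np

def p_v(x, v):
    """Signed length <v, x> of the orthogonal projection of x onto span{v}."""
    return float(np.dot(v, x))

def P_v(x, v):
    """Orthogonal projection of x onto the subspace spanned by the unit vector v."""
    return p_v(x, v) * v

v = np.array([1.0, 0.0])           # a direction in S^{d-1}
x = np.array([3.0, 4.0])
print(p_v(x, v))                   # 3.0
print(P_v(x, v))                   # [3. 0.]
```

By the Cauchy-Schwarz inequality, p_v(x) ≤ ||x||_2 for every unit vector v, which is the elementary fact behind the asymptotic upper bounds below.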
Lemma 2.6. Suppose X = RU, where assumptions (A1) and (A2) hold. Let a > 0 and v ∈ S^{d−1}. Then, log(P(p_v(X) > na)) ∼ log(P(||X||_2 > na)), as n → ∞.
Proof. The asymptotic upper bound follows from the inequality p_v(X) ≤ ||X||_2. To prove the asymptotic lower bound, we fix δ > 0 and restrict the event to directions U sufficiently close to v, using the asymptotic independence of U and R. The resulting bound holds for every δ > 0 small enough, which proves the claim.
In the proof of Lemma 2.9, we divide the probability into the term caused by a single big jump and its complement. The following lemma provides an upper bound for the remaining term not caused by a single big jump.
Lemma 2.7. Let Y, Y_1, Y_2, . . . be i.i.d. one-dimensional random variables with E(Y) = 0 and finite moments of all orders, and suppose a > 0. Furthermore, let P(Y > y) ≤ e^{−h(y)} for all large enough y, where h(x) fulfils the assumptions on the function h(x) stated in Assumption (A1) and, additionally, h(0) ≥ 0. Then, lim sup_{n→∞} log(P(Y_1 + · · · + Y_n > na, max_{1≤i≤n} Y_i ≤ na))/h(na) ≤ −1.
The proof uses similar ideas as the proof of Theorem 2 in [17].
Proof of Lemma 2.7: Due to the exponential Chebyshev inequality and the independence of the random variables, it holds that the probability in question is bounded by an nth power of a truncated exponential moment. To bound the expectation from above, we split it into two parts, E_1 and E_2. Taking δ ∈ (0, 1) and setting suitable sequences (b_n) and (c_n) with b_n c_n → 0, one can apply a Taylor expansion to the first term E_1.
Integrating E_2 by parts and rewriting it in terms of the tail distribution of Y, one obtains an upper bound in terms of the tail function. For every δ > 0, it holds that P(Y > y) ≤ exp(−(1 − δ/2)h(y)) for all y ≥ c_n, choosing n large enough. Applying additionally a Taylor expansion to the first term of the latter equation results in the desired upper bound. Due to the concavity of the function h(x) and the fact that y ≤ na, the exponent can be controlled. Applying the inequality log(x) ≤ x − 1, the first terms converge to zero, because E(Y) = 0 and all moments are finite. Choosing ε(n) such that log(n) = o(h(c_n)), also the last term converges to zero. The resulting inequality holds for any δ > 0, which implies the claim.

We can now state the principle of a single big jump for projections of multivariate random walks.

Lemma 2.9. Let X = RU, assume (A1) and (A2), and let v ∈ S^{d−1} and a > 0 be fixed. Then, it holds that log(P(p_v(S_n) > na)) ∼ log(P(p_v(X) > na)), as n → ∞.

Proof. First, we show the asymptotic lower bound. Using the principle of a single big jump and the weak law of large numbers, it follows that a single increment with a sufficiently large projection suffices for the event. The last step applies, in addition to the weak law of large numbers, Lemma 2.6. The fact that this holds for every ε > 0 implies the asymptotic lower bound.

It remains to show the asymptotic upper bound. Dividing the probability into the case where at least one of the random vectors has a projection exceeding the threshold na and its complement implies that we can examine the terms separately, since (log(P(p_v(X) > na)))^{−1} → 0, as n → ∞. We get two terms. Clearly, the first term is −1. For the second term, we set Y = p_v(X). Since log(P(p_v(X) > na)) = log(P(⟨v, U⟩R > na)) ≤ log(P(R > na)) ∼ −h(na), we can apply Lemma 2.7 with the help of Remark 2.8. Finally, applying Lemma 2.6 we get the upper bound −1 also for the second term, which completes the proof.

Proof of Theorem 2.1 and Theorem 2.2
We can now state the proofs of the main results of Section 2.
Proof of Theorem 2.1: First, we approximate the event {||S_n||_2 > na} by projections in different directions. The fact that the principle of a single big jump holds for any orthogonal projection yields the desired asymptotic behaviour. Since P(||S_n||_2 > na) ≥ P(p_v(S_n) > na) for every v ∈ S^{d−1}, the asymptotic lower bound is an immediate consequence of Lemma 2.6 and Lemma 2.9.
To prove the corresponding upper bound, we cover the set {x : ||x||_2 > na} by a finite union of m sets defined by orthogonal projections and study the limit, as m → ∞. To this end, let m ≥ 2d. We aim to choose vectors v_k ∈ S^{d−1}, k = 1, . . . , m, such that unions of the form ∪_{k=1}^m {x : p_{v_k}(x) > ε}, where ε > 0, can be used to cover the whole space except some neighbourhood of the origin. For example, in R^2, an easy way to define the vectors v_k ∈ S^1 is to take v_k = (cos(2kπ/m), sin(2kπ/m))^T. Choosing the v_k appropriately, for instance, such that they are uniformly spaced on the unit sphere, there exists some ε(m) > 0 such that {x : ||x||_2 > na} ⊂ ∪_{k=1}^m {x : p_{v_k}(x) > na(1 − ε(m))} and, more specifically, we can choose the numbers ε(m) so that ε(m) → 0, as m → ∞. Hence, the union bound applies. Applying Lemma 2.9 and Lemma 2.6, we get an upper bound for each term of the union. Letting m → ∞, it holds that ε(m) → 0 and the term above converges to −1.
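The covering construction in R^2 can be checked numerically: with v_k = (cos(2kπ/m), sin(2kπ/m))^T, every point loses at most a factor cos(π/m) in its best projection, so ε(m) = 1 − cos(π/m) → 0 is one admissible choice. A sketch (Python, hypothetical sample points):

```python
import numpy as np

rng = np.random.default_rng(1)

m = 16
# directions v_k = (cos(2k*pi/m), sin(2k*pi/m)) on S^1
angles = 2 * np.pi * np.arange(1, m + 1) / m
V = np.stack([np.cos(angles), np.sin(angles)], axis=1)

# For every x, the nearest v_k is within angle pi/m, hence
#   max_k <v_k, x> >= ||x||_2 * cos(pi/m),
# so {||x||_2 > na} is covered by the union of
#   {x : p_{v_k}(x) > na * (1 - eps(m))} with eps(m) = 1 - cos(pi/m) -> 0.
eps_m = 1 - np.cos(np.pi / m)

xs = rng.standard_normal((1000, 2))
best = (xs @ V.T).max(axis=1)                  # max_k p_{v_k}(x)
norms = np.linalg.norm(xs, axis=1)
print(eps_m, bool(np.all(best >= norms * (1 - eps_m) - 1e-12)))
```

Already for m = 16 the loss factor ε(m) is below 2 percent, illustrating why the upper bound sharpens to −1 as m → ∞.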

Proof of Theorem 2.2:
The proof of the large deviations principle is based on establishing the limit (2.2) for the sets V_{na,S}, since open sets of the form V_{a,S} defined in (1.1), where a > 0 and S is an open subset of S^{d−1}, generate the topology of R^d. The limit superior of (2.2) follows directly from Theorem 2.1 due to the inequality P(S_n ∈ V_{na,S}) ≤ P(||S_n||_2 > na).
Rewriting P(S_n ∈ V_{na,S}) = P(||S_n||_2 > na) P(S_n/||S_n||_2 ∈ S | ||S_n||_2 > na) yields the limit inferior of (2.2), since the last probability is positive. The weak law of large numbers implies lim_{n→∞} P(S_n/n ∈ B(0, ε)) = 1 for all ε > 0 and thus, if 0 ∈ B ⊂ R^d, lim_{n→∞} log(P(S_n/n ∈ B))/h(n) = 0.
Finally, by (2.2), it is easy to obtain the required inequalities.

Asymptotics in the elliptical case

Contraction principle
To extend the result to asymptotically elliptically distributed random vectors, we apply the contraction principle, Theorem 4.2.1 in [4], to the large deviations result in Theorem 2.2.
In this section, we study the asymptotics of the random walk (S_n) with increments X = RΘ, where Θ is distributed on some d-dimensional ellipsoid Ω centred at the origin. For asymptotically elliptical distributions, a suitable linear map in the contraction principle is the bijective function Λ : R^d → R^d defined by Λ(x) = Ax mapping vectors from S^{d−1} to Ω. Here, A is a symmetric, positive definite and thus invertible d × d matrix such that Λ(S^{d−1}) = Ω. Since Λ is linear, the large deviations principle of Theorem 2.2 transfers to the elliptical setting, with the rate function composed with the inverse map Λ^{−1}.

Example 3.2. Similarly to Example 2.5, if X fulfils the assumptions of Theorem 3.1, where R follows a Weibull distribution with parameter β ∈ (0, 1), the rate function I(x) = ||Λ^{−1}(x)||_2^β is a good rate function. If R has a lognormal type distribution, the rate function is constant everywhere except at the origin and hence not good.

Proof of Theorem 3.1
The proof of the main result in this section relies on the linearity of the function Λ and the contraction principle for large deviation principles.
Proof of Theorem 3.1: The linear transformation Λ maps the unit sphere S^{d−1} to Ω. By the linearity of Λ, it holds that Λ(S_n) = Σ_{i=1}^n Λ(X_i), so the mapping of the random walk is equal to the sum of the mappings of the increments.
The claim follows then from the contraction principle, see, for instance, Theorem 4.2.1 in [4]. Applying the contraction principle with the continuous function Λ^{−1} yields the stated rate function, which completes the proof.

Introduction and setting
An insurance company with d lines of business might optimise the asymptotic behaviour of its ruin probability by sharing the risks of some lines of business using quota share reinsurance contracts. Quota share reinsurance is a proportional reinsurance [6], where the insurer (the ceding company) pays only a fixed ratio of each claim while the reinsurance company pays the rest. In general contracts, the insurance company shares both the losses and the profits with the reinsurer. A diagonal d × d matrix Q can be used to represent a quota share reinsurance strategy for an insurance company with d lines of business. The element q_{k,k} then refers to the quota share ratio of the kth line of business, i.e. the ceding company pays q_{k,k}X^k of the kth line of business and the reinsurance company pays the remaining part (1 − q_{k,k})X^k. It is natural to assume that q_{k,k} ∈ (0, 1] for all indices k = 1, . . . , d, since typically the insurance company keeps some share of every line of business. Under this assumption, the matrix Q is invertible. For the ceding company, the aim could be to find a quota share reinsurance strategy defined by a matrix Q such that

lim_{n→∞} log(P(||S_n/n||_2 > a))/h(n) > lim_{n→∞} log(P(||QS_n/n||_2 > a))/h(n),    (4.1)

because it reduces the asymptotic size of the ruin probability P(||S_n/n||_2 > a). We look at quota share reinsurance from two different perspectives. In Subsection 4.2, we model the optimal quota share reinsurance strategy from the point of view of the ceding company, which sets reinsurance only for the lines of business with the highest risks. Subsection 4.3 compares different quota share risk sharing strategies from the viewpoint of a reinsurance company that wants to offer quota share reinsurance while minimising its own risks.
We model the risk process of an insurance company with d lines of business as a d-dimensional random walk (S_n). Each component of the increment X represents the difference between the claim size and the associated net premium in the corresponding line of business. We assume that X is asymptotically elliptically distributed: let X = RΘ, where R and Θ fulfil assumptions (A1) and (A2') and Ω is an ellipse or ellipsoid defined by the bijection Λ : R^d → R^d. If there are several reinsurance strategies that yield the same right-hand side of Inequality (4.1), the insurance company chooses the one with the smallest premium. In this setting, we define the premium of the reinsurance strategy Q as

p(Q) := ⟨p, (I − Q)1⟩,    (4.2)

where p is a positive premium vector, 1 denotes the d-dimensional vector of ones and I the d × d identity matrix. The positive premium vector p contains the premium rates of the reinsurances for each line of business when the entire component is reinsured. For example, p^j could be connected to the expected loss of the reinsurance company plus a safety loading. In (4.2), the constants p^j are multiplied by the factors (1 − q_{j,j}), where the q-coefficients can be selected by the ceding company. The premium vector is considered as given and the ceding company cannot affect the values of this vector. Therefore, (4.2) is the premium for the entire reinsurance strategy. If the insurance company does not take reinsurance for the jth line of business, it selects q_{j,j} = 1. Hence, the premium for the reinsurance of the jth line of business is zero in this case.
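Since the premium multiplies each rate p^j by the ceded share (1 − q_{j,j}), it reduces to a simple inner product. A minimal sketch with hypothetical rates and ratios (Python):

```python
import numpy as np

def premium(Q, p):
    """Premium of a quota share strategy Q: the rates p_j weighted by the
    ceded shares (1 - q_jj), i.e. <p, (I - Q) 1> (assumed form of (4.2))."""
    d = Q.shape[0]
    return float(p @ ((np.eye(d) - Q) @ np.ones(d)))

p = np.array([10.0, 20.0, 30.0])     # hypothetical premium rates per line
Q = np.diag([1.0, 0.5, 0.25])        # q_11 = 1: no reinsurance for line 1
print(premium(Q, p))                 # 10*0 + 20*0.5 + 30*0.75 = 32.5
```

As in the text, q_{j,j} = 1 makes the jth summand vanish, so untouched lines of business cost nothing.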

Quota share reinsurance strategy of the ceding company
The aim of the insurance company is to identify the riskiest lines of business and choose a quota share reinsurance strategy reducing these risks. The ceding company typically only wants to reinsure the highest risks. This is why it is natural to assume q_{k,k} = 1 for at least one index k, which represents the line of business with the lowest asymptotic risk.
Since quota share reinsurance is defined component-by-component, one can find an optimal reinsurance strategy for distributions on ellipsoids orientated along the axes. The general case can be mathematically reduced to this setting by rotating the original data. However, if the data is transformed using a rotation, suitable QS contracts might not be immediately available on the market because the new axes would not correspond to the lines of business.

Theorem 4.2. Let X = RΘ, where R and Θ fulfil assumptions (A1) and (A2') and the ellipsoid Ω is defined by the linear transformation Λ(x) = Ax, where A is a diagonal d × d matrix with a_{k,k} > 0 for all k = 1, . . . , d. Furthermore, we assume that R follows a Weibull distribution with parameter β ∈ (0, 1). Then, the quota share reinsurance strategy defined by the matrix Q = (min_{j=1,...,d} a_{j,j}) A^{−1} yields Inequality (4.3) for any a > 0, and Q minimises the right-hand side of Inequality (4.3) over all strategies. The minimum is unique under the additional condition that p(Q) is also minimised.
Proof. The assumptions on R imply the large deviations principle for the reinsured process (QS_n), so the limit lim_{n→∞} log(P(||QS_n/n||_2 > a))/h(n) exists. Therefore, it is sufficient to show that Q is the matrix that maximises

inf_{||x||_2 ≥ a} ||A^{−1}Q^{−1}x||_2,    (4.4)

where Q ranges over the set Q of d × d diagonal matrices with q_{j,j} ∈ (0, 1] for all j = 1, . . . , d and q_{k,k} = 1 for at least one index k ∈ {1, . . . , d}. Without loss of generality, we assume min_{j=1,...,d} a_{j,j} = a_{1,1}. By the property ||cx||_2 = |c| ||x||_2, the infimum is always achieved at the boundary, so it suffices to consider ||x||_2 = a. The fact that inf_{||x||_2=a} ||A^{−1}Q^{−1}x||_2 = a/a_{1,1} follows directly from the definition of Q, since A^{−1}Q^{−1}x = x/a_{1,1} for every x. It remains to show that a matrix Q̃ that maximises (4.4) is of the form q̃_{k,k} ≤ a_{1,1}/a_{k,k} for all k = 1, . . . , d. In order for Q̃ to maximise (4.4), it has to hold that ||A^{−1}Q̃^{−1}x||_2 ≥ a/a_{1,1} for all x with ||x||_2 = a. Checking the inequality for a times the unit vectors, we get the condition q̃_{k,k} ≤ a_{1,1}/a_{k,k} for all k = 1, . . . , d. Due to the additional condition that q̃_{k,k} = 1 for at least one index, we need to set q̃_{1,1} = 1. Now, taking q̃_{1,1} = 1 and q̃_{k,k} < a_{1,1}/a_{k,k} for all k = 2, . . . , d yields inf_{||x||_2=a} ||A^{−1}Q̃^{−1}x||_2 = a/a_{1,1} as well. Comparing the premiums of the reinsurance strategies Q and Q̃, it is easy to obtain p(Q) < p(Q̃) due to the positivity of the vector p. Thus, Q is the quota share reinsurance strategy that maximises (4.4) with the lowest premium.

The optimal matrix Q transforms the set Ω to a d-dimensional ball. Hence, the probability that the reinsured risk process exceeds a threshold in a selected norm has the same asymptotics in all directions. Taking the reinsurance defined by the matrix Q reduces the risks of the riskier lines of business to the level of the least risky line of business.
Remark 4.4. If a_{k,k} = a_{1,1} for all k ∈ {1, . . . , d}, Ω describes a d-dimensional ball and quota share reinsurance does not improve the asymptotic behaviour of the logarithmic ruin probability.
If the ellipsoid is not orientated along the axes, it is not possible to find an optimal quota share reinsurance that is defined component-by-component and results in equal asymptotic behaviour in all directions. Depending on the orientation of the ellipsoid, it might still be possible to find a quota share reinsurance strategy defined component-by-component that reduces the risks in the riskiest directions and thus reduces the ruin probability.
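For a concrete, hypothetical diagonal A, the optimal strategy Q = (min_j a_{j,j}) A^{−1} of Theorem 4.2 can be computed and its ball-mapping property checked numerically (Python sketch):

```python
import numpy as np

# Hypothetical diagonal A: ellipsoid orientated along the axes
A = np.diag([1.0, 2.0, 4.0])

# Optimal quota share matrix from Theorem 4.2: Q = (min_j a_jj) * A^{-1}
Q = A.diagonal().min() * np.linalg.inv(A)
print(Q.diagonal())                  # retained shares per line of business

# Q A maps the unit sphere to a sphere again (here Q A = I),
# so the reinsured risk has the same asymptotics in all directions
u = np.array([0.6, 0.8, 0.0])        # a unit vector
print(np.linalg.norm(Q @ A @ u))
```

Note that the least risky line keeps q_{1,1} = 1 (no reinsurance) and riskier lines cede proportionally more, matching the discussion after the proof.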

Quota share risk sharing of the reinsurance company
A reinsurance company is interested in optimising the risk sharing portfolio such that

lim_{n→∞} log(P(||S_n/n||_2 > a))/h(n) > lim_{n→∞} log(P(||(I − Q)S_n/n||_2 > a))/h(n),

where Q is a positive, diagonal d × d matrix with 0 < q_{j,j} < 1 for all j = 1, . . . , d.
The condition q_{j,j} < 1 for all j = 1, . . . , d is due to the natural assumption that the reinsurer offers reinsurance for all lines of business. Compared with the quota share reinsurance strategy of the ceding company, the same situation leads to a square matrix of smaller dimension, since the insurance company wants an offer for reinsurance only in the riskiest lines of business and covers the risks of the line of business with the lowest risk itself. The reinsurance company collects the premium (1 − q_{j,j})p^j and covers the amount (1 − q_{j,j})X^j of the jth line of business. The following theorem states the optimising quota share strategy.

Theorem 4.5. Let X = RΘ as in Theorem 4.2 and let c > max_{i=1,...,d} 1/a_{i,i}. Then, the quota share risk sharing strategy defined by I − Q = c^{−1}A^{−1}, i.e. 1 − q_{j,j} = 1/(a_{j,j}c), satisfies Inequality (4.5).

Proof. For the jth unit vector e_j, it holds that ||A^{−1}(ae_j)||_2 = a/a_{j,j}, which results in the required bound. This proves Inequality (4.5).
As in Theorem 4.2, A is a diagonal matrix, which implies that the ellipsoid defining the distribution of X is orientated along the axes. The constant c determines the risk share of the reinsurance company and hence the amount that it reinsures. The quota share ratio of the reinsurer for the jth line of business is 1 − q_{j,j} = 1/(a_{j,j}c), so increasing c reduces the risks of the reinsurer. As in Subsection 4.2, the optimal risk sharing strategy of the reinsurance company includes a bigger ratio for the lines of business with smaller risks.
The ceding company as well as the reinsurance company optimise their risks by taking or offering reinsurance that maps their share of the initial ellipsoid to a ball. The optimising strategies yield the transformed matrices AQ and A(I − Q), where c > max_{i=1,...,d} 1/a_{i,i}, or equivalently 1/c < min_{i=1,...,d} a_{i,i}. The radius of the ball defined by AQ is 1/min_{i=1,...,d} a_{i,i} and therefore smaller than the radius of the ball defined by A(I − Q), which is √c. Increasing c increases the radius of the ball generated by A(I − Q). In general, a bigger radius implies smaller risks for the insurance or reinsurance company. Increasing c reduces the share of the reinsurance company and therefore also the risks of the reinsurance company. The minimum min_{i=1,...,d} a_{i,i} indicates the line of business with the lowest risk.
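Using the reinsurer's ratios 1 − q_{j,j} = 1/(a_{j,j}c) from the discussion above, a quick numerical check (Python, hypothetical A and c) confirms that the reinsurer's transformed share A(I − Q) is proportional to the identity, i.e. spherical:

```python
import numpy as np

# Hypothetical diagonal A; the reinsurer's ratios are 1 - q_jj = 1/(a_jj * c),
# i.e. I - Q = (1/c) * A^{-1}, which requires c > max_j 1/a_jj
A = np.diag([1.0, 2.0, 4.0])
c = 2.0                              # c > max_j 1/a_jj = 1
I_minus_Q = np.linalg.inv(A) / c
Q = np.eye(3) - I_minus_Q
print(Q.diagonal())                  # ceding company's retained shares

# A (I - Q) = (1/c) I: the reinsurer's share of the ellipsoid is spherical
print(np.allclose(A @ I_minus_Q, np.eye(3) / c))   # True
```

Increasing c shrinks every ceded share 1 − q_{j,j}, in line with the observation that a larger c reduces the reinsurer's risk.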

Conclusions
The assumptions of the main result require that the support of the random vectors is, asymptotically, the whole space. In particular, the components of the increments are asymptotically dependent. However, the studied model admits more flexibility than many typical models in the sense that it does not require uniformly distributed random vectors on ellipses. This makes it possible to derive asymptotics for a wide class of zero-mean random walks. The general case with non-zero expectation can be studied by centring the increments. The derived large deviations principle quantifies, asymptotically, the probabilities of rare events of such random walks, which enables further results in applications such as the presented QS optimisation method. For further research, it would be natural to ask how the set Ω can be deduced from observed data and whether the large deviations principle holds even if Ω is, for example, any star-shaped set.