Article

Optimal Deterministic Investment Strategies for Insurers

Nicole Bäuerle and Ulrich Rieder
1 Department of Mathematics, Karlsruhe Institute of Technology, D-76128 Karlsruhe, Germany
2 Department of Optimization and Operations Research, University of Ulm, D-89069 Ulm, Germany
* Author to whom correspondence should be addressed.
Risks 2013, 1(3), 101-118; https://doi.org/10.3390/risks1030101
Submission received: 30 September 2013 / Revised: 28 October 2013 / Accepted: 2 November 2013 / Published: 7 November 2013
(This article belongs to the Special Issue Application of Stochastic Processes in Insurance)

Abstract:
We consider an insurance company whose risk reserve is given by a Brownian motion with drift and which is able to invest the money into a Black–Scholes financial market. As optimization criteria, we treat mean-variance problems, problems with other risk measures, exponential utility and the probability of ruin. Following recent research, we assume that investment strategies have to be deterministic. This leads to deterministic control problems, which are quite easy to solve. Moreover, it turns out that there are some interesting links between the optimal investment strategies of these problems. Finally, we also show that this approach works in the Lévy process framework.

1. Introduction

Inspired by [1], we first consider a mean-variance problem for an insurance company, where the investment strategy has to be deterministic or, in other words, pre-determined at time zero. Mathematically, the strategy has to be F_0-measurable. We assume that the risk reserve is given by a Brownian motion with drift and allow investments into a Black–Scholes market with one bond and d risky assets. Investment strategies are determined by the amount of money that is invested in the assets. Such a model has been considered in [2] with one stock, but different optimization criteria, and in [3] with an emphasis on time-consistency. Here, we present, first, the solution of the classical mean-variance problem, where we optimize over adapted wealth-dependent investment strategies. The solution procedure uses a standard Hamilton–Jacobi–Bellman (HJB) approach and follows along established lines, like in [4,5]. More interestingly, in the second part, we consider the same problem with deterministic investment strategies. The authors in [1] motivate this approach by remarking that such a kind of investment strategy is easier to implement, communicate and compare to alternatives. These kinds of strategies are also partly used in defined contribution pension plans. We refer the reader to [6], where, among others, the performance of deterministic and dynamic investment strategies is compared. In [1], the authors consider a mean-variance problem with additional consumption, and their investment strategies are given in terms of the fraction of wealth invested in the single risky asset. We would like to add that our deterministic investment strategies are mathematically easier to obtain and that there are some interesting and surprising links to optimal investment strategies for other optimization criteria, as we will explain below. However, when we compare the densities of the final wealth obtained under the optimal deterministic and dynamic investment strategies for the mean-variance problem, we will see that there is quite some difference.
Mathematically, the mean-variance problem for the restricted class of strategies leads to a deterministic control problem directly, without the problem of facing the non-separability of the target function. In the classical adapted case, it is necessary to link the mean-variance problem to an auxiliary linear-quadratic problem first (see, e.g., [5,7,8]) denoted by Q P ( η ) in Section 3. This step is not necessary in the deterministic case. Moreover, we will also show that in this special model with deterministic strategies, the mean-variance optimal strategy is also optimal for an arbitrary mean-risk problem, where the variance is replaced by an arbitrary law-invariant and positive homogeneous risk measure for the deviation of the terminal wealth from the mean. This is mainly due to the fact that the terminal wealth under a deterministic investment strategy has a normal distribution with mean and variance depending on the strategy. This observation can also be used to solve the control problem for other optimization criteria, like, e.g., expected exponential utility or the probability of ruin. Surprisingly, it will turn out that the classical optimal investment strategy for a company with exponential utility (within the class of adapted strategies) is deterministic and coincides with the optimal deterministic strategy for the mean-variance problem. Finally, we also show that this approach works when the involved processes are Lévy processes. In order to explain our procedure, we restrict the presentation to the most important case, where the risk reserve process is given as in the Cramér–Lundberg model, i.e., the risk reserve process is a compound Poisson process. Since the jumps vanish under expectation, we can proceed in almost the same way. In the classical setting with adapted strategies, it is also possible to deal with Lévy processes; see, e.g., [9,10] for LQ- and mean-variance problems, [11] for the exponential utility or [12] for more general information. In [13], a reinsurance problem with a Lévy market has been considered, and it turned out that the optimal reinsurance strategy is deterministic in the larger class of adapted strategies already. For a recent interesting comparison of different approaches to solve continuous-time mean-variance problems, see [14]. The authors there also consider non-negativity constraints on terminal wealth.
In this paper, we do not deal with questions of the time-consistency of the optimal investment strategy. This seems to be a key point in recent research. We just point to the recent papers [3,15,16], where time-consistency is discussed. The deterministic investment strategies depend on time only and are consistent for the deterministic control problem.
The paper is organized as follows: In the next section, we introduce the insurance model and the mean-variance problem along with some standing assumptions. Then, we explain how to reduce the problem in general to a stochastic linear-quadratic problem. Next, we solve the problem within the classical framework of adapted, i.e., wealth-dependent investment strategies. In Section 5, we consider the mean-variance problem with deterministic investment strategies. We show how the problem is turned into a deterministic control problem and solve it. In a special case, we study the form of the densities of the optimal terminal wealth for the mean-variance problem under deterministic and adapted investment strategies. The next section is dedicated to more general mean-risk problems and other optimization criteria. We show that for deterministic investment strategies, the optimal one is insensitive to the choice of the risk measure, as long as it is law-invariant and positive homogeneous. Finally, in the last section, we deal with the Lévy process framework. We assume that the risk reserve process follows a compound Poisson process, like in the Cramér–Lundberg model.

2. The Model

We suppose that the risk reserve process, ( Y t ) , of the insurance company is given by the following stochastic differential equation:
\[ dY_t = \alpha\,dt + \beta\,d\tilde{W}_t \]
where W̃ = (W̃_t) is a Brownian motion and α, β are real constants with β ≠ 0; it is reasonable, though not mathematically necessary, to assume that α > 0. The initial capital is given by Y_0 = x_0 > 0. The risk reserve can be invested into a financial market, which is given by a riskless bond with price process (S_0(t)), where
\[ S_0(t) := e^{rt} \]
and r 0 denotes the deterministic interest rate. Further, there are d risky assets, and the price process ( S i ( t ) ) of asset i is given by the stochastic differential equation:
\[ dS_i(t) = S_i(t)\Big(b_i\,dt + \sum_{j=1}^{k} \sigma_{ij}\,dW_j(t)\Big) \]
with S_i(0) = 1. The process W = (W_t^1, …, W_t^k) is a k-dimensional Brownian motion, which may be correlated with the driving Brownian motion of the risk reserve process. More precisely, we assume that ⟨W̃, W^j⟩_t = ρ_j t for j = 1, …, k and ρ := (ρ_1, …, ρ_k). In what follows, we set b := (b_1, …, b_d) ∈ R^d and σ = (σ_ij) ∈ R_+^{d×k}. We assume that all processes are defined on a common probability space (Ω, F, P), that (F_t) is the filtration generated by all Brownian motions and that there is a final time horizon T > 0.
The insurance company is now allowed to invest the risk reserve into the financial market. A classical trading strategy π = (π_t) is an (F_t)-adapted stochastic process, where π_t = (π_1(t), …, π_d(t)) ∈ R^d and π_i(t) is the amount of money invested in stock i at time t. Note that short-selling is allowed and that the bond investment (π_0(t)) is given by the self-financing condition. Adaptedness means that we assume that the decision maker is able to observe all Brownian motions, and, thus, the risk reserve and the evolution of the financial market, and is able to react to it. Given a trading strategy, π, and the notation 1 := (1, …, 1) ∈ R^d, the corresponding wealth process of the insurance company follows the stochastic differential equation:
\[ dX_t^\pi = \big[r X_t^\pi + \alpha + (b - r\mathbf{1})^\top \pi_t\big]\,dt + \beta\,d\tilde{W}_t + \pi_t^\top \sigma\,dW_t, \qquad X_0^\pi = x_0 \]
In what follows, let us denote Σ := σσ^⊤, which we assume to be positive definite. Since the quadratic variation of (X_t^π) is given by:
\[ d\langle X^\pi\rangle_t = \big(\pi_t^\top \Sigma \pi_t + \beta^2 + 2\beta\,\pi_t^\top \sigma\rho\big)\,dt \]
the process (X_t^π) is equal in distribution to:
\[ dX_t^\pi = \big[r X_t^\pi + \alpha + (b - r\mathbf{1})^\top \pi_t\big]\,dt + \sqrt{\pi_t^\top \Sigma \pi_t + \beta^2 + 2\beta\,\pi_t^\top \sigma\rho}\;\, d\hat{W}_t \]
for a generic Brownian motion, Ŵ. The generator of the controlled Markov process (X_t^π) is, for v ∈ C^{1,2}, given by:
\[ \mathcal{A}^\pi v(t,x) = v_t + v_x\big[r x + \alpha + (b - r\mathbf{1})^\top \pi\big] + \tfrac{1}{2} v_{xx}\big(\pi^\top \Sigma \pi + \beta^2 + 2\beta\,\pi^\top \sigma\rho\big) \]
We call an investment strategy, π, admissible if all integrals in Equation (4) exist and E_{x_0}[(X_T^π)^2] < ∞. At first, we are interested in the dynamic mean-variance problem of the form (for μ ∈ R)
\[ (MV)\qquad \begin{aligned} &\operatorname{Var}_{x_0}[X_T^\pi] \to \min \\ &E_{x_0}[X_T^\pi] \ge \mu \\ &\pi \text{ is an admissible investment strategy} \end{aligned} \]
In the next section, we explain the standard way to transform this problem into a classical stochastic control problem, which will then be solved in the subsequent sections. In order to obtain non-trivial problems, we assume that:
\[ \mu > x_0 e^{rT} + \big(e^{rT} - 1\big)\Big(\frac{\alpha}{r} - \frac{\beta}{r}(b - r\mathbf{1})^\top \Sigma^{-1}\sigma\rho\Big) \]
We will discuss this condition later in a remark at the end of Section 5.
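To make the model concrete, the following minimal Python sketch (not part of the original paper) simulates one path of the wealth SDE above under a deterministic strategy with an Euler–Maruyama scheme. All numerical values, and the strategy pi_det itself, are hypothetical choices for illustration only.

```python
import numpy as np

# Minimal sketch: Euler-Maruyama simulation of
# dX_t = [r X_t + alpha + (b - r1)' pi_t] dt + beta dW~_t + pi_t' sigma dW_t
# under a deterministic strategy; all parameter values are assumptions.
rng = np.random.default_rng(0)

r, alpha, beta = 0.02, 1.0, 2.0           # hypothetical model parameters
b = np.array([0.15, 0.10])                # drifts of d = 2 risky assets
sigma = np.array([[0.20, 0.00],           # volatility matrix (d x k, k = 2)
                  [0.05, 0.25]])
rho = np.array([0.3, 0.0])                # correlations <W~, W^j>_t = rho_j t
x0, T, n = 10.0, 1.0, 1_000
dt = T / n

def pi_det(t):
    """A deterministic (F_0-measurable) strategy: amounts in the two assets."""
    return np.array([1.0, 0.5]) * np.exp(-r * (T - t))

X = x0
for i in range(n):
    t = i * dt
    dW = rng.normal(0.0, np.sqrt(dt), size=2)            # k-dim increments
    # build dW~ correlated with W so that <W~, W^j>_t = rho_j t
    resid = np.sqrt(max(1.0 - rho @ rho, 0.0))
    dWtilde = rho @ dW + resid * rng.normal(0.0, np.sqrt(dt))
    p = pi_det(t)
    drift = r * X + alpha + (b - r) @ p
    X += drift * dt + beta * dWtilde + p @ sigma @ dW

print("one simulated terminal wealth X_T:", X)
```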

3. Transformation of MV to an Ordinary Stochastic Control Problem

Problem (MV) can be solved via the well-known Lagrange multiplier technique. The discussion in this section follows [17], chapter 4.6. Let L x 0 ( π , λ ) be the Lagrange-function, i.e.:
\[ L_{x_0}(\pi,\lambda) := \operatorname{Var}_{x_0}[X_T^\pi] + 2\lambda\big(\mu - E_{x_0}[X_T^\pi]\big) \]
for π an admissible investment strategy and λ ≥ 0. As usual, (π*, λ*) with λ* ≥ 0 is called a saddle-point of the Lagrange-function, L_{x_0}(π, λ), if:
\[ \sup_{\lambda \ge 0} L_{x_0}(\pi^*, \lambda) = L_{x_0}(\pi^*, \lambda^*) = \inf_{\pi} L_{x_0}(\pi, \lambda^*) \]
Lemma 1 
Let ( π * , λ * ) be a saddle-point of L x 0 ( π , λ ) . Then, the value of (MV) is given by
\[ \inf_{\pi}\, \sup_{\lambda \ge 0} L_{x_0}(\pi, \lambda) = \sup_{\lambda \ge 0}\, \inf_{\pi} L_{x_0}(\pi, \lambda) = L_{x_0}(\pi^*, \lambda^*) \]
and π * is optimal for (MV).
Proof:
Obviously, the value of (MV) is equal to inf_π sup_{λ≥0} L_{x_0}(π, λ) and:
\[ \inf_{\pi}\, \sup_{\lambda \ge 0} L_{x_0}(\pi, \lambda) \;\ge\; \sup_{\lambda \ge 0}\, \inf_{\pi} L_{x_0}(\pi, \lambda). \]
For the reverse inequality, we obtain:
\[ \inf_{\pi}\, \sup_{\lambda \ge 0} L_{x_0}(\pi, \lambda) \;\le\; \sup_{\lambda \ge 0} L_{x_0}(\pi^*, \lambda) = L_{x_0}(\pi^*, \lambda^*) = \inf_{\pi} L_{x_0}(\pi, \lambda^*) \;\le\; \sup_{\lambda \ge 0}\, \inf_{\pi} L_{x_0}(\pi, \lambda) \]
and the first statement follows. Further, from the definition of a saddle-point, we obtain for all λ ≥ 0
\[ \lambda^*\big(\mu - E_{x_0}[X_T^{\pi^*}]\big) \;\ge\; \lambda\big(\mu - E_{x_0}[X_T^{\pi^*}]\big) \]
and, hence, E_{x_0}[X_T^{π*}] ≥ μ. Then, we conclude L_{x_0}(π*, λ*) = Var_{x_0}[X_T^{π*}], and π* is optimal for (MV). ☐
From Lemma 1, we see that it is sufficient to look for a saddle point ( π * , λ * ) of L x 0 ( π , λ ) . It is not difficult to see that the pair ( π * , λ * ) is a saddle-point if λ * > 0 and π * = π * ( λ * ) satisfy:
\[ \pi^* \text{ is optimal for } P(\lambda^*) \quad\text{and}\quad E_{x_0}[X_T^{\pi^*}] = \mu \]
Here, P ( λ ) denotes the so-called Lagrange-problem for the parameter λ > 0
\[ P(\lambda)\qquad \begin{aligned} &L_{x_0}(\pi, \lambda) \to \min \\ &\pi \text{ is an admissible investment strategy} \end{aligned} \]
Note that the problem P(λ) is not a standard stochastic control problem. We embed the problem, P(λ), into a tractable auxiliary problem, QP(η), that turns out to be a stochastic LQ-problem. For η ∈ R, define
\[ QP(\eta)\qquad \begin{aligned} &E_{x_0}\big[(X_T^\pi - \eta)^2\big] \to \min \\ &\pi \text{ is an admissible investment strategy} \end{aligned} \]
The following result shows the relationship between the problems P ( λ ) and Q P ( η ) .
Lemma 2 
If π * is optimal for P ( λ ) , then π * is optimal for Q P ( η ) with η : = E x 0 [ X T π * ] + λ .
Proof:
Suppose π * is not optimal for Q P ( η ) with η : = E x 0 [ X T π * ] + λ . Then, there exists an admissible π, such that:
\[ E_{x_0}[(X_T^\pi)^2] - 2\eta\, E_{x_0}[X_T^\pi] \;<\; E_{x_0}[(X_T^{\pi^*})^2] - 2\eta\, E_{x_0}[X_T^{\pi^*}] \]
Define the function U : R² → R by
\[ U(x,y) := y - x^2 + 2\lambda(\mu - x) \]
Then, U is concave and U(x, y) = L_{x_0}(π, λ) for x := E_{x_0}[X_T^π] and y := E_{x_0}[(X_T^π)^2]. Moreover, we set x* := E_{x_0}[X_T^{π*}] and y* := E_{x_0}[(X_T^{π*})^2]. The concavity of U implies:
\[ U(x,y) \;\le\; U(x^*,y^*) - 2(\lambda + x^*)(x - x^*) + y - y^* \;=\; U(x^*,y^*) - 2\eta(x - x^*) + y - y^* \;<\; U(x^*,y^*) \]
where the last inequality is due to our assumption y - 2 η x < y * - 2 η x * . Hence, π * is not optimal for P ( λ ) , leading to a contradiction. ☐
The implication of Lemma 2 is that any optimal solution of P ( λ ) (as long as it exists) can be found by solving problem Q P ( η ) . Indeed, if P ( λ ) has an optimal solution and if the optimal solution of Q P ( η ) is unique, it must be the optimal solution of P ( λ ) .

4. Solution of MV for a Classical Adapted Investor

We will first solve problem Q P ( η ) , which is a classical stochastic control problem with no running cost and terminal cost ( x - η ) 2 . Let us denote
\[ V(t,x) := \inf_{\pi} E_{t,x}\big[(X_T^\pi - \eta)^2\big] \]
where, as usual, E t , x is the conditional expectation given X t π = x . In view of the generator of the wealth process, the corresponding Hamilton–Jacobi–Bellman (HJB) equation reads (note that with a slight abuse of notation, we name the action again π):
\[ 0 = \inf_{\pi \in \mathbb{R}^d}\Big\{ v_t + v_x\big[r x + \alpha + (b - r\mathbf{1})^\top \pi\big] + \tfrac{1}{2} v_{xx}\big(\pi^\top \Sigma \pi + \beta^2 + 2\beta\,\pi^\top \sigma\rho\big) \Big\}, \qquad v(T,x) = (x - \eta)^2 \]
where we denote by v t and v x the partial derivatives. Since this is a standard LQ-problem, a solution of the HJB equation can easily be found by using the Ansatz v ( t , x ) = A ( t ) + B ( t ) x + C ( t ) x 2 . Plugging this form into the HJB equation yields:
\[ 0 = \inf_{\pi \in \mathbb{R}^d}\Big\{ \dot A(t) + \dot B(t)x + \dot C(t)x^2 + \big(B(t) + 2C(t)x\big)\big[r x + \alpha + (b - r\mathbf{1})^\top \pi\big] + C(t)\big(\pi^\top \Sigma \pi + \beta^2 + 2\beta\,\pi^\top \sigma\rho\big) \Big\} \]
where A ˙ ( t ) denotes the derivative w.r.t. time. The minimum point of this equation is given by:
\[ \pi^*(t,x) = -\Sigma^{-1}(b - r\mathbf{1})\Big(\frac{B(t)}{2C(t)} + x\Big) - \beta\,\Sigma^{-1}\sigma\rho \]
Inserting the minimum point into the HJB equation and collecting the terms without x, the terms with x and the terms with x 2 yields the following ordinary differential equations for B ( t ) , C ( t ) and A ( t ) :
\[ \begin{aligned} \dot C(t) &= -C(t)\big[2r - (b - r\mathbf{1})^\top \Sigma^{-1}(b - r\mathbf{1})\big] \\ \dot B(t) &= -B(t)\big[r - (b - r\mathbf{1})^\top \Sigma^{-1}(b - r\mathbf{1})\big] - 2C(t)\big[\alpha - \beta (b - r\mathbf{1})^\top \Sigma^{-1}\sigma\rho\big] \\ \dot A(t) &= -B(t)\big[\alpha - \beta (b - r\mathbf{1})^\top \Sigma^{-1}\sigma\rho\big] - C(t)\beta^2\big[1 - \rho^\top \sigma^\top \Sigma^{-1}\sigma\rho\big] + (b - r\mathbf{1})^\top \Sigma^{-1}(b - r\mathbf{1})\,\frac{B(t)^2}{4C(t)} \end{aligned} \]
with boundary conditions C(T) = 1, B(T) = −2η, A(T) = η². The differential equation for A(t) involves only B(t) and C(t) on the right-hand side. Since we are only interested in the optimal investment strategy, π*, the interesting quantity is h(t) := B(t)/C(t). For h(t), we obtain the differential equation
\[ \dot h(t) = \frac{\dot B(t)\,C(t) - B(t)\,\dot C(t)}{C^2(t)} = h(t)\,r - 2\delta r \]
where δ is defined by δr := α − β(b − r1)^⊤Σ^{-1}σρ, and the boundary condition is h(T) = −2η. A solution is given by
\[ h(t) = 2\delta - 2(\delta + \eta)\,e^{-r(T-t)} \]
Plugging this expression into Equation (10) yields:
\[ \pi^*(t,x) = -\Sigma^{-1}(b - r\mathbf{1})\big[\delta - (\delta + \eta)\,e^{-r(T-t)} + x\big] - \beta\,\Sigma^{-1}\sigma\rho \]
Altogether, we obtain the following result with a standard verification argument (cf., for example, [4,18]):
Theorem 3 
The value function of problem Q P ( η ) is given by V ( t , x ) = A ( t ) + B ( t ) x + C ( t ) x 2 with A , B , C being solutions of Equations (13), (12) and (11), respectively, and the optimal investment strategy ( π t * ) is determined via Equation (15) by π t * : = π * ( t , X t * ) , where ( X t * ) is the corresponding optimal wealth process solving Equation (5) with π * .
Finally, we want to solve problem (MV). Thus, we have to compute E x 0 [ X T * ] , the expected terminal wealth under the optimal strategy, π * , for Q P ( η ) . We obtain:
\[ E_{x_0}[X_t^*] = x_0 + \int_0^t \Big( r E_{x_0}[X_s^*] + \alpha + (b - r\mathbf{1})^\top E[\pi_s^*] \Big) ds = x_0 + \int_0^t \Big( r E_{x_0}[X_s^*] + \delta r - a\big[\delta - (\delta + \eta)e^{-r(T-s)} + E_{x_0}[X_s^*]\big] \Big) ds \]
with a := (b − r1)^⊤Σ^{-1}(b − r1). Thus, E_{x_0}[X_T^*] follows from solving the ordinary differential equation:
\[ \dot h(t) = h(t)(r - a) + \delta r - \delta a + a(\delta + \eta)\,e^{-r(T-t)}, \qquad h(0) = x_0 \]
and we get:
\[ E_{x_0}[X_T^*] = x_0\,e^{-T(a - r)} - \delta\,e^{-Ta}\big(1 - e^{rT}\big) + \eta\big(1 - e^{-Ta}\big) \]
From E_{x_0}[X_T^*] = μ and η = λ* + μ, we conclude:
\[ \lambda^* = \frac{e^{-Ta}}{1 - e^{-Ta}}\Big(\mu - x_0 e^{rT} - \delta\big(e^{rT} - 1\big)\Big) \]
which is positive, due to Equation (7). Hence, we obtain the following result:
Theorem 4 
The optimal investment strategy, π * , for problem (MV) is determined by Equation (15) with η = μ + λ * and λ * given by Equation (16).
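As a quick numerical illustration (not from the paper), the following Python sketch evaluates λ* from Equation (16) and the adapted feedback rule (15) for d = k = 1; all parameter values are assumptions chosen for illustration only.

```python
import numpy as np

# Hypothetical illustration of Theorem 4 (d = k = 1); parameter values assumed.
r, alpha, beta = 0.02, 1.0, 2.0
b, sigma, rho = 0.15, 0.20, 0.3
x0, mu, T = 10.0, 12.0, 1.0

Sigma = sigma**2
a = (b - r)**2 / Sigma                               # a = (b - r1)' Sigma^{-1} (b - r1)
delta = (alpha - beta * (b - r) * rho / sigma) / r   # delta*r = alpha - beta (b-r1)'Sigma^{-1} sigma rho

# Lagrange multiplier from Equation (16), then eta = mu + lambda*
lam_star = np.exp(-T * a) / (1 - np.exp(-T * a)) * (
    mu - x0 * np.exp(r * T) - delta * (np.exp(r * T) - 1))
eta = mu + lam_star

def pi_adapted(t, x):
    """Optimal adapted amount invested, Equation (15)."""
    return (-(b - r) / Sigma * (delta - (delta + eta) * np.exp(-r * (T - t)) + x)
            - beta * rho / sigma)

print("lambda* =", lam_star)
print("pi*(0, x0) =", pi_adapted(0.0, x0))
```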

5. MV Problem for an Investor with Deterministic Investment Strategies

In this section, we assume now that the investment strategy has to be pre-determined, i.e., that the process π is F_0-measurable, which means it is deterministic and only a function of time. Thus, the fund manager of the insurance company has to explain at t = 0 the investment strategy for the time horizon [0, T] without using further knowledge about the evolution of the processes. This seems, at least sometimes, to be more realistic than the adapted strategy of Equation (15). A similar situation has been considered in [1], where the authors motivate such a strategy by pension funds often being managed by time-dependent investment strategies only. Hence, we consider
\[ (MVD)\qquad \begin{aligned} &\operatorname{Var}_{x_0}[X_T^\pi] \to \min \\ &E_{x_0}[X_T^\pi] \ge \mu \\ &\pi \text{ is a deterministic investment strategy} \end{aligned} \]
This is now the same problem over a smaller class of investment strategies. We first consider the problem P^D(λ):
\[ P^D(\lambda)\qquad \begin{aligned} &\operatorname{Var}_{x_0}[X_T^\pi] + 2\lambda\big(\mu - E_{x_0}[X_T^\pi]\big) \to \min \\ &\pi \text{ is a deterministic investment strategy} \end{aligned} \]
Here, it is not necessary to consider the artificial problem QP(η). P^D(λ) can be transformed into a deterministic control problem as follows. To this end, note that the stochastic differential Equation (5) for the wealth can easily be solved. When we denote by X̃_t^π = X_t^π / S_0(t) the discounted wealth process, then we obtain:
\[ \tilde X_t^\pi = x_0 + \int_0^t e^{-rs}\big[\alpha + (b - r\mathbf{1})^\top \pi_s\big] ds + \int_0^t e^{-rs}\sqrt{\pi_s^\top \Sigma \pi_s + \beta^2 + 2\beta\,\pi_s^\top \sigma\rho}\;\, d\hat W_s \]
For a deterministic process, π, the second integral is obviously a true martingale, and we obtain:
\[ \begin{aligned} E_{x_0}[\tilde X_t^\pi] &= x_0 + \int_0^t e^{-rs}\big[\alpha + (b - r\mathbf{1})^\top \pi_s\big] ds =: x(t) \\ \operatorname{Var}_{x_0}[\tilde X_t^\pi] &= \int_0^t e^{-2rs}\big(\pi_s^\top \Sigma \pi_s + \beta^2 + 2\beta\,\pi_s^\top \sigma\rho\big) ds =: y(t) \end{aligned} \]
Note that x ( t ) and y ( t ) both depend on π. Thus, the target function of P D ( λ ) can be written as:
\[ \operatorname{Var}_{x_0}[X_T^\pi] + 2\lambda\big(\mu - E_{x_0}[X_T^\pi]\big) = e^{2rT}\operatorname{Var}_{x_0}[\tilde X_T^\pi] + 2\lambda\big(\mu - e^{rT}E_{x_0}[\tilde X_T^\pi]\big) = e^{2rT}y(T) + 2\lambda\big(\mu - e^{rT}x(T)\big) \]
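In particular, for any given deterministic strategy, this objective can already be evaluated by simple quadrature. A minimal Python sketch (not from the paper) with assumed parameters, d = k = 1 and an arbitrarily chosen hypothetical strategy:

```python
import numpy as np
from scipy.integrate import quad

# Sketch: evaluate x(T), y(T) and the P^D(lambda) objective for a hypothetical
# deterministic strategy pi(.); all parameter values are assumptions.
r, alpha, beta = 0.02, 1.0, 2.0
b, sigma, rho = 0.15, 0.20, 0.3
x0, mu, T, lam = 10.0, 12.0, 1.0, 1.0

pi = lambda s: 5.0 * np.exp(-r * (T - s))      # hypothetical deterministic strategy

x_T = x0 + quad(lambda s: np.exp(-r * s) * (alpha + (b - r) * pi(s)), 0, T)[0]
y_T = quad(lambda s: np.exp(-2 * r * s) *
           (pi(s)**2 * sigma**2 + beta**2 + 2 * beta * pi(s) * sigma * rho), 0, T)[0]

objective = np.exp(2 * r * T) * y_T + 2 * lam * (mu - np.exp(r * T) * x_T)
print("x(T) =", x_T, " y(T) =", y_T, " objective =", objective)
```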
The deterministic control problem is then:
\[ P^D(\lambda)\qquad \begin{aligned} & e^{2rT}y(T) + 2\lambda\big(\mu - e^{rT}x(T)\big) \to \min \\ & \dot x(t) = e^{-rt}\big[\alpha + (b - r\mathbf{1})^\top \pi_t\big] \\ & \dot y(t) = e^{-2rt}\big[\pi_t^\top \Sigma \pi_t + \beta^2 + 2\beta\,\pi_t^\top \sigma\rho\big] \\ & \pi_t \in \mathbb{R}^d \end{aligned} \]
The value function of this problem is:
\[ V(t,x,y) := \inf_{\pi}\Big\{ e^{2rT}y(T) + 2\lambda\big(\mu - e^{rT}x(T)\big) \Big\} \]
Obviously, the related HJB equation is:
\[ 0 = \inf_{\pi \in \mathbb{R}^d}\Big\{ v_t + v_x\,e^{-rt}\big[\alpha + (b - r\mathbf{1})^\top \pi\big] + v_y\,e^{-2rt}\big[\pi^\top \Sigma \pi + \beta^2 + 2\beta\,\pi^\top \sigma\rho\big] \Big\}, \qquad v(T,x,y) = e^{2rT}y + 2\lambda\big(\mu - e^{rT}x\big) \]
In order to find a solution, we now consider the Ansatz
\[ v(t,x,y) = e^{2rT}\big(y + g(t)\big) + 2\lambda\big[\mu - e^{rT}\big(x + f(t)\big)\big] \]
with f ( T ) = g ( T ) = 0 . Thus, we obtain:
\[ v_t = e^{2rT}\dot g(t) - 2\lambda e^{rT}\dot f(t), \qquad v_x = -2\lambda e^{rT}, \qquad v_y = e^{2rT} \]
The minimizer of Equation (18) is determined by:
\[ \pi_t^* = -\Sigma^{-1}(b - r\mathbf{1})\,\frac{v_x}{v_y}\,\frac{e^{rt}}{2} - \beta\,\Sigma^{-1}\sigma\rho = \Sigma^{-1}(b - r\mathbf{1})\,\lambda\,e^{-r(T-t)} - \beta\,\Sigma^{-1}\sigma\rho \]
Plugging this into the HJB Equation (18) yields:
\[ 0 = e^{2rT}\dot g(t) - 2\lambda e^{rT}\dot f(t) - 2\lambda e^{rT}\dot x^*(t) + e^{2rT}\dot y^*(t). \]
Note that this equation is satisfied when f ˙ ( t ) = - x ˙ * ( t ) and g ˙ ( t ) = - y ˙ * ( t ) ; thus:
\[ f(t) = \int_t^T e^{-rs}\big[\alpha + (b - r\mathbf{1})^\top \pi_s^*\big] ds, \qquad g(t) = \int_t^T e^{-2rs}\big[(\pi_s^*)^\top \Sigma \pi_s^* + \beta^2 + 2\beta\,(\pi_s^*)^\top \sigma\rho\big] ds \]
Note that under the control π * and corresponding state trajectories x * , y * , it holds that x * ( t ) + f ( t ) = E x 0 [ X ˜ T * ] and y * ( t ) + g ( t ) = Var x 0 [ X ˜ T * ] for all t [ 0 , T ] . We summarize our results in the following theorem. A verification is straightforward.
Theorem 5 
The value function of problem P D ( λ ) is given by
\[ V(t,x,y) = e^{2rT}\big(y + g(t)\big) + 2\lambda\big[\mu - e^{rT}\big(x + f(t)\big)\big] \]
with f , g being solutions of Equations (21) and (22), respectively. The optimal investment strategy ( π t * ) is given by Equation (20).
Finally, we solve problem (MVD). First note that:
\[ E_{x_0}[X_T^*] = e^{rT}x_0 + \delta\big(e^{rT} - 1\big) + a\lambda T \]
From E x 0 [ X T * ] = μ , we obtain:
\[ \lambda^* = (aT)^{-1}\Big(\mu - e^{rT}x_0 - \delta\big(e^{rT} - 1\big)\Big) \]
which is positive, due to condition Equation (7). Thus, we obtain the following result:
Theorem 6 
The optimal investment strategy, π * , for problem (MVD) is determined by Equation (20) with λ * given by Equation (23).
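For orientation, and not part of the original paper, here is a short Python sketch of Theorem 6 for d = k = 1, evaluating λ* from Equation (23) and the deterministic strategy (20); all parameter values are assumed.

```python
import numpy as np

# Hypothetical illustration of Theorem 6 (d = k = 1); parameter values assumed.
r, alpha, beta = 0.02, 1.0, 2.0
b, sigma, rho = 0.15, 0.20, 0.3
x0, mu, T = 10.0, 12.0, 1.0

Sigma = sigma**2
a = (b - r)**2 / Sigma
delta = (alpha - beta * (b - r) * rho / sigma) / r

# Lagrange multiplier from Equation (23)
lam_star = (mu - np.exp(r * T) * x0 - delta * (np.exp(r * T) - 1)) / (a * T)

def pi_det(t):
    """Optimal deterministic amount invested at time t, Equation (20)."""
    return (b - r) / Sigma * lam_star * np.exp(-r * (T - t)) - beta * rho / sigma

print("lambda* =", lam_star)
print("pi*_0 =", pi_det(0.0), "  pi*_T =", pi_det(T))
```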
Remark: In this setting, it is also easy to determine the strategy with the minimum achievable variance. In case the financial market is not perfectly correlated with the risk reserve, this minimal variance is not zero. For an arbitrary deterministic investment strategy, we obtain:
\[ \operatorname{Var}_{x_0}[X_T^\pi] = e^{2rT}\int_0^T e^{-2rs}\big(\pi_s^\top \Sigma \pi_s + \beta^2 + 2\beta\,\pi_s^\top \sigma\rho\big) ds \]
Minimizing this expression in π_s yields the minimum variance investment strategy π̂_t ≡ −βΣ^{-1}σρ with corresponding minimal variance
\[ \operatorname{Var}_{x_0}[\hat X_T] = \frac{1}{2r}\,\beta^2\big(1 - \rho^\top \sigma^\top \Sigma^{-1}\sigma\rho\big)\big(e^{2rT} - 1\big) \]
and expectation:
\[ E_{x_0}[\hat X_T] = x_0 e^{rT} + \delta\big(e^{rT} - 1\big) \]
Thus, in case μ ≤ x_0 e^{rT} + δ(e^{rT} − 1), problem (MVD) is trivial, because then, π̂ satisfies the constraint E_{x_0}[X̂_T] ≥ μ and yields the minimal possible variance. As a result, condition Equation (7) is reasonable.
Remark: Of course, for a given expected return of μ, when we minimize the variance over the smaller set of deterministic investment strategies, the variance will not be smaller than in the classical stochastic case. Indeed, when we suppose that α = β = 0, i.e., no additional insurance business, and d = k = 1, we obtain the density of the optimal terminal wealth for (MVD), as well as for (MV). From Section 5, we conclude that for (MVD), the optimal terminal wealth satisfies:
\[ X_T^* \sim \mathcal{N}\Big(\mu,\; \frac{(\mu - x_0 e^{rT})^2}{aT}\Big) \]
For problem (MV), an optimal terminal wealth can be derived from the Lagrangian approach and is given by (cf. Theorem 3.3 in [14]):
\[ X_T^* = \frac{1}{e^{aT} - 1}\Big[\mu\,e^{aT} - x_0 e^{rT} + \big(x_0 e^{rT} - \mu\big)L_T\Big] \]
where L T is the risk neutral density given by:
\[ L_T = \exp\Big(-\tfrac{1}{2}aT - \sqrt{a}\,W_T\Big) \]
In Figure 1, we plotted the two densities of the optimal terminal wealth for (MVD) and (MV) for the parameters x_0 = 10, μ = 12, r = 0.02, b = 0.15, σ = 0.2 and T = 1.
Figure 1. Densities of the optimal terminal wealth for (MVD) and (MV).
Obviously, for this time horizon ( T = 1 ), there is a great difference in the performance of the two strategies. It seems that deterministic strategies here only make sense for small time horizons.
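The comparison can be reproduced numerically. The following Python sketch (not part of the paper) evaluates both densities under the stated parameters, using the normal law above for (MVD) and a change of variables for the lognormal factor L_T in (MV); plotting is omitted.

```python
import numpy as np
from scipy.stats import norm, lognorm

# Sketch of the density comparison behind Figure 1 (alpha = beta = 0, d = k = 1).
x0, mu, r, b, sigma, T = 10.0, 12.0, 0.02, 0.15, 0.20, 1.0
a = (b - r)**2 / sigma**2

# (MVD): terminal wealth is N(mu, (mu - x0 e^{rT})^2 / (aT))
var_mvd = (mu - x0 * np.exp(r * T))**2 / (a * T)
f_mvd = lambda x: norm.pdf(x, loc=mu, scale=np.sqrt(var_mvd))

# (MV): X_T* = c1 + c2 L_T with log L_T ~ N(-aT/2, aT), so the density
# follows from the lognormal law by a change of variables.
c1 = (mu * np.exp(a * T) - x0 * np.exp(r * T)) / (np.exp(a * T) - 1)
c2 = (x0 * np.exp(r * T) - mu) / (np.exp(a * T) - 1)      # negative, so X_T* < c1
L = lognorm(s=np.sqrt(a * T), scale=np.exp(-a * T / 2))
f_mv = lambda x: L.pdf((x - c1) / c2) / abs(c2)

for x in np.linspace(5.0, 15.0, 5):
    print(f"x = {x:5.2f}   f_MVD = {f_mvd(x):.4f}   f_MV = {f_mv(x):.4f}")
```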

6. More General Mean-Risk Problems and Other Optimization Criteria

In this section, we will briefly discuss some other optimality criteria for the investment problem with deterministic investment strategies. Of course, when the solution of the classical stochastic control problem with adapted investment strategies yields an optimal strategy, which is itself deterministic, then this strategy is also optimal in the smaller class of deterministic strategies. A situation like this can occur when we consider the probability of ruin or the expected exponential utility as a target function. We discuss these cases below. However, we start this section with the observation that in the mean-variance framework, our optimal deterministic investment strategy is not only optimal w.r.t. minimizing the variance.

6.1. More General Mean-Risk Problems

The variance or standard deviation is, of course, just one way to measure risk. Suppose now that ϱ is an arbitrary, law invariant and positive homogeneous risk measure, i.e., ϱ ( λ X ) = λ ϱ ( X ) for all λ > 0 . We claim now that the problem
\[ (MRD)\qquad \begin{aligned} &\varrho\big(X_T^\pi - E_{x_0}[X_T^\pi]\big) \to \min \\ &E_{x_0}[X_T^\pi] \ge \mu \\ &\pi \text{ is a deterministic investment strategy} \end{aligned} \]
has the same solution as (MVD), which is obtained when we use the standard deviation ϱ(X) := √(Var[X]).
Theorem 7 
The optimal investment strategy for (MRD) coincides with the optimal investment strategy for (MVD).
Proof:
First note that in both cases, because of Var_{x_0}[X_T^π] = e^{2rT} Var_{x_0}[X̃_T^π] and the fact that ϱ(X_T^π − E_{x_0}[X_T^π]) = e^{rT} ϱ(X̃_T^π − E_{x_0}[X̃_T^π]), we can minimize the target function with X_T^π replaced by X̃_T^π and the side constraint E_{x_0}[X̃_T^π] ≥ μe^{−rT}. Now, due to Equation (17), we see that for deterministic investment strategies, X̃_T^π has a normal distribution, N(m_π, s_π²), with parameters:
\[ m_\pi = x_0 + \int_0^T e^{-rs}\big[\alpha + (b - r\mathbf{1})^\top \pi_s\big] ds, \qquad s_\pi^2 = \int_0^T e^{-2rs}\big[\pi_s^\top \Sigma \pi_s + \beta^2 + 2\beta\,\pi_s^\top \sigma\rho\big] ds \]
Hence, in distribution, X̃_T^π = m_π + s_π Z, where Z is a standard normal random variable. The optimization problem (MRD) can thus be written as:
\[ (MRD)\qquad \begin{aligned} &\varrho(s_\pi Z) \to \min \\ & m_\pi \ge \mu e^{-rT} \\ &\pi \text{ is a deterministic investment strategy} \end{aligned} \]
Since ϱ is positive homogeneous, we obtain ϱ ( s π Z ) = s π ϱ ( Z ) , which means that we have to minimize the standard deviation of X T π . Hence, the statement follows. ☐
As a consequence, the optimal investment strategy we obtained is very robust w.r.t. the choice of risk measure. Indeed, it does not depend on the precise risk measure, as long as we agree to take a law-invariant and positive homogeneous one. In fact, the result is also valid for a function, ϱ, which is positive homogeneous of any order.
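A small numerical illustration (not from the paper): for the centred terminal wealth s_π Z, two different law-invariant, positive homogeneous risk measures (standard deviation and expected shortfall) scale linearly in s_π, so they rank deterministic strategies identically. The values of s_π below are hypothetical.

```python
import numpy as np

# Sketch: rho(s_pi * Z) = s_pi * rho(Z) for law-invariant, positive homogeneous rho.
rng = np.random.default_rng(1)
Z = rng.standard_normal(1_000_000)

def sd(x):                        # standard deviation as risk measure
    return x.std()

def cvar(x, alpha=0.95):          # expected shortfall of the loss -x
    losses = -x
    q = np.quantile(losses, alpha)
    return losses[losses >= q].mean()

for s_pi in (0.5, 1.0, 2.0):      # hypothetical values of s_pi
    dev = s_pi * Z
    print(f"s_pi={s_pi}:  sd={sd(dev):.3f}  cvar={cvar(dev):.3f}  "
          f"cvar/cvar(Z)={cvar(dev)/cvar(Z):.3f}")   # ratio is approx. s_pi
```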

6.2. Maximizing Exponential Utility of Terminal Wealth

In this subsection, we consider the problem of maximizing E_{x_0}[−(1/γ) e^{−γ X_T^π}] with γ > 0. For the classical stochastic case and only one stock, this has been done in [2]. We now directly consider the multi-asset model in the framework of deterministic strategies, i.e., we consider:
\[ E_{x_0}\Big[-\tfrac{1}{\gamma} e^{-\gamma X_T^\pi}\Big] \to \max, \qquad \pi \text{ is a deterministic investment strategy} \]
It turns out that the solution of this problem is very easy. We know already for deterministic π that X T π has a normal distribution N ( m π , s π 2 ) with parameters:
\[ m_\pi = e^{rT}\Big( x_0 + \int_0^T e^{-rs}\big[\alpha + (b - r\mathbf{1})^\top \pi_s\big] ds \Big), \qquad s_\pi^2 = e^{2rT}\int_0^T e^{-2rs}\big[\pi_s^\top \Sigma \pi_s + \beta^2 + 2\beta\,\pi_s^\top \sigma\rho\big] ds \]
Hence, we can write X T π = m π + s π Z , and the target function thus reduces to:
\[ E_{x_0}\Big[-\tfrac{1}{\gamma} e^{-\gamma X_T^\pi}\Big] = -\tfrac{1}{\gamma}\, e^{-\gamma m_\pi}\, E_{x_0}\big[e^{-\gamma s_\pi Z}\big] = -\tfrac{1}{\gamma}\, e^{-\gamma m_\pi + \frac{1}{2}\gamma^2 s_\pi^2} \]
Obviously, the problem is equivalent to minimizing −(2/γ) m_π + s_π², and we end up with the following deterministic control problem:
\[ \begin{aligned} & e^{2rT}y(T) - \tfrac{2}{\gamma}\, e^{rT}x(T) \to \min \\ & \dot x(t) = e^{-rt}\big[\alpha + (b - r\mathbf{1})^\top \pi_t\big] \\ & \dot y(t) = e^{-2rt}\big[\pi_t^\top \Sigma \pi_t + \beta^2 + 2\beta\,\pi_t^\top \sigma\rho\big] \\ & \pi_t \in \mathbb{R}^d \end{aligned} \]
However, this is equivalent to problem P^D(λ) with λ = 1/γ, and we know from Equation (20) that the optimal investment strategy is given by:
\[ \pi_t^* = \Sigma^{-1}(b - r\mathbf{1})\,\tfrac{1}{\gamma}\,e^{-r(T-t)} - \beta\,\Sigma^{-1}\sigma\rho \]
Thus, due to Equation (23), there is a one-to-one relation between optimal mean-variance strategies and optimal strategies for the problem with the exponential utility function. For an early discussion about the relation of expected utility and mean-variance, see, e.g., [19,20]. Note that it can be shown that the optimal investment strategy we have computed here is also optimal within the larger class of adapted strategies.
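As a sanity check (not from the paper, assumed parameters, d = k = 1), minimizing the reduced objective −(2/γ)m_π + s_π² numerically over piecewise-constant deterministic strategies recovers the closed form above.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch: numerical minimization of -(2/gamma) m_pi + s_pi^2 (up to additive
# constants) over a time grid; parameter values are assumptions.
r, alpha, beta = 0.02, 1.0, 2.0
b, sigma, rho = 0.15, 0.20, 0.3
T, gamma, n = 1.0, 0.5, 20
dt = T / n
t = (np.arange(n) + 0.5) * dt                     # midpoints of the grid cells

def objective(pi):
    m = np.exp(r * T) * np.sum(np.exp(-r * t) * (alpha + (b - r) * pi)) * dt
    s2 = np.exp(2 * r * T) * np.sum(np.exp(-2 * r * t) *
            (pi**2 * sigma**2 + beta**2 + 2 * beta * pi * sigma * rho)) * dt
    return -(2.0 / gamma) * m + s2

res = minimize(objective, np.zeros(n))
pi_closed = (b - r) / sigma**2 * np.exp(-r * (T - t)) / gamma - beta * rho / sigma
print("max abs difference to the closed form:", np.max(np.abs(res.x - pi_closed)))
```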

6.3. Minimizing the Probability of Ruin

Another popular ‘risk measure’ in the actuarial sciences is the probability of ruin of a controlled risk reserve. When we consider the classical situation of ( F t ) -adapted investment strategies, it is very easy to find the one which minimizes the probability of ruin of process Equation (5). According to [21], the optimal feedback function, π * ( x ) , is obtained by maximizing the ratio of mean over variance of the process, i.e.:
\[ \frac{r x + \alpha + (b - r\mathbf{1})^\top \pi}{\pi^\top \Sigma \pi + \beta^2 + 2\beta\,\pi^\top \sigma\rho} \]
Obviously, when r = 0, the maximizer, π*, is independent of x and deterministic. If, further, d = k = 1 and α = 0, then π* = β/σ.
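This can be checked numerically; the following sketch (assumed parameter values, not from the paper) maximizes the ratio for r = 0, α = 0, d = k = 1 and recovers π* = β/σ, independently of ρ.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Sketch: with r = 0, alpha = 0, d = k = 1, maximize
# b*pi / (pi^2 sigma^2 + beta^2 + 2 beta pi sigma rho); parameter values assumed.
b, sigma, beta, rho = 0.15, 0.20, 2.0, 0.3

ratio = lambda pi: b * pi / (pi**2 * sigma**2 + beta**2 + 2 * beta * pi * sigma * rho)
res = minimize_scalar(lambda pi: -ratio(pi), bounds=(0.0, 100.0), method="bounded")
print("numerical maximizer:", res.x, "   beta/sigma:", beta / sigma)
```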

7. Problems with Lévy Processes

The standard model for the risk reserve process of an insurance company is the so-called Cramér–Lundberg model. It assumes that the risk reserve process follows a Lévy process given as the difference of the premium income process and the claims that have been paid out so far. More precisely, it is usually assumed that:
\[ Y_t = x_0 + ct - \sum_{k=1}^{N_t} U_k \]
where c > 0 is the premium income rate, N = (N_t) is a Poisson process with parameter ν > 0, which counts the number of claims, and U_1, U_2, … are independent and identically distributed random variables, representing the claim sizes. We denote m := E[U] and m_2 := E[U²]. The process in Equation (1) can be seen as a diffusion approximation of process Equation (24) when claims are small and frequent. The mean-variance problems we have considered in Section 4 and Section 5 can be dealt with in a Lévy framework along the same lines. The solution of the classical problem (MV) may be derived from [10]. Here, we concentrate on the problem (MVD) with deterministic investment strategies. We assume that:
\[ \mu > x_0 e^{rT} + \big(e^{rT} - 1\big)\,\frac{c - \nu m}{r} \]
In order to have an elegant notation, we write (Y_t) with the help of its Poisson random measure M([0, t] × B), t ≥ 0, B ∈ B(R_+), which is the sum of all claims taking values in the set B up to time t. Hence, we can write:
\[ Y_t = x_0 + ct - \int_{[0,t]\times \mathbb{R}_+} y\, M(ds, dy) \]
For simplicity, we leave the financial market as in the sections before, though one might also allow for jumps there. We again assume here that admissible trading strategies π = ( π t ) are F 0 -measurable. The corresponding wealth of the insurance company follows the stochastic differential equation:
\[ dX_t^\pi = \big[r X_t^\pi + (b - r\mathbf{1})^\top \pi_t\big]\,dt + dY_t + \pi_t^\top \sigma\, dW_t, \qquad X_0^\pi = x_0 \]
We consider again the deterministic mean-variance problem (MVD) of Section 5 and, as in Section 5, start with problem P^D(λ). First, we compute E_{x_0}[X_t^π]. To this end, note that Y_t − (c − νm)t is a martingale. Thus, we obtain:
\[ E_{x_0}[X_t^\pi] = x_0 + \int_0^t \big( r E_{x_0}[X_s^\pi] + (b - r\mathbf{1})^\top \pi_s \big) ds + (c - \nu m)t \]
This is an ordinary differential equation for f ( t ) : = E x 0 [ X t π ] of the form:
\[ \dot f(t) = r f(t) + (b - r\mathbf{1})^\top \pi_t + c - \nu m \]
with boundary condition f ( 0 ) = x 0 . The solution is given by:
\[ f(t)\,e^{-rt} = x_0 + \int_0^t e^{-rs}\big[(b - r\mathbf{1})^\top \pi_s + c - \nu m\big] ds =: x(t) \]
In order to compute the variance, we need the second moment of X t π . Using partial integration, we get:
\[ \begin{aligned} (X_t^\pi)^2 &= x_0^2 + 2\int_0^t X_{s-}^\pi\, dX_s^\pi + [X^\pi, X^\pi](t) \\ &= x_0^2 + 2\int_0^t \big[ r (X_{s-}^\pi)^2 + X_{s-}^\pi (b - r\mathbf{1})^\top \pi_s \big] ds + 2\int_0^t X_{s-}^\pi\, \pi_s^\top \sigma\, dW_s + 2\int_0^t c\, X_{s-}^\pi\, ds \\ &\quad - 2\int_0^t\!\!\int_{\mathbb{R}_+} X_{s-}^\pi\, y\, M(ds, dy) + \int_0^t \pi_s^\top \Sigma \pi_s\, ds + \int_0^t\!\!\int_{\mathbb{R}_+} y^2\, M(ds, dy) \end{aligned} \]
Taking the expectation yields:
\[ E_{x_0}[(X_t^\pi)^2] = x_0^2 + 2\int_0^t \Big( r E_{x_0}[(X_{s-}^\pi)^2] + E_{x_0}[X_{s-}^\pi]\big[(b - r\mathbf{1})^\top \pi_s + c\big] \Big) ds + \int_0^t \pi_s^\top \Sigma \pi_s\, ds + m_2 \nu t - 2\int_0^t E_{x_0}[X_{s-}^\pi]\, \nu m\, ds \]
which is an ordinary differential equation for g ( t ) : = E x 0 [ ( X t π ) 2 ] of the form
\[ \dot g(t) = 2r g(t) + 2 f(t)\big[(b - r\mathbf{1})^\top \pi_t + c - \nu m\big] + \pi_t^\top \Sigma \pi_t + m_2 \nu \]
with the boundary condition given by g ( 0 ) = x 0 2 . When we define the variance as a function of time h ( t ) : = Var x 0 [ X t π ] = E x 0 [ ( X t π ) 2 ] - ( E x 0 [ X t π ] ) 2 = g ( t ) - f 2 ( t ) , it follows:
\[ \dot h(t) = \dot g(t) - 2 f(t) \dot f(t) = 2r h(t) + \pi_t^\top \Sigma \pi_t + m_2 \nu \]
with boundary condition h ( 0 ) = 0 . Thus, we get:
\[ h(t)\,e^{-2rt} = \int_0^t e^{-2rs}\big[\pi_s^\top \Sigma \pi_s + m_2 \nu\big] ds =: y(t) \]
The target function of P D ( λ ) can be written as:
\[ \operatorname{Var}_{x_0}[X_T^\pi] + 2\lambda\big(\mu - E_{x_0}[X_T^\pi]\big) = e^{2rT}y(T) + 2\lambda\big(\mu - e^{rT}x(T)\big) \]
and we obtain the deterministic control problem:
\[ P^D(\lambda)\qquad \begin{aligned} & e^{2rT}y(T) + 2\lambda\big(\mu - e^{rT}x(T)\big) \to \min \\ & \dot x(t) = e^{-rt}\big[c - \nu m + (b - r\mathbf{1})^\top \pi_t\big] \\ & \dot y(t) = e^{-2rt}\big[\pi_t^\top \Sigma \pi_t + m_2 \nu\big] \\ & \pi_t \in \mathbb{R}^d \end{aligned} \]
which is the same as in Section 5, where we have to set ρ := (0, …, 0), replace α by c − νm and β² by m_2 ν. We use, again, the Ansatz:
\[ v(t,x,y) = e^{2rT}\big(y + g(t)\big) + 2\lambda\big[\mu - e^{rT}\big(x + f(t)\big)\big] \]
The minimizer of the HJB Equation (18) is determined by:
\[ \pi_t^* = \Sigma^{-1}(b - r\mathbf{1})\,\lambda\,e^{-r(T-t)} \]
and the value function is given by Equation (28) with:
\[ f(t) = \int_t^T e^{-rs}\big[c - \nu m + (b - r\mathbf{1})^\top \pi_s^*\big] ds, \qquad g(t) = \int_t^T e^{-2rs}\big[(\pi_s^*)^\top \Sigma \pi_s^* + m_2 \nu\big] ds \]
We summarize our results in the following theorem. A verification is straightforward.
Theorem 8 
The value function of problem P D ( λ ) is given by
\[ V(t,x,y) = e^{2rT}\big(y + g(t)\big) + 2\lambda\big[\mu - e^{rT}\big(x + f(t)\big)\big] \]
with f , g being solutions of Equations (30) and (31), respectively. The optimal investment strategy ( π t * ) is given by Equation (29).
Finally, we solve the problem (MVD). Note that:
\[ E_{x_0}[X_T^*] = e^{rT}x_0 + a\lambda T + \frac{c - \nu m}{r}\big(e^{rT} - 1\big) \]
From E x 0 [ X T * ] = μ , we obtain:
\[ \lambda^* = (aT)^{-1}\Big(\mu - e^{rT}x_0 + \frac{c - \nu m}{r}\big(1 - e^{rT}\big)\Big) \]
which is positive, due to condition Equation (25). Thus, we obtain the following result:
Theorem 9 
The optimal investment strategy, π * , for problem (MVD) is determined by Equation (29) with λ * given by Equation (32).
As a result, we see that the optimal control depends only on the drift of the risk reserve (here, c - ν m ), and it is not important whether the process has jumps or not.
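To illustrate this dependence on the drift only (not part of the original paper), the following Python sketch evaluates λ* from Equation (32) and the deterministic strategy (29) in the Cramér–Lundberg setting for d = k = 1; all numerical values are assumptions.

```python
import numpy as np

# Hypothetical illustration of Theorem 9 (d = k = 1); parameter values assumed.
r, c, nu, m = 0.02, 3.0, 2.0, 1.2        # premium rate, claim intensity, mean claim
b, sigma = 0.15, 0.20
x0, mu, T = 10.0, 12.0, 1.0

Sigma = sigma**2
a = (b - r)**2 / Sigma

# Lagrange multiplier from Equation (32); only the drift c - nu*m enters
lam_star = (mu - np.exp(r * T) * x0 + (c - nu * m) / r * (1 - np.exp(r * T))) / (a * T)

def pi_det(t):
    """Optimal deterministic amount invested, Equation (29)."""
    return (b - r) / Sigma * lam_star * np.exp(-r * (T - t))

print("lambda* =", lam_star)
print("pi*_0 =", pi_det(0.0), "  pi*_T =", pi_det(T))
```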

8. Conclusions

We have shown that stochastic control problems with deterministic investment strategies lead to deterministic control problems that are, in general, easier to solve. In particular, in the case of a Brownian setting, the terminal wealth has a normal distribution under any admissible deterministic investment strategy. This leads to some very favorable properties, like the insensitivity of the optimal control w.r.t. a class of target functions. Moreover, there are some interesting links between these problems. Optimal deterministic investment strategies for mean-variance problems, for example, correspond to optimal investment strategies for an insurance company with exponential utility. Finally, we also show that the current approach works in the setting of Lévy processes.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. M. Christiansen, and M. Steffensen. “Deterministic mean-variance-optimal consumption and investment.” Stochastics 85 (2013): 620–636.
  2. S. Browne. “Optimal investment policies for a firm with a random risk process: Exponential utility and minimizing the probability of ruin.” Math. Oper. Res. 20 (1995): 937–958.
  3. Y. Zeng, and Z. Li. “Optimal time-consistent investment and reinsurance policies for mean-variance insurers.” Insur. Math. Econom. 49 (2011): 145–154.
  4. W.H. Fleming, and H.M. Soner. Controlled Markov Processes and Viscosity Solutions. New York, NY, USA: Springer, 2006.
  5. X.Y. Zhou, and D. Li. “Continuous-time mean-variance portfolio selection: A stochastic LQ framework.” Appl. Math. Optim. 42 (2000): 19–33.
  6. P. Antolin, S. Payet, and J. Yermo. “Assessing default investment strategies in defined contribution pension plans.” OECD J.: Financ. Mark. Trends 1 (2010): 1–29.
  7. R. Korn, and S. Trautmann. “Continuous-time portfolio optimization under terminal wealth constraints.” ZOR—Math. Methods Oper. Res. 42 (1995): 69–92.
  8. D. Li, and W.L. Ng. “Optimal dynamic portfolio selection: Multiperiod mean-variance formulation.” Math. Financ. 10 (2000): 387–406.
  9. Ł. Delong, and R. Gerrard. “Mean-variance portfolio selection for a non-life insurance company.” Math. Methods Oper. Res. 66 (2007): 339–367.
  10. W. Guo, and C. Xu. “Optimal portfolio selection when stock prices follow a jump-diffusion process.” Math. Methods Oper. Res. 60 (2004): 485–496.
  11. N. Bäuerle, and A. Blatter. “Optimal control and dependence modeling of insurance portfolios with Lévy dynamics.” Insur. Math. Econom. 48 (2011): 398–405.
  12. B. Øksendal, and A. Sulem. Applied Stochastic Control of Jump Diffusions. Berlin, Germany: Springer, 2007.
  13. N. Bäuerle. “Benchmark and mean-variance problems for insurers.” Math. Methods Oper. Res. 62 (2005): 159–165.
  14. O.S. Alp, and R. Korn. “Continuous-time mean-variance portfolios: A comparison.” Optimization 62 (2013): 961–973.
  15. S. Basak, and G. Chabakauri. “Dynamic mean-variance asset allocation.” Rev. Financ. 23 (2010): 2970–3016.
  16. E.M. Kryger, and M. Steffensen. “Some Solvable Portfolio Problems With Quadratic and Collective Objectives.” 2010. Available online at http://ssrn.com/abstract=1577265 (accessed on 1 October 2013).
  17. N. Bäuerle, and U. Rieder. Markov Decision Processes With Applications to Finance. Heidelberg, Germany: Springer, 2011.
  18. J. Yong, and X.Y. Zhou. Stochastic Controls. New York, NY, USA: Springer, 1999.
  19. G. Chamberlain. “A characterization of the distributions that imply mean variance utility functions.” JET 29 (1983): 185–201.
  20. H. Levy, and H.M. Markowitz. “Approximating expected utility by a function of mean and variance.” Am. Econ. Rev. 69 (1979): 308–317.
  21. N. Bäuerle, and E. Bayraktar. “A note on applications of stochastic ordering to control problems in insurance and finance.” Stochastics, 2013.
