Article

The Optimal Control of Government Stabilization Funds

by
Abel Cadenillas
1,* and
Ricardo Huamán-Aguilar
2
1
Department of Mathematical and Statistical Sciences, University of Alberta, Central Academic Building 639, Edmonton, AB T6G 2G1, Canada
2
Department of Economics, Pontificia Universidad Católica del Perú, Av. Universitaria 1801, San Miguel, Lima 32, Peru
*
Author to whom correspondence should be addressed.
Mathematics 2020, 8(11), 1975; https://doi.org/10.3390/math8111975
Submission received: 20 September 2020 / Revised: 31 October 2020 / Accepted: 3 November 2020 / Published: 6 November 2020
(This article belongs to the Special Issue Stochastic Optimization Methods in Economics, Finance and Insurance)

Abstract

We study the optimal control of a government stabilization fund, which is a mechanism to save money during good economic times to be used in bad economic times. The objective of the fund manager is to keep the fund as close as possible to a predetermined target. Accordingly, we consider a running cost associated with the difference between the actual fiscal fund and the fund target. The fund manager exerts control over the fund by making deposits in or withdrawals from the fund. The withdrawals are used to pay public debt or to finance government programs. We obtain, for the first time in the literature, the optimal band for the government stabilization fund. Our results are of interest to practitioners. For instance, we find that the higher the volatility, the larger the size of the optimal band. In particular, each country and state should have its own optimal fund band, in contrast to the “one-size-fits-all” approach that is often used in practice.

1. Introduction

The most recent fiscal crises have led to greater interest in countercyclical fiscal policies that mitigate the negative consequences of a crisis. This is the reason why many governments have created stabilization funds, a mechanism of fiscal policy to save money during good economic times to be used in bad economic times. In many circumstances, this is the best way to handle fiscal disruptions. Stabilization funds have been implemented in many countries around the world: to mention a few, the Norway Oil Fund (also known as the Government Pension Fund), the Australian National Welfare Fund, the State Oil Fund of Azerbaijan, the Iran Oil Stabilization Fund, and the Peru Fiscal Stabilization Fund. A similar tool, the Budget Stabilization Fund (BSF, or Rainy Day Fund), exists in most states of the USA.
Stabilization funds are especially important during the current coronavirus pandemic crisis. For instance, the U.S. Treasury is using its Exchange Stabilization Fund (ESF) to backstop new Federal Reserve lending programs. In general, stabilization funds can be used to protect against a variety of risks, such as climate risk, which is one of the objectives of green finance. A recent reference on green finance is Popescu and Popescu [1].
Usually, the natural level of the stabilization fund is associated with the price of a commodity (such as oil, gas, copper and hydrocarbon) and/or with the annual budget surplus (or deficit). The stabilization funds we want to study have in common that the government makes deposits in and withdrawals from the fund following some discretionary approach (not necessarily optimal). Thus, the government intervenes to modify the natural level of the stabilization fund. For instance, the Stabilization Fund of the Russian Federation imposes a maximum level of 500 billion rubles for its stabilization fund. The amount that exceeds such maximum is withdrawn in order to pay foreign public debt or cover part of the Pension Funds’ deficit. A minimum level of zero is implicitly being considered. In this paper, we study the optimal interventions of a government to manage its stabilization fund.
The only theoretical literature on the control of government stabilization funds is the one related to Rainy Day Funds, such as Joyce [2], Navine and Navine [3], and Vasche and Williams [4]. However, they have not addressed the issue of the optimal band and size of such funds. Specifically, Joyce [2] states that his “article has not attempted to answer the question of exactly how large the rainy day balance should be ...”. Navine and Navine [3] suggest a fund size of 13% of the General Fund Revenue for the state of Ohio in the USA. According to Joyce [2], some states of the USA have operated under a rule of thumb of 5% (an example of the “one-size-fits-all” approach). In all the above cases, the qualitative conclusions are made without a theoretical framework and a mathematical optimality criterion. Motivated by these facts, we present the first theoretical model to study the optimal stabilization fund.
Since commodity-related stabilization funds account for more than 50% of sovereign wealth funds, we will focus on this type of government fund. Indeed, according to the most recent ranking published by the Sovereign Wealth Fund Institute [5], the total asset size of the 89 largest sovereign wealth funds worldwide as of September 2020 was about USD 7.834 trillion, of which at least 50% is oil and gas related.
We consider a government that wants to control the commodity-related stabilization fund by depositing money in and withdrawing money from the fund. Having in mind that some government funds depend heavily on the prices of some commodity products (such as oil, copper, and hydrocarbon), we model the fund dynamics without intervention of the government as a mean-reverting process (see, for example, Schwartz [6]). In addition, we assume that there is a fund target and that the fund manager wants to keep the fund as close as possible to it. The costs generated by the difference between the current fund and the fund target are modeled by a convex loss function. The government can intervene to modify the level of the fund by increasing (depositing) or decreasing (withdrawing) it. The interventions to increase the fund have costs. The goal of the government is to find the optimal control of dynamic interventions (decreasing or increasing the fund) that minimizes the expected total cost given by the loss function plus the cost of the interventions.
We model the control of a stabilization fund as a stochastic singular control problem. In this paper, the solution of the stochastic singular control problem is described in terms of a band [a, b], a closed bounded interval, which is the optimal fund band. The optimal singular control can be described, roughly speaking, as the intervention of the government to ensure that the stabilization fund stays within the band. The optimal band depends on all the underlying parameters of the model, including the cost of the interventions, the fund volatility, and the target level. Thus, we can analyze numerical examples to illustrate the effects of the parameters on the optimal band. We conclude, for example, that the cost of increasing the fund has an effect not only on the lower bound a of the optimal band but also on the upper bound b.
We have made three main contributions. First, as pointed out above, this is the first theoretical model to study the optimal stabilization fund. We have transformed an important macrofinance problem into a tractable mathematical problem. Second, we present the solution of a two-sided stochastic singular control problem with a mean-reverting process and an asymmetric running cost function (see further explanations in Section 2). Third, in terms of practical results, we obtain the optimal fund band analytically and, in particular, we challenge the “one-size-fits-all” approach that is often used in practice.
This paper is organized as follows. We present the model for government stabilization fund control in Section 2, which is a stochastic singular control problem. In Section 3, we present the verification theorem that states sufficient conditions for a fund control to be optimal. In Section 4, we find a candidate for solution of the problem and, then, we prove rigorously that this candidate is indeed the solution. In particular, Theorem 2 presents the solution to the government stabilization fund problem. In Section 5 we study the optimal times of the interventions. In Section 6, we perform an extensive economic analysis of the solution. We write the conclusions in Section 7.

2. The Stabilization Fund Model

Consider a complete probability space (Ω, F, P) endowed with a filtration {F_t} = {F_t, t ∈ [0, ∞)}, which is the P-augmentation of the filtration generated by a one-dimensional Brownian motion W.
Government stabilization funds are reported in nominal terms. Accordingly, in this paper, the state variable is the stabilization fund level expressed in monetary units. As pointed out in the Introduction, most government stabilization funds depend on the prices of some commodities (such as oil and gas), and those prices follow a mean-reverting process (see, for instance, Schwartz [6]). Thus, we assume that, without government intervention, the stabilization fund follows a mean-reverting process. Taking into account government interventions, the stabilization fund X = {X_t, t ∈ [0, ∞)} is an {F_t}-adapted stochastic process that follows the dynamics
X_t = x + ∫_0^t λ(θ − X_s) ds + ∫_0^t σ dW_s − U_t + L_t, (1)
where λ > 0 is the speed of mean reversion, θ ∈ R := (−∞, ∞) is the long-term mean of the process, and σ ∈ (0, ∞) is the volatility. We denote the initial stabilization fund of the country by x ∈ R. Sometimes, the government intervenes to withdraw money from the fund or to deposit money in the fund. The process U represents the cumulative withdrawals from, and L the cumulative deposits in, the stabilization fund.
Definition 1. (Control) Let L and U be two {F_t}-adapted, non-negative, and non-decreasing stochastic processes from [0, ∞) × Ω to [0, ∞), with sample paths that are left-continuous with right limits. The pair (L, U) is called a stochastic singular control. By convention, we set U_0 = L_0 = 0.
Remark 1.
We can consider a more general dynamic in which (1) is replaced by
X_t = x + ∫_0^t [μ + λ(θ − X_s)] ds + ∫_0^t σ dW_s − U_t + L_t, (2)
where μ ∈ (−∞, ∞). If λ = 0, then we have simpler dynamics, which have been studied in [7]. If λ > 0, then we can write X_t = x + ∫_0^t λ(θ̃ − X_s) ds + ∫_0^t σ dW_s − U_t + L_t, where θ̃ = θ + μ/λ. Thus, if λ > 0, then we recover Equation (1) but with different parameters. In other words, Equations (1) and (2) are equivalent.
Problem 1.
The government wants to select the control ( L , U ) that minimizes the functional J defined by
J(x; L, U) := E_x[ ∫_0^∞ e^{−δt} h(X_t) dt + ∫_0^∞ e^{−δt} k_L dL_t + ∫_0^∞ e^{−δt} k_U dU_t ]. (3)
Here, k_L ∈ (0, ∞) is the proportional cost of increasing the fund, k_U ∈ (0, ∞) is the proportional cost of withdrawals, δ ∈ (0, ∞) is the discount rate, and h is a cost function, which we assume to be non-negative and convex, with h(0) ≥ 0.
In the above functional J, the integral ∫_0^∞ e^{−δt} k_L dL_t represents the cumulative discounted cost associated with the specific and deliberate goal of increasing the level of the fund. Furthermore, k_L represents the marginal cost of those increments. This cost is generated by fiscal adjustments, which can take the form of raising taxes or reducing expenses. On the other hand, ∫_0^∞ e^{−δt} k_U dU_t is the cumulative discounted cost associated with reducing the fund (withdrawals).
The cost function h is non-negative and convex. We assume that
h(x) = (x − ρ)², (4)
where ρ > 0 is the fund target. The rationale of this form of h is as follows. The cost of having high stabilization funds (above the target) is that funds can be used in the present to pay public debt or to finance any other government program with high social and/or private returns. On the other hand, having stabilization funds that are too low (below the target) generates a cost associated with the risk that the government will need resources when the economy is going through bad economic times. The likelihood of this event increases when the actual level of the fund decreases. We do not assume that the fund target ρ and the long-term mean of the fund θ are equal. Indeed, the desired target ρ can be greater or less than the long-term mean θ. If ρ > θ, it is certainly a challenge for the government to steer the fund towards ρ. We note that h, defined in Equation (4), is symmetric around ρ. As pointed out in Remark 3, we can also solve Problem 1 under an asymmetric cost function h̃, given by Equation (32).
From a mathematical point of view, Problem 1 is a stochastic singular control problem. The theory of stochastic singular control has been studied, for instance, in Cadenillas and Haussmann [8], Fleming and Soner [9], and Karatzas [10,11]. Cadenillas et al. [12] solved a two-sided stochastic impulse control problem of a mean-reverting process, but they assumed fixed costs that do not exist in the government stabilization fund problem. The problem we solve in this paper cannot be derived as the limit of the solution of that impulse control problem because they do not present an explicit solution (in fact, an explicit solution does not exist for that problem). Harrison and Taksar [7] studied a two-sided stochastic singular control problem of a Brownian motion with drift, which is mathematically simpler than the above mean-reverting process, but did not present a full-blown algorithm to compute the boundary of the non-intervention region. Regarding the two-sided stochastic singular control problem of a mean-reverting process, more general settings than the present paper have been treated by Matomäki [13] and recently by Ferrari and Vargiolu [14]. We have solved the two-sided stochastic singular control problem of our paper not only for a symmetric but also for an asymmetric running cost function (see Remark 3 for details).
Since we want to minimize the functional J, we should consider only the fund controls ( L , U ) for which J is finite.
Definition 2.
[Admissible control] The pair (L, U) is called an admissible stochastic singular control if J(x; L, U) < ∞. For a fixed initial value of the fund x ∈ R, the set of all admissible controls is denoted by A(x).
Remark 2.
If a control ( L , U ) is admissible, then
lim_{t→∞} E_x[e^{−δt} X_t] = 0.
Proof. 
See Appendix A. □
To illustrate our framework, let us consider the example in which the fund manager never intervenes.
Example 1.
The control (L, U), with L_t ≡ U_t ≡ 0, represents the control in which the government never intervenes. Let Y = {Y_t; t ∈ [0, ∞)} be the corresponding fund process. Thus, Y is given by
Y_t = x + ∫_0^t λ(θ − Y_s) ds + ∫_0^t σ dW_s.
For each fixed t [ 0 , ) , the random variable Y t is normally distributed with expected value
E_x[Y_t] = e^{−λt} x + θ(1 − e^{−λt}), t ∈ [0, ∞),
and variance
Var_x[Y_t] = (σ²/(2λ))(1 − e^{−2λt}) ≤ σ²/(2λ), t ∈ [0, ∞).
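The mean and variance formulas above can be checked by simulating the uncontrolled dynamics directly. A minimal Python sketch (Euler–Maruyama discretization; the parameter values are illustrative and not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
x, theta, sigma, lam = 1.0, 2.0, 0.75, 0.10   # illustrative values
t_end, n_steps, n_paths = 1.0, 1000, 20000
dt = t_end / n_steps

# Euler-Maruyama for dY = lam (theta - Y) dt + sigma dW, vectorized over paths.
Y = np.full(n_paths, x, dtype=float)
for _ in range(n_steps):
    Y += lam * (theta - Y) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

# Closed-form mean and variance of Y_t stated above.
mean_true = np.exp(-lam * t_end) * x + theta * (1.0 - np.exp(-lam * t_end))
var_true = sigma**2 / (2 * lam) * (1.0 - np.exp(-2 * lam * t_end))
print(Y.mean(), mean_true, Y.var(), var_true)
```

The Monte Carlo estimates agree with the closed forms up to sampling and discretization error.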
Fubini’s theorem implies that, if the government never intervenes, then
J(x; 0, 0) = ∫_0^∞ e^{−δt} (E_x[Y_t²] − 2ρ E_x[Y_t] + ρ²) dt = ξ_0 + ξ_1 x + ξ_2 x² < ∞,
where the constants ξ 0 , ξ 1 and ξ 2 are given by
ξ_0 := [2θ²λ² − 2λθ(δ + 2λ)ρ + δ²ρ² + λ(2λρ² + σ²) + δ(3λρ² + σ²)] / [δ(δ + λ)(δ + 2λ)], (8)
ξ_1 := [2θλ − 2(δ + 2λ)ρ] / [(δ + λ)(δ + 2λ)], (9)
ξ_2 := 1 / (δ + 2λ) > 0. (10)
Hence, the control in which the government never intervenes is an example of an admissible control.
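As a consistency check on this computation, the quadratic ξ_0 + ξ_1 x + ξ_2 x² can be compared with a direct numerical integration of the discounted expected cost, using the analytic mean and variance of Y. A minimal Python sketch (parameter values borrowed from Example 2 in Section 6, purely for illustration):

```python
import math

# Parameter values borrowed from Example 2, for illustration.
rho, theta, sigma, lam, delta = 2.0, 2.0, 0.75, 0.10, 0.07

# Constants xi_0, xi_1, xi_2 of Equations (8)-(10).
xi2 = 1.0 / (delta + 2 * lam)
xi1 = (2 * theta * lam - 2 * (delta + 2 * lam) * rho) / ((delta + lam) * (delta + 2 * lam))
xi0 = (2 * theta**2 * lam**2 - 2 * lam * theta * (delta + 2 * lam) * rho
       + delta**2 * rho**2 + lam * (2 * lam * rho**2 + sigma**2)
       + delta * (3 * lam * rho**2 + sigma**2)) / (delta * (delta + lam) * (delta + 2 * lam))

def J_closed_form(x):
    """J(x; 0, 0) as the quadratic xi_0 + xi_1 x + xi_2 x^2."""
    return xi0 + xi1 * x + xi2 * x**2

def J_quadrature(x, T=400.0, n=200_000):
    """Trapezoidal integration of e^{-delta t} E_x[(Y_t - rho)^2] over [0, T],
    using the analytic mean and variance of the uncontrolled process Y."""
    h = T / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        m = math.exp(-lam * t) * x + theta * (1.0 - math.exp(-lam * t))
        v = sigma**2 / (2 * lam) * (1.0 - math.exp(-2 * lam * t))
        w = h / 2 if i in (0, n) else h
        total += w * math.exp(-delta * t) * (v + (m - rho)**2)
    return total

print(J_closed_form(2.0), J_quadrature(2.0))
```

The two computations agree to within the quadrature error, confirming the constants (8)–(10).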

3. The Value Function and a Verification Theorem

We define the value function V : R → R by
V(x) := inf_{(L,U) ∈ A(x)} J(x; L, U).
This represents the smallest cost that can be achieved when the initial fund is x, and we consider all the admissible controls A ( x ) . As such, it is a measure of government well-being.
We have formulated the stabilization fund problem as a stochastic singular control problem. We define Δ_U := {t ∈ [0, ∞) : U_t ≠ U_{t+}}, the set of times when the process U has a discontinuity. The set Δ_U is countable because U is nondecreasing and, hence, can jump only a countable number of times during [0, ∞). We will denote the discontinuous part of U by U^d, that is, U_t^d := Σ_{0 ≤ s < t, s ∈ Δ_U} (U_{s+} − U_s), which is finite on [0, t] because it is bounded above by U_t and below by U_0 = 0. The continuous part of U will be denoted by U^c, that is, U_t^c = U_t − U_t^d. Similarly, for the process L, we define Δ_L, L^d and L^c. Furthermore, we denote Δ := Δ_U ∪ Δ_L, the set in which U or L have a discontinuity.
Proposition 1.
The value function is non-negative and convex.
Proof. 
See Appendix B. □
Let ψ : R → R be a function. We define the operator L by
L ψ(x) := (1/2) σ² ψ''(x) + λ(θ − x) ψ'(x) − δ ψ(x).
For a function v : R → R, consider the Hamilton–Jacobi–Bellman (HJB) equation
for every x ∈ R: min{ L v(x) + h(x), k_L + v'(x), k_U − v'(x) } = 0. (12)
This equation is equivalent to the variational inequalities (see [15] for a classical reference):
L v(x) + h(x) ≥ 0, k_L + v'(x) ≥ 0, k_U − v'(x) ≥ 0, [L v(x) + h(x)] · [k_L + v'(x)] · [k_U − v'(x)] = 0.
We observe that a solution v of the HJB equation defines the regions C = C_v, Σ_1 = Σ_1^v, and Σ_2 = Σ_2^v as follows:
C := {x ∈ R : L v(x) + h(x) = 0 and k_L + v'(x) > 0 and k_U − v'(x) > 0};
Σ_1 := {x ∈ R : L v(x) + h(x) > 0 and k_L + v'(x) = 0 and k_U − v'(x) > 0};
Σ_2 := {x ∈ R : L v(x) + h(x) > 0 and k_L + v'(x) > 0 and k_U − v'(x) = 0}.
We note that C ∩ Σ_1 = ∅, C ∩ Σ_2 = ∅, and Σ_1 ∩ Σ_2 = ∅.
It is possible to construct a control process associated with v in the following manner.
Definition 3.
[Associated control] Suppose C is an open subset of (−∞, +∞). The stochastic singular control (L^v, U^v) is said to be associated with the function v of (12) if the process X^v, given by
(i) 
X_t^v = x + ∫_0^t λ(θ − X_s^v) ds + ∫_0^t σ dW_s − U_t^v + L_t^v, ∀ t ∈ [0, ∞), P-a.s.,
satisfies the following three conditions:
(ii) 
X_t^v ∈ C̄ (the closure of C), ∀ t ∈ (0, ∞), P-a.s.;
(iii) 
∫_0^∞ I{X_t^v ∈ C ∪ Σ_2} dL_t^v = 0, P-a.s.;
(iv) 
∫_0^∞ I{X_t^v ∈ C ∪ Σ_1} dU_t^v = 0, P-a.s.
Here, I A denotes the indicator function of the event A.
Next, we state a sufficient condition for a control to be optimal.
Theorem 1.
Let v : R → [0, ∞) be a twice continuously differentiable function that satisfies the Hamilton–Jacobi–Bellman Equation (12) for every x ∈ R. Suppose that there exist −∞ < a < b < ∞ such that the region C associated with v is C_v = (a, b). Then, for every (L, U) ∈ A(x):
v(x) ≤ J(x; L, U).
Furthermore, the stochastic control associated with v, ( L v , U v ) , satisfies
v ( x ) = J ( x ; L v , U v ) .
In other words, ( L ^ , U ^ ) = ( L v , U v ) is the optimal control and V = v is the value function for Problem 1.
Proof. 
See Appendix C and Appendix D. □

4. The Analytical Solution

At the beginning of this section, we are going to make conjectures to obtain a candidate for optimal control and a candidate for the value function. At the end of this section, we are going to apply Theorem 1 to prove rigorously that such conjectures are valid.
We want to find a function v and a control (L^v, U^v) that satisfy the conditions of Theorem 1. By condition (ii) of Definition 3, we note that (L^v, U^v) makes the corresponding controlled process X^v stay inside the closure of the region C at all times (except perhaps at time 0). Moreover, by condition (iii), the process L^v remains constant when X_t^v ∈ C ∪ Σ_2; it changes only if X_t^v ∈ Σ_1. Similarly, by condition (iv), the process U^v remains constant when X_t^v ∈ C ∪ Σ_1; it changes only when X_t^v ∈ Σ_2.
Based on the above observations, we conjecture that there exists a stabilization fund band [a, b] ⊂ (−∞, ∞) such that the government should intervene when the fund X ∈ (−∞, a] ∪ [b, ∞), and should not intervene when X ∈ (a, b). Accordingly, if v satisfies the Hamilton–Jacobi–Bellman equation, we will call C = (a, b) the continuation region and Σ = (−∞, a] ∪ [b, ∞) the intervention region. Thus, it is natural to define the stabilization fund band as follows.
Definition 4. (Fund band) Let v be a function that satisfies the HJB Equation (12), and C_v the corresponding continuation region. If C_v ≠ ∅, the stabilization fund band [a_v, b_v] is given by
a_v := inf{x ∈ R : v'(x) > −k_L} and b_v := sup{x ∈ R : v'(x) < k_U}.
Moreover, if v is equal to the value function V, then [ a V , b V ] is said to be the optimal stabilization fund band.
The Hamilton–Jacobi–Bellman Equation (12) in the continuation region C = ( a , b ) implies
(1/2) σ² v''(x) + λ(θ − x) v'(x) − δ v(x) = −h(x) = −(x − ρ)². (17)
In the intervention region we conjecture the following. If the initial fund x < a , then the optimal strategy is to jump to a. That is, we have
v(x) = v(a) + k_L (a − x) if x < a. (18)
On the other hand, if x > b , then the optimal strategy is to jump to b. Then, we have
v(x) = v(b) + k_U (x − b) if x > b. (19)
The general solution of (17)–(19) has the form
v(x) = D − k_L x if x ≤ a,
v(x) = A G(x) + B H(x) + ξ_0 + ξ_1 x + ξ_2 x² if a < x < b,
v(x) = F + k_U x if x ≥ b.
Here, A, B, D, and F are constants to be determined; the constants ξ 0 , ξ 1 and ξ 2 are given by (8)–(10); and the functions G and H are defined by
G(x) := Σ_{n=0}^∞ α_{2n} (x − θ)^{2n},
H(x) := Σ_{n=0}^∞ β_{2n+1} (x − θ)^{2n+1},
where the coefficients of the above two series are given by
α_{2n} := 1 if n = 0, and α_{2n} := (2/σ²)^n (1/(2n)!) ∏_{i=0}^{n−1} (2iλ + δ) if n ≥ 1,
and
β_{2n+1} := 1 if n = 0, and β_{2n+1} := (2/σ²)^n (1/(2n+1)!) ∏_{i=0}^{n−1} ((2i+1)λ + δ) if n ≥ 1.
We observe that the power series G converges absolutely in any open bounded interval of the form (θ − M, θ + M), where 0 < M < ∞. Likewise, the power series H also converges in any bounded interval. In addition, we also observe that G(θ) = 1, H(θ) = 0, G'(θ) = 0, and H'(θ) = 1.
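As a numerical sanity check (not part of the paper's argument), truncations of G and H can be verified to annihilate the homogeneous part of the ordinary differential equation above. A Python sketch with illustrative parameter values:

```python
import math

# Illustrative parameter values (those of Example 2 in Section 6).
sigma, lam, delta, theta = 0.75, 0.10, 0.07, 2.0
N = 40  # truncation order; factorial decay makes the tail negligible

def alpha(n):
    """Coefficient of (x - theta)^(2n) in the even series G."""
    if n == 0:
        return 1.0
    prod = 1.0
    for i in range(n):
        prod *= 2 * i * lam + delta
    return (2 / sigma**2) ** n / math.factorial(2 * n) * prod

def beta(n):
    """Coefficient of (x - theta)^(2n+1) in the odd series H."""
    if n == 0:
        return 1.0
    prod = 1.0
    for i in range(n):
        prod *= (2 * i + 1) * lam + delta
    return (2 / sigma**2) ** n / math.factorial(2 * n + 1) * prod

G_terms = [(alpha(n), 2 * n) for n in range(N)]
H_terms = [(beta(n), 2 * n + 1) for n in range(N)]

def series(terms, x, order=0):
    """Evaluate sum of c (x - theta)^k over (c, k) pairs, or its order-th derivative."""
    y = x - theta
    total = 0.0
    for c, k in terms:
        if k >= order:
            falling = math.prod(range(k - order + 1, k + 1))  # k (k-1) ... (k-order+1)
            total += c * falling * y ** (k - order)
    return total

def ode_residual(terms, x):
    """Residual of (1/2) sigma^2 psi'' + lam (theta - x) psi' - delta psi."""
    return (0.5 * sigma**2 * series(terms, x, 2)
            + lam * (theta - x) * series(terms, x, 1)
            - delta * series(terms, x, 0))

print(ode_residual(G_terms, 2.4), ode_residual(H_terms, 1.5))
```

Both residuals vanish to machine precision near θ, and the normalizations G(θ) = 1, H(θ) = 0, H'(θ) = 1 hold by construction.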
We also conjecture that v is twice continuously differentiable. Then, the six constants A, B, D, F, a, and b can be found from the following system of six equations:
φ(a) = D − k_L a; (25)
φ(b) = F + k_U b; (26)
φ'(b) = k_U; (27)
φ'(a) = −k_L; (28)
φ''(b) = 0; (29)
φ''(a) = 0; (30)
where
φ(x) = A G(x) + B H(x) + ξ_0 + ξ_1 x + ξ_2 x².
Applying the methods of Ferrari and Vargiolu [14], it is possible to prove that there exists a unique solution for the system (25)–(30). Solving the above system determines the candidate for value function v and the candidate for optimal stabilization fund band [ a , b ] . Moreover, the candidate for optimal fund management is the control associated with v given in Definition 3.
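A numerical sketch of this computation, assuming SciPy's fsolve. The reduction used here (first solving the two linear conditions φ''(a) = φ''(b) = 0 for A and B, then root-finding over (a, b) with the two remaining smooth-fit conditions on φ') is our restatement of the system, not the authors' algorithm:

```python
import math
import numpy as np
from scipy.optimize import fsolve

# Parameter values of Example 2, used here for illustration.
rho, theta, sigma, lam, delta = 2.0, 2.0, 0.75, 0.10, 0.07
kL, kU = 0.5, 1e-6
N = 40  # series truncation order

# Particular-solution constants xi_1 and xi_2.
xi2 = 1.0 / (delta + 2 * lam)
xi1 = (2 * theta * lam - 2 * (delta + 2 * lam) * rho) / ((delta + lam) * (delta + 2 * lam))

# Series coefficients alpha_{2n} and beta_{2n+1}, built recursively.
alpha, beta = [1.0], [1.0]
for n in range(1, N):
    alpha.append(alpha[-1] * (2 / sigma**2) * (2 * (n - 1) * lam + delta)
                 / ((2 * n) * (2 * n - 1)))
    beta.append(beta[-1] * (2 / sigma**2) * ((2 * n - 1) * lam + delta)
                / ((2 * n + 1) * (2 * n)))

def GH(x, order):
    """Order-th derivatives (G^(order)(x), H^(order)(x)) by termwise differentiation."""
    y = x - theta
    g = sum(alpha[n] * math.prod(range(2 * n - order + 1, 2 * n + 1)) * y ** (2 * n - order)
            for n in range(N) if 2 * n >= order)
    h = sum(beta[n] * math.prod(range(2 * n + 2 - order, 2 * n + 2)) * y ** (2 * n + 1 - order)
            for n in range(N) if 2 * n + 1 >= order)
    return g, h

def residuals(ab):
    a, b = ab
    # phi''(a) = phi''(b) = 0 is linear in (A, B): solve that 2x2 system first.
    ga2, ha2 = GH(a, 2)
    gb2, hb2 = GH(b, 2)
    A, B = np.linalg.solve([[ga2, ha2], [gb2, hb2]], [-2 * xi2, -2 * xi2])
    # Remaining smooth-fit conditions phi'(a) = -kL and phi'(b) = kU.
    ga1, ha1 = GH(a, 1)
    gb1, hb1 = GH(b, 1)
    return [A * ga1 + B * ha1 + xi1 + 2 * xi2 * a + kL,
            A * gb1 + B * hb1 + xi1 + 2 * xi2 * b - kU]

a_opt, b_opt = fsolve(residuals, [1.5, 2.5])
print("optimal band:", a_opt, b_opt)
```

With these inputs the solver should land close to the band reported in Example 2; the constants D and F then follow from the two value-matching conditions.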
To complete this section, we are going to prove that the above candidate for optimal fund control is indeed the optimal fund control, and the above candidate for value function is indeed the value function for the fund management problem.
Theorem 2.
Let A, B, a, b, D, and F, with −∞ < a < b < ∞, be the solution of the system of Equations (25)–(30). Let us consider the function v defined by
v(x) = D − k_L x if x ≤ a,
v(x) = A G(x) + B H(x) + ξ_0 + ξ_1 x + ξ_2 x² if a < x < b,
v(x) = F + k_U x if x ≥ b.
Then V = v is the value function for Problem 1, and the closed interval [ a , b ] is the optimal band for the government stabilization fund. Furthermore, the optimal stabilization fund control is the process ( L ^ , U ^ ) given by
(i) 
X̂_t = x + ∫_0^t λ(θ − X̂_s) ds + ∫_0^t σ dW_s − Û_t + L̂_t, ∀ t ∈ [0, ∞), P-a.s.,
(ii) 
X̂_t ∈ [a, b], ∀ t ∈ (0, ∞), P-a.s.,
(iii) 
∫_0^∞ I{X̂_t > a} dL̂_t = 0, P-a.s.,
(iv) 
∫_0^∞ I{X̂_t < b} dÛ_t = 0, P-a.s.
Here, X ^ denotes the fund process generated by the optimal stabilization fund control ( L ^ , U ^ ) .
Proof. 
See Appendix E and Appendix F. □
Remark 3.
We have obtained the explicit solution for the optimal fund band for the symmetric cost function h ( x ) = ( x ρ ) 2 . Inspired by Waud [16], who studied the policy implications of assuming an asymmetric cost function as opposed to a symmetric one, we have also considered an asymmetric cost function. Indeed, our methodology allows us to solve Problem 1 analytically for a variety of cost functions such as the asymmetric cost function h ˜ , given by
h̃(x) = e^{−νx} + e^{ηx}, (32)
where ν > 0 and η > 0. This function is positive and strictly convex, with a minimum at
ρ̃ = ln(ν/η) / (ν + η).
The meaning of the parameters ν and η is as follows. If ν > η, a given deviation below the level ρ̃ generates more disutility than a deviation above it of the same magnitude. The opposite holds when ν < η.
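A quick numerical check of the minimizer ρ̃ and of this asymmetry claim, with hypothetical values ν = 2 and η = 1:

```python
import math

nu, eta = 2.0, 1.0   # hypothetical asymmetry parameters with nu > eta

def h_tilde(x):
    """Asymmetric cost e^{-nu x} + e^{eta x}."""
    return math.exp(-nu * x) + math.exp(eta * x)

# Closed-form minimizer ln(nu / eta) / (nu + eta).
rho_tilde = math.log(nu / eta) / (nu + eta)
print(rho_tilde)
```

The first-order condition −ν e^{−ν ρ̃} + η e^{η ρ̃} = 0 holds at the closed form, and for ν > η a downward deviation is indeed costlier than an upward one of the same size.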
In Appendix H, we present the solution of Problem 1 under this asymmetric cost function. For numerical results, see Table 2 in Section 6.4.

5. Time to Increase or Decrease the Stabilization Fund

We study the time of the intervention under the optimal control obtained in Theorem 2. We consider the process Y given by the dynamics
Y_t = x + ∫_0^t λ(θ − Y_s) ds + ∫_0^t σ dW_s,
where λ > 0, θ ∈ R, and σ > 0 are constants, and W is a standard Brownian motion. For −∞ < a < x < b < ∞, we define the stopping time
τ(a, b) := inf{t ∈ (0, ∞) : Y_t ∉ (a, b)},
which represents the first time that the fund manager will intervene increasing or decreasing the stabilization fund. Accordingly, { Y τ ( a , b ) = a } describes the event in which the fund manager will increase the stabilization fund first (before decreasing it), and { Y τ ( a , b ) = b } is the event in which the fund manager will decrease the stabilization fund first (before increasing it).
To present the results of this section, we define the function
Erfid(x, y) := (2/√π) ∫_0^{x/√2} e^{v²} dv − (2/√π) ∫_0^{y/√2} e^{v²} dv.
Theorem 3.
We have
P_x{Y_{τ(a,b)} = a} = Erfid((b − θ)√(2λ)/σ, (x − θ)√(2λ)/σ) / Erfid((b − θ)√(2λ)/σ, (a − θ)√(2λ)/σ), (35)
P_x{Y_{τ(a,b)} = b} = Erfid((x − θ)√(2λ)/σ, (a − θ)√(2λ)/σ) / Erfid((b − θ)√(2λ)/σ, (a − θ)√(2λ)/σ), (36)
E_x[τ(a, b)] = f̃(x), (37)
E_x[τ(a, b) I{Y_{τ(a,b)} = b}] = g̃(x), (38)
E_x[τ(a, b) | {Y_{τ(a,b)} = a}] := E_x[τ(a, b) I{Y_{τ(a,b)} = a}] / P_x{Y_{τ(a,b)} = a} = (f̃(x) − g̃(x)) / P_x{Y_{τ(a,b)} = a}, (39)
E_x[τ(a, b) | {Y_{τ(a,b)} = b}] := E_x[τ(a, b) I{Y_{τ(a,b)} = b}] / P_x{Y_{τ(a,b)} = b} = g̃(x) / P_x{Y_{τ(a,b)} = b}. (40)
Here, the functionals f ˜ and g ˜ are given in Appendix G.
Proof. 
See Appendix G. □

6. Analysis of the Solution

We analyze the optimal fund management policy obtained in Theorem 2 in four subsections. First, we describe how the optimal fund management works in terms of the optimal fund band. Second, we study the relationship between the fund target and the optimal fund band. Third, we present numerical calculations on the time to intervene. Last, we perform some comparative statics to analyze the effects of some economic parameters on the optimal fund band. Among other results, we find that the higher the volatility, the larger the size of the fund band. In particular, there is no justification for the standard “one-size-fits-all” approach; in other words, we propose that each government should have its own fund band. Our results are consistent with the conjectures of Joyce [2].
In Theorem 2, we presented the solution for the case in which the cost of withdrawal is greater than zero (k_U > 0). In reality, k_U is a very small positive number. Thus, in the numerical examples we will assume that k_U = 10^{−6}.

6.1. The Optimal Stabilization Fund Management and the Optimal Stabilization Fund Band

Applying Theorem 2, we provide an explanation of how the optimal stabilization fund management process (L̂, Û) works in terms of the optimal stabilization fund band [a, b]. When the current level of the stabilization fund X̂_t is both below b and above a, conditions (iii)–(iv) of Theorem 2 imply that both L̂ and Û remain constant. Hence, neither withdrawals nor deposits are needed in this case. However, when the stabilization fund reaches the upper level b, condition (ii) states that it is optimal for the manager to perform withdrawals in order to prevent the fund from crossing b. Similarly, if the stabilization fund reaches the lower level a, condition (ii) says that it is optimal to make deposits to preclude the fund from going below a.
Next, we study the management of the fund when its initial value is outside the optimal band. According to conditions (i)–(iii), if the initial fund X̂_0 = x is strictly greater than b, it is optimal to reduce the fund to b immediately, that is, Û_{0+} = x − b, and hence the stabilization fund goes from X̂_0 = x to X̂_{0+} = b. Thus a withdrawal of x − b is necessary. On the other hand, if the initial stabilization fund happens to be below a, conditions (i)–(iii) say that it is optimal to make a deposit of the amount a − x. That is, L̂_{0+} = a − x and, as a result, the stabilization fund moves to X̂_{0+} = a. In either case, after the corresponding intervention, the stabilization fund behaves as described in the previous paragraph, and thereby the controlled stabilization fund satisfies X̂_t ∈ [a, b] for every t > 0, as stated in condition (ii).
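This band policy can be visualized with a discretized simulation, in which the singular control is approximated by clipping the fund back to the band at each step (an Euler approximation, using the band values of Example 2):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, sigma, lam = 2.0, 0.75, 0.10     # parameters of Example 2
a, b = 1.49445, 2.46192                 # optimal band reported in Example 2
dt, n_steps = 0.001, 10_000

X = 2.0                                 # initial fund, inside the band
dep, wit = 0.0, 0.0                     # cumulative deposits L and withdrawals U
path = []
for _ in range(n_steps):
    # Uncontrolled mean-reverting increment.
    X += lam * (theta - X) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    if X < a:        # deposit just enough to push the fund back up to a
        dep += a - X
        X = a
    elif X > b:      # withdraw just enough to bring the fund back down to b
        wit += X - b
        X = b
    path.append(X)

print(f"deposits L = {dep:.4f}, withdrawals U = {wit:.4f}")
```

The simulated fund never leaves [a, b], and both cumulative intervention processes are non-negative and non-decreasing, mirroring conditions (i)–(iv).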

6.2. The Stabilization Fund Target ρ and the Optimal Stabilization Fund Band [ a , b ]

In this section, we study the relationship between the fund target ρ and the optimal fund band [ a , b ] . We begin by showing that the fund target lies inside the optimal fund band.
Proposition 2.
Let [a, b] be the optimal stabilization fund band given in Theorem 2. Then a ≤ ρ ≤ b.
Proof. 
It follows directly from inequalities (A17). □
Example 2.
As our baseline example, let us consider the following parameter values:
ρ = 2.0, θ = 2.0, σ = 0.75, λ = 0.10, δ = 0.07, k_L = 0.5, k_U = 10^{−6}.
Solving the nonlinear Equations (25)–(30), we obtain the optimal fund band
a = 1.49445 , b = 2.46192 .
We note that a < ρ < b, and that the difference b − ρ = 0.46192 is relatively significant. Our model finds that ρ < b: the optimal upper bound for the fund is not equal to the fund target, even though withdrawals have essentially zero cost. This implies that withdrawing money from the fund until the fund target is attained is not optimal. What is the rationale for this result? The reason is that moving the value of the fund closer to ρ from above (and hence closer to a) makes it more likely that future deposits will be necessary to keep the fund level above a. Since our optimization problem has an infinite horizon, these potential future deposits generate costs that have to be taken into account. As a result, it is optimal for the upper bound b to be strictly greater than the target of the fund ρ. That is, the fund target ρ and the upper level b of the optimal stabilization fund band are two different things.
Another conclusion we draw from this example is that, taking ρ as the reference point, a is farther from ρ than b is. That is, |a − ρ| > |b − ρ|. This is clearly a consequence of the asymmetric costs that the fund manager faces. Since making deposits involves costs (compared with the essentially zero cost of withdrawals), the region below ρ is the more difficult side for the fund manager, so it is more important to keep a away from ρ than to keep b away from ρ.
We remark that the patterns described above hold in all the numerical examples presented in this paper. See Section 6.4 for more details.

6.3. Time to Increase or Decrease the Stabilization Fund

Applying Theorem 3, we perform some numerical examples of the time to intervene. The parameters are the same as in the baseline scenario, Example 2. From (35) and (36), for x = ρ = 2.0 , we get
P ρ { Y τ ( a , b ) = a } = 0.47682 and P ρ { Y τ ( a , b ) = b } = 0.52318 .
Thus, starting at the stabilization fund target x = ρ = 2.0 , the probability of increasing the stabilization fund before reducing it is 0.47682, and the probability of reducing the stabilization fund before increasing it is 0.52318. Similarly, we calculate
P_{1.6}{ Y_{τ(a,b)} = a } = 0.8884 and P_{1.6}{ Y_{τ(a,b)} = b } = 0.1116.
This means that starting at the initial stabilization fund x = 1.6, the probability of increasing the stabilization fund before reducing it is 0.8884, and the probability of reducing the stabilization fund before increasing it is 0.1116. Hence, as expected, the closer the initial stabilization fund x is to the lower bound a, the larger the probability of increasing the stabilization fund before reducing it.
Furthermore,
E_ρ[ τ(a,b) ] = 0.4209 and E_{1.6}[ τ(a,b) ] = 0.1656.
Thus, starting at the stabilization fund target x = ρ = 2.0, it takes on average 0.4209 time units until the first optimal intervention, whether that intervention increases or decreases the stabilization fund. If instead the initial stabilization fund is x = 1.6, it takes on average 0.1656 time units until the first optimal intervention.
Lastly, from (39) and (40), we compute
E_ρ[ τ(a,b) | Y_{τ(a,b)} = a ] = 0.434121 and E_ρ[ τ(a,b) | Y_{τ(a,b)} = b ] = 0.40897.
Hence, starting at the initial stabilization fund x = ρ = 2.0 , if the fund manager increases the stabilization fund before decreasing it, then the expected time to the first optimal intervention is 0.434121 time units. On the other hand, if the fund manager decreases the stabilization fund before increasing it, then the expected time to the first optimal intervention is 0.40897 time units.
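As a rough sanity check on these numbers (ours, not from the paper): since the speed of mean reversion λ = 0.10 is small and the band is narrow, the fund behaves, between interventions, approximately like a driftless Brownian motion with volatility σ, whose band-exit probability and expected exit time have classical closed forms. The sketch below compares these approximations with the exact Ornstein–Uhlenbeck values reported above.

```python
sigma = 0.75
a, b = 1.49445, 2.46192   # optimal band of Example 2

# Classical formulas for a driftless Brownian motion with volatility sigma,
# starting at x in (a, b):
#   P_x{hit a before b} = (b - x) / (b - a)
#   E_x[time to exit]   = (x - a) * (b - x) / sigma**2
def p_hit_a(x):
    return (b - x) / (b - a)

def expected_exit_time(x):
    return (x - a) * (b - x) / sigma ** 2

# Compare with the OU-based values of Section 6.3:
print(p_hit_a(2.0), expected_exit_time(2.0))  # approx 0.477 and 0.415 vs 0.47682 and 0.4209
print(p_hit_a(1.6), expected_exit_time(1.6))  # approx 0.891 and 0.162 vs 0.8884 and 0.1656
```

The close agreement is expected: with λ this small, the mean-reverting drift barely acts over the short time the fund spends inside the band.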

6.4. Comparative Statics Analysis

We analyze the effects of the cost k L of making deposits, the stabilization fund volatility σ , the long-term mean of the fund θ , and the speed of convergence λ on the optimal stabilization fund band [ a , b ] .
We start by studying the effects of the stabilization fund volatility. Joyce [2] notes that the connection between volatility and fund size has yet to be established by scholars. In Table 1, we observe that when volatility increases, the lower bound a of the optimal band decreases, the upper bound b of the optimal band increases, and (consequently) the optimal size of the fund band b − a increases. Thus, countries with high fund volatility should have a larger fund band.
Next, we analyze the effects of the cost k_L of increasing the stabilization fund. In Table 1, we observe that if the cost k_L of deposits in the fund increases, then the lower bound a decreases, the upper bound b increases, and (as a result) the size of the fund band b − a increases. That is, as expected, if the cost of deposits increases, it is optimal not to increase the stabilization fund so frequently. It is perhaps surprising that the cost of deposits k_L also has an effect on the upper bound b of the stabilization fund band. In particular, as the cost k_L increases, it is optimal to withdraw money only when the stabilization fund reaches a higher level. In addition, given the asymmetry of the costs of interventions, we have |a − ρ| − |b − ρ| > 0, and this difference increases as k_L does.
In Table 1, we note that, as the long-term mean θ of the stabilization fund increases, the lower bound a of the optimal band decreases, because there is less need to push the stabilization fund up frequently. We also observe that, as θ increases, there is a smaller chance that the stabilization fund X takes values below ρ, and hence the fund manager can take money out of the stabilization fund more frequently (smaller b).
Lastly, we analyze the impact of the speed λ at which the stabilization fund reverts to its long-term mean, in the case ρ = θ. In Table 1, we observe that as the speed of mean reversion λ increases, the stabilization fund X converges to the target ρ sooner; hence, the fund manager can make deposits less frequently (smaller a) and can withdraw money more frequently (smaller b).
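The qualitative patterns described in this subsection can be checked mechanically against the values of Table 1. A small script of our own (the λ = 0.10 column is the baseline case of Example 2):

```python
# (parameter value) -> (a, b), copied from Table 1.
by_k_L    = {0.4:  (1.5350,  2.4302),  0.5:  (1.4944,  2.4619),  0.6:  (1.4581,  2.4893)}
by_sigma  = {0.6:  (1.5596,  2.3965),  0.75: (1.49445, 2.46192), 0.9:  (1.43349, 2.52302)}
by_theta  = {0.0:  (1.54592, 2.51587), 2.0:  (1.49445, 2.46192), 2.5:  (1.48116, 2.44880)}
by_lambda = {0.05: (1.50497, 2.46463), 0.10: (1.49445, 2.46192), 0.15: (1.48370, 2.45911)}

def rows(table):
    # a-values, b-values, and band widths, ordered by increasing parameter
    pairs = [table[k] for k in sorted(table)]
    return ([p[0] for p in pairs], [p[1] for p in pairs],
            [p[1] - p[0] for p in pairs])

def decreasing(s): return all(x > y for x, y in zip(s, s[1:]))
def increasing(s): return all(x < y for x, y in zip(s, s[1:]))

for table in (by_k_L, by_sigma):      # higher deposit cost or volatility: band widens
    a_vals, b_vals, widths = rows(table)
    assert decreasing(a_vals) and increasing(b_vals) and increasing(widths)

for table in (by_theta, by_lambda):   # higher theta or lambda: both edges move down
    a_vals, b_vals, _ = rows(table)
    assert decreasing(a_vals) and decreasing(b_vals)

print("all comparative-statics claims hold")
```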
Remark 4.
As pointed out in Remark 3, we have also solved the problem under the asymmetric cost function
h̃(x) = e^{−νx} + e^{ηx}, ν > 0, η > 0.
The numerical results are shown in Table 2. As we can observe, all of the above qualitative results hold.

7. Conclusions

We study the optimal control of a government stabilization fund. Our three main contributions are (1) the formulation of an important macrofinance problem as a stochastic singular control problem, (2) the analytical solution of this control problem, and (3) the connection between the optimal stabilization fund band [a, b] and key economic variables, such as the cost of increasing the fund, the volatility of the fund, and the long-term mean of the fund. Indeed, we have solved a two-sided stochastic singular control problem in which the dynamics are mean-reverting and the running cost function can be symmetric or asymmetric. As a consequence, we have computed the optimal band [a, b] for the control of a government stabilization fund. The optimal fund control states that, if the current stabilization fund is below a, the government should increase the fund to a; if the current stabilization fund is above b, the government should withdraw money from the stabilization fund to reach b; otherwise, the government should let the stabilization fund follow its natural dynamics. Our paper is consistent with the academic literature that, without obtaining the solution, suggests that the optimal stabilization fund band should depend at least on the volatility of the fund. We point out that our paper differs dramatically from the "one-size-fits-all" approach that is often used in practice. In particular, we show that each country should have its own optimal stabilization fund band. Overall, our paper provides useful insights for stabilization fund managers.
For future research, it would be interesting to generalize our model to a regime-switching model. It would also be interesting to apply the theory developed in our paper to obtain the optimal stabilization funds for some countries, and compare these optimal stabilization funds with those currently used.

Author Contributions

Investigation, A.C. and R.H.-A.; resources, A.C. and R.H.-A.; data curation, A.C. and R.H.-A.; writing—original draft preparation, A.C. and R.H.-A.; writing—review and editing, A.C. and R.H.-A.; visualization, A.C. and R.H.-A.; supervision, A.C. and R.H.-A.; project administration, A.C. and R.H.-A.; funding acquisition, A.C. All authors have read and agreed to the published version of the manuscript.

Funding

The research of A. Cadenillas was funded by the Social Sciences and Humanities Research Council of Canada (SSHRC) grant 435-2017-0511.

Acknowledgments

We dedicate this paper to the memory of Ramón García-Cobián. Preliminary versions of this paper have been presented at the American Mathematical Society Spring Eastern Sectional Meeting, Hartford, USA 2019; at the International Congress on Actuarial Science and Quantitative Finance, Manizales, Colombia 2019; and at the 2019 Winter Meeting of the Canadian Mathematical Society, Toronto. We are grateful to the conference participants for their comments.

Conflicts of Interest

The authors declare no conflict of interest. The funder had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; and in the decision to publish the results.

Appendix A. Proof of Remark 2

Proof. 
Since J(x; L, U) < ∞, we have
E_x ∫_0^∞ e^{−δt} (X_t − ρ)² dt < ∞,
E_x ∫_0^∞ e^{−δt} dL_t < ∞,
E_x ∫_0^∞ e^{−δt} dU_t < ∞.
We observe that
e^{−δt} |X_t| ≤ e^{−δt} |X_t − ρ| + e^{−δt} ρ ≤ e^{−δt} I_{{|X_t − ρ| ≤ 1}} + e^{−δt} (X_t − ρ)² I_{{|X_t − ρ| > 1}} + e^{−δt} ρ.
Here, I_A denotes the indicator function of the set A ⊂ Ω. Then, (A1) implies
E_x ∫_0^∞ e^{−δt} |X_t| dt < ∞.
By Itô's Lemma, for every 0 ≤ s ≤ t:
E_x[e^{−δt} X_t] − E_x[e^{−δs} X_s] = E_x ∫_s^t e^{−δu} σ dW_u + E_x ∫_s^t e^{−δu} [λθ − (λ + δ) X_u] du + E_x[ ∫_s^t e^{−δu} dL_u − ∫_s^t e^{−δu} dU_u ].
The first expected value on the right-hand side equals zero. If we take any sequence 0 ≤ t_n ↑ ∞, conditions (A2)–(A4) imply that the sequence E_x[e^{−δt_n} X_{t_n}] is Cauchy and, hence, converges to a non-negative number. That number must be zero; otherwise, (A4) would not be satisfied. □

Appendix B. Proof of Proposition 1

Proof. 
The value function is non-negative because all the terms in Equation (3) are non-negative.
Next, consider x_1 and x_2 with corresponding controls (L_1, U_1) ∈ A(x_1) and (L_2, U_2) ∈ A(x_2). Let γ ∈ [0, 1]. We define x_3 := γx_1 + (1 − γ)x_2 and the control (L_3, U_3) := γ(L_1, U_1) + (1 − γ)(L_2, U_2). We denote by X^{(j)} the trajectory that starts at x_j and is determined by the control (L_j, U_j), for j = 1, 2, 3. Since the dynamics (2.1) are linear, we observe that for every t ≥ 0:
X_t^{(3)} = γ X_t^{(1)} + (1 − γ) X_t^{(2)}.
Since h is a convex function,
∫_0^∞ e^{−δt} h(X_t^{(3)}) dt ≤ ∫_0^∞ e^{−δt} [ γ h(X_t^{(1)}) + (1 − γ) h(X_t^{(2)}) ] dt = γ ∫_0^∞ e^{−δt} h(X_t^{(1)}) dt + (1 − γ) ∫_0^∞ e^{−δt} h(X_t^{(2)}) dt.
As a consequence,
J(x_3; (L_3, U_3)) ≤ γ J(x_1; (L_1, U_1)) + (1 − γ) J(x_2; (L_2, U_2)).
Hence
V(γx_1 + (1 − γ)x_2) ≤ J( γx_1 + (1 − γ)x_2 ; γ(L_1, U_1) + (1 − γ)(L_2, U_2) ) = J(x_3; (L_3, U_3)) ≤ γ J(x_1; (L_1, U_1)) + (1 − γ) J(x_2; (L_2, U_2)).
Consequently,
V(γx_1 + (1 − γ)x_2) ≤ γ V(x_1) + (1 − γ) V(x_2),
which shows that V is convex. This completes the proof. □

Appendix C.

Lemma A1.
Suppose v satisfies the HJB Equation (12). Let ( L v , U v ) be the control associated with v, and X v the process generated by it. Then
∫_0^T e^{−δt} v′(X_t^v) d(U^v)_t^c = ∫_0^T e^{−δt} k_U d(U^v)_t^c,
∫_0^T e^{−δt} v′(X_t^v) d(L^v)_t^c = −∫_0^T e^{−δt} k_L d(L^v)_t^c,
v(X_t^v) − v(X_{t+}^v) = k_U (U_{t+}^v − U_t^v), t ∈ Δ_U,
v(X_t^v) − v(X_{t+}^v) = k_L (L_{t+}^v − L_t^v), t ∈ Δ_L.
Proof. 
The left-hand side of (A5) can be expressed as
∫_0^T e^{−δt} v′(X_t^v) I_{{X_t^v ∈ C ∪ Σ_1}} d(U^v)_t^c + ∫_0^T e^{−δt} v′(X_t^v) I_{{X_t^v ∈ Σ_2}} d(U^v)_t^c.
We will show that the first term of (A9) is equal to zero, while the second one equals the right hand side of (A5). Indeed,
| ∫_0^T e^{−δt} v′(X_t^v) I_{{X_t^v ∈ C ∪ Σ_1}} d(U^v)_t^c | ≤ ∫_0^T e^{−δt} |v′(X_t^v)| I_{{X_t^v ∈ C ∪ Σ_1}} d(U^v)_t^c ≤ ∫_0^T e^{−δt} max{k_L, k_U} I_{{X_t^v ∈ C ∪ Σ_1}} d(U^v)_t^c ≤ (k_U + k_L) ∫_0^∞ I_{{X_t^v ∈ C ∪ Σ_1}} dU_t^v = 0,
where the last equality follows from condition ( i v ) of Definition 3. Similarly,
∫_0^T e^{−δt} k_U I_{{X_t^v ∈ C ∪ Σ_1}} d(U^v)_t^c = 0.
On the other hand, since v′(x) = k_U for every x in Σ_2, we have
∫_0^T e^{−δt} v′(X_t^v) I_{{X_t^v ∈ Σ_2}} d(U^v)_t^c = ∫_0^T e^{−δt} k_U I_{{X_t^v ∈ Σ_2}} d(U^v)_t^c.
Combining (A10)–(A12), expression (A9) becomes
∫_0^T e^{−δt} v′(X_t^v) d(U^v)_t^c = ∫_0^T e^{−δt} k_U d(U^v)_t^c.
This establishes (A5). Next, we show (A7). By condition (iv) in Definition 3, if t ∈ Δ_U, then X_t^v and X_{t+}^v are in Σ_2. Since v′(x) = k_U for every x in Σ_2, we have
v(X_t^v) − v(X_{t+}^v) = k_U (X_t^v − X_{t+}^v) = k_U (U_{t+}^v − U_t^v).
Similarly, we can prove (A6) and (A8). □

Appendix D. Proof of Theorem 1

Proof. 
According to the HJB Equation (12), we have −k_L ≤ v′(x) ≤ k_U for every x ∈ R; thus, v′ is a bounded function. Since v is twice continuously differentiable, we may apply an appropriate version of Itô's formula. Thus, according to Meyer [17] or Chapter 4 of Harrison [18],
v(x) = E_x[e^{−δT} v(X_T)] − E_x ∫_0^T e^{−δt} [ (1/2)σ² v″(X_t) + λ(θ − X_t) v′(X_t) − δ v(X_t) ] dt + E_x ∫_0^T e^{−δt} v′(X_t) dU_t^c − E_x Σ_{t ∈ Δ_U, 0 ≤ t < T} e^{−δt} [ v(X_{t+}) − v(X_t) ] − E_x ∫_0^T e^{−δt} v′(X_t) dL_t^c − E_x Σ_{t ∈ Δ_L, 0 ≤ t < T} e^{−δt} [ v(X_{t+}) − v(X_t) ].
According to the HJB Equation (12), we have Lv(x) + h(x) ≥ 0 and −k_L ≤ v′(x) ≤ k_U for all x ∈ R. Hence
−E_x ∫_0^T e^{−δt} [ (1/2)σ² v″(X_t) + λ(θ − X_t) v′(X_t) − δ v(X_t) ] dt ≤ E_x ∫_0^T e^{−δt} h(X_t) dt, E_x ∫_0^T e^{−δt} v′(X_t) dU_t^c ≤ E_x ∫_0^T e^{−δt} k_U dU_t^c, −E_x ∫_0^T e^{−δt} v′(X_t) dL_t^c ≤ E_x ∫_0^T e^{−δt} k_L dL_t^c.
Moreover, since U and L are nondecreasing and left-continuous with right limits, we have X_t − X_{t+} = U_{t+} − U_t > 0 for every t ∈ Δ_U, and X_{t+} − X_t = L_{t+} − L_t > 0 for every t ∈ Δ_L. Consequently, from Equations (A7) and (A8) of Appendix C,
v(X_t) − v(X_{t+}) ≤ k_U (X_t − X_{t+}) = k_U (U_{t+} − U_t), t ∈ Δ_U, v(X_t) − v(X_{t+}) ≤ k_L (X_{t+} − X_t) = k_L (L_{t+} − L_t), t ∈ Δ_L.
From (A14) and the above inequalities,
v(x) ≤ E_x[e^{−δT} v(X_T)] + E_x ∫_0^T e^{−δt} h(X_t) dt + E_x ∫_0^T e^{−δt} k_L dL_t^c + E_x ∫_0^T e^{−δt} k_U dU_t^c + E_x Σ_{t ∈ Δ_L, 0 ≤ t < T} e^{−δt} k_L (L_{t+} − L_t) + E_x Σ_{t ∈ Δ_U, 0 ≤ t < T} e^{−δt} k_U (U_{t+} − U_t) = E_x[e^{−δT} v(X_T)] + E_x[ ∫_0^T e^{−δt} h(X_t) dt + ∫_{[0,T)} e^{−δt} k_L dL_t + ∫_{[0,T)} e^{−δt} k_U dU_t ].
Now consider (L, U) admissible. Since v′ is bounded, there exist real numbers A and B such that 0 ≤ v(x) ≤ A + B|x|. Thus, Equation (5) implies that
lim_{T→∞} E_x[ e^{−δT} v(X_T) ] = 0.
Applying the Monotone Convergence Theorem, we obtain the first part of this theorem.
Now, we will prove the second part of the theorem. Applying (A14) to the pair ( X v , Z v ) , we have
v(x) = E_x[e^{−δT} v(X_T^v)] − E_x ∫_0^T e^{−δt} [ (1/2)σ² v″(X_t^v) + λ(θ − X_t^v) v′(X_t^v) − δ v(X_t^v) ] dt + E_x ∫_0^T e^{−δt} v′(X_t^v) d(U^v)_t^c − E_x Σ_{t ∈ Δ_U, 0 ≤ t < T} e^{−δt} [ v(X_{t+}^v) − v(X_t^v) ] − E_x ∫_0^T e^{−δt} v′(X_t^v) d(L^v)_t^c − E_x Σ_{t ∈ Δ_L, 0 ≤ t < T} e^{−δt} [ v(X_{t+}^v) − v(X_t^v) ].
We recall that we are assuming C^v = (a, b). We also note that Definition 3 (ii) implies that, for the control (L^v, U^v), we have X_t^v ∈ [a, b] for every t ∈ (0, ∞). Since L(x) := Lv(x) + h(x) is a continuous function of x, using the definition of C^v in (13) we have L(x) = 0 for every x ∈ [a, b]. Hence
∫_0^T e^{−δt} [ (1/2)σ² v″(X_t^v) + λ(θ − X_t^v) v′(X_t^v) − δ v(X_t^v) ] dt = −∫_0^T e^{−δt} h(X_t^v) dt.
Using the previous equality and the Lemma of Appendix C,
v(x) = E_x[e^{−δT} v(X_T^v)] + E_x[ ∫_0^T e^{−δt} h(X_t^v) dt + ∫_{[0,T)} e^{−δt} k_L dL_t^v + ∫_{[0,T)} e^{−δt} k_U dU_t^v ].
By (ii) in Definition 3, P{ ω ∈ Ω : X^v(t, ω) ∈ [a, b] for every t ∈ (0, ∞) } = 1. That is, the fund process X_t^v is bounded. Then, (A15) holds. Letting T → ∞ and applying the Monotone Convergence Theorem, we conclude
v(x) = E_x[ ∫_0^∞ e^{−δt} h(X_t^v) dt + ∫_0^∞ e^{−δt} k_L dL_t^v + ∫_0^∞ e^{−δt} k_U dU_t^v ] = J(x; L^v, U^v).
In particular, the control ( L v , U v ) is admissible. This completes the proof. □

Appendix E. Lemma A2 and Its Proof under the Assumption That the Cost Function Is Symmetric and θ = ρ

Lemma A2.
There exist constants A, B, a, b, D, and F, with −∞ < a < b < ∞, that solve the system of Equations (25)–(30). Let us consider the function v defined by (20). Then,
v″(x) > 0, ∀ x ∈ (a, b),
(δ + λ) k_L − 2(ρ − a) ≤ 0, and 2(b − ρ) − (δ + λ) k_U ≥ 0.
In this Appendix we are going to prove Lemma A.2 under the assumption that θ = ρ and the cost function is symmetric—i.e., k L = k U . We begin by proving a remark.
Remark A1.
If the cost function is symmetric, then the value function V is symmetric around ρ.
Proof of Remark.
Let x be an arbitrary initial stabilization fund level, and ( L , U ) be an admissible control for the initial fund level x. We define the Brownian motion W ˜ = W and the reflected process X ˜ = 2 ρ X . Thus, X ˜ is identical to X reflected at the level ρ . It follows that
X̃_t = 2ρ − X_t = 2ρ − x − ∫_0^t λ(ρ − X_s) ds − σ W_t + U_t − L_t = x̃ + ∫_0^t λ(ρ − X̃_s) ds + σ W̃_t − Ũ_t + L̃_t,
where x̃ = 2ρ − x, Ũ = L, and L̃ = U. It follows that for every admissible control (L, U) for the initial value x, there is a corresponding admissible control (L̃, Ũ) for the initial value x̃. The costs of these two controls are the same because (X_t − ρ)² = (X̃_t − ρ)² and the cost function is symmetric. Therefore, the infima of these costs over all admissible controls also coincide; that is, V(x) = V(2ρ − x). This completes the proof of the Remark. □
Now let us analyze, on the interval [a, b], the function v of (20), which is our candidate for the value function there. That is, let us consider the function φ̃ : [a, b] → (−∞, ∞) defined by
φ̃(x) := A G(x) + B H(x) + ξ_2 x² + ξ_1 x + ξ_0.
Since we are assuming θ = ρ in this Appendix, we can write
φ̃(x) := A G(x) + B H(x) + (σ²/δ) · (1/(2λ + δ)) + (1/(2λ + δ)) (x − ρ)².
We observe that
φ̃′(x) = A G′(x) + B H′(x) + (2/(2λ + δ))(x − ρ), φ̃″(x) = A G″(x) + B H″(x) + 2/(2λ + δ).
Here,
G′(x) = Σ_{n=0}^∞ α_{2n+2} (2n + 2) (x − ρ)^{2n+1}, G″(x) = Σ_{n=0}^∞ α_{2n+2} (2n + 2)(2n + 1) (x − ρ)^{2n}, G‴(x) = Σ_{n=0}^∞ α_{2n+4} (2n + 4)(2n + 3)(2n + 2) (x − ρ)^{2n+1}.
The symmetry of v around ρ implies that φ̃′(ρ) = 0, and this can happen only if B = 0. Thus, we can reduce the six Equations (25)–(30) to the following three equations:
φ̃(b) = F + k_U b,
φ̃′(b) = k_U,
φ̃″(b) = 0,
where
φ̃(x) := A G(x) + (σ²/δ) · (1/(2λ + δ)) + (1/(2λ + δ)) (x − ρ)².
We have three unknowns: A, b, and F. Once they are determined, a and D can be determined by symmetry; that is, a = 2ρ − b and D = 2ρ k_U + F.
Proof of Lemma A2.
Let us consider the function ṽ : [ρ, ∞) → (−∞, ∞) defined by
ṽ(x) = A G(x) + (σ²/δ) · (1/(2λ + δ)) + (1/(2λ + δ)) (x − ρ)² if ρ ≤ x < b, and ṽ(x) = F + k_U x if x ≥ b.
Because of the symmetry, it suffices to prove that the constants A, b, and F solve the system of Equations (A18)–(A20), and that
v″(x) > 0, ∀ x ∈ (a, b),
and 2(b − ρ) − (δ + λ) k_U ≥ 0.
It is easy to see that ṽ ∈ C²([ρ, ∞)) by the smooth-fit conditions (26), (27) and (29).
According to Equation (A20),
φ̃″(b) = A G″(b) + 2/(2λ + δ) = 0,
which implies
A = −2 / [ (2λ + δ) G″(b) ] < 0.
Combining (A19) and (A20), the constant b is determined by
(2/(2λ + δ)) [ (b − ρ) − G′(b)/G″(b) ] = k_U.
Let us consider the function 𝒢 given by
𝒢(x) := (2/(2λ + δ)) [ (x − ρ) − G′(x)/G″(x) ].
We observe that 𝒢 is a continuous function with 𝒢(ρ) = 0 and lim_{x→∞} 𝒢(x) = ∞. Hence, there exists b ∈ (ρ, ∞) such that 𝒢(b) = k_U. Once the constants b and A have been found as above, the constant F can be obtained from (A18). This proves the first part of the Lemma.
Since G″(x) < G″(b) for every x ∈ [ρ, b), we conclude that for every x ∈ [ρ, b):
φ̃″(x) = A G″(x) + 2/(2λ + δ) > A G″(b) + 2/(2λ + δ) = 0.
Therefore, φ ˜ is a strictly convex function in the interval [ ρ , b ] .
Next, we observe that for every x ∈ (ρ, b]:
φ̃′(x) = A G′(x) + (2/(2λ + δ))(x − ρ) < (2/(2λ + δ))(x − ρ),
because A < 0. The function on the right-hand side is an increasing linear function. Let us denote by z the point at which it reaches the level k_U. Thus,
(2/(2λ + δ))(z − ρ) = k_U,
or equivalently
z = ρ + k_U (2λ + δ)/2.
Since φ̃′(x) < (2/(2λ + δ))(x − ρ) for every x ∈ (ρ, b], and φ̃′(b) = k_U, we conclude that b > z = ρ + k_U (2λ + δ)/2. Therefore,
2(b − ρ) > k_U (2λ + δ) > k_U (λ + δ).
This proves inequality (A23). We omit the proof for the asymmetric case. We remark that all the numerical solutions presented in this paper satisfy Lemma A.2. □

Appendix F. Proof of Theorem 2

Proof. 
It suffices to show that all the conditions of Theorem 1 are satisfied. By construction, v C 2 ( 0 , ) . Let us verify that v satisfies the HJB equation. By construction, v satisfies L v ( x ) + h ( x ) = 0 on ( a , b ) . Now we observe that
−k_L < v′(x) < k_U, ∀ x ∈ (a, b),
due to (A16), v′(x) = −k_L on (−∞, a), and v′(x) = k_U on (b, ∞). As a result, k_L + v′(x) > 0 and k_U − v′(x) > 0 on (a, b). Consequently, v satisfies the HJB equation on (a, b).
Next, we verify the HJB equation on (−∞, a) ∪ (b, ∞). We begin by claiming that (A17) implies Lv(x) + h(x) > 0 for every x ∈ (−∞, a) ∪ (b, ∞). Let L(x) := Lv(x) + h(x). We note that L is continuous on R and, by construction of v, equals zero on (a, b). Hence L(a) = L(b) = 0. Consider first the interval (−∞, a). Since L is strictly convex on (−∞, a), on that interval we have
dL(x)/dx = (δ + λ) k_L − 2(ρ − x) < (δ + λ) k_L − 2(ρ − a) ≤ 0,
where the last inequality follows from (A17). Thus, L is strictly decreasing on (−∞, a); hence, L(x) > 0 for every x ∈ (−∞, a). Similarly, L is also strictly convex on (b, ∞), and on that interval we have
dL(x)/dx = 2(x − ρ) − (δ + λ) k_U > 2(b − ρ) − (δ + λ) k_U ≥ 0,
where the last inequality follows from (A17). Consequently, L is strictly increasing on (b, ∞). As a result, L(x) > 0 for all x ∈ (b, ∞). This completes the proof of the claim. In addition, by construction, v satisfies k_L + v′(x) = 0 on (−∞, a), and hence k_U − v′(x) > 0 there. Thus, v satisfies the HJB equation on (−∞, a). On the interval (b, ∞), by construction, k_U − v′(x) = 0, and hence k_L + v′(x) > 0. Thus, v satisfies the HJB equation on (−∞, a) ∪ (b, ∞) as well. Therefore, the HJB equation is satisfied for each x ∈ R. As a consequence, C = (a, b), Σ_1 = (−∞, a], and Σ_2 = [b, ∞).
From Theorem 1, ( L ^ , U ^ ) is the optimal stabilization fund control. Moreover, by Definition 4, the interval [ a , b ] is the optimal stabilization fund band. □

Appendix G. Proof of Theorem 3

Before discussing the proof, we define the functionals f ˜ and g ˜ . The functional f ˜ is given by
f ˜ ( x ) = A ˜ + B ˜ ( 2 λ / σ ) ( a θ ) ( 2 λ / σ ) ( x θ ) exp { w 2 / 2 } d w 1 λ ( 2 λ / σ ) ( x θ ) ( 2 λ / σ ) ( b θ ) w ( 2 λ / σ ) ( b θ ) exp { u 2 / 2 } d u exp { w 2 / 2 } d w ,
where the constants A ˜ and B ˜ are found from the equations
f ˜ ( a ) = 0 and f ˜ ( b ) = 0 .
The functional g ˜ is given by
g ˜ ( x ) = C ˜ + D ˜ ( 2 λ / σ ) ( a θ ) ( 2 λ / σ ) ( x θ ) exp { w 2 / 2 } d w + n = 0 c n ( x θ ) n .
Here, the sequence { c n } = { c 0 , c 1 , c 2 , } satisfies
c 0 = 0 , c 1 = 0 , c 2 = 1 σ 2 P Q ,
for every i { 2 , 3 , } :
c 2 i = 2 σ 2 i ( 2 i 2 ) ( 2 i 4 ) ( 2 i 6 ) ( 2 ) ( 2 i ) ! λ i 1 P Q ,
and for every i { 0 , 1 , 2 , 3 , } :
1 2 σ 2 ( 2 i + 3 ) ( 2 i + 2 ) c 2 i + 3 λ ( 2 i + 1 ) c 2 i + 1 = 2 π Q λ / σ 2 i + 1 1 i ! ( 2 i + 1 ) .
The constants P and Q are given by
P : = 2 π 0 λ ( θ a ) / σ e v 2 d v a n d Q : = Erfid ( b θ ) 2 λ σ , ( a θ ) 2 λ σ .
The constants C ˜ and D ˜ are found from the boundary conditions
g ˜ ( a ) = 0 and g ˜ ( b ) = 0 .
Proof. 
The probabilities (35) and (36) follow from Borodin and Salminen [19] (Chapter 7.3). For the proof of (37) and (38), see Appendix B of Cadenillas et al. [20]. The proof of (39) and (40) follow directly from (37) and (38). □

Appendix H. Solution of the Problem with the Cost Function h̃(x) = e^{−νx} + e^{ηx}

We note that up to and including Section 3, the exact specification of h is not used. In particular, Theorem 1 remains valid for h ˜ . Certainly, the computation of the analytical solution is now different. Specifically, instead of Equation (17), the Hamilton–Jacobi–Bellman Equation (12) in the continuation region C = ( a , b ) implies
(1/2)σ² v″(x) + λ(θ − x) v′(x) − δ v(x) = −h̃(x) = −e^{−νx} − e^{ηx}.
According to the theory of differential equations, the general solution v of the above equation comprises the sum of a homogeneous solution v h and a particular solution v p , namely,
v ( x ) = v h ( x ) + v p ( x ) .
The homogeneous solution is given by
v_h(x) = Ã G(x) + B̃ H(x),
where G and H are given by (21)–(24).
To find a particular solution, we proceed as follows. We consider the function v p defined by
v_p(x) := Σ_{n=0}^∞ a_n x^n.
Thus,
v_p(x) = Σ_{n=0}^∞ a_n x^n, v_p′(x) = Σ_{n=0}^∞ (n + 1) a_{n+1} x^n, v_p″(x) = Σ_{n=0}^∞ (n + 2)(n + 1) a_{n+2} x^n.
We notice that
e^{−νx} + e^{ηx} = Σ_{n=0}^∞ (1/n!) (−νx)^n + Σ_{n=0}^∞ (1/n!) (ηx)^n.
Applying the power series method to solve ordinary differential equations, we obtain
a_2 = (−2 − λθ a_1 + δ a_0) / σ²,
a_3 = (−2λθ a_2 + (λ + δ) a_1 + ν − η) / (3σ²),
a_4 = (−3λθ a_3 + (2λ + δ) a_2 − (1/2!) ν² − (1/2!) η²) / (6σ²).
In general, for every n ≥ 2:
a_{n+2} = [ (nλ + δ) a_n − λθ (n + 1) a_{n+1} − (1/n!) (−ν)^n − (1/n!) η^n ] / [ (n + 2)(n + 1) (1/2)σ² ].
We may simply select a 0 = a 1 = 0 .
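As a numerical illustration (our own check, not part of the paper), the recurrence can be implemented directly and the truncated series tested against the differential equation; we use the Table 2 parameter values and assume the sign convention h̃(x) = e^{−νx} + e^{ηx}:

```python
import math

lam, theta, sigma, delta = 0.10, 2.0, 0.75, 0.07
nu, eta = 2.0924, 0.03          # asymmetry parameters used for Table 2

N = 80                          # truncation order of the power series
coef = [0.0] * (N + 3)          # a_0 = a_1 = 0, as chosen in the text
for n in range(N + 1):
    forcing = (-nu) ** n / math.factorial(n) + eta ** n / math.factorial(n)
    coef[n + 2] = ((n * lam + delta) * coef[n]
                   - lam * theta * (n + 1) * coef[n + 1]
                   - forcing) / ((n + 2) * (n + 1) * 0.5 * sigma ** 2)

def v(x):   return sum(c * x ** k for k, c in enumerate(coef))
def dv(x):  return sum(k * c * x ** (k - 1) for k, c in enumerate(coef) if k >= 1)
def d2v(x): return sum(k * (k - 1) * c * x ** (k - 2) for k, c in enumerate(coef) if k >= 2)

# The particular solution should satisfy
#   (1/2) sigma^2 v'' + lam (theta - x) v' - delta v + e^{-nu x} + e^{eta x} = 0.
x = 1.5
residual = (0.5 * sigma ** 2 * d2v(x) + lam * (theta - x) * dv(x)
            - delta * v(x) + math.exp(-nu * x) + math.exp(eta * x))
print(abs(residual))            # negligible, up to truncation and round-off error
```

The residual is at the level of floating-point round-off, so a modest truncation order already suffices on the interval of interest.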
Hence, the value function is given by
v(x) = D̃ − k_L x if x ≤ ã; v(x) = Ã G(x) + B̃ H(x) + v_p(x) if ã < x < b̃; v(x) = F̃ + k_U x if x ≥ b̃.
We also conjecture that v is twice continuously differentiable. Then, the six constants A ˜ , B ˜ , D ˜ , F ˜ , a ˜ , and b ˜ can be found from the following system of six equations:
Ã G(ã) + B̃ H(ã) + v_p(ã) = D̃ − k_L ã,
Ã G(b̃) + B̃ H(b̃) + v_p(b̃) = F̃ + k_U b̃,
Ã G′(b̃) + B̃ H′(b̃) + v_p′(b̃) = k_U,
Ã G′(ã) + B̃ H′(ã) + v_p′(ã) = −k_L,
Ã G″(b̃) + B̃ H″(b̃) + v_p″(b̃) = 0,
Ã G″(ã) + B̃ H″(ã) + v_p″(ã) = 0.
We have presented numerical solutions of the above nonlinear system in Table 2 of Section 6.4.

References

  1. Popescu, C.R.; Popescu, G.N. An Exploratory Study Based on a Questionnaire Concerning Green and Sustainable Finance, Corporate Social Responsibility, and Performance: Evidence from the Romanian Business Environment. J. Risk Financ. Manag. 2019, 12, 162.
  2. Joyce, P.G. What is so magical about five percent? A nationwide look at factors that influence the optimal size of state rainy day funds. Public Budg. Financ. 2001, 21, 62–87.
  3. Navin, J.C.; Navin, L.J. The Optimal Size of Countercyclical Budget Stabilization Funds: A Case Study of Ohio. Public Budg. Financ. 1997, 17, 114–127.
  4. Vasche, J.D.; Williams, B. Optimal Government Budgeting Contingency Reserve Funds. Public Budg. Financ. 1987, 7, 66–82.
  5. Sovereign Wealth Fund Institute. SWFI. 2020. Available online: https://www.swfinstitute.org/sovereign-wealth-fund-rankings/ (accessed on 20 September 2020).
  6. Schwartz, E. The stochastic behavior of commodity prices: Implications for valuation and hedging. J. Financ. 1997, 52, 923–973.
  7. Harrison, J.M.; Taksar, M.I. Instantaneous control of Brownian motion. Math. Oper. Res. 1983, 8, 439–453.
  8. Cadenillas, A.; Haussmann, U. The Stochastic Maximum Principle for a Singular Control Problem. Stochastics Int. J. Probab. Stoch. Process. 1994, 49, 211–237.
  9. Fleming, W.H.; Soner, H.M. Controlled Markov Processes and Viscosity Solutions, 2nd ed.; Springer: New York, NY, USA, 2006.
  10. Karatzas, I. A Class of Singular Stochastic Control Problems. Adv. Appl. Probab. 1983, 15, 225–254.
  11. Karatzas, I. Probabilistic Aspects of Finite-fuel Stochastic Control. Proc. Natl. Acad. Sci. USA 1985, 82, 5579–5581.
  12. Cadenillas, A.; Lakner, P.; Pinedo, M. Optimal Control of a Mean-Reverting Inventory. Oper. Res. 2010, 58, 1697–1710.
  13. Matomäki, P. On solvability of a two-sided singular control problem. Math. Methods Oper. Res. 2012, 76, 239–271.
  14. Ferrari, G.; Vargiolu, T. On the Singular Control of Exchange Rates. Ann. Oper. Res. 2020, 292, 795–832.
  15. Bensoussan, A.; Lions, J.L. Applications of Variational Inequalities in Stochastic Control; Elsevier: Amsterdam, The Netherlands, 1982.
  16. Waud, R.N. Asymmetric policymaker utility functions and optimal policy under uncertainty. Econometrica 1976, 44, 53–66.
  17. Meyer, P.A. Un Cours sur les Intégrales Stochastiques; Lecture Notes in Mathematics 511, Séminaire de Probabilités X; Springer: New York, NY, USA, 1976.
  18. Harrison, J.M. Brownian Motion and Stochastic Flow Systems; Wiley: New York, NY, USA, 1985.
  19. Borodin, A.N.; Salminen, P. Handbook of Brownian Motion—Facts and Formulae; Birkhäuser: Basel, Switzerland, 1996.
  20. Cadenillas, A.; Sarkar, S.; Zapatero, F. Optimal Dividend Policy with Mean-Reverting Cash Reservoir. Math. Financ. 2007, 17, 81–109.
Table 1. Effects of k_L, θ, σ, and λ on [a, b].

           k_L = 0.4    k_L = 0.5    k_L = 0.6
a          1.5350       1.4944       1.4581
b          2.4302       2.4619       2.4893
b − a      0.8951       0.9674       1.0312

           σ = 0.6      σ = 0.75     σ = 0.9
a          1.5596       1.49445      1.43349
b          2.3965       2.46192      2.52302
b − a      0.8369       0.96746      1.08905

           θ = 0.0      θ = 2.0      θ = 2.5
a          1.54592      1.49445      1.48116
b          2.51587      2.46192      2.44880
b − a      0.96995      0.96747      0.96764

           λ = 0.05     λ = 0.10     λ = 0.15
a          1.50497      1.49445      1.48370
b          2.46463      2.46192      2.45911
b − a      0.95966      0.96746      0.97540

The default parameters are ρ = 2.0, θ = 2.0, σ = 0.75, λ = 0.10, δ = 0.07, k_L = 0.5, and k_U = 10^{−6}.
Table 2. Effects of k_L, θ, σ, and λ on [ã, b̃].

           k_L = 0.4    k_L = 0.5    k_L = 0.6
ã          0.9714       0.8647       0.7758
b̃          2.7056       2.7207       2.7331
b̃ − ã      1.7341       1.8560       1.9572

           σ = 0.6      σ = 0.75     σ = 0.9
ã          0.8695       0.8647       0.8192
b̃          2.2461       2.7207       3.0409
b̃ − ã      1.3766       1.8560       2.2216

           θ = 0.0      θ = 2.0      θ = 2.5
ã          1.0724       0.8647       0.8135
b̃          3.0503       2.7207       2.6337
b̃ − ã      1.9779       1.8560       1.8201

           λ = 0.05     λ = 0.10     λ = 0.15
ã          1.0333       0.8647       0.7189
b̃          3.1583       2.7207       2.3286
b̃ − ã      2.1250       1.8560       1.6097

The default parameters are ρ = 2.0, θ = 2.0, σ = 0.75, λ = 0.10, δ = 0.07, k_L = 0.5, k_U = 10^{−6}, ν = 2.0924, and η = 0.03.