Article

Stochastic Optimal Control of Averaged SDDE with Semi-Markov Switching and with Application in Economics

by Mariya Svishchuk 1 and Anatoliy V. Swishchuk 2,*
1 Department of Mathematics and Computer Sciences, Mount Royal University, Calgary, AB T3E 6K6, Canada
2 Department of Mathematics and Statistics, University of Calgary, Calgary, AB T2N 1N4, Canada
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(9), 1440; https://doi.org/10.3390/math13091440
Submission received: 26 March 2025 / Revised: 21 April 2025 / Accepted: 24 April 2025 / Published: 28 April 2025
(This article belongs to the Special Issue Stochastic Models with Applications, 2nd Edition)

Abstract: This paper is devoted to the study of stochastic optimal control of averaged stochastic differential delay equations (SDDEs) with semi-Markov switchings and their applications in economics. By using the Dynkin formula and the solution of the Dirichlet–Poisson problem, the Hamilton–Jacobi–Bellman (HJB) equation and the inverse HJB equation are derived. An application is given to a new stochastic Ramsey model in economics, namely the averaged Ramsey diffusion model with semi-Markov switchings. A numerical example is presented as well.

1. Introduction

Many papers and books in the past were devoted to stochastic optimal control; see, for example, [1,2,3]. These include stochastic optimal control in infinite dimensions [4] and stochastic control of jump diffusions [5]. In particular, stochastic delay equations were also the subject of many studies, including [6,7,8,9,10]. For example, delay differential equations driven by Lévy processes were considered in [11]. However, no paper or book in this field considered stochastic optimal control problems for stochastic delay differential equations (SDDEs) with semi-Markov regime-switching or dealt with the Dynkin formula, the Dirichlet–Poisson problem, or the HJB and inverse HJB equations for such equations, or, in particular, their applications in economics. Therefore, the present paper is devoted to these topics, and as such, these results are new and original. We note that optimal control of stochastic differential delay equations with jumps and Markov switching, with an application in economics, was studied in [12]. The stability of stochastic Itô equations with delay, Poisson jumps, and Markov switchings, with applications to finance, was considered in [13]. A survey of results on SDDEs and their applications up to 2003 was presented in [14]. A good introduction to the theory of functional differential equations is [15].
In an earlier paper [16], the following controlled stochastic differential delay equation (SDDE) was introduced:
$$x(t) = x(0) + \int_0^t a(x(s-T), u(s))\,ds + \int_0^t b(x(s-T), u(s))\,dw(s),$$
where $x(t) = \phi(t)$, $t \in [-T, 0]$, $T > 0$, is a given continuous process, $u(t)$ is a control process, and $w(t)$ is a standard Wiener process.
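For intuition, this controlled SDDE can be simulated with an Euler–Maruyama scheme in which the delayed value $x(s-T)$ is read off the stored trajectory. The following is a minimal sketch: the coefficients $a$, $b$, the control $u$, and the initial path $\phi$ are illustrative placeholders, not quantities from the paper.

```python
import math
import random

def simulate_sdde(a, b, u, phi, T=1.0, horizon=5.0, dt=0.01, seed=42):
    """Euler-Maruyama sketch for
    x(t) = x(0) + int_0^t a(x(s-T), u(s)) ds + int_0^t b(x(s-T), u(s)) dw(s),
    with initial path x(t) = phi(t) on [-T, 0]."""
    rng = random.Random(seed)
    lag = int(round(T / dt))              # delay measured in grid steps
    n = int(round(horizon / dt))
    # store phi on [-T, 0] first, then extend the trajectory forward
    path = [phi(-T + k * dt) for k in range(lag + 1)]
    for k in range(n):
        t = k * dt
        x_delayed = path[k]               # x(t - T): lagged grid value
        dw = rng.gauss(0.0, math.sqrt(dt))
        path.append(path[-1] + a(x_delayed, u(t)) * dt + b(x_delayed, u(t)) * dw)
    return path

# toy run: linear drift in the delayed state, constant control and noise level
traj = simulate_sdde(a=lambda x, u: 0.5 * x - u,
                     b=lambda x, u: 0.2,
                     u=lambda t: 0.1,
                     phi=lambda t: 1.0)
```

Here `path[k]` holds the value at time $-T + k\,dt$, so when the current time is $k\,dt$ the lagged index is simply `k`.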
We note that in the case of stochastic differential delay equations, the solution is not a Markov process. However, it can be made Markov by considering the pair $(x_t, x(t))$, where $x_t := x(t+s)$, $s \in [-T, 0]$, is the path of the process on the interval $[t-T, t]$ (see details in [17]). The pair is a strong Markov process to which one can apply the theoretical basis of the corresponding weak infinitesimal generator (see, for example, [18] for more details).
The Ramsey growth model, or the Ramsey–Cass–Koopmans model, is a neoclassical model of economic growth based primarily on the work of Frank P. Ramsey [19], with significant extensions by David Cass [20] and Tjalling Koopmans [21]. This model remains very popular nowadays (see, for example, [22,23,24,25,26,27,28]).
The Ramsey diffusion model was described in [19] by the equation (see also [16,29])
$$dK(t) = [A K(t-1) - u(K(t))\,C(t)]\,dt + \sigma(K(t-1))\,dw(t),$$
where $K$ is the capital, $C$ is the production rate, $u$ is a control process, $A$ is a positive constant, and $\sigma$ is the diffusion coefficient of the SDDE. The "initial capital"
$$K(t) = \phi(t), \quad t \in [-1, 0],$$
is a continuous, bounded, positive function. For this stochastic economic model, the optimal control was found to be $u_{\min} = K(0) \cdot C(0)$.
Through time rescaling, the delay T was normalized to T = 1 , which will be our assumption in the theoretical considerations that follow. The obtained results are valid, however, for the general delay T > 0 .
In this paper, we consider SDDEs with semi-Markov switching:
$$x(t) = x(0) + \int_0^t a(x(s-1), u(s), r(s))\,ds + \int_0^t b(x(s-1), u(s), r(s))\,dw(s),$$
where $r(t)$ is a semi-Markov process (SMP) with a state space $X = \{1, 2, \ldots, N\}$ [30,31]. See the next section for the definition of an SMP.
We note that in the case of stochastic differential delay equations, the solution is not a Markov process. However, it can be made Markov by considering the vector $(x_t, x(t), r(t), \gamma(t))$, where $x_t := x(t+s)$, $s \in [-1, 0]$, is the path of the process on the interval $[t-1, t]$ (see details in [17]). We mention that the process $(r(t), \gamma(t))$ is a Markov process (see the next section). The vector is a strong Markov process to which one can apply the theoretical basis of the corresponding weak infinitesimal generator (see, for example, [18] for more details).
Due to the complicated expression associated with the generator of the vector ( x t , x ( t ) , r ( t ) , γ ( t ) ) and generator Q of the SMP r ( t ) , it is not possible to solve the optimal control problem exactly. Thus, we consider our model for x ( t ) in the series scheme
$$x^\epsilon(t) = x^\epsilon(0) + \int_0^t a(x^\epsilon(s-1), u(s), r(s/\epsilon))\,ds + \int_0^t b(x^\epsilon(s-1), u(s), r(s/\epsilon))\,dw(s).$$
Therefore, to solve the stochastic optimal control problem, we take $x^\epsilon(t)$ in a series scheme, where the semi-Markov process $r(t/\epsilon)$ is considered in the long run (i.e., when $\epsilon \to 0$), and instead of the time $t$, we have $t/\epsilon$.
Using the averaging principle for SDDEs in a series scheme under some conditions (A.1–A.4 in this paper), we find that $x^\epsilon(t) \to \hat{x}(t)$ weakly as $\epsilon \to 0$, where $\hat{x}(t)$ satisfies the following limiting controlled, averaged SDDE:
$$\hat{x}(t) = x(0) + \int_0^t \hat{a}(\hat{x}(s-1), u(s))\,ds + \int_0^t \hat{b}(\hat{x}(s-1), u(s))\,dw(s),$$
where
$$\hat{a}(x, u) := \frac{1}{m}\left[\int_X a(x, u, r)\,m(r)\,\pi(dr)\right], \qquad \hat{b}(x, u) := \frac{1}{m}\left[\int_X b(x, u, r)\,m(r)\,\pi(dr)\right].$$
Here, $m(r) := \int_0^{+\infty} t\,G_r(dt)$, $m := \int_X m(r)\,\pi(dr)$, and $\pi(dr)$ represents the stationary probabilities for the embedded Markov chain $r_n$ (see Section 2).
An application in this paper is given to a stochastic model in economics, specifically a Ramsey model [19,29], which takes into account the delay and semi-Markov randomness in the production cycle.
Thus, the Ramsey model with semi-Markov switching in this paper is
$$dK(t) = [A(r(t))\,K(t-1) - u(K(t))\,C(t)]\,dt + \sigma(K(t-1), r(t))\,dw(t),$$
where r ( t ) is a semi-Markov process.
The idea behind the semi-Markov switching in this Ramsey diffusion model is that the economy has $N$ different states (e.g., if $N = 2$, they could be a "bad" state and a "good" state), and in each state, the sojourn time has not an exponential but an arbitrary distribution (Gamma, Beta, Weibull, etc.). In this paper, we consider a Weibull distribution.
We note that the averaged Ramsey diffusion model has the following form:
$$d\hat{K}(t) = [\hat{A}\,\hat{K}(t-T) - u(\hat{K}(t))\,C(t)]\,dt + \hat{\sigma}(\hat{K}(t-T))\,dw(t),$$
where $\hat{A} := \frac{1}{m}\left[\int_X \pi(dr)\,m(r)\,A(r)\right]$, $m(r) := \int_0^{+\infty} t\,G_r(dt)$, $m := \int_X m(r)\,\pi(dr)$, and $\hat{\sigma}(K(t-T)) := \frac{1}{m}\left[\int_X \pi(dr)\,m(r)\,\sigma(K(t-T), r)\right]$.
It should be mentioned that reducing the original SDDE with semi-Markov switching to the averaged SDDE, and the original Ramsey model with semi-Markov switching to the averaged Ramsey diffusion model, is not a limitation of either averaged model, because all of the semi-Markov parameters and features are incorporated into the averaged parameters.
This paper is organized as follows. Section 2 introduces the semi-Markov process and its properties. Section 3 is devoted to controlled stochastic differential delay equations with semi-Markov switching and a controlled, averaged SDDE. In Section 4, we consider a solution to the stochastic optimal control problem for the controlled, averaged SDDE, which is based on the Hamilton–Jacobi–Bellman equation derived from the Dynkin formula and the solution to the Dirichlet–Poisson problem for the controlled, averaged SDDE. The economic Ramsey diffusion model with semi-Markov switching and the averaged Ramsey model are investigated in Section 5. We find the optimal control for this model, and we present a numerical example for a two-state semi-Markov process with a Weibull distribution.

2. Semi-Markov Process

Let $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \in \mathbb{R}_+}, P)$ be a filtered probability space with a right-continuous filtration $(\mathcal{F}_t)_{t \in \mathbb{R}_+}$ and a probability measure $P$.
Let $(X, \mathcal{X})$ be a measurable space and
$$Q_{SM}(r, B, t) := P(r, B)\,G_r(t), \quad r \in X,\ B \in \mathcal{X},\ t \in \mathbb{R}_+,$$
be a semi-Markov kernel. Let $(r_n, \tau_n;\ n \in \mathbb{N})$ be an $(X \times \mathbb{R}_+, \mathcal{X} \otimes \mathcal{B}_+)$-valued Markov renewal process with $Q_{SM}$ as the associated kernel; that is, we have
$$P(r_{n+1} \in B,\ \tau_{n+1} - \tau_n \le t \mid \mathcal{F}_n) = Q_{SM}(r_n, B, t).$$
Let us then define the process
$$\nu_t := \sup\{n \in \mathbb{N} : \tau_n \le t\},$$
which gives the number of jumps of the Markov renewal process in the time interval $(0, t]$, and
$$\theta_n := \tau_n - \tau_{n-1},$$
which gives the sojourn time of the Markov renewal process in the $n$th visited state. The semi-Markov process associated with the Markov renewal process $(r_n, \tau_n)_{n \in \mathbb{N}}$ is defined by [31]
$$r(t) := r_{\nu(t)}, \quad t \in \mathbb{R}_+.$$
Associated with the semi-Markov process, it is possible to define some auxiliary processes. We are interested in the backward recurrence time (or lifetime) process defined by
$$\gamma(t) := t - \tau_{\nu(t)}, \quad t \in \mathbb{R}_+.$$
The next well-known result characterizes the backward recurrence time process [31]. The backward recurrence time $(\gamma(t))_t$ in Equation (3) is a Markov process with a generator
$$Q_\gamma f(t) = f'(t) + \lambda(t)[f(0) - f(t)],$$
where $\lambda(t) = g_r(t)/\bar{G}_r(t)$, $\bar{G}_r(t) = 1 - G_r(t)$, $g_r(t) = G_r'(t)$, and $\mathrm{Domain}(Q_\gamma) = C^1(\mathbb{R}_+)$.
As is well known, semi-Markov processes enjoy the memoryless property only at transition times, and thus $(r(t))_{t \in \mathbb{R}_+}$ is not a Markov process. However, if we consider the joint process $(r(t), \gamma(t))_{t \in \mathbb{R}_+}$, then we record at any instant the time already spent by the semi-Markov process in the present state, which results in the fact that $(r(t), \gamma(t))_{t \in \mathbb{R}_+}$ is a Markov process with a generator (see [31])
$$Qf(r, t) = \frac{df}{dt}(r, t) + \frac{g_r(t)}{\bar{G}_r(t)} \int_X P(r, dy)\,[f(y, 0) - f(r, t)],$$
where $G_r(t)$ is defined in Equation (1) and $g_r(t) = G_r'(t)$.
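The Markov renewal description translates directly into a simulation: draw the sojourn time from $G_i$ and the next state from $P(i, \cdot)$. The sketch below is illustrative; the two-state chain and Weibull sojourn times are hypothetical choices (they anticipate the numerical example of Section 5.2), not part of the general theory.

```python
import random

def simulate_semi_markov(P, sojourn, r0=0, horizon=10.0, seed=1):
    """Simulate a Markov renewal process: returns jump epochs tau_n and states r_n.
    P[i] is the row of transition probabilities out of state i;
    sojourn(i) draws a sojourn time theta_n from G_i."""
    rng = random.Random(seed)
    taus, states = [0.0], [r0]
    while taus[-1] < horizon:
        i = states[-1]
        theta = sojourn(i)                                  # time spent in state i
        nxt = rng.choices(range(len(P[i])), weights=P[i])[0]
        taus.append(taus[-1] + theta)
        states.append(nxt)
    return taus, states

def evaluate(taus, states, t):
    """r(t) = r_{nu(t)} and backward recurrence time gamma(t) = t - tau_{nu(t)}."""
    n = max(k for k, tau in enumerate(taus) if tau <= t)    # nu(t)
    return states[n], t - taus[n]

P = [[0.7, 0.3], [0.4, 0.6]]
rng = random.Random(2)
# Weibull sojourn times with shape 2 and scales 1/8, 1/10 (illustrative)
taus, states = simulate_semi_markov(
    P, sojourn=lambda i: rng.weibullvariate([1 / 8, 1 / 10][i], 2.0))
r_t, gamma_t = evaluate(taus, states, 5.0)
```

Evaluating $(r(t), \gamma(t))$ at any $t$ only requires locating the last jump epoch $\tau_{\nu(t)}$ before $t$, exactly as in Equations (2) and (3).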

3. Controlled Stochastic Differential Delay Equations (SDDEs) with Semi-Markov Switching and Averaged Controlled SDDEs

3.1. Controlled SDDEs with Semi-Markov Switching

Consider the following SDDEs with semi-Markov switching:
$$x(t) = x(0) + \int_0^t a(x(s-1), u(s), r(s))\,ds + \int_0^t b(x(s-1), u(s), r(s))\,dw(s),$$
where $r(t)$ is a semi-Markov process (SMP) given by Equation (2) with the state space $X = \{1, 2, \ldots, N\}$ [31]; see Section 2 for the definition of an SMP.
It is important to mention again that in the case of stochastic differential delay equations, the solution is not a Markov process, as was also indicated in the Introduction. However, it can be made Markov by considering the vector $(x_t, x(t), r(t), \gamma(t))$, where $x_t := x(t+s)$, $s \in [-1, 0]$, is the path of the process on the interval $[t-1, t]$ (see details in [17]). We mention that the process $(r(t), \gamma(t))$ (see Equations (2) and (3)) is a Markov process. The vector is a strong Markov process to which one can apply the theoretical basis of the corresponding weak infinitesimal generator (see, for example, [18] for more details).

3.2. Assumptions and Existence of Solutions

Below, we recall some basic notions and facts from [17,32,33] necessary for the subsequent exposition in this paper. Let $x(t)$, $t \in [-1, \infty)$, be a stochastic process, and let $\mathcal{F}_u^v(x)$ be the minimal $\sigma$-algebra with respect to which $x(t)$ is measurable for every $t \in [u, v]$. Let $w(t)$, $t \in [0, \infty)$, be a Wiener process with $w(0) = 0$, and let $\mathcal{F}_u^v(dw)$ be the minimal Borel $\sigma$-algebra such that $w(t) - w(s)$ is measurable for all $t, s$ with $u \le s \le t \le v$. Also, let $\mathcal{F}_u^v(dr)$ be the minimal Borel $\sigma$-algebra such that $r(t) - r(s)$ is measurable for all $t, s$ with $u \le s \le t \le v$, where $r(t)$ is a finite-state semi-Markov process with a state space $X = \{1, 2, \ldots, N\}$ [31].
Finally, let $u(t) \in U$, $t \in [-1, \infty)$, be a stochastic process whose values can be chosen from the given Borel set $U$ and such that $u(t)$ is $\mathcal{F}_u^v(u)$-adapted for all $t \in [u, v]$.
Let $D$ denote the Banach space of all càdlàg functions defined on the interval $[-1, 0]$ and equipped with the Skorokhod topology (see [34]). We note that the initial process is $x(t) = \phi(t)$, $t \in [-1, 0]$, where $\phi \in D$ is a given càdlàg function. Therefore, we assume that the processes $\phi(t)$, $t \in [-1, 0]$, $w(t)$, and $u(t)$, $t \ge 0$, are defined on the probability space $(\Omega, \mathcal{F}, \mathcal{F}_t, P)$ with $\mathcal{F}_t := \mathcal{F}_{-1}^t(x) \vee \mathcal{F}_0^t(dw) \vee \mathcal{F}_0^t(u) \vee \mathcal{F}_0^t(dr)$.
Let the following conditions be satisfied for Equation (5):
A.1  $a(\phi, u, r)$ and $b(\phi, u, r)$ are continuous, real-valued functionals defined on $D \times U \times X$;
A.2  $\phi \in D$ is a càdlàg function with probability one on the interval $[-1, 0]$, independent of $w(s)$, $s \ge 0$, and $E|\phi(t)|^4 < \infty$. Here, $|\cdot|$ is the Skorokhod norm [34];
A.3  for all $\phi, \psi \in D$:
$$|a(\phi, u, r) - a(\psi, u, r)| + |b(\phi, u, r) - b(\psi, u, r)| \le K \int_{-1}^0 |\phi(\theta) - \psi(\theta)|\,d\theta,$$
and $|a(\eta, u, r)| + |b(\eta, u, r)| \le M$ for some constants $M, K > 0$ and all $\eta \in D$, $u \in U$, $r = r(0) \in X$.
Under assumptions A.1–A.3, the solution to the initial value problem for Equation (5) exists and is unique, and the vector $(x_t, x(t), r(t), \gamma(t))$ is a strong Markov process [17,32,33]. The solution $x(t)$ can be viewed at time $t \ge 0$ as an element $x_t$ of the space $D$ or as a point in $\mathbb{R}$.
Equation (5) is expressed in the integral form as follows:
$$x(t) = x(0) + \int_0^t a(x(s-1), u(s), r(s))\,ds + \int_0^t b(x(s-1), u(s), r(s))\,dw(s).$$
We note that the generator for the vector $(x_t, x(t), r(t), \gamma(t))$ is
$$A_{G_r} F(x(-1), x(0), r) := F(x(0), x(0), r) - F(x(-1), x(0), r) + \int_{-1}^0 L^u F(x(s), x(0), r)\,ds + QF(x(0), x(0), r),$$
where the operator $Q$ is defined in Equation (4) and acts only on the variable $r$. The operator $L^u$ is defined by
$$L^u F(x(-1), x(0), r) := F_{x(0)}(x(-1), x(0), r)\,a(x(-1), u, r) + \frac{1}{2}\,b^2(x(-1), u, r)\,F_{x(0)x(0)}(x(-1), x(0), r)$$
and acts on $F$ as a function of $x(0)$ only, while $u = u(0)$ (see [33]).

3.3. Controlled Averaged SDDE

Due to the complicated expression associated with the generator $A_{G_r}$ in Equation (7) above and the generator $Q$ in Equation (4) of the SMP $r(t)$, we consider our model for $x(t)$ in Equation (5) in the series scheme
$$x^\epsilon(t) = x^\epsilon(0) + \int_0^t a(x^\epsilon(s-1), u(s), r(s/\epsilon))\,ds + \int_0^t b(x^\epsilon(s-1), u(s), r(s/\epsilon))\,dw(s).$$
Therefore, to solve the stochastic optimal control problem, we take $x^\epsilon(t)$ in a series scheme, where the semi-Markov process $r(t/\epsilon)$ is considered in the long run; in other words, instead of the time $t$, we have $t/\epsilon$.
We need one more condition for this reason:
A.4. The embedded Markov chain $r_n$ of the semi-Markov process $r(t)$ is ergodic, with stationary probabilities $\pi(dr)$.
Using the averaging principle for SDDEs in a series scheme under conditions A.1–A.4, we can find that $x^\epsilon(t) \to \hat{x}(t)$ weakly as $\epsilon \to 0$ (see [13,31,35]), where $\hat{x}(t)$ satisfies the following limiting controlled, averaged SDDE for the SDDE in Equation (9):
$$\hat{x}(t) = x(0) + \int_0^t \hat{a}(\hat{x}(s-1), u(s))\,ds + \int_0^t \hat{b}(\hat{x}(s-1), u(s))\,dw(s),$$
where
$$\hat{a}(x, u) := \frac{1}{m}\left[\int_X a(x, u, r)\,m(r)\,\pi(dr)\right], \qquad \hat{b}(x, u) := \frac{1}{m}\left[\int_X b(x, u, r)\,m(r)\,\pi(dr)\right].$$
Here, $m(r) := \int_0^{+\infty} t\,G_r(dt)$ and $m := \int_X m(r)\,\pi(dr)$.
Remark 1. 
In the case of a finite or an infinite but countable state space $X$, the integrals above become sums or series, respectively. We present the above formulas for the case of a finite-state semi-Markov process (i.e., $X = \{0, 1, 2, \ldots, N\}$):
$$\hat{\sigma}^2 := \frac{1}{m} \sum_{i=0}^N \sigma^2(i)\,m(i)\,\pi(i), \qquad m := \sum_{i=0}^N m(i)\,\pi(i), \qquad m(i) := \int_0^{+\infty} t\,G_i(dt).$$
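In the finite-state case, each averaged quantity is just a weighted sum, which can be computed in one line. A minimal sketch, in which the $m(i)$, $\pi(i)$, and $\sigma^2(i)$ values are illustrative placeholders:

```python
def averaged(values, m_i, pi):
    """hat{v} = (1/m) * sum_i values[i] * m(i) * pi(i), with m = sum_i m(i) * pi(i)."""
    m = sum(mi * p for mi, p in zip(m_i, pi))
    return sum(v * mi * p for v, mi, p in zip(values, m_i, pi)) / m

m_i = [0.111, 0.089]     # mean sojourn times m(i) = int_0^inf t G_i(dt)
pi = [0.571, 0.429]      # stationary probabilities of the embedded chain
sigma2 = [0.04, 0.09]    # illustrative per-state variances sigma^2(i)
sigma2_hat = averaged(sigma2, m_i, pi)
```

Averaging a state-independent quantity returns it unchanged, which is a quick sanity check on the weights.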
In this way, the generator for the process $\hat{x}(t)$ in Equation (10) takes the following form:
$$\hat{L}^u F(x(-1), x(0)) := \hat{a}(x(-1), u)\,F_{x(0)}(x(-1), x(0)) + \frac{1}{2}\,\hat{b}^2(x(-1), u)\,F_{x(0)x(0)}(x(-1), x(0)),$$
where $\hat{a}$ and $\hat{b}$ are defined in Equation (11).
We note that the pair ( x ^ t , x ^ ( t ) ) is now a strong Markov process with a generator L ^ u in Equation (12).

4. Solution to the Stochastic Optimal Control Problem for the Averaged SDDEs

The main idea of solving the stochastic optimal control problem for the averaged SDDEs with semi-Markov switching is to derive the Hamilton–Jacobi–Bellman (HJB) equation and the inverse HJB equation by applying the Dynkin formula and the solution to the Dirichlet–Poisson problem for the averaged SDDE. Since we have reduced the optimal control problem with semi-Markov switching in Equations (6) and (9) to an optimal control problem for a diffusion, and hence to the Markov case, we can apply the results from [16]. Below, we list all of the results which are necessary for the solution of the stochastic optimal control problem in our setting.

4.1. Dynkin Formula for the SDDE with Semi-Markov Switching

Let $\tau$ be a stopping time for the strong Markov process $(\hat{x}_t, \hat{x}(t))$ such that $E_{x,x(0)}|\tau| < \infty$. Then, we have the following Dynkin formula (see [16]):
$$E_{x,x(0)} \hat{F}(x_\tau, x(\tau)) = \hat{F}(x, x(0)) + E_{x,x(0)}\left[\int_0^\tau \hat{F}(x(s), x(s))\,ds - \int_0^\tau \hat{F}(x(s-1), x(s))\,ds\right] + E_{x,x(0)} \int_0^\tau \int_{-1}^0 \hat{L}^u F(\phi(\theta), x(s))\,d\theta\,ds,$$
where $\hat{L}^u$ is defined in Equation (12) and $\hat{F} := \frac{1}{m}\left[\int_X F(x(-1), x(0), r)\,m(r)\,\pi(dr)\right]$.

4.2. Solution to the Dirichlet–Poisson Problem for the SDDE with Semi-Markov Switching

Let $\psi(x, x(0), r) \in C(\partial(H \times G \times X))$ be bounded, and let $F(x, x(0), u, r) \in C(D \times C \times U \times X)$ be such that
$$E_{x,x(0)} \int_{-1}^0 \int_0^{\tau_{H \times G}} |F(\phi(\theta), x(s), u(s), r(s))|\,ds\,d\theta < \infty \quad \text{for all } (x, x(0), r) \in H \times G \times X,$$
where $\tau_{H \times G \times X} = \inf\{t : (x_t, x(t), r(t)) \notin H \times G \times X\}$, $C$ is the set of continuous and bounded functions on $D \times C \times U \times X$, and $\partial(H \times G \times X)$ is a regular boundary of $H \times G \times X$, $H \subset C$, $G \subset D$ (see, for example, [18]).
Define
$$J(x, x(0), u) := E_{x,x(0)} \int_{-1}^0 \int_0^{\tau_{H \times G}} \hat{F}(\phi(\theta), x(s), u(s))\,ds\,d\theta + E_{x,x(0)} \hat{\psi}(x_{\tau_G}, x(\tau_G)), \quad (x, x(0), r) \in H \times G \times X.$$
Then (see [16]), it follows that
$$\hat{L}^u J(x, x(0), u) = -\int_{-1}^0 \hat{F}(\phi(\theta), x, u)\,d\theta \quad \text{in } H \times G,\ \forall u \in U,$$
and
$$\lim_{t \to \tau_G} J(x_t, x(t), u) = \hat{\psi}(x_{\tau_G}, x(\tau_G)), \quad (x, x(0)) \in H \times G,$$
where $\hat{L}^u$ is the generator defined above in Equation (12).

4.3. Hamilton–Jacobi–Bellman (HJB) Equation for the SDDE with Semi-Markov Switching

We assume that the cost function is given in the form
$$J(x, x(0), u) := E_{x,x(0)} \int_{-1}^0 \int_0^{\tau_{H \times G \times X}} \hat{F}(\phi(\theta), x(s), u(s))\,ds\,d\theta + \psi(x_{\tau_{H \times G \times X}}, x(\tau_{H \times G \times X})),$$
where $\psi$ is a bounded real function, $F$ is bounded, real, and continuous, and $\tau_{H \times G \times X}$ is the exit time of the process $(x_t, x(t))$ defined in Equation (7) from the fixed open set $H \times G \times X \subset D \times C \times X$. In particular, $\tau_{H \times G}$ can be a fixed time $t_0$. We assume that $E_{x,x(0),r}|\tau_{H \times G \times X}| < \infty$ for all $(x, x(0), r) \in H \times G \times X$. The function $\hat{F}$ is defined in Equation (15).
The problem is as follows. For each $(x, x(0), r) \in H \times G \times X$, find the number $\hat{J}^*(x, x(0))$ and a control $u^* = u^*(x, x(0), \omega)$ such that
$$\hat{J}^*(x, x(0)) := \inf_u \{\hat{J}(x, x(0), u)\} = \hat{J}(x, x(0), u^*),$$
where the infimum is taken over all $\mathcal{F}_t$-adapted processes $u(t) \in U$. Such a control $u^*$, if it exists, is called an optimal control, and $\hat{J}^*(x, x(0))$ is called the optimal performance.
We consider only Markov controls $u(t)$. For every $\nu \in U$, we define the following operator:
$$(\hat{A}^\nu J)(x, x(0)) = \hat{F}(x(0), x(0), \nu(0)) - \hat{F}(x(-1), x(0), \nu(0)) + \int_{-1}^0 \hat{L}^\nu F(\phi(\theta), x(0), \nu(0))\,d\theta,$$
where the operator $\hat{L}^\nu$ is given by Equation (12) and
$$\hat{J}(x, x(0)) := \int_{-1}^0 \hat{F}(\phi(\theta), x(0), \nu(0))\,d\theta.$$
Theorem 1 (HJB equation for SDDEs with semi-Markov switchings). Define
$$\hat{J}^*(x, x(0)) = \inf\{\hat{J}(x, x(0), u) : u \text{ is a Markov control}\}.$$
Suppose that $\hat{J}^* \in C^2(H \times G \times X)$ and the optimal control $u^*$ exists. Then, it follows that
$$\inf_{\nu \in U} \left\{ \int_{-1}^0 \hat{F}(\phi(\theta), x, \nu)\,d\theta + (\hat{L}^\nu \hat{J}^*)(x, x(0)) \right\} = 0, \quad (x, x(0)) \in H \times G,$$
and
$$\hat{J}^*(x, x(0)) = \hat{\psi}(x, x(0)), \quad (x, x(0)) \in \partial(H \times G),$$
where the functions $F$ and $\psi$ are given by Equation (14), the operator $\hat{L}^\nu$ is given by Equation (12), and $\partial(H \times G)$ is the boundary of the set $H \times G$ for $(x_t, x(t))$.
The infimum in Equation (15) is achieved when $\nu = u^*$, where $u^*$ is an optimal control. In other words,
$$\int_{-1}^0 \hat{F}(\phi(\theta), x, u^*)\,d\theta + (\hat{L}^{u^*} \hat{J}^*)(x, x(0)) = 0, \quad (x, x(0)) \in H \times G,$$
which is Equation (15).
Proof. 
Follow the steps of Theorem 1 in [16], replacing the operator $A^\nu$ with $\hat{L}^\nu$ from Equation (12). □
Theorem 2 (Converse of the HJB equation for SDDEs with semi-Markov switchings). Let $g$ be a bounded function in $C^2(H \times G) \cap C(\partial(H \times G))$. Suppose that, for all $u \in U$, the inequality
$$\int_{-1}^0 \hat{F}(x, x(0), \phi(\theta), u)\,d\theta + (\hat{L}^u g)(x, x(0)) \ge 0, \quad (x, x(0)) \in H \times G,$$
and the boundary condition
$$g(x, x(0)) = \hat{\psi}(x, x(0)), \quad (x, x(0)) \in \partial(H \times G),$$
are satisfied. Then, $g(x, x(0)) \le \hat{J}(x, x(0), u)$ for all Markov controls $u \in U$ and for all $(x, x(0)) \in H \times G$.
Moreover, if for every $(x, x(0)) \in H \times G$ there exists $u_0$ such that
$$\int_{-1}^0 \hat{F}(x, x(0), \phi(\theta), u_0)\,d\theta + (\hat{L}^{u_0} g)(x, x(0)) = 0,$$
then $u_0$ is a Markov control with $g(x, x(0)) = \hat{J}(x, x(0), u_0) = \hat{J}^*(x, x(0))$, and therefore $u_0$ is an optimal control.
Proof. 
Follow the steps of Theorem 2 in [16], replacing $A^{\nu,r}$ with $\hat{L}^u$ from Equation (12). □

5. Ramsey Diffusion Model in Economics with Semi-Markov Switching

We recall that the Ramsey diffusion model with semi-Markov switching is
$$dK(t) = [A(r(t))\,K(t-T) - u(K(t))\,C(t)]\,dt + \sigma(K(t-T), r(t))\,dw(t),$$
where $r(t)$ is a semi-Markov process. We suppose that the functions $A(r) > 0$ and $\sigma(\cdot, r) > 0$ are bounded and continuous in $r$ on $X$. For this model, the initial condition is
$$K(t) = \phi(t), \quad t \in [-1, 0].$$
We note that the averaged Ramsey diffusion model has the following form, which follows from our previous results (see Section 3.3, Equations (9)–(12)):
$$d\hat{K}(t) = [\hat{A}\,\hat{K}(t-T) - u(\hat{K}(t))\,C(t)]\,dt + \hat{\sigma}(\hat{K}(t-T))\,dw(t),$$
where $\hat{A} := \frac{1}{m}\left[\int_X \pi(dr)\,m(r)\,A(r)\right]$, $\hat{\sigma}(K(t-T)) := \frac{1}{m}\left[\int_X \pi(dr)\,m(r)\,\sigma(K(t-T), r)\right]$, $m(r) := \int_0^{+\infty} t\,G_r(dt)$, $m := \int_X m(r)\,\pi(dr)$, and $\pi(dr)$ represents the stationary probabilities for the embedded Markov chain $r_n$.
In the next section, we present the solution for the stochastic optimal control problem for this averaged diffusion Ramsey model and a numerical example.

5.1. Optimal Control for Ramsey Diffusion Model in Economics with Semi-Markov Switching

Here, we consider the averaged Ramsey diffusion model in Equation (21) with the boundary condition in Equation (20).
Let us choose the following cost function, with the first term modified to $A(r)K^2(0)/2$:
$$J(K, u, r) = \frac{A(r)K^2(0)}{2} + \int_{-1}^0 \phi^2(\theta)\,d\theta + \frac{u^2(0)}{2}.$$
The operator $A^{u,r} J$ has the following form, taking into account Equation (22):
$$\begin{aligned} A^{u,r} J &= \frac{A(r)K^2(0)}{2} + \phi^2(0) + \frac{u^2(0)}{2} - \left[\frac{A(r)K^2(0)}{2} + \phi^2(-1) + \frac{u^2(0)}{2}\right] \\ &\quad + K(0)\left(A(r)\,K(0) - u(0)\,C(0)\right) + \frac{1}{2}\sigma^2(K(0)) \\ &\quad + \frac{1}{2}\int_{-1}^{+\infty} \left[(K(0) + yK(0))^2 - K^2(0) - 2K(0)\,yK(0)\right] \Pi(dy) \\ &= \frac{A(r)K^2(0)}{2} + \phi^2(0) + \frac{u^2(0)}{2} - \left[\frac{A(r)K^2(0)}{2} + \phi^2(-1) + \frac{u^2(0)}{2}\right] \\ &\quad + K(0)\left(A(r)\,K(0) - u(0)\,C(0)\right) + \frac{1}{2}\sigma^2(K(0)) + \frac{1}{2} K^2(0) \int_{-1}^{+\infty} y^2\,\Pi(dy), \end{aligned}$$
since
$$\begin{aligned} F(K(0), K(0), u(0), r) &= \frac{A(r)K^2(0)}{2} + \phi^2(0) + \frac{u^2(0)}{2}, \\ F(K(-1), K(0), u(0), r) &= \frac{A(r)K^2(0)}{2} + \phi^2(-1) + \frac{u^2(0)}{2}, \\ L^u J(K, u, r) &= K(0)\left(A(r)\,K(0) - u(0)\,C(0)\right) + \frac{1}{2}\sigma^2(K(0)). \end{aligned}$$
From Theorem 1, Equation (16), we obtain the following HJB equation:
$$\inf_u \left\{ \frac{\hat{A}K^2(0)}{2} + \int_{-1}^0 \phi^2(\theta)\,d\theta + \frac{u^2(0)}{2} + \phi^2(0) - \phi^2(-1) + \hat{A}\,K^2(0) - u(0)\,K(0)\,C(0) + \frac{1}{2}\hat{\sigma}^2(K(0)) + \frac{1}{2} K^2(0) \int_{-1}^{+\infty} y^2\,\Pi(dy) \right\} = 0,$$
or equivalently
$$\inf_u \left\{ u^2(0) - 2K(0)\,C(0)\,u(0) + \left(2\phi^2(0) - 2\phi^2(-1) + 2\int_{-1}^0 \phi^2(\theta)\,d\theta + K^2(0)(1 + 2\hat{A}) + \hat{\sigma}^2(K(0))\right) + K^2(0)\int_{-1}^{+\infty} y^2\,\Pi(dy) \right\} = 0,$$
where $\hat{A} := \frac{1}{m}\left[\int_X \pi(dr)\,m(r)\,A(r)\right]$, $m(r) := \int_0^{+\infty} t\,G_r(dt)$, $m := \int_X m(r)\,\pi(dr)$, and $\pi(dr)$ represents the stationary probabilities for the embedded Markov chain $r_n$ (see also Equation (21)). Let
$$4K^2(0)\,C^2(0) \ge 4\left(2\phi^2(0) - 2\phi^2(-1) + 2\int_{-1}^0 \phi^2(\theta)\,d\theta + K^2(0)(1 + 2\hat{A}) + \hat{\sigma}^2(K(0)) + K^2(0)\int_{-1}^{+\infty} y^2\,\Pi(dy)\right),$$
or
$$K^2(0)\left(C^2(0) - 3 - 2\hat{A}\right) \ge 2\int_{-1}^0 \phi^2(\theta)\,d\theta - 2\phi^2(-1) + \hat{\sigma}^2(K(0)) + K^2(0)\int_{-1}^{+\infty} y^2\,\Pi(dy),$$
since $K(0) = \phi(0)$. Hence, the infimum is achieved when
$$u(0) = \frac{2K(0) \cdot C(0)}{2} = K(0) \cdot C(0).$$
Therefore, $u_{\min} = K(0) \cdot C(0)$, and
$$J(K, u_{\min}) = \frac{\hat{A}K^2(0)}{2} + \frac{K^2(0)\,C^2(0)}{2} + \int_{-1}^0 \phi^2(\theta)\,d\theta = \frac{K^2(0)}{2}\left(\hat{A} + C^2(0)\right) + \int_{-1}^0 \phi^2(\theta)\,d\theta.$$

5.2. Numerical Example for Ramsey Diffusion Model in Economics with Semi-Markov Switching

We will now use the expressions defined above in the calculations below. Suppose that $r_n$ is a Markov chain with two states $\{0, 1\}$ and a transition matrix
$$P = \begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix}.$$
Then, the stationary probabilities are $\pi = (\pi(0), \pi(1)) = (0.571, 0.429)$, which follow from the solution of the equation $\pi P = \pi$.
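For a two-state chain, the system $\pi P = \pi$, $\pi(0) + \pi(1) = 1$ has the closed-form solution $\pi(0) = p_{10}/(p_{01} + p_{10})$; a quick sketch of the check:

```python
def stationary_two_state(P):
    """Solve pi P = pi with pi(0) + pi(1) = 1 for a two-state transition matrix."""
    p01, p10 = P[0][1], P[1][0]
    pi0 = p10 / (p01 + p10)
    return [pi0, 1.0 - pi0]

P = [[0.7, 0.3], [0.4, 0.6]]
pi = stationary_two_state(P)    # [4/7, 3/7], approximately [0.571, 0.429]
# verify the fixed-point property pi P = pi
assert all(abs(sum(pi[i] * P[i][j] for i in range(2)) - pi[j]) < 1e-12
           for j in range(2))
```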
Thus, we consider the case of a two-state semi-Markov chain $r_n$ with an arbitrary distribution $G_i(x)$, $i = 0, 1$, for $\tau_n$. Let us take the Weibull distribution (see [33]) $G_i(x)$ for $\tau_n$, with the probability density function $f_i(x) := dG_i(x)/dx$ given by
$$f_i(x) = \begin{cases} \lambda(i)\,K(i)\,(\lambda(i)x)^{K(i)-1} \exp[-(\lambda(i)x)^{K(i)}], & x \ge 0, \\ 0, & x < 0, \end{cases}$$
where $i = 0, 1$. Recall that $K(i)$ is the shape parameter and $\lambda(i)$ is the scale parameter. We note that if we take $K(i) = 1$, $i = 0, 1$, then we have the exponential distribution for $G_i(t)$ and the Markov case for the process $r(t)$.
Suppose that $\lambda(0) = 8$ and $\lambda(1) = 10$. We recall that the mean value of a random variable with a Weibull density is $(1/\lambda(i))\,\Gamma(1 + 1/K(i))$, where $\Gamma(\cdot)$ stands for the Gamma function. Of course, we could take other non-exponential distributions, such as the Gamma or Beta distributions.
We consider the case $K(i) = 2$ ($K(i) > 1$), $i = 0, 1$; the case $K(i) = 1/2$ ($K(i) < 1$) can be considered in a similar way. We note that the case $K(i) = 1$, $i = 0, 1$, corresponds to the exponential distribution $G_i(x) = 1 - e^{-\lambda(i)x}$.
Thus, let us take $K(i) = 2$ ($K(i) > 1$), $i = 0, 1$.
We can now calculate the parameters $m(0)$ and $m(1)$ (see Equation (11)):
$$m(0) = (1/\lambda(0))\,\Gamma(1 + 1/K(0)) = (1/8)\,\Gamma(3/2) \approx 0.111, \qquad m(1) = (1/\lambda(1))\,\Gamma(1 + 1/K(1)) = (1/10)\,\Gamma(3/2) \approx 0.089.$$
Then (see Equation (11)), we have
$$m = \pi(0)\,m(0) + \pi(1)\,m(1) = 0.571 \times 0.111 + 0.429 \times 0.089 = 0.063381 + 0.038181 = 0.101562 \approx 0.102.$$
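These constants follow from the Weibull mean formula $(1/\lambda(i))\,\Gamma(1 + 1/K(i))$ and can be checked with the standard library. Using the exact stationary probabilities $4/7$ and $3/7$ gives $m \approx 0.1013$; the slightly larger value above comes from rounding $m(i)$ and $\pi(i)$ first.

```python
from math import gamma

lam = [8.0, 10.0]    # scale parameters lambda(i)
shape = 2.0          # common shape parameter K(i)
pi = [4 / 7, 3 / 7]  # stationary probabilities of the embedded chain

# m(i) = (1/lambda(i)) * Gamma(1 + 1/K(i))
m_i = [gamma(1.0 + 1.0 / shape) / l for l in lam]
m = sum(p * mi for p, mi in zip(pi, m_i))
```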
Let the initial function be $\phi(t) = e^{at}$ (see Equation (20)), where $a$ can be positive or negative. Then, the formula for $J(K, u_{\min})$ takes the following form (see Equation (23)):
$$J(K, u_{\min}) = \frac{K^2(0)}{2}\left(\hat{A} + C^2(0)\right) + \frac{1 - e^{-2a}}{2a}.$$
Now, suppose that C ( 0 ) = 1 , K ( 0 ) = 1 . We note that ϕ ( 0 ) = 1 = K ( 0 ) . We also take A ( 0 ) = 1 and A ( 1 ) = 2 .
Then, according to previous formulas, we have
$$\hat{A} = \frac{1}{m}\left[A(0)\,m(0)\,\pi(0) + A(1)\,m(1)\,\pi(1)\right] = \frac{11}{8} = 1.375,$$
since the common factor $\Gamma(3/2)$ in $m(0)$, $m(1)$, and $m$ cancels.
In addition, $J(K, u_{\min}) = \frac{K^2(0)}{2}(\hat{A} + C^2(0)) + \frac{1 - e^{-2a}}{2a} = \frac{1}{2}[1.375 + 1] + \frac{1 - e^{-2a}}{2a} = 1.1875 + \frac{1 - e^{-2a}}{2a}$.
If we take $a = 1$, then $J(K, u_{\min}) \approx 1.62$, and if we take $a = -1$, then $J(K, u_{\min}) \approx 4.38$.
Since $t \in [-1, 0]$ for the initial function $\phi(t) = e^{at}$, a positive value $a = 1$ corresponds to increasing initial capital (a "good" economy), and a negative value $a = -1$ corresponds to decaying initial capital (a "bad" economy); these are reflected by the cost values $J(K, u_{\min}) \approx 1.62$ and $J(K, u_{\min}) \approx 4.38$, respectively.
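The whole example fits in a few lines. Since $m(i) = \Gamma(3/2)/\lambda(i)$, the factor $\Gamma(3/2)$ cancels in $\hat{A}$, so $\hat{A} = 11/8 = 1.375$ exactly under the parameters above; the cost values then follow up to rounding.

```python
from math import exp, gamma

pi = [4 / 7, 3 / 7]            # stationary probabilities of the embedded chain
lam, shape = [8.0, 10.0], 2.0  # Weibull scale and shape parameters
A = [1.0, 2.0]                 # regime-dependent growth rates A(i)
K0, C0 = 1.0, 1.0              # K(0) and C(0)

m_i = [gamma(1.0 + 1.0 / shape) / l for l in lam]   # mean sojourn times m(i)
m = sum(p * mi for p, mi in zip(pi, m_i))
A_hat = sum(a * mi * p for a, mi, p in zip(A, m_i, pi)) / m   # averaged rate

def J(a):
    """Optimal cost J(K, u_min) = K(0)^2/2 * (A_hat + C(0)^2) + (1 - e^{-2a})/(2a)."""
    return K0 ** 2 / 2 * (A_hat + C0 ** 2) + (1.0 - exp(-2.0 * a)) / (2.0 * a)

j_pos, j_neg = J(1.0), J(-1.0)   # roughly 1.62 and 4.38
```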

Author Contributions

Conceptualization, M.S. and A.V.S.; Investigation, M.S.; Writing—original draft, M.S. and A.V.S.; Writing—review & editing, A.V.S.; Supervision, A.V.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The second author thanks the NSERC for their continuing support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nisio, M. Stochastic Control Theory; Springer: Berlin/Heidelberg, Germany, 2015. [Google Scholar]
  2. Fleming, W.; Rishel, R. Deterministic and Stochastic Optimal Control; Springer: Berlin/Heidelberg, Germany, 1975. [Google Scholar]
  3. Stengel, R. Stochastic Optimal Control: Theory and Applications; Wiley: Hoboken, NJ, USA, 1986. [Google Scholar]
  4. Fabbri, G.; Gozzi, F.; Swiech, A. Stochastic Optimal Control in Infinite Dimensions; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  5. Oksendal, B.; Sulem, A. Applied Stochastic Optimal Control of Jump Diffusions, 3rd ed.; Springer: Berlin/Heidelberg, Germany, 2019. [Google Scholar]
  6. Scheutzow, M. Stochastic Delay Equations; Lecture Notes: Berlin, Germany, 2018. [Google Scholar]
  7. Mohammed, S.; Scheutzow, M. Lyapunov exponent and statistical solutions for affine stochastic delay equations. Stochastics 1990, 29, 259–283. [Google Scholar]
  8. Mohammed, S.; Scheutzow, M. Lyapunov exponents of linear stochastic functional differential equations. Part II: Examples and Case Studies. Ann. Probab. 1997, 25, 1210–1240. [Google Scholar] [CrossRef]
  9. von Renesse, M.; Scheutzow, M. Existence and uniqueness of solutions of stochastic functional differential equations. Random Oper. Stoch. Equ. 2010, 18, 267–284. [Google Scholar] [CrossRef]
  10. Mohammed, S. Nonlinear flows of stochastic linear delay equations. Stochastics 1986, 17, 2007–2013. [Google Scholar] [CrossRef]
  11. Reis, M.; Riedle, M.; van Gaans, O. Delay differential equations driven by Lévy processes: Stationarity and Feller properties. Stoch. Process. Their Appl. 2006, 116, 1409–1432. [Google Scholar] [CrossRef]
  12. Ivanov, A.F.; Svishchuk, M.Y.; Swishchuk, A.V.; Trofimchuk, S.A. Optimal control of stochastic differential delay equations with Jumps and Markov Switching, and with an application in economics. In Proceedings of the ICIAM 2023 Congress, Tokyo, Japan, 20–25 August 2023. [Google Scholar]
  13. Swishchuk, A.V.; Kazmerchuk, Y.I. Stability of stochastic Ito equations with delay, Poisson jumps and Markov switchings with applications to finance. Theory Probab. Math. Stat. 2002, 64, 45, (translated by AMS, N64, 2002). [Google Scholar]
  14. Ivanov, A.F.; Kazmerchuk, Y.; Swishchuk, A.V. Theory, Stochastic Stability and Applications of Stochastic Delay Differential Equations: A Survey of Recent Results. Differ. Equ. Dyn. Syst. 2003, 11, 55–115. [Google Scholar]
  15. Hale, J.K. Theory of Functional Differential Equations. Appl. Math. Sci. 1977, 3, 365. [Google Scholar]
  16. Ivanov, A.F.; Swishchuk, A.V. Optimal control of stochastic differential delay equations with application in economics. Int. J. Qual. Theory Differ. Equ. Appl. 2008, 2, 201–213. [Google Scholar]
  17. Ito, K.; Nisio, M. On stationary solutions of a stochastic differential equation. J. Math. Kyoto Univ. 1964, 4, 1–70. [Google Scholar]
  18. Dynkin, E.B. Markov Processes; Fizmatgiz: Moscow, Russia, 1963; English translation: Academic Press: New York, NY, USA, 1965; Volumes 1 and 2. [Google Scholar]
  19. Ramsey, F.P. A mathematical theory of savings. Econ. J. 1928, 38, 543–549. [Google Scholar] [CrossRef]
  20. Cass, D. Optimum Growth in an Aggregative Model of Capital Accumulation. Rev. Econ. Stud. 1965, 32, 233–240. [Google Scholar] [CrossRef]
  21. Koopmans, T.C. On the Concept of Optimal Economic Growth. The Economic Approach to Development Planning; Rand McNally: Chicago, IL, USA, 1965; pp. 225–287. [Google Scholar]
  22. Acemoglu, D. The Neoclassical Growth Model. Introduction to Modern Economic Growth; Princeton University Press: Princeton, NJ, USA, 2009; pp. 287–326. ISBN 978-0-691-13292-1. [Google Scholar]
  23. Barro, R.J.; Sala-i-Martin, X. Growth Models with Consumer Optimization. Economic Growth, 2nd ed.; MIT Press: Cambridge, MA, USA, 2004; pp. 85–142. ISBN 978-0-262-02553-9. [Google Scholar]
  24. Bénassy, J.-P. The Ramsey Model; Macroeconomic Theory; Oxford University Press: New York, NY, USA, 2015; pp. 145–160. ISBN 978-0-19-538771-1. [Google Scholar]
  25. Blanchard, O.J.; Fischer, S. Consumption and Investment: Basic Infinite Horizon Models. In Lectures on Macroeconomics; MIT Press: Cambridge, MA, USA, 1989; pp. 37–89. ISBN 978-0-262-02283-5. [Google Scholar]
  26. Miao, J. Neoclassical Growth Models. In Economic Dynamics in Discrete Time; MIT Press: Cambridge, MA, USA, 2014; pp. 353–364. ISBN 978-0-262-02761-8. [Google Scholar]
  27. Novales, A.; Fernández, E.; Ruíz, J. Optimal Growth: Continuous Time Analysis. In Economic Growth: Theory and Numerical Solution Methods; Springer: Berlin/Heidelberg, Germany, 2009; pp. 101–154. ISBN 978-3-540-68665-1. [Google Scholar]
  28. Romer, D. Infinite-Horizon and Overlapping-Generations Models. In Advanced Macroeconomics, 4th ed.; McGraw-Hill: New York, NY, USA, 2011; pp. 49–77. ISBN 978-0-07-351137-5. [Google Scholar]
  29. Gandolfo, G. Economic Dynamics; Springer: Berlin/Heidelberg, Germany, 1996; p. 610. [Google Scholar]
  30. Chung, K.L. Markov Chains, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 1967. [Google Scholar]
  31. Swishchuk, A.V. Random Evolutions and their Applications; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1997; Volume 408, p. 215. [Google Scholar]
  32. Fleming, W.; Nisio, M. On the existence of optimal stochastic control. J. Math. Mech. 1966, 15, 777–794. [Google Scholar]
  33. Kushner, H. On the stability of processes defined by stochastic difference-differential equations. J. Differ. Equ. 1968, 4, 424–443. [Google Scholar] [CrossRef]
  34. Skorokhod, A.V. Studies in the Theory of Random Processes; Dover Publications, Inc.: Mineola, NY, USA, 1965. [Google Scholar]
  35. Skorokhod, A.V. Asymptotic Methods in the Theory of Stochastic Differential Equations; Naukova Dumka Publishers: Kyiv, Ukraine, 1989. [Google Scholar]