Abstract
This paper is devoted to the study of stochastic optimal control of averaged stochastic differential delay equations (SDDEs) with semi-Markov switchings and their applications in economics. By using the Dynkin formula and the solution of the Dirichlet–Poisson problem, the Hamilton–Jacobi–Bellman (HJB) equation and the inverse HJB equation are derived. An application is given to a new stochastic Ramsey model in economics, namely the averaged Ramsey diffusion model with semi-Markov switchings. A numerical example is presented as well.
Keywords:
stochastic differential delay equations; stochastic optimal control; Hamilton–Jacobi–Bellman equation; Dynkin formula; Dirichlet–Poisson problem; economics applications; semi-Markov process; Ramsey economics model with delay and semi-Markov switching

MSC:
34K50; 60K15; 34K35; 34K60; 60F17; 91B70
1. Introduction
Many papers and books in the past were devoted to stochastic optimal control; see, for example, [1,2,3]. This work was not limited to stochastic optimal control in infinite dimensions [4] and was also applied to stochastic control of jump diffusions [5]. In particular, stochastic delay equations have been the subject of study in many papers, including [6,7,8,9,10]. For example, delay differential equations driven by Lévy processes were considered in [11]. However, no paper or book in this field considered stochastic optimal control problems for stochastic delay differential equations (SDDEs) with semi-Markov regime-switching or dealt with the Dynkin formula, the Dirichlet–Poisson problem, or the HJB and inverse HJB equations for such equations, or, in particular, their applications in economics. Therefore, the present paper is devoted to these topics, and as such, these results are new and original. We note that optimal control of stochastic differential delay equations with jumps and Markov switching, with an application in economics, was studied in [12]. The stability of stochastic Ito equations with delay, Poisson jumps, and Markov switchings, with applications to finance, was considered in [13]. A survey of results on SDDEs and their applications up to 2003 was presented in [14]. A good introduction to the theory of functional differential equations is [15].
In an earlier paper [16], the following controlled stochastic differential delay equation (SDDE) was introduced:
where is a given continuous process, is a control process, and is a standard Wiener process.
We note that in the case of stochastic differential delay equations, the solution is not a Markov process. However, it can be made Markovian by considering the pair , where is the path of the process on the interval (see details in [17]). The pair is a strong Markov process to which one can apply the theoretical basis of the corresponding weak infinitesimal generator (see, for example, [18] for more details).
The Ramsey growth model, or the Ramsey–Cass–Koopmans model, is a neoclassical model of economic growth based primarily on the work of Frank P. Ramsey [19], with significant extensions by David Cass [20] and Tjalling Koopmans [21]. This model remains very popular today (see, for example, [22,23,24,25,26,27,28]).
The Ramsey diffusion model was described in [19] by the equation (see also [16,29])
where K is the capital, C is the production rate, u is a control process, A is a positive constant, and is the diffusion coefficient of the SDDE. The “initial capital”
is a continuous, bounded positive function. For this stochastic economic model, the optimal control was found to be
Through time rescaling, the delay T can be normalized to , which will be our assumption in the theoretical considerations that follow. The obtained results remain valid, however, for a general delay.
In this paper, we consider SDDEs with semi-Markov switching:
where is a semi-Markov process (SMP) with a state space [30,31]. See the next section for the definition of an SMP.
As noted above, in the case of stochastic differential delay equations, the solution is not a Markov process. However, it can be made Markovian by considering the vector , where is the path of the process on the interval (see details in [17]). We mention that the process is a Markov process (see the next section). The vector is a strong Markov process to which one can apply the theoretical basis of the corresponding weak infinitesimal generator (see, for example, [18] for more details).
Due to the complicated expression for the generator of the vector and the generator Q of the SMP , it is not possible to solve the optimal control problem exactly. Thus, we consider our model for in the series scheme
Therefore, to solve the stochastic optimal control problem, we take in a series scheme, where the semi-Markov process is considered in the long run (i.e., when ), and instead of the time t, we have
Using the averaging principle for SDDEs in a series scheme under some conditions (A.1–A.4 in this paper), we find that weakly, where satisfies the following limiting controlled, averaged SDDE:
where
Here, and represents the stationary probabilities for a Markov chain (see Section 2).
An application in this paper is given to a stochastic model in economics, specifically a Ramsey model [19,29], which takes into account the delay and semi-Markov randomness in the production cycle.
Thus, the Ramsey model with semi-Markov switching in this paper is
where is a semi-Markov process.
The idea behind the semi-Markov switching in this Ramsey diffusion model is that the economy has N different states (e.g., if , they could be “bad” and “good” states), and in each state the sojourn time of the economy follows not an exponential but an arbitrary distribution (Gamma, Beta, Weibull, etc.). In this paper, we consider a Weibull distribution.
We note that the averaged Ramsey diffusion model has the following form:
where and
It should be mentioned that reducing the original SDDE with semi-Markov switching to the averaged SDDE, and the original Ramsey model with semi-Markov switching to the averaged Ramsey diffusion model, does not restrict the limiting models, because all of the semi-Markov parameters and features are incorporated into the averaged parameters.
This paper is organized as follows. Section 2 introduces the semi-Markov process and its properties. Section 3 is devoted to controlled stochastic differential delay equations with semi-Markov switching and the controlled, averaged SDDE. In Section 4, we consider a solution to the stochastic optimal control problem for the controlled, averaged SDDE, based on the Hamilton–Jacobi–Bellman equation derived from the Dynkin formula and the solution to the Dirichlet–Poisson problem for the controlled, averaged SDDE. The economic Ramsey diffusion model with semi-Markov switching and the averaged Ramsey model are investigated in Section 5. We find the optimal control for this model and present a numerical example for a two-state semi-Markov process with a Weibull distribution.
2. Semi-Markov Process
Let be a filtered probability space with a right-continuous filtration and a probability measure P.
Let be a measurable space and
be a semi-Markov kernel. Let be an -valued Markov renewal process with as the associated kernel; that is, we have
Let us then define the process
that gives the number of jumps of the Markov renewal process in the time interval and
which gives the sojourn time of the Markov renewal process in the nth visited state. The semi-Markov process, associated with the Markov renewal process , is defined by [31]
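The Markov renewal construction above (embedded chain, jump times, sojourn times) can be sketched in code. This is an illustrative simulation, not part of the paper's formalism; the two-state embedded chain and the unit-scale Weibull sojourn times below are hypothetical placeholders.

```python
import random

def simulate_semi_markov(P, sojourn, x0, horizon, rng):
    """Simulate a semi-Markov path: an embedded Markov chain with
    transition matrix P, with state-dependent sojourn times drawn by
    the sampler `sojourn`. Returns jump times tau_n and visited states x_n."""
    t, x = 0.0, x0
    times, states = [0.0], [x0]
    while t < horizon:
        t += sojourn(x, rng)  # theta_{n+1}: sojourn time in the current state
        x = rng.choices(range(len(P)), weights=P[x])[0]  # next embedded-chain state
        times.append(t)
        states.append(x)
    return times, states

def state_at(times, states, t):
    """x(t) = x_{nu(t)}: the state entered at the last jump time not exceeding t."""
    i = 0
    while i + 1 < len(times) and times[i + 1] <= t:
        i += 1
    return states[i]

rng = random.Random(42)
P = [[0.0, 1.0], [1.0, 0.0]]                       # hypothetical 2-state embedded chain
weibull = lambda x, r: r.weibullvariate(1.0, 2.0)  # hypothetical Weibull sojourns
times, states = simulate_semi_markov(P, weibull, 0, 10.0, rng)
```

The counting process that gives the number of jumps is implicit in `state_at` as the index of the last jump time not exceeding t.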
Associated with the semi-Markov process, it is possible to define some auxiliary processes. We are interested in the backward recurrence time (or lifetime) process defined by
The next well-known result characterizes the backward recurrence time process [31]. The backward recurrence time in Equation (3) is a Markov process with a generator
where , and .
As is well known, semi-Markov processes possess the memoryless property only at transition times, and thus is not a Markov process. However, if we consider the joint process , then we record at any instant the time already spent by the semi-Markov process in its present state, and as a result, is a Markov process with a generator (see [31])
where is defined in Equation (1) and
3. Controlled Stochastic Differential Delay Equations (SDDEs) with Semi-Markov Switching and Averaged Controlled SDDEs
3.1. Controlled SDDEs with Semi-Markov Switching
Consider the following SDDEs with semi-Markov switching:
where is a semi-Markov process (SMP) in Equation (2) with the state space [31]. See the next section for the definition of an SMP.
It is important to mention again that, in the case of stochastic differential delay equations, the solution is not a Markov process, as was also indicated in the Introduction. However, it can be made Markovian by considering the vector , where is the path of the process on the interval (see details in [17]). We mention that the process (see Equations (2) and (3)) is a Markov process. The vector is a strong Markov process to which one can apply the theoretical basis of the corresponding weak infinitesimal generator (see, for example, [18] for more details).
3.2. Assumptions and Existence of Solutions
Below, we recall some basic notions and facts from [17,32,33] necessary for subsequent exposition in this paper. Let be a stochastic process and be a minimal -algebra with respect to which is measurable for every . Let be a Wiener process with , and let be a minimal Borel -algebra such that is measurable for all with . Also, let be a minimal Borel -algebra such that is measurable for all with where is a finite-state semi-Markov process with a state space [31].
Finally, let be a stochastic process whose values can be chosen from the given Borel set and such that is -adapted for all .
Let denote the Banach space of all càdlàg functions defined on the interval and equipped with the Skorokhod topology (see [34]). We note the initial process , where is a given càdlàg function. Therefore, we assume that the processes , and are defined on the probability space and
Let the following conditions be satisfied for Equation (2):
A.1 and are continuous, real-valued functionals defined on ;
A.2 is a càdlàg function with probability one on the interval , independent of and . Here, is the Skorokhod norm [34].
A.3 :
with for some constants, , and all .
Under assumptions A.1–A.3, the solution to the initial value problem in Equation (2) exists and is unique, and the vector is a Markov process [17,32,33]. The solution can be viewed at time as an element of the space or as a point in .
Equation (5) can be expressed in integral form as follows:
We note that the generator for the vector is
where the operator Q is defined in Equation (4) and acts only on the r variable. The operator is defined by
and acts on F as a function of only, while (see [33]).
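For simulation purposes, a solution of such a switched SDDE can be approximated pathwise by an Euler–Maruyama scheme. The sketch below is illustrative, not the paper's construction: the delay is normalized to 1, the path segment is represented only by the lagged point value (rather than the full functional argument), the linear coefficients are hypothetical, and the regime process is frozen in one state.

```python
import math
import random

def euler_sdde(b, sigma, phi, r_of_t, T, dt, rng):
    """Euler-Maruyama scheme for a switched SDDE of the point-delay form
        dx(t) = b(x(t), x(t-1), r(t)) dt + sigma(x(t), x(t-1), r(t)) dW(t),
    with delay normalized to 1 and initial cadlag path phi on [-1, 0]."""
    n_delay = int(round(1.0 / dt))                            # grid steps per unit delay
    path = [phi(-1.0 + k * dt) for k in range(n_delay + 1)]   # history on [-1, 0]
    t = 0.0
    while t < T:
        x_now, x_lag = path[-1], path[-1 - n_delay]
        r = r_of_t(t)                         # current semi-Markov state
        dw = rng.gauss(0.0, math.sqrt(dt))    # Wiener increment over one step
        path.append(x_now + b(x_now, x_lag, r) * dt
                          + sigma(x_now, x_lag, r) * dw)
        t += dt
    return path

rng = random.Random(1)
b = lambda x, x_lag, r: -0.5 * x + 0.1 * x_lag   # hypothetical drift
sigma = lambda x, x_lag, r: 0.2                  # hypothetical diffusion
path = euler_sdde(b, sigma, lambda s: 1.0, lambda t: 0, T=2.0, dt=0.01, rng=rng)
```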
3.3. Controlled Averaged SDDE
Due to the complicated expression for the generator in Equation (7) above and the generator Q in Equation (4) of the SMP , we consider our model for in Equation (5) in the series scheme:
Therefore, to solve the stochastic optimal control problem, we take in a series scheme, where the semi-Markov process is considered in the long run; in other words, instead of the time t, we have
We need one more condition for this reason:
A.4. The embedded Markov chain of the semi-Markov process is ergodic, with the transition probabilities
Using the averaging principle for SDDEs in a series scheme under conditions A.1–A.4, we find that weakly (see [13,31,35]), where satisfies the following limiting controlled, averaged SDDE for the SDDE in Equation (9):
where
Here,
Remark 1.
In the case of a finite or countably infinite state space X, the integrals above become sums or series, respectively. We present the above formulas for the case of a finite-state semi-Markov process (i.e., ):
In this way, the generator for the process in Equation (10) takes the following form:
where and are defined in Equation (11).
We note that the pair is now a strong Markov process with a generator in Equation (12).
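The averaged coefficients in Equation (11) combine the per-state coefficients through the stationary probabilities of the embedded Markov chain. As a rough numerical sketch (the exact weighting of Equation (11) is not reproduced here; the simple probability-weighted average and the two-state chain below are assumptions for illustration):

```python
def stationary_probs(P, n_iter=1000):
    """Stationary distribution pi of an ergodic chain with transition
    matrix P, computed by power iteration: pi = pi P."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(n_iter):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def averaged_coefficient(coeff_by_state, pi):
    """Average a per-state coefficient x -> f_i(x) over the stationary
    distribution pi, yielding a single state-free coefficient."""
    return lambda x: sum(p * f(x) for p, f in zip(pi, coeff_by_state))

pi = stationary_probs([[0.5, 0.5], [0.2, 0.8]])               # hypothetical ergodic chain
b_bar = averaged_coefficient([lambda x: -x, lambda x: -2 * x], pi)
```

The averaged SDDE then uses `b_bar` (and the analogously averaged diffusion coefficient) in place of the state-dependent coefficients.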
4. Solution to the Stochastic Optimal Control Problem for the Averaged SDDEs
The main idea behind solving the stochastic optimal control problem for the averaged SDDEs with semi-Markov switching is to derive the Hamilton–Jacobi–Bellman (HJB) equation and the inverse HJB equation by applying the Dynkin formula and the solution to the Dirichlet–Poisson problem for the averaged SDDE. Since we have reduced the optimal control problem with semi-Markov switching in Equations (6) and (9) to an optimal control problem for a diffusion, and hence to the Markov case, we can apply the results from [16]. Below, we list all of the results necessary for the solution of the stochastic optimal control problem in our setting.
4.1. Dynkin Formula for the SDDE with Semi-Markov Switching
Let be a stopping time for the strong Markov process such that . Then, we have the following Dynkin formula (see [16]):
where is defined in Equation (12) and
4.2. Solution to the Dirichlet–Poisson Problem for the SDDE with Semi-Markov Switching
Let be bounded, and let be such that
where , is a set of continuous and bounded functions on and is a regular boundary of , (see, for example, [18]).
Define
4.3. Hamilton–Jacobi–Bellman (HJB) Equation for the SDDE with Semi-Markov Switching
We assume that the cost function is given in the form
where is a bounded real function, F is bounded, real, and continuous, and is the exit time of the process defined in Equation (7) from the fixed open set . In particular, can be a fixed time . We assume that . The function is defined in Equation (15).
The problem is as follows. For each , find the number and control such that
where the infimum is taken over all -adapted processes . Such a control , if it exists, is called an optimal control, and is called the optimal performance.
We consider only Markov controls . For every , we define the following operator:
where the operator is given by Equation (12) and
Theorem 1
(HJB equation for SDDEs with semi-Markov switchings). Define
Proof.
Follow the steps of Theorem 1 [16] while replacing the operator with in Equation (12). □
Theorem 2
(Converse of the HJB equation for SDDEs with semi-Markov switchings). Let g be a bounded function in . Suppose that for all , the inequality
and the boundary condition
are satisfied. Then, for all Markov controls and for all
Moreover, if for every , there exists such that
then is a Markov control and therefore is an optimal control.
Proof.
Follow the steps of Theorem 2 [16] while replacing with in Equation (12). □
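In both theorems, the optimal Markov control is obtained by minimizing the relevant expression over the control variable pointwise in the state. For a one-dimensional averaged diffusion, this pointwise step can be sketched over a finite control grid; the quadratic running cost, linear controlled drift, and candidate value function below are hypothetical.

```python
def hjb_pointwise_min(F, b, sigma, dphi, d2phi, x, controls):
    """Pointwise HJB minimization: over a finite control grid, minimize
        F(x, v) + b(x, v) * phi'(x) + 0.5 * sigma(x, v)**2 * phi''(x),
    returning (minimal value, minimizing control)."""
    best_val, best_v = None, None
    for v in controls:
        val = F(x, v) + b(x, v) * dphi(x) + 0.5 * sigma(x, v) ** 2 * d2phi(x)
        if best_val is None or val < best_val:
            best_val, best_v = val, v
    return best_val, best_v

# hypothetical ingredients: quadratic cost, linear controlled drift,
# constant diffusion, quadratic candidate value function phi(x) = x**2
F = lambda x, v: x * x + v * v
b = lambda x, v: -x + v
sigma = lambda x, v: 0.3
value, u_star = hjb_pointwise_min(F, b, sigma,
                                  dphi=lambda x: 2 * x, d2phi=lambda x: 2.0,
                                  x=1.0, controls=[k / 10 - 1 for k in range(21)])
```

Solving the full HJB equation couples this step with the equation for the value function itself; the sketch shows only the inner minimization.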
5. Ramsey Diffusion Model in Economics with Semi-Markov Switching
We recall that the Ramsey diffusion model with semi-Markov switching is
where is a semi-Markov process. We suppose that the functions and are bounded and continuous with respect to r on . For this model, the initial condition is
We note that the averaged Ramsey diffusion model has the following form, which follows from our previous results (see Section 3.3, Equations (9)–(12)):
where and represents the stationary probabilities for the Markov chain
In the next section, we present the solution for the stochastic optimal control problem for this averaged diffusion Ramsey model and a numerical example.
5.1. Optimal Control for Ramsey Diffusion Model in Economics with Semi-Markov Switching
Here, we consider the averaged Ramsey diffusion model in Equation (21) with the boundary condition in Equation (20).
Let us choose the following cost function with the modification of the first term being :
From Theorem 1, Equation (16), we obtain the following HJB equation:
or equivalently
where and represents the stationary probabilities for the Markov chain (see also Equation (21)). Let
or
since . Hence, the infimum is achieved when
Therefore, , and
5.2. Numerical Example for Ramsey Diffusion Model in Economics with Semi-Markov Switching
We now apply the expressions defined above in a numerical example. Suppose that is a Markov chain with two states and a transition matrix .
Then, the stationary probabilities are , which follow from the solution of the following equation:
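For any two-state chain, the stationary probabilities have a closed form obtained from the stationarity equation together with the normalization that the two probabilities sum to one. A small sketch (the transition probabilities below are placeholders, since the matrix of the example is not reproduced here):

```python
def two_state_stationary(p12, p21):
    """Stationary probabilities (pi1, pi2) of a two-state Markov chain with
    off-diagonal transition probabilities p12 (state 1 -> 2) and p21
    (state 2 -> 1), solving pi = pi P with pi1 + pi2 = 1."""
    s = p12 + p21
    return p21 / s, p12 / s
```

For instance, `two_state_stationary(1.0, 1.0)` gives the uniform distribution (0.5, 0.5).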
Thus, we consider the case of a two-state semi-Markov chain with an arbitrary distribution for . Let us take the Weibull distribution (see [33]) for , with the probability density function
where . Recall that is the shape parameter and is the scale parameter. We note that if we take , then we have the exponential distribution for and the Markov case for the process .
Suppose that and . We recall that the mean value of a random variable with a Weibull density is , where stands for the Gamma function. Of course, we could take other non-exponential distributions, such as the Gamma or Beta distribution.
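The mean just recalled can be checked directly using the standard identity that a Weibull random variable has mean scale × Γ(1 + 1/shape); the parameter values below are arbitrary illustrations, not the paper's.

```python
import math

def weibull_mean(shape, scale):
    """Mean of a Weibull random variable: scale * Gamma(1 + 1/shape)."""
    return scale * math.gamma(1.0 + 1.0 / shape)

# shape = 1 recovers the exponential distribution with mean equal to the scale
m_exp = weibull_mean(1.0, 2.0)  # Gamma(2) = 1, so the mean is 2.0
```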
We consider the cases ; the case can be considered in a similar way. We note that the case corresponds to the exponential distribution.
Thus, let us take
We can now calculate the parameters and (see Equation (11)):
Then (see Equation (11)), we have
Let the initial function (see Equation (20)), where a could be positive or negative. Then, the formula for has the following form (see Equation (23)):
Now, suppose that We note that We also take and
Then, according to previous formulas, we have
In addition,
If we take then , and if we take then
As long as for the initial function , a positive value of corresponds to decaying capital (a “bad” economy), and a negative value of corresponds to increasing capital (a “good” economy), which is reflected by the cost functions and , respectively.
Author Contributions
Conceptualization, M.S. and A.V.S.; Investigation, M.S.; Writing—original draft, M.S. and A.V.S.; Writing—review & editing, A.V.S.; Supervision, A.V.S. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.
Acknowledgments
The second author thanks the NSERC for their continuing support.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Nisio, M. Stochastic Control Theory; Springer: Berlin/Heidelberg, Germany, 2015.
- Fleming, W.; Rishel, R. Deterministic and Stochastic Optimal Control; Springer: Berlin/Heidelberg, Germany, 1975.
- Stengel, R. Stochastic Optimal Control: Theory and Applications; Wiley: Hoboken, NJ, USA, 1986.
- Fabbri, G.; Gozzi, F.; Swiech, A. Stochastic Optimal Control in Infinite Dimensions; Springer: Berlin/Heidelberg, Germany, 2012.
- Oksendal, B.; Sulem, A. Applied Stochastic Control of Jump Diffusions, 3rd ed.; Springer: Berlin/Heidelberg, Germany, 2019.
- Scheutzow, M. Stochastic Delay Equations; Lecture Notes: Berlin, Germany, 2018.
- Mohammed, S.; Scheutzow, M. Lyapunov exponents and stationary solutions for affine stochastic delay equations. Stochastics 1990, 29, 259–283.
- Mohammed, S.; Scheutzow, M. Lyapunov exponents of linear stochastic functional differential equations. Part II: Examples and Case Studies. Ann. Probab. 1997, 25, 1210–1240.
- von Renesse, M.; Scheutzow, M. Existence and uniqueness of solutions of stochastic functional differential equations. Random Oper. Stoch. Equ. 2010, 18, 267–284.
- Mohammed, S. Nonlinear flows of stochastic linear delay equations. Stochastics 1986, 17, 2007–2013.
- Reis, M.; Riedle, M.; van Gaans, O. Delay differential equations driven by Lévy processes: Stationarity and Feller properties. Stoch. Process. Their Appl. 2006, 116, 1409–1432.
- Ivanov, A.F.; Svishchuk, M.Y.; Swishchuk, A.V.; Trofimchuk, S.A. Optimal control of stochastic differential delay equations with jumps and Markov switching, and with an application in economics. In Proceedings of the ICIAM 2023 Congress, Tokyo, Japan, 20–25 August 2023.
- Swishchuk, A.V.; Kazmerchuk, Y.I. Stability of stochastic Ito equations with delay, Poisson jumps and Markov switchings with applications to finance. Theory Probab. Math. Stat. 2002, 64, 45 (translated by AMS, N64, 2002).
- Ivanov, A.F.; Kazmerchuk, Y.; Swishchuk, A.V. Theory, Stochastic Stability and Applications of Stochastic Delay Differential Equations: A Survey of Recent Results. Differ. Equ. Dyn. Syst. 2003, 11, 55–115.
- Hale, J.K. Theory of Functional Differential Equations. Appl. Math. Sci. 1977, 3, 365.
- Ivanov, A.F.; Swishchuk, A.V. Optimal control of stochastic differential delay equations with application in economics. Int. J. Qual. Theory Differ. Equ. Appl. 2008, 2, 201–213.
- Ito, K.; Nisio, M. On stationary solutions of a stochastic differential equation. J. Math. Kyoto Univ. 1964, 4, 1–70.
- Dynkin, E.B. Markov Processes; Fizmatgiz: Moscow, Russia, 1963; English translation: Academic Press: New York, NY, USA, 1965; Volumes 1 and 2.
- Ramsey, F.P. A mathematical theory of saving. Econ. J. 1928, 38, 543–549.
- Cass, D. Optimum Growth in an Aggregative Model of Capital Accumulation. Rev. Econ. Stud. 1965, 32, 233–240.
- Koopmans, T.C. On the Concept of Optimal Economic Growth. In The Economic Approach to Development Planning; Rand McNally: Chicago, IL, USA, 1965; pp. 225–287.
- Acemoglu, D. The Neoclassical Growth Model. In Introduction to Modern Economic Growth; Princeton University Press: Princeton, NJ, USA, 2009; pp. 287–326. ISBN 978-0-691-13292-1.
- Barro, R.J.; Sala-i-Martin, X. Growth Models with Consumer Optimization. In Economic Growth, 2nd ed.; MIT Press: Cambridge, MA, USA, 2004; pp. 85–142. ISBN 978-0-262-02553-9.
- Bénassy, J.-P. The Ramsey Model. In Macroeconomic Theory; Oxford University Press: New York, NY, USA, 2015; pp. 145–160. ISBN 978-0-19-538771-1.
- Blanchard, O.J.; Fischer, S. Consumption and Investment: Basic Infinite Horizon Models. In Lectures on Macroeconomics; MIT Press: Cambridge, MA, USA, 1989; pp. 37–89. ISBN 978-0-262-02283-5.
- Miao, J. Neoclassical Growth Models. In Economic Dynamics in Discrete Time; MIT Press: Cambridge, MA, USA, 2014; pp. 353–364. ISBN 978-0-262-02761-8.
- Novales, A.; Fernández, E.; Ruíz, J. Optimal Growth: Continuous Time Analysis. In Economic Growth: Theory and Numerical Solution Methods; Springer: Berlin/Heidelberg, Germany, 2009; pp. 101–154. ISBN 978-3-540-68665-1.
- Romer, D. Infinite-Horizon and Overlapping-Generations Models. In Advanced Macroeconomics, 4th ed.; McGraw-Hill: New York, NY, USA, 2011; pp. 49–77. ISBN 978-0-07-351137-5.
- Gandolfo, G. Economic Dynamics; Springer: Berlin/Heidelberg, Germany, 1996; p. 610.
- Chung, K.L. Markov Chains, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 1967.
- Swishchuk, A.V. Random Evolutions and Their Applications; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1997; Volume 408, p. 215.
- Fleming, W.; Nisio, M. On the existence of optimal stochastic control. J. Math. Mech. 1966, 15, 777–794.
- Kushner, H. On the stability of processes defined by stochastic difference-differential equations. J. Differ. Equ. 1968, 4, 424–443.
- Skorokhod, A.V. Studies in the Theory of Random Processes; Dover Publications, Inc.: Mineola, NY, USA, 1965.
- Skorokhod, A.V. Asymptotic Methods in the Theory of Stochastic Differential Equations; Naukova Dumka Publishers: Kyiv, Ukraine, 1989.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).