1. Introduction
One of the prerequisites for the physical realization of a process is its stability. Hence, ensuring stability is an essential task known as the stabilization problem.
The stabilization problem for stochastic dynamical systems with random structure was first solved by I.Ya. Kats in [1]. For stochastic dynamical systems with random structure and Markov switches that lead to jumps of the phase vector, the problem of optimal stabilization was solved by the authors in [2]. In that work, it was assumed that the moments of Markov switches are known. This assumption allowed a relatively straightforward transfer of basic properties from stochastic differential equations (SDEs) with continuous trajectories to systems with jumps. This global problem includes sub-problems concerning the Markov property of the solution, its martingale properties, and other local characteristics [3,4,5]. Similar problems for stochastic differential equations with delays have been studied in [6,7]. Stochastic games [7,8] have become widely used; in them, it is assumed that two players have different objectives, and their strategies are described by stochastic differential equations. More general approaches to analyzing random fields and stochastic partial differential equations can be found in [9,10,11,12,13].
The inclusion of an integral term with respect to a Poisson measure also allowed cases with random moments of finite jumps in the phase vector to be addressed. For such systems, an explicit control form that stabilizes linear systems to asymptotic stochastic stability was obtained in [5], along with justification of exact and approximate methods for control calculation. A system of Riccati-type matrix equations was derived there to find a general solution to the stabilization problem.
In the works mentioned above, as in most studies involving trajectory jumps, the distances between jumps are assumed to be separated from zero, i.e., $\inf_{k \ge 1}(t_{k+1} - t_k) \ge \delta > 0$. However, in catastrophe theory or resonant systems, cases often arise where the jumps concentrate at a point, leading to the relation $\lim_{k \to \infty} t_k = t^* < \infty$. In this scenario, as previously indicated in [14], the cumulative effect of jumps can result in the loss of system stability. Consider a simple example illustrating the problems caused by the existence of a concentration point: a scalar linear equation whose solution decays between jumps, with multiplicative jumps applied at points $t_k$ accumulating at $t^*$. One can easily conclude that the solution becomes unbounded as $t \to t^*$ provided that the jump magnitudes do not decay sufficiently fast. This straightforward example highlights the critical role of jump magnitudes in systems with concentration points.
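To make the cumulative effect concrete, the following sketch simulates such a process numerically: the state decays between jumps but is multiplied by a fixed factor at points $t_k = t^* - 2^{-k}$ accumulating at $t^* = 1$. The decay rate and jump factor are illustrative choices, not parameters taken from the paper.

```python
import numpy as np

# Illustrative parameters (not from the paper): decay rate and jump factor.
a, jump_factor, t_star = 1.0, 1.5, 1.0
t_k = t_star - 2.0 ** -np.arange(1, 30)  # jump times accumulating at t* = 1

x, t = 1.0, 0.0
for tk in t_k:
    x *= np.exp(-a * (tk - t))  # continuous decay of dx = -a x dt on [t, t_k)
    x *= jump_factor            # multiplicative jump at t_k
    t = tk
print(f"x just before t* = {t_star}: {x:.3e}")  # grows without bound with k
```

The total continuous decay over $[0, t^*)$ is bounded below by $e^{-a t^*}$, while the product of the jump factors grows like $1.5^k$, so the trajectory diverges as $t \to t^*$.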
In Section 2, we introduce the mathematical model for dynamical systems with jumps, described by a system of stochastic differential equations with Markov parameters and switches, and provide sufficient conditions for the existence and uniqueness of solutions. Section 3 establishes sufficient conditions for exponential stability of the solution (Theorem 1), which simultaneously define the class of admissible controls. Sufficient conditions for the existence of solutions to the optimal stabilization problem are established in Section 4 (Theorem 2). The synthesis of optimal control in explicit form for a linear system with a quadratic quality functional is presented in Section 5.
2. Problem Statement
On the probabilistic basis $(\Omega, \mathcal{F}, \mathbf{F}, \mathbb{P})$ [3,4,15], consider a controlled stochastic dynamical system of random structure given by a stochastic differential equation (SDE)

$$dx(t) = a(t, \xi(t), x(t), u(t))\,dt + b(t, \xi(t), x(t), u(t))\,dw(t), \quad t \in \mathbb{R}_+ \setminus K, \qquad (1)$$

with Markov switches

$$\Delta x(t)\big|_{t = t_k} = g\left(t_k^-, \xi(t_k^-), \eta_k, x(t_k^-)\right), \quad t_k \in K = \{t_n \uparrow\}, \qquad (2)$$

and initial conditions

$$x(0) = x_0 \in \mathbb{R}^n, \quad \xi(0) = y \in Y, \quad \eta_0 = h \in H. \qquad (3)$$

Here, $\xi(t)$, $t \ge 0$, is a Markov chain with a finite number of states $Y = \{y_1, \ldots, y_N\}$ and generator $Q = \{q_{ij}\}$; $\{\eta_k, k \ge 0\}$ is a Markov chain with values in the space $H$ and transition probability matrix $P_H$; $x \in \mathbb{R}^n$; $u = u(t, x) \in \mathbb{R}^r$ is the control; $w(t)$ is an $m$-dimensional standard Wiener process; the processes $w$, $\xi$, and $\eta$ are independent [3,4,15].
Define $\mathcal{F}_t$ as the minimal $\sigma$-algebra with respect to which $\xi(s)$ for $s \le t$ and $\eta_k$ for $t_k \le t$ are measurable.
The coefficients of the stochastic differential equation are measurable maps
$$a: \mathbb{R}_+ \times Y \times \mathbb{R}^n \times \mathbb{R}^r \to \mathbb{R}^n, \quad b: \mathbb{R}_+ \times Y \times \mathbb{R}^n \times \mathbb{R}^r \to \mathbb{R}^{n \times m}, \quad g: \mathbb{R}_+ \times Y \times H \times \mathbb{R}^n \to \mathbb{R}^n,$$
satisfying the boundedness condition
$$|a(t, y, x, u)| + |b(t, y, x, u)| + |g(t, y, h, x)| \le C\,(1 + |x|) \qquad (4)$$
and the global Lipschitz condition
$$|a(t, y, x_1, u) - a(t, y, x_2, u)| + |b(t, y, x_1, u) - b(t, y, x_2, u)| + |g(t, y, h, x_1) - g(t, y, h, x_2)| \le L\,|x_1 - x_2|. \qquad (5)$$
Consider the scenario with a concentration point of jumps, i.e.,
$$\lim_{n \to \infty} t_n = t^* < \infty. \qquad (6)$$
Assume additionally that conditions (7) and (8), which constrain the magnitudes of the jumps $g$ near the concentration point, are satisfied.
Conditions (4)–(8) are, in fact, sufficient conditions for the existence and uniqueness of a strong solution to the Cauchy problem (1)–(3) [16].
Denote by $P_k$ the transition probability of the Markov chain $(\xi(t_k), \eta_k)$ that determines the solution of the problem (1)–(3) at step $k$.
Definition 1 ([1]). The discrete Lyapunov operator $(lv_k)$ on a sequence of measurable scalar functions $v_k$ for the SDE (1) with Markov switches (2) is defined as
$$(lv_k)(t, y, h, x) = \mathbb{E}\left[ v_{k+1}\left(t_{k+1}, \xi(t_{k+1}), \eta_{k+1}, x(t_{k+1})\right) \mid \xi(t_k) = y,\ \eta_k = h,\ x(t_k) = x \right] - v_k(t, y, h, x). \qquad (9)$$
Here, $\{v_k\}$ is a Lyapunov function in the sense of the following definition.
Definition 2 ([1,2]). A Lyapunov function for the system (1)–(3) is a sequence of nonnegative, non-decreasing functions $v_k(t, y, h, x)$ satisfying the following conditions:
- 1. for all $k \ge 0$, $t \ge 0$, $y \in Y$, $h \in H$, and $x \in \mathbb{R}^n$, the discrete Lyapunov operator (9) is defined;
- 2. $\underline{v}(|x|) \le v_k(t, y, h, x) \le \overline{v}(|x|)$ for all $t \ge 0$, $y \in Y$, $h \in H$, $x \in \mathbb{R}^n$;
- 3. $\underline{v}(r) \to \infty$ as $r \to \infty$, and $\underline{v}(0) = \overline{v}(0) = 0$;
- 4. the comparison functions $\underline{v}$ and $\overline{v}$ are continuous and monotone.
Definition 3 ([17,18]). The stochastic system (1)–(3) is called:
—stable in probability if, for every $\varepsilon > 0$ and $\varepsilon_1 > 0$, there exists $\delta > 0$ such that $|x_0| < \delta$ implies
$$\mathbb{P}\left\{ \sup_{t \ge 0} |x(t; x_0)| > \varepsilon \right\} < \varepsilon_1$$
for all $y \in Y$ and $h \in H$;
—asymptotically stochastically stable if it is stable in probability and, for any $\varepsilon > 0$, there exists $\delta > 0$ such that
$$\mathbb{P}\left\{ \lim_{t \to \infty} |x(t; x_0)| = 0 \right\} \ge 1 - \varepsilon$$
for all $|x_0| < \delta$, $y \in Y$, and $h \in H$.

Definition 4 ([17,18,19]). The system (1)–(3) is called exponentially stable in the mean square if, for all $t \ge 0$, $y \in Y$, $h \in H$, and $x_0 \in \mathbb{R}^n$, there exist constants $c > 0$ and $\gamma > 0$ such that
$$\mathbb{E}\left|x(t; x_0)\right|^2 \le c\,|x_0|^2\, e^{-\gamma t}.$$

In general, these two types of convergence are not related to each other [19], but in specific cases one type of convergence can be used to infer the other. A remark to Theorem 1 allows us to state that, provided the Lyapunov function exists, exponential stability in the mean square implies asymptotic stochastic stability. Thus, Theorem 1 allows us to draw conclusions not only about the moment convergence of the solution to 0 but also about the probabilistic properties of the solution for large $t$.
3. Stability
One common approach to establishing sufficient conditions for exponential stability involves imposing a constraint on the switching moments of the type
$$\inf_{k \ge 1}\left( t_{k+1} - t_k \right) \ge \delta > 0, \qquad (13)$$
which excludes the possibility of concentration points of jumps [1,20,21]. Clearly, in the case considered here, condition (13) is not fulfilled. Therefore, it is essential to identify conditions under which the solution to the system (1)–(3) is exponentially stable in the mean square.
Theorem 1. Suppose that, for the system (1)–(3), there exist Lyapunov functions $v_k$ and strictly increasing, positive, continuous comparison functions on $\mathbb{R}_+$ such that the two-sided condition (14) holds, along with the inequality (15) for the weak infinitesimal operator on each interval $[t_k, t_{k+1})$ and the condition (16) on the jumps for some integer $k_0$, where the bounding function in (16) is non-decreasing. Then, the system (1)–(3) is exponentially stable in the mean square.

Proof of Theorem 1. On the interval $[t_k, t_{k+1})$, consider the weak infinitesimal operator acting on the Lyapunov function $v_k$. From (15), we obtain an upper bound for this operator with a scalar coefficient determined by the comparison functions of the theorem.
By Dynkin’s formula [4], for any $t \in [t_k, t_{k+1})$, the expectation of the Lyapunov function satisfies an integral inequality; using (16), this inequality extends across the switching moments. The resulting integral inequality, by the Gronwall–Bellman lemma, yields an exponential estimate for the expectation of the Lyapunov function.
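For reference, the Gronwall–Bellman lemma is used here in its classical integral form (stated below as the standard lemma, not as the paper's intermediate estimate):
$$m(t) \le c + \int_{t_0}^{t} \gamma(s)\, m(s)\, ds, \quad t \ge t_0 \quad \Longrightarrow \quad m(t) \le c\, \exp\left( \int_{t_0}^{t} \gamma(s)\, ds \right),$$
valid for a continuous nonnegative function $m$ and a locally integrable function $\gamma \ge 0$.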
This estimate and (14) imply exponential stability in the mean square of the system (1)–(3): the lower bound in (14) converts the exponential decay of the expectation of the Lyapunov function into the exponential decay of $\mathbb{E}|x(t)|^2$, proving the theorem. □
Remark 1. Since the inequality (15) holds, the solution of (1)–(3) is asymptotically stable in probability [14].
4. Stabilization
The problem of optimal stabilization for the system (1)–(3) consists of determining a control $u = u(t, x)$ such that the trivial solution $x(t) \equiv 0$ of the system becomes asymptotically stable in probability.
It is assumed that the control $u$ is based on the full feedback principle and is continuous in $t$ on each interval $[t_k, t_{k+1})$, for all fixed values of the remaining arguments. Specifically, in the case of the continuous dynamics (1) and (2), the control is defined through the current values of the process, and the left-hand limit is considered precisely due to the presence of the jumps (2).
The set of admissible controls consists of those controls for which the system is exponentially stable [1,22].
In the previous section, we established sufficient conditions under which exponential stability in the mean square is equivalent to asymptotic stability in probability. Therefore, if these conditions are met, every admissible control will be optimal with respect to the stabilization problem, resulting in an infinite set of such controls. The optimal control must then be selected based on the best quality of the transient process, expressed through minimizing the quality functional
$$I(u) = \mathbb{E}\left[ \int_0^{\infty} W(t, \xi(t), x(t), u(t))\, dt \right], \qquad (17)$$
where $W \ge 0$ is a measurable function defined for $t \ge 0$, $y \in Y$, $x \in \mathbb{R}^n$, $u \in \mathbb{R}^r$.
The functional (17) can be calculated as follows (see the sketch after this list):
1. Compute the trajectory $x(t)$ of the SDE (1) for a given control $u$, e.g., using the Euler–Maruyama method [23].
2. Substitute $x(t)$, $u(t)$, and the switching data into the functional (17).
3. Estimate the value of (17) through statistical simulation (the Monte Carlo method).
The choice of the function $W$, which determines the estimate of the functional (17) and the quality of the process $x(t)$ as a strong solution of the SDE (1), must satisfy the following criteria:
- (a) Minimization of (17) must ensure that the strong solution $x(t)$ of the SDE (1) converges to zero rapidly on average, with high probability;
- (b) The value of the integral should reasonably estimate the cost of the resources spent on generating the control $u$;
- (c) The value of the quality functional should adequately reflect the computational effort required to determine the control $u$;
- (d) The chosen functional must permit explicit or constructive solutions to the stabilization problem.
Definition 5. A control $u^0$ satisfying
$$\min_u \left[ (lv)(t, y, h, x, u) + W(t, y, x, u) \right] = 0, \qquad (18)$$
where the minimum is taken over all controls continuous in the variables $t$ and $x$ for each fixed $y$ and $h$, is called optimal with respect to the stabilization of the strong solution of the system (1)–(3).

Theorem 2. Let, for the system (1)–(3), a sequence of Lyapunov functions $v_k^0$ exist, and let an $r$-vector function $u^0$ exist, such that:
- 1. The sequence of functions $v_k^0$ is a Lyapunov functional satisfying the conditions of Theorem 1;
- 2. The sequence of $r$-dimensional control functions $u_k^0$ is measurable in all its arguments;
- 3. The sequence of functions $W$ appearing in the criterion (17), evaluated at $u = u^0$, is positive definite;
- 4. The sequence of infinitesimal operators (9), calculated at $u = u^0$, satisfies the condition $(lv_k^0) + W = 0$ for $t \in [t_k, t_{k+1})$;
- 5. The expression $(lv_k^0) + W$ reaches its minimum at $u = u^0$;
- 6. The corresponding series of expectations converges.
Then, the control $u^0$ stabilizes the solution of the problem (1)–(3). Moreover, the following equality holds: $I(u^0) = v_0(0, y, h, x_0)$.

Proof of Theorem 2. The proof follows exactly the argument provided for Theorem 2 in [5]. □
Since $\xi(t)$ is a Markov process with a finite number of states, its transition probabilities can be defined as follows:
$$\mathbb{P}\left\{ \xi(t + \Delta t) = y_j \mid \xi(t) = y_i \right\} = q_{ij}\,\Delta t + o(\Delta t), \quad i \ne j, \qquad \mathbb{P}\left\{ \xi(t + \Delta t) = y_i \mid \xi(t) = y_i \right\} = 1 + q_{ii}\,\Delta t + o(\Delta t).$$
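Equivalently (a standard fact about finite-state Markov chains, added here for completeness), the transition matrix over an interval of length $\Delta t$ is the matrix exponential of the generator:
$$P(\Delta t) = e^{Q \Delta t} = I + Q\,\Delta t + O(\Delta t^2),$$
which recovers the infinitesimal form above.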
Based on this representation, we obtain an equation that must be satisfied by the optimal Lyapunov functions $v_k^0$ and the optimal control $u^0$.
Following [14,24], the weak infinitesimal operator (WIO) (9) has the form (20), where $(\cdot, \cdot)$ is a scalar product, “$T$” denotes transposition, $\operatorname{tr}$ is the trace of a matrix, and the kernel in (20) is the conditional probability density of the phase vector after a switch, given its value before the switch.
Using (20), we derive the first equation for $v^0$ by substituting the averaged infinitesimal operator [1] into the left-hand side of (18). The resulting equation at the points $t \in [t_k, t_{k+1})$ is Equation (21).
To define the optimal control $u^0$, we differentiate (21) with respect to the variable $u$, obtaining Equation (22), where the corresponding Jacobian matrix is composed of the partial derivatives with respect to the components of $u$.
Thus, according to Theorem 2, the problem of optimal stabilization reduces to solving a complex system of nonlinear equations of the form (18), involving partial derivatives, to find the unknown Lyapunov functions $v_k^0$. It is important to note that this nonlinear system is derived by eliminating the control $u$ from Equations (21) and (22). Given the inherent difficulty of solving such a nonlinear system directly, we subsequently focus on linear stochastic systems, for which more tractable solution schemes can be constructed.
5. Stabilization of Linear Systems
Consider a linear case:
$$dx(t) = \left[ A(t, \xi(t))\, x(t) + B(t, \xi(t))\, u(t) \right] dt + \sigma(t, \xi(t))\, x(t)\, dw(t), \quad t \in \mathbb{R}_+ \setminus K, \qquad (23)$$
with Markov switching given by
$$\Delta x(t)\big|_{t = t_k} = g\left(t_k^-, \xi(t_k^-), \eta_k, x(t_k^-)\right), \quad t_k \in K, \qquad (24)$$
where $K = \{t_n \uparrow t^*\}$, and initial conditions are
$$x(0) = x_0, \quad \xi(0) = y \in Y, \quad \eta_0 = h \in H. \qquad (25)$$
Here, $A$, $B$, and $\sigma$ are piecewise continuous integrable matrix functions of appropriate dimensions.
We assume that the jump conditions for the state vector $x$ at a switching instant $t_k$, corresponding to the change in the structure of the system due to the transition of $\xi$ from $y_i$ to $y_j$, are linear and expressed as
$$x(t_k) = \rho_k\, K_{ij}\, x(t_k^-), \qquad (26)$$
where $\rho_k$ are independent random variables with finite second moments, independent of $w(t)$, and $K_{ij}$ are given $n \times n$-matrices.
Note that Equation (26) can replace the general jump conditions under the following circumstances [21]: if the jumps are deterministic, then $\rho_k \equiv 1$ and Expression (26) reduces to $x(t_k) = K_{ij}\, x(t_k^-)$. Continuous changes in the phase vector correspond to $\rho_k \equiv 1$ and $K_{ij} = I_n$, the identity matrix of size $n$.
The quality of the transient process is evaluated through the quadratic functional
$$I(u) = \mathbb{E}\left[ \int_0^{\infty} \left( x^{T}(t)\, M(t, \xi(t))\, x(t) + u^{T}(t)\, D(t, \xi(t))\, u(t) \right) dt \right], \qquad (27)$$
where $M$ and $D$ are symmetric matrices of dimensions $n \times n$ and $r \times r$, respectively.
The optimal Lyapunov functions are assumed in the quadratic form
$$v_k(t, y, h, x) = x^{T} G_k(t, y)\, x, \qquad (28)$$
where $G_k(t, y)$ is a positive-definite symmetric matrix of dimension $n \times n$.
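To see how the quadratic form (28) interacts with linear dynamics, recall the standard Itô computation for the continuous part only (a single scalar Wiener process, switching and jump terms omitted): for $v(t, x) = x^{T}G(t)x$ along $dx = Ax\,dt + \sigma x\,dw(t)$,
$$(Lv)(t, x) = x^{T}\left( \frac{dG}{dt} + A^{T}G + GA + \sigma^{T}G\sigma \right) x.$$
Setting such expressions to zero, together with the minimization over $u$, is what produces Riccati-type equations for $G$.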
Throughout this section, we assume that $\xi(t)$ is a Markov chain with a finite state space $Y = \{y_1, \ldots, y_N\}$, and $\eta_k$ is a Markov chain with states $h$ in a metric space $H$ and transition probabilities $p_k$ at step $k$.
Substituting the quadratic form (28) into Equations (21) and (22), we derive equations for determining the optimal Lyapunov function $v^0$ and the optimal control $u^0$ for $t \in [t_k, t_{k+1})$. Using the WIO form (20), we obtain Equations (29) and (30). Using (30), we can derive the optimal control for $t \in [t_k, t_{k+1})$ and $y \in Y$:
$$u^0(t) = -D^{-1}(t, \xi(t))\, B^{T}(t, \xi(t))\, G_k(t, \xi(t))\, x(t). \qquad (31)$$
Using a standard matrix identity, eliminating $u^0$ from (29) and setting the resulting quadratic form to zero, a system of matrix Riccati-type differential equations for determining the matrices $G_k(t, y)$, where $k \ge 0$ and $y \in Y$, corresponding to the interval $[t_k, t_{k+1})$, is obtained: Equations (32) and (33).
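Such systems are integrated backward in time on each interval between switching moments. As an illustration (not the paper's Equations (32) and (33)), the following sketch integrates the classical matrix Riccati differential equation $\dot G = -A^{T}G - GA + G B D^{-1} B^{T} G - M$ backward from a terminal condition with a fourth-order Runge–Kutta scheme; all matrices are illustrative placeholders.

```python
import numpy as np

def riccati_backward(A, B, M, D, T=5.0, dt=1e-3, GT=None):
    """Integrate dG/dt = -A^T G - G A + G B D^{-1} B^T G - M backward
    from G(T) = GT (zero by default) to G(0), using classic RK4."""
    n = A.shape[0]
    G = np.zeros((n, n)) if GT is None else GT.copy()
    BDinvBT = B @ np.linalg.solve(D, B.T)
    def rhs(G):
        return -A.T @ G - G @ A + G @ BDinvBT @ G - M
    for _ in range(int(T / dt)):
        k1 = rhs(G)
        k2 = rhs(G - 0.5 * dt * k1)  # negative step: integrating backward in t
        k3 = rhs(G - 0.5 * dt * k2)
        k4 = rhs(G - dt * k3)
        G = G - dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        G = 0.5 * (G + G.T)          # enforce symmetry against round-off
    return G

# Illustrative 2x2 data; the feedback gain then follows the LQ form D^{-1} B^T G.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
M = np.eye(2)
D = np.array([[1.0]])
print(riccati_backward(A, B, M, D))
```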
Theorem 3. Suppose the system of matrix Equations (32) and (33) has positive-definite solutions of order $n$. Then, the control defined by (31) provides a solution to the optimal stabilization problem for the linear stochastic system (23)–(25) with jump conditions (26) and optimality criterion (27).
Remark 2. Sufficient conditions for the resolvability of the Riccati-type Equations (32) and (33) are given in [25].
6. Model Example
To compare the results, consider the example from [14]: a linear autonomous stochastic differential equation with perturbations, whose breakpoints $t_k$ accumulate at a finite concentration point $t^*$, together with a non-random initial condition. In this autonomous case, the system (32) takes a simplified autonomous form [5].
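In the classical single-regime setting, the stationary analogue of such a system reduces to an algebraic Riccati equation; as a simplified illustration (scalar data chosen arbitrarily; the example's actual parameters are given in [14]), it can be solved directly with SciPy:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Arbitrary illustrative data; the model example's parameters are in [14].
A = np.array([[-0.5]])  # autonomous drift matrix
B = np.array([[1.0]])   # control matrix
M = np.array([[1.0]])   # state weight in the quadratic criterion
D = np.array([[1.0]])   # control weight in the quadratic criterion

G = solve_continuous_are(A, B, M, D)  # solves A^T G + G A - G B D^-1 B^T G + M = 0
K = np.linalg.solve(D, B.T @ G)       # LQ feedback gain, u = -K x
print(G, K)
```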
Three cases of the parameter values are considered, as in [14]:
- Case 1: an unstable system;
- Case 2: a stable system;
- Case 3: an unstable system with the parameter values from Case 2 and an additional impulse action.
The results of the synthesis of the optimal control (31) are visualized in Figure 1.
As we can see, the optimal control stabilizes the unstable system in Case 1 and makes the decay of stable solutions faster in Cases 2 and 3.
7. Discussion
Optimal control theory relies on several fundamental methods, one of the most prominent being the Lyapunov function method. This method, along with its various modifications, is extensively employed to address practical problems in numerous mathematical models, including stochastic differential equations. In this study, particular emphasis is placed on applying Lyapunov functions to stochastic differential equations with Markov switches, specifically addressing scenarios involving concentration points. This approach could be extended by incorporating additional assumptions about the switching mechanism, such as semi-Markov processes, where state durations do not necessarily follow an exponential distribution.
The paper also considers a model example based on a similar example from [14]. As can be seen from the simulation results, unstable systems can be stabilized; however, this is not possible in all cases, as illustrated in Case 3 of the model example. Thus, studying the conditions for the unconditional boundedness of solutions of the system (1)–(3) remains an important issue.
Future research in this field will explore broader characteristics of the switching process and validate the theoretical results derived here through practical applications. Furthermore, the computational complexity of the algorithms proposed in Theorems 2 and 3 remains an area requiring further investigation, particularly in comparison to heuristic algorithms for optimal control estimation. Hence, subsequent research will include comparative analyses between the algorithms developed in this paper and heuristic methods. Further studies will primarily focus on linear systems, exploring necessary and sufficient conditions for stability and the existence of optimal controls.
8. Conclusions
In this paper, we have established sufficient conditions for ensuring stability of stochastic differential equations characterized by jump concentration points. Unlike most classical assumptions, which impose a strict minimal interval between jumps (i.e., $\inf_{k \ge 1}(t_{k+1} - t_k) \ge \delta > 0$), our study deliberately omits this condition, thus allowing for jump concentration scenarios.
The stability analysis performed leverages a sequence of Lyapunov functions $v_k$, whose properties guarantee the stability of the solutions to Equations (1)–(3). Under assumption (7), these Lyapunov functions can be constructed explicitly. Additionally, assumption (7) significantly relaxes the previously stringent condition (8) used in earlier studies [16]. Thus, the derived stability conditions for stochastic differential equations with jump concentration points combine conditions known from systems without jumps with constraints on the jump magnitudes.
In the special case of linear stochastic differential equations, the stability conditions simplify to the existence of positive-definite solutions to Riccati-type matrix equations, similar to the classical cases. These conditions, derived from Equation (32), are sufficient but do not fully characterize all stable systems, as demonstrated by the examples in [5].
Future research directions will focus specifically on linear systems, aiming to define both necessary and sufficient stability conditions and determine the existence of optimal control solutions.
Author Contributions
Conceptualization, T.L., I.V.M. and P.V.N.; methodology, T.L. and I.V.M.; formal analysis, T.L., I.V.M. and P.V.N.; writing—original draft preparation, T.L., I.V.M., V.P.S. and P.V.N.; writing—review and editing, T.L., I.V.M., V.P.S. and P.V.N.; supervision, V.P.S. and P.V.N.; project administration, V.P.S. and P.V.N.; funding acquisition, T.L. and P.V.N. All authors have read and agreed to the published version of the manuscript.
Funding
This work was funded by the Luxembourg National Research Fund C21/BM/15739125/DIOMEDES to T.L. and P.V.N. For the purpose of open access, and in fulfilment of the obligations arising from the grant agreement, the author has applied a Creative Commons Attribution 4.0 International (CC BY 4.0) license to any Author Accepted Manuscript version arising from this submission.
Data Availability Statement
No new data were created or analyzed in this study.
Acknowledgments
We would like to acknowledge the administrations of the Luxembourg Institute of Health (LIH) and Luxembourg National Research Fund (FNR) for their support in organizing scientific contacts between research groups in Luxembourg and Ukraine, and Anna Golebiewska for support and fruitful discussions regarding the application of mathematical methods in cancer research.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
SDE | Stochastic Differential Equation
WIO | Weak Infinitesimal Operator
References
- Kats, I.Y. Lyapunov Function Method in Problems of Stability and Stabilization of Random-Structure Systems; Izd. Uralsk. Gosakademii Putei Soobshcheniya: Yekaterinburg, Russia, 1998. (In Russian) [Google Scholar]
- Yasinskaya, L.I.; Lukashiv, T.O.; Yasinskiy, V.K. Stabilization of stochastic diffusive dynamical systems with impulse markov switchings and parameters. Part II. stabilization of dynamical systems of random structure with external Markov switchings. J. Autom. Inf. Sci. 2009, 41, 26–42. [Google Scholar] [CrossRef]
- Doob, J.L. Stochastic Processes; Wiley: New York, NY, USA, 1953. [Google Scholar]
- Dynkin, E.B. Markov Processes; Academic Press: New York, NY, USA, 1965. [Google Scholar]
- Lukashiv, T.; Litvinchuk, Y.; Malyk, I.V.; Golebiewska, A.; Nazarov, P.V. Stabilization of Stochastic Dynamical Systems of a Random Structure with Markov Switches and Poisson Perturbations. Mathematics 2023, 11, 582. [Google Scholar] [CrossRef]
- Øksendal, B.; Sulem, A.; Zhang, T. Optimal control of stochastic delay equations and time-advanced backward stochastic differential equations. Adv. Appl. Probab. 2011, 43, 572–596. [Google Scholar] [CrossRef]
- Øksendal, B.; Sulem, A. Forward–Backward Stochastic Differential Games and Stochastic Control under Model Uncertainty. J. Optim. Theory Appl. 2014, 161, 22–55. [Google Scholar] [CrossRef]
- Savku, E.; Weber, G.W. Stochastic differential games for optimal investment problems in a Markov regime-switching jump-diffusion market. Ann. Oper. Res. 2022, 312, 1171–1196. [Google Scholar] [CrossRef]
- Davis, M.; Burstein, G. A Deterministic Approach to Stochastic Optimal Control with Application to Anticipative Control. Stoch. Stoch. Rep. 1992, 40, 203–256. [Google Scholar] [CrossRef]
- Dhayal, R.; Malik, M.; Abbas, S. Solvability and optimal controls of non-instantaneous impulsive stochastic fractional differential equation of order q∈(1, 2). Stochastics 2021, 93, 780–802. [Google Scholar] [CrossRef]
- Li, X.; Sun, J.; Xiong, J. Linear Quadratic Optimal Control Problems for Mean-Field Backward Stochastic Differential Equations. Appl. Math. Optim. 2019, 80, 223–250. [Google Scholar] [CrossRef]
- Rosseel, E.; Wells, G. Optimal control with stochastic PDE constraints and uncertain controls. Comput. Methods Appl. Mech. Eng. 2012, 213–216, 152–167. [Google Scholar] [CrossRef]
- Yong, J. A Linear-Quadratic Optimal Control Problem for Mean-Field Stochastic Differential Equations. Siam J. Control Optim. 2013, 51, 2809–2838. [Google Scholar] [CrossRef]
- Lukashiv, T.; Malyk, I.V.; Chepeleva, M.; Nazarov, P.V. Stability of stochastic dynamic systems of a random structure with Markov switching in the presence of concentration points. AIMS Math. 2023, 8, 24418–24433. [Google Scholar] [CrossRef]
- Øksendal, B. Stochastic Differential Equations; Springer: New York, NY, USA, 2013. [Google Scholar]
- Lukashiv, T.; Malyk, I. Existence and Uniqueness of Solution of Stochastic Dynamic Systems with Markov Switching and Concentration Points. Int. J. Differ. Equ. 2017, 2017, 7958398. [Google Scholar] [CrossRef]
- Mao, X. Stochastic Differential Equations and Applications, 2nd ed.; Woodhead Publishing: Cambridge, UK, 2008. [Google Scholar]
- Skorohod, A.V. Asymptotic Methods in the Theory of Stochastic Differential Equations; American Mathematical Society: Providence, RI, USA, 1989. [Google Scholar]
- Hu, L.; Shi, P.; Huang, B. Stochastic stability and robust control for sampled-data systems with Markovian jump parameters. J. Math. Anal. Appl. 2006, 313, 504–517. [Google Scholar] [CrossRef]
- Lyapunov, A.M. The General Problem of Stability of Motion; Gostekhizdat: Moscow, Russia, 1958. (In Russian) [Google Scholar]
- Andreeva, E.A.; Kolmanovskii, V.B.; Shaikhet, L.E. Control of Hereditary Systems; Nauka: Moscow, Russia, 1992. (In Russian) [Google Scholar]
- Feng, X.; Loparo, K.A.; Ji, Y.; Chizek, H.J. Stochastic stability properties of jump linear systems. IEEE Trans. Autom. Control 1992, 37, 38–53. [Google Scholar] [CrossRef]
- Kloeden, P.E.; Platen, E. Numerical Solution of Stochastic Differential Equations; Springer: Berlin/Heidelberg, Germany, 1992. [Google Scholar]
- Lukashiv, T. One Form of Lyapunov Operator for Stochastic Dynamic System with Markov Parameters. J. Math. 2016, 2016, 1694935. [Google Scholar] [CrossRef]
- Antonyuk, S.V.; Byrka, M.F.; Gorbatenko, M.Y.; Lukashiv, T.O.; Malyk, I.V. Optimal Control of Stochastic Dynamic Systems of a Random Structure with Poisson Switches and Markov Switching. J. Math. 2020, 2020, 9457152. [Google Scholar] [CrossRef]
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).