Article

Stabilization of Stochastic Dynamic Systems with Markov Parameters and Concentration Point

by Taras Lukashiv 1,2,3,*, Igor V. Malyk 4, Venkata P. Satagopam 3 and Petr V. Nazarov 1,5,*
1 Multiomics Data Science Research Group, Department of Cancer Research, Luxembourg Institute of Health, L-1445 Strassen, Luxembourg
2 NORLUX Neuro-Oncology Laboratory, Department of Cancer Research, Luxembourg Institute of Health, L-1210 Luxembourg, Luxembourg
3 Luxembourg Centre for Systems Biomedicine, University of Luxembourg, L-4370 Belvaux, Luxembourg
4 Department of Mathematical Problems of Control and Cybernetics, Yuriy Fedkovych Chernivtsi National University, 58000 Chernivtsi, Ukraine
5 Bioinformatics and AI Unit, Department of Medical Informatics, Luxembourg Institute of Health, L-1445 Strassen, Luxembourg
* Authors to whom correspondence should be addressed.
Mathematics 2025, 13(14), 2307; https://doi.org/10.3390/math13142307
Submission received: 30 May 2025 / Revised: 13 July 2025 / Accepted: 17 July 2025 / Published: 19 July 2025

Abstract

This paper addresses the problem of optimal stabilization for stochastic dynamical systems characterized by Markov switches and concentration points of jumps, which is a scenario not adequately covered by classical stability conditions. Unlike traditional approaches requiring a strictly positive minimal interval between jumps, we allow jump moments to accumulate at a finite point. Utilizing Lyapunov function methods, we derive sufficient conditions for exponential stability in the mean square and asymptotic stability in probability. We provide explicit constructions of Lyapunov functions adapted to scenarios with jump concentration points and develop conditions under which these functions ensure system stability. For linear stochastic differential equations, the stabilization problem is further simplified to solving a system of Riccati-type matrix equations. This work provides essential theoretical foundations and practical methodologies for stabilizing complex stochastic systems that feature concentration points, expanding the applicability of optimal control theory.

1. Introduction

One of the prerequisites for the physical realization of a process is its stability. Hence, ensuring stability is an essential task known as the stabilization problem.
The stabilization problem for stochastic dynamical systems with random structure was first solved by I.Ya. Kats in [1]. For stochastic dynamical systems with random structure and Markov switches that lead to jumps of the phase vector, the problem of optimal stabilization was solved by the authors in [2]. In that work, it was assumed that the moments of Markov switches are known. This assumption allowed a relatively straightforward transfer of basic properties from stochastic differential equations (SDEs) with continuous trajectories to systems with jumps. This global problem includes sub-problems related to the Markov property of the solution $x(t)$, $t \ge 0$, the martingale properties of $|x(t)|^2$, $t \ge 0$, and other local characteristics [3,4,5]. Similar problems for stochastic differential equations with delays have been studied in [6,7]. Stochastic games [7,8], in which two players pursue different objectives and their strategies are described by stochastic differential equations, have also become widely used. More general approaches to analyzing random fields and stochastic partial differential equations can be found in [9,10,11,12,13].
The inclusion of an integral term with respect to a Poisson measure also allowed cases with random moments of finite jumps of the phase vector to be addressed. For such systems, an explicit form of control that renders linear systems asymptotically stochastically stable was obtained in [5], along with a justification of exact and approximate methods for computing the control. A system of Riccati-type matrix equations was derived to find a general solution to the stabilization problem.
In the works mentioned above, as in most studies involving trajectory jumps, the distance between jumps is assumed to satisfy $|t_k - t_{k-1}| > \delta > 0$. However, in catastrophe theory or resonant systems, cases often arise where jumps concentrate at a point, leading to the relation
$$\lim_{k \to \infty} t_k = t^* < \infty.$$
In this scenario, as previously indicated in [14], the cumulative effect of jumps can result in the loss of system stability. Consider a simple example that illustrates the problems caused by the existence of a concentration point:
$$dx(t) = -x(t)\, dt,$$
with jumps defined by
$$x(t_k) = x(t_k^-)\, (1 + k^2)$$
at points $t_k$, $t_k \uparrow \alpha$, $\alpha > 0$. One can easily conclude that
$$\lim_{t \to \alpha^-} |x(t)| = \infty,$$
provided that $x(0) \ne 0$. This straightforward example highlights the critical role of jump magnitudes in systems with concentration points.
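A few lines of simulation make the cumulative effect visible. The sketch below integrates the decay exactly between jumps and applies the multiplicative jumps; the concrete schedule $t_k = \alpha k/(k+1)$ is our illustrative choice of times accumulating at $\alpha$, not taken from the paper:

```python
import numpy as np

# Minimal sketch of the example above: dx = -x dt between jumps and
# x(t_k) = x(t_k^-) * (1 + k^2) at jump times t_k that accumulate at alpha.
# The schedule t_k = alpha * k / (k + 1) is our illustrative choice.
alpha, n_jumps = 1.0, 30
t_prev, x = 0.0, 10.0
for k in range(1, n_jumps + 1):
    t_k = alpha * k / (k + 1)        # jump times accumulating at alpha
    x *= np.exp(-(t_k - t_prev))     # exact decay of dx = -x dt on [t_prev, t_k)
    x *= 1.0 + k ** 2                # multiplicative jump at t_k
    t_prev = t_k
print(f"|x| just after t_30 = {abs(x):.3e}")  # grows without bound as k increases
```

Although the flow between jumps is contracting, the product of the jump factors diverges, so the printed magnitude explodes long before $t$ reaches $\alpha$.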
In Section 2, we introduce the mathematical model for dynamical systems with jumps, described by a system of stochastic differential equations with Markov parameters and switches, providing sufficient conditions for the existence and uniqueness of solutions. Section 3 establishes sufficient conditions for exponential stability of the solution $x(t)$, $t \ge 0$ (Theorem 1), which simultaneously define the class of admissible controls. Sufficient conditions for the existence of solutions to the optimal stabilization problem are established in Section 4 (Theorem 2). The synthesis of optimal control in explicit form for a linear system with a quadratic quality functional is presented in Section 5.

2. Problem Statement

On the probability basis $(\Omega, \mathcal{F}, \mathbb{F}, \mathbb{P})$ [3,4,15], consider a controlled stochastic dynamical system of random structure given by the stochastic differential equation (SDE)
$$dx(t) = a(t, \xi(t), x(t), u(t))\, dt + b(t, \xi(t), x(t), u(t))\, dw(t), \quad t \in \mathbb{R}_+ \setminus K, \tag{1}$$
with Markov switches
$$\Delta x(t) = g\big(t_k^-, \xi(t_k^-), \eta_k, x(t_k^-)\big), \quad t_k \in K = \{t_n\}, \tag{2}$$
and initial conditions
$$x(0) = x_0 \in \mathbb{R}^m, \quad \xi(0) = y \in Y, \quad \eta_0 = h \in H. \tag{3}$$
Here, $\xi(t)$, $t \ge 0$, is a Markov chain with a finite number of states $Y = \{1, 2, \dots, \bar N\}$ and generator $Q = \{q_{ij}\}$, $i, j = 1, \dots, \bar N$; $\{\eta_k, k \ge 0\}$ is a Markov chain with values in the space $H$ and transition probability matrix $P_H$; $x : [0, +\infty) \times \Omega \to \mathbb{R}^m$; $u(t) \in \mathbb{R}^m$ is the control; $w(t)$, $t \ge 0$, is an $m$-dimensional standard Wiener process; the processes $w$, $\xi$, and $\eta$ are independent [3,4,15].
Define
$$\mathcal{F}_{t_k} = \sigma\big( \xi(s), w(s), \eta_e,\; s \le t_k,\; t_e \le t_k \big)$$
as the minimal $\sigma$-algebra with respect to which $\xi(t)$, for $t \in [0, t_k]$, and $\eta_n$, $n \le k$, are measurable.
The coefficients of the stochastic differential equation are measurable maps $a : \mathbb{R}_+ \times Y \times \mathbb{R}^m \to \mathbb{R}^m$, $b : \mathbb{R}_+ \times Y \times \mathbb{R}^m \to \mathbb{R}^{m \times m}$, and $g : \mathbb{R}_+ \times Y \times H \times \mathbb{R}^m \to \mathbb{R}^m$ that satisfy the boundedness condition and the global Lipschitz conditions
$$|a(t, y, x, u)|^2 + |b(t, y, x, u)|^2 + |g(t, y, h, x)|^2 \le C\, (1 + |x|^2); \tag{4}$$
$$|a(t, y, x_1, u) - a(t, y, x_2, u)|^2 + |b(t, y, x_1, u) - b(t, y, x_2, u)|^2 \le L\, |x_1 - x_2|^2, \quad x_1, x_2 \in \mathbb{R}^m; \tag{5}$$
$$|g(t_k, y, h, x_1) - g(t_k, y, h, x_2)|^2 \le L_k\, |x_1 - x_2|^2, \quad x_1, x_2 \in \mathbb{R}^m, \qquad \sum_{k=1}^{\infty} L_k < \infty. \tag{6}$$
Consider the scenario with a concentration point of jumps, i.e.,
$$\lim_{n \to \infty} t_n = t^* < \infty.$$
Assume the following conditions are satisfied:
$$\sum_{k=1}^{\infty} \gamma_k < \infty, \qquad \gamma_k = \sup_{x \in \mathbb{R}^m,\, y \in Y,\, h \in H} |g(t_k, y, h, x)|, \tag{7}$$
and
$$\lim_{\varepsilon \to 0} \Big( \ln \varepsilon + N_\varepsilon \sum_{k=1}^{N_\varepsilon} L_k \Big) = -\infty, \qquad N_\varepsilon := \inf\Big\{ k \ge 1 : \sum_{m=k}^{\infty} \gamma_m < \varepsilon \Big\}. \tag{8}$$
Conditions (4)–(8) are, in fact, sufficient for the existence and uniqueness of a strong solution to the Cauchy problem (1)–(3) [16].
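For a quick feel of how (7) and (8) interact, the snippet below evaluates both conditions for the illustrative sequences $\gamma_k = 2^{-k}$ and $L_k = 4^{-k}$; both choices are ours, and (8) is taken in the reconstructed form above:

```python
import numpy as np

# Numerical sanity check of conditions (7)-(8) for illustrative sequences.
K = 200
gamma = 2.0 ** -np.arange(1, K + 1)           # jump magnitude bounds gamma_k
L = 4.0 ** -np.arange(1, K + 1)               # Lipschitz constants L_k
tails = np.cumsum(gamma[::-1])[::-1]          # tails[k-1] = sum_{m >= k} gamma_m
print("sum gamma_k =", gamma.sum())           # finite, so condition (7) holds

for eps in (1e-2, 1e-6, 1e-10, 1e-14):
    n = int(np.argmax(tails < eps)) + 1       # N_eps: first k with tail < eps
    val = np.log(eps) + n * L[:n].sum()
    print(f"eps={eps:.0e}  N_eps={n:3d}  ln(eps) + N_eps*sum L_k = {val:7.2f}")
# The printed values decrease toward -infinity, so condition (8) holds here.
```

With these sequences the jump tails decay geometrically, so $N_\varepsilon$ grows only like $\log(1/\varepsilon)$ and the logarithmic term dominates.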
Define the transition probability of the Markov chain $(\xi(t_k), \eta_k, x(t_k))$, which determines the solution of the problem (1)–(3) at step $k$, as
$$P_k\big((y, h, x), \Gamma \times G \times C\big) := \mathbb{P}\big\{ (\xi(t_{k+1}), \eta_{k+1}, x(t_{k+1})) \in \Gamma \times G \times C \,\big|\, (\xi(t_k), \eta_k, x(t_k)) = (y, h, x) \big\}.$$
Definition 1
([1]). The discrete Lyapunov operator $(l v_k)(y, h, x)$ on a sequence of measurable scalar functions $v_k(y, h, x) : Y \times H \times \mathbb{R}^m \to \mathbb{R}^1$, $k \in \mathbb{N} \cup \{0\}$, for the SDE (1) with Markov switches (2) is defined as
$$(l v_k)(y, h, x) := \int_{Y \times H \times \mathbb{R}^m} P_k\big((y, h, x), (du \times dz \times dl)\big)\, v_{k+1}(u, z, l) - v_k(y, h, x). \tag{9}$$
Here, $v_k(y, h, x)$, $k \in \mathbb{N}$, is a Lyapunov function in the sense of the following definition.
Definition 2
([1,2]). A Lyapunov function for the system (1)–(3) is a sequence of nonnegative functions $v_k(y, h, x)$, $k \ge 0$, satisfying the following conditions:
1. for all $k \ge 0$, $y \in Y$, $h \in H$, $x \in \mathbb{R}^m$, the discrete Lyapunov operator $(l v_k)(y, h, x)$ (9) is defined;
2. $v_k(y, h, x) \le v_{k+1}(y, h, x)$ for all $k \ge 0$, $y \in Y$, $h \in H$, $x \in \mathbb{R}^m$;
3. if $r \to \infty$,
$$\bar v(r) \equiv \inf_{k \in \mathbb{N},\, y \in Y,\, h \in H,\, |x| \ge r} v_k(y, h, x) \to +\infty;$$
4. if $r \to 0$,
$$\underline{v}(r) \equiv \sup_{k \in \mathbb{N},\, y \in Y,\, h \in H,\, |x| \le r} v_k(y, h, x) \to 0;$$
and $\bar v(r)$ and $\underline{v}(r)$ are continuous and monotone.
Definition 3
([17,18]). The stochastic system (1)–(3) is called:
- stable in probability if for any $\varepsilon_1 > 0$, $\varepsilon_2 > 0$ there exists $\delta = \delta(\varepsilon_1, \varepsilon_2) > 0$ such that $|x_0| < \delta$ implies
$$\mathbb{P}\Big\{ \sup_{t \ge 0} |x(t)| > \varepsilon_1 \Big\} < \varepsilon_2 \tag{10}$$
for all $y \in Y$, $h \in H$;
- asymptotically stochastically stable if it is stable in probability and for any $\varepsilon > 0$ there exists $\delta_2 > 0$ such that
$$\lim_{T \to \infty} \mathbb{P}\Big\{ \sup_{t \ge T} |x(t)| > \varepsilon \Big\} = 0 \tag{11}$$
for all $|x_0| < \delta_2$, $y \in Y$, $h \in H$.
Definition 4
([17,18,19]). The system (1)–(3) is called exponentially stable in the mean square if, for all $x_0 \in \mathbb{R}^m$, $\xi(0)$, and $\eta_0$, there exist constants $\alpha > 0$, $\beta > 0$ such that
$$\mathbb{E}|x(t)|^2 \le \alpha\, |x_0|^2\, e^{-\beta t}, \quad t \ge 0. \tag{12}$$
In general, these two types of stability are not related to each other [19], but in specific cases one can be used to infer the other. The remark to Theorem 1 shows that, provided a Lyapunov function exists, exponential stability in the mean square implies asymptotic stochastic stability. Thus, Theorem 1 allows us to draw conclusions not only about the moment convergence of the solution to 0 but also about the probabilistic properties of the solution for large $T$.
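For intuition, a one-line Chebyshev-type bound (our illustrative remark, not part of the paper's argument) makes the passage from the moment estimate (12) to decay in probability explicit:
$$\mathbb{P}\{|x(t)| > \varepsilon\} \le \frac{\mathbb{E}|x(t)|^2}{\varepsilon^2} \le \frac{\alpha\, |x_0|^2}{\varepsilon^2}\, e^{-\beta t} \xrightarrow[t \to \infty]{} 0.$$
The uniform-in-time statements of Definition 3 require the additional martingale-type arguments referenced in the remark, not just this pointwise bound.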

3. Stability

One common approach to establishing sufficient conditions for exponential stability involves imposing a constraint on the switching moments of the type
$$|t_{k+1} - t_k| > \Delta, \quad \Delta = \mathrm{const} > 0, \tag{13}$$
which excludes the possibility of concentration points of jumps [1,20,21]. Clearly, in the case considered here, condition (13) is not fulfilled. Therefore, it is essential to identify conditions under which the solution to the system (1)–(3) is exponentially stable in the mean square.
Theorem 1.
Suppose that, for the system (1)–(3), there exist Lyapunov functions $v_k(x, y, h)$, $k \ge 0$, and functions $c$, $f$, and $z$, strictly increasing on $[0, \infty)$, positive and continuous, with $c(0) = f(0) = z(0) = 0$, such that the condition
$$f(|x(t)|^2) \le v_k(x(t), y, h) \le z(|x(t)|^2) \tag{14}$$
holds, along with the inequality
$$l v_k(x(t), y, h) \le -c(|x(t)|^2) \tag{15}$$
for $t \in [t_k, t_{k+1})$, $k \ge 0$, and
$$\sum_{j = k N_T}^{k N_T + n - 1} \mathbb{E}\{ v_j(x, \xi(t_j), \eta_j) \} \le \chi_k\big( v_k(x(t_k), \xi(t_k), \eta_k) \big) \tag{16}$$
for some integer $N_T \ge 0$ and $n = 1, 2, \dots, N_T$, where $\chi_k : \mathbb{R}_+ \to \mathbb{R}_+$ is a non-decreasing function satisfying $\chi_k(s) \ge s$. Assume also that
$$\inf_{x \in (0, \infty)} \frac{c(x)}{z(x)} > 0.$$
Then, the system (1)–(3) is exponentially stable in the mean square.
Proof of Theorem 1.
On the interval $[t_k, t_{k+1})$, $k \ge 0$, consider the weak infinitesimal operator acting on the Lyapunov function $v_k(x, y, h)$. From (15), we have
$$l v_k(x, y, h) \le -c(|x(t)|^2) = -\frac{c(|x(t)|^2)}{v_k(x, y, h)} \cdot v_k(x, y, h) \le -\alpha\, v_k(x, y, h),$$
where the scalar $\alpha > 0$ is defined as
$$\alpha = \inf_{x \in (0, \infty)} \frac{c(x)}{z(x)}.$$
By Dynkin's formula [4], for any $t \in [t_{\bar k}, t_{\bar k + 1})$ and some $\bar k \ge 0$,
$$\mathbb{E}\Big\{ \sum_{j=0}^{\bar k - 1} \int_{t_j}^{t_{j+1}} l v_j(x(s), y, h)\, ds + \int_{t_{\bar k}}^{t} l v_{\bar k}(x(s), y, h)\, ds \Big\} = \sum_{j=0}^{\bar k - 1} \Big[ \mathbb{E}\{ v_{j+1}(x(t_{j+1}), \xi(t_{j+1}), \eta_{j+1}) \} - \mathbb{E}\{ v_j(x(t_j), \xi(t_j), \eta_j) \} \Big] + \mathbb{E}\{ v_{\bar k}(x(t), \xi(t_{\bar k}), \eta_{\bar k}) \} - \mathbb{E}\{ v_{\bar k}(x(t_{\bar k}), \xi(t_{\bar k}), \eta_{\bar k}) \}$$
$$= \mathbb{E}\{ v_{\bar k}(x(t), \xi(t_{\bar k}), \eta_{\bar k}) \} - v_0(x_0, y, h) + \sum_{k=0}^{\lfloor \bar k / N_T \rfloor - 1} \sum_{j = k N_T}^{(k+1) N_T - 1} \Big[ \mathbb{E}\{ v_j(x(t_j), \xi(t_j), \eta_j) \} - v_j(x(t_j), \xi(t_j), \eta_j) \Big] + \sum_{j = \lfloor \bar k / N_T \rfloor N_T}^{\bar k} \Big[ \mathbb{E}\{ v_j(x(t_j), \xi(t_j), \eta_j) \} - v_j(x(t_j), \xi(t_j), \eta_j) \Big].$$
Using (16), it follows that
$$\mathbb{E}\{ v_{\bar k}(x(t), \xi(t_{\bar k}), \eta_{\bar k}) \} - v_0(x_0, y, h) \le \mathbb{E}\Big\{ \sum_{j=0}^{\bar k - 1} \int_{t_j}^{t_{j+1}} l v_j(x(s), \xi(t_j), \eta_j)\, ds + \int_{t_{\bar k}}^{t} l v_{\bar k}(x(s), \xi(t_{\bar k}), \eta_{\bar k})\, ds \Big\}$$
$$\le -\alpha\, \mathbb{E}\Big\{ \sum_{j=0}^{\bar k - 1} \int_{t_j}^{t_{j+1}} v_j(x(s), \xi(t_j), \eta_j)\, ds + \int_{t_{\bar k}}^{t} v_{\bar k}(x(s), \xi(t_{\bar k}), \eta_{\bar k})\, ds \Big\} = -\alpha\, \mathbb{E}\Big\{ \int_0^t v(x(s), \xi(s), \eta(s))\, ds \Big\},$$
where $v$ denotes the piecewise function equal to $v_j$ on $[t_j, t_{j+1})$.
The last inequality implies that
$$\frac{d}{dt}\, \mathbb{E}\{ v_k(x(t), \xi(t_k), \eta_k) \} \le -\alpha\, \mathbb{E}\{ v_k(x(t), \xi(t_k), \eta_k) \},$$
which, by the Gronwall–Bellman lemma, yields
$$\mathbb{E}\{ v_k(x(t), y, h) \} \le v_0(x_0, y, h)\, e^{-\alpha t}, \quad t \in [0, t_k].$$
This estimate and (14) imply exponential stability in the mean square of the system (1)–(3). Indeed, based on the estimate of $\mathbb{E}\{v_k(x, y, h)\}$, the event $\lim_{t \to \infty} |x(t)| = 0$ is equivalent to $\lim_{t_k \to t^*,\, t \to \infty} \mathbb{E}\{v_k(x(t), y, h)\} = 0$, proving the theorem. □
Remark 1.
Since the inequality (15) holds, the solution of (1)–(3) is asymptotically stable in probability [14].

4. Stabilization

The problem of optimal stabilization for the system (1)–(3) consists of determining a control u ( t , x ( t ) ) , such that the trivial solution x ( t ) 0 of the system becomes asymptotically stable in probability.
It is assumed that the control $u$ is based on the full-feedback principle and is continuous in $t$ for $t \ge 0$, $x \in \mathbb{R}^m$, for every fixed $\xi(t) = y \in Y$ and $\eta_k = h \in H$. Specifically, for the dynamics (1) with switches (2), the control is defined by the relation
$$u(t) = u(t, x(t^-)),$$
where the left-hand limit is taken precisely because of the presence of the jumps (2).
The set of admissible controls consists of those controls for which the system is exponentially stable [1,22], namely
$$U = \big\{ u(t) = u(t, x(t^-)) \;:\; \mathbb{E}|x(t)| \le |x_0|\, e^{-\alpha(u) t}, \; \alpha(u) > 0 \big\}.$$
In the previous section, we established sufficient conditions under which exponential stability in the mean square also yields asymptotic stability in probability. Therefore, if these conditions are met, every admissible control solves the stabilization problem, so the set of such controls is infinite. The optimal control must then be selected based on the best quality of the transient process, expressed through minimizing the quality functional
$$I_u(y, h, x_0) := \sum_{k=0}^{\infty} \int_{t_k}^{t_{k+1}} \mathbb{E}\big\{ W(t, x(t), u(t)) \,\big|\, \xi(0) = y,\, \eta_0 = h,\, x(0) = x_0 \big\}\, dt, \tag{17}$$
where $W(t, x, u) \ge 0$ is a measurable function defined for $t \ge 0$, $x \in \mathbb{R}^m$, $u \in \mathbb{R}^r$.
The functional (17) can be calculated as follows (a simulation sketch is given after this list):
  • Compute the trajectory $x(t)$ of the SDE (1) for a given control $u(t, y, h, x)$, e.g., using the Euler–Maruyama method [23].
  • Substitute $x(t)$, $\xi(t)$, and $u(t) = u(t, x(t^-))$ into the functional (17).
  • Estimate the value of (17) through statistical simulation (the Monte Carlo method).
The choice of the function $W(t, x, u)$, which determines the estimate of the functional $I_u$ and the quality of the process $x(t)$ as a strong solution of the SDE (1), must satisfy the following criteria:
    (a) minimization of (17) must ensure that the strong solution $x(t)$ of the SDE (1) converges to zero rapidly on average, with high probability;
    (b) the value of the quality functional should adequately reflect the computational effort required to determine the control $u(t)$;
    (c) the chosen function $W(t, x, u)$ must permit explicit or constructive solutions to the stabilization problem.
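As a concrete illustration of the three steps above, the sketch below estimates (17) for a scalar SDE by the Euler–Maruyama method, with multiplicative jumps at the times $t_k = 2 - 2^{1-k}$ used in the model example of Section 6. The dynamics, the jump law, the running cost $W(t, x, u) = x^2 + u^2$, and all numerical values are illustrative assumptions, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_I(u_fb, a=-1.0, b=1.0, sigma=0.3, x0=10.0,
               T=2.0, dt=1e-3, n_paths=200, n_jumps=10):
    t_jumps = list(2.0 - 2.0 ** (1 - np.arange(1, n_jumps + 1)))
    total = 0.0
    for _ in range(n_paths):
        x, jumps = x0, t_jumps.copy()
        for step in range(int(T / dt)):
            t = step * dt
            u = u_fb(t, x)                     # feedback control u(t, x(t-))
            total += (x * x + u * u) * dt      # accumulate W(t, x, u) dt
            x += (a * x + b * u) * dt + sigma * x * np.sqrt(dt) * rng.standard_normal()
            if jumps and t + dt >= jumps[0]:   # impulse on the phase vector at t_k
                x *= 1.0 + 0.5 * rng.choice([-1.0, 1.0])
                jumps.pop(0)
    return total / n_paths                     # Monte Carlo estimate of (17)

print(estimate_I(lambda t, x: -0.6 * x))       # cost of one candidate feedback
```

Comparing the printed estimate across several candidate feedbacks is exactly the statistical selection procedure described above; the variance of the estimate decreases as `n_paths` grows.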
Definition 5.
A control $u^0(t)$ satisfying
$$I_{u^0}(y, h, x_0) = \min_u I_u(y, h, x_0),$$
where the minimum is taken over all controls continuous in the variables $t$ and $x$ for each $\xi(0) = y \in Y$ and $\eta_0 = h \in H$, is called optimal with respect to the stabilization of the strong solution $x \in \mathbb{R}^m$ of the system (1)–(3).
Theorem 2.
Let, for the system (1)–(3), there exist a function $v^0(t_k, y, h, x)$ and an $r$-vector function $u^0(t, y, h, x) \in \mathbb{R}^r$ such that:
1. the sequence of functions $v_k^0(y, h, x) \equiv v^0(t_k, y, h, x)$ is a Lyapunov function satisfying the conditions of Theorem 1;
2. the sequence of $r$-dimensional control functions
$$u_k^0(y, h, x) \equiv u^0(t_k, y, h, x) \in \mathbb{R}^r$$
is measurable in all arguments, where $0 \le t_k < t_{k+1}$, $k \ge 0$;
3. the function appearing in the criterion (17) is positive definite in $x \in \mathbb{R}^m$, i.e., for $t \in [t_k, t_{k+1})$, $k \ge 0$,
$$W(t, x, u_k^0(y, h, x)) > 0;$$
4. the sequence of infinitesimal operators $(l v_k^0)_{u_k^0}$, calculated at $u_k^0 \equiv u^0(y, h, x)$, satisfies, for $t \in [t_k, t_{k+1})$,
$$(l v_k^0)_{u_k^0} = -W(t, x, u_k^0);$$
5. the expression $(l v_k^0)|_u + W(t, x, u)$ reaches its minimum at $u = u_k^0$, $k \ge 0$, i.e.,
$$(l v_k^0)_{u_k^0} + W(t, x, u_k^0) = \inf_{u \in \mathbb{R}^r} \big\{ (l v_k^0)|_u + W(t, x, u) \big\} = 0; \tag{18}$$
6. the series
$$\sum_{k=0}^{\infty} \int_{t_k}^{t_{k+1}} \mathbb{E}\big\{ W(t, x(t), u(t)) \,\big|\, x(t_k) \big\}\, dt < \infty$$
converges.
Then, the control $u_k^0 \equiv u^0(t_k, y, h, x)$, $k \ge 0$, stabilizes the solution of the problem (1)–(3). Moreover, the following equality holds:
$$v^0(y, h, x_0) \equiv \sum_{k=0}^{\infty} \int_{t_k}^{t_{k+1}} \mathbb{E}\big\{ W(t, x(t), u^0(t)) \,\big|\, x(t_k) \big\}\, dt = \min_{u} \sum_{k=0}^{\infty} \int_{t_k}^{t_{k+1}} \mathbb{E}\big\{ W(t, x(t), u(t)) \,\big|\, x(t_k) \big\}\, dt \equiv I_{u^0}(y, h, x_0). \tag{19}$$
Proof of Theorem 2.
The proof follows exactly the argument provided for Theorem 2 in [5]. □
Since $\xi(t_k)$ is a Markov process with a finite number of states, the transition probabilities can be defined as follows:
$$\mathbb{P}\{\omega : \xi(t + \Delta t) = y_j \mid \xi(t) = y_i\} = q_{ij}(t)\, \Delta t + o(\Delta t), \quad y_i \ne y_j, \quad i, j = \overline{1, N}.$$
Under this assumption, we obtain an equation that must be satisfied by the optimal Lyapunov functions $v_k^0(y, h, x)$ and the optimal control $u_k^0(t, x)$ for $t \in [t_k, t_{k+1})$.
Following [14,24], the weak infinitesimal operator (WIO) (9) has the form
$$(l v_k)(y, h, x) = \frac{\partial v_k(y, h, x)}{\partial t} + \big( \nabla v_k(y, h, x),\, a(t, y, x, u) \big) + \frac{1}{2}\, \mathrm{Sp}\big( b^T(t, y, x, u) \cdot \nabla^2 v_k(y, h, x) \cdot b(t, y, x, u) \big) + \sum_{j \ne i}^{N} \Big[ \int_{\mathbb{R}^m} v_j(t, x)\, p_{ij}(t, z/x)\, dz - v_i(t, x) \Big] q_{ij}, \tag{20}$$
where $(\cdot, \cdot)$ is the scalar product, $\nabla v_k = \big( \frac{\partial v_k}{\partial x_1}, \dots, \frac{\partial v_k}{\partial x_m} \big)^T$, $\nabla^2 v_k = \big( \frac{\partial^2 v_k}{\partial x_i \partial x_j} \big)_{i,j=1}^{m}$, $k \ge 0$, "$T$" denotes transposition, $\mathrm{Sp}$ is the trace of a matrix, and $p_{ij}(t, z/x)$ is the conditional probability density defined by
$$\mathbb{P}\big\{ x(\tau) \in [z, z + dz] \,\big|\, x(\tau_0) = x \big\} = p_{ij}(\tau, z/x)\, dz + o(dz),$$
assuming $\xi(\tau_0) = y_i$, $\xi(\tau) = y_j$.
Using (20), we derive the first equation for $v^0$ by substituting the averaged infinitesimal operator $(l v_k^0)|_u$ [1] into the left-hand side of (18). The resulting equation at the points $(t_k, y_i, \eta_k, x)$ is
$$\frac{\partial v_k^0}{\partial t} + \Big( \frac{\partial v_k^0}{\partial x} \Big)^T a(t, y_i, x, u) + \frac{1}{2}\, \mathrm{Sp}\Big( b^T(t, y_i, x) \cdot \frac{\partial^2 v_k^0}{\partial x^2} \cdot b(t, y_i, x) \Big) + \sum_{j \ne i}^{N} \Big[ \int_{\mathbb{R}^m} v_j^0(y_j, h, z)\, p_{ij}(t, z/x)\, dz - v_i^0(y_i, h, x) \Big] q_{ij} + W(t, x, u) = 0. \tag{21}$$
To define the optimal control $u_k^0(t, y, h, x)$, we differentiate (21) with respect to the variable $u$:
$$\Big[ \Big( \frac{\partial v^0}{\partial x} \Big)^{T} \frac{\partial a}{\partial u} + \Big( \frac{\partial W}{\partial u} \Big)^{T} \Big]_{u = u_k^0} = 0, \tag{22}$$
where $\partial a / \partial u$ is the $(m \times r)$ Jacobian matrix with elements $\partial a_n / \partial u_s$, $n = \overline{1, m}$, $s = \overline{1, r}$, and $\partial W / \partial u \equiv \big( \partial W / \partial u_1, \dots, \partial W / \partial u_r \big)^T$, $k \ge 0$.
Thus, according to Theorem 2, the problem of optimal stabilization reduces to solving the complex system of nonlinear partial differential Equations (18) for the unknown Lyapunov functions $v_{ik}^0 \equiv v_k^0(y_i, h, x)$, where $i = \overline{1, l}$ and $k \ge 0$.
It is important to note that this nonlinear system is derived by eliminating the control u k 0 = u 0 ( t , y , h , x ) from Equations (21) and (22).
Given the inherent difficulty of solving such a nonlinear system directly, we will subsequently focus on linear stochastic systems, for which more tractable solution schemes can be constructed.

5. Stabilization of Linear Systems

Consider the linear case:
$$dx(t) = \big[ A(t, \xi(t))\, x(t) + B(t, \xi(t))\, u(t) \big]\, dt + \sigma(t, \xi(t))\, x(t)\, dw(t), \quad t \in \mathbb{R}_+ \setminus K, \tag{23}$$
with Markov switching given by
$$\Delta x(t)\big|_{t = t_k} = g\big(t_k^-, \xi(t_k^-), \eta_k, x(t_k^-)\big), \quad t_k \in K = \{t_n\}, \tag{24}$$
where $\lim_{n \to +\infty} t_n = +\infty$, and initial conditions
$$x(0) = x_0 \in \mathbb{R}^m, \quad \xi(0) = y \in Y, \quad \eta_0 = h \in H. \tag{25}$$
Here, $A$, $B$, and $\sigma$ are piecewise continuous integrable matrix functions of appropriate dimensions.
We assume that the jump conditions for the state vector $x \in \mathbb{R}^m$ at a switching instant $t = t^*$, corresponding to a change in the structure of the system due to the transition from $\xi(t^{*-}) = y_i$ to $\xi(t^*) = y_j \ne y_i$, are linear and expressed as
$$x(t^*) = K_{ij}\, x(t^{*-}) + \sum_{s=1}^{N} \xi_s Q_s\, x(t^{*-}), \tag{26}$$
where $\xi_s := \xi_s(\omega)$ are independent random variables satisfying $\mathbb{E}\xi_s = 0$, $\mathbb{E}\xi_s^2 = 1$, and $K_{ij}$, $Q_s$ are given $(m \times m)$-matrices.
Note that Equation (26) covers the general jump conditions in the following special cases [21]:
- if the jumps are deterministic, then $Q_s = 0$ and Expression (26) reduces to
$$x(t^*) = K_{ij}\, x(t^{*-});$$
- continuous changes in the phase vector correspond to $Q_s = 0$ and $K_{ij} = I$, the identity matrix of size $(m \times m)$.
The quality of the transient process is evaluated through the quadratic functional
$$I_u(y, h, x_0) := \sum_{k=0}^{\infty} \int_{t_k}^{t_{k+1}} \mathbb{E}\big\{ x^T(t) M(t)\, x(t) + u^T(t) D(t)\, u(t) \,\big|\, y, h, x_0 \big\}\, dt, \tag{27}$$
where $M(t) \ge 0$ and $D(t) > 0$ are symmetric matrices of dimensions $(m \times m)$ and $(r \times r)$, respectively.
The optimal Lyapunov functions are sought in the quadratic form
$$v_k^0(y, h, x) = x^T G(t, y, h)\, x, \tag{28}$$
where $G(t, y, h)$ is a positive-definite symmetric matrix of dimension $(m \times m)$.
Throughout this section, we assume that $\xi(t)$ is a Markov chain with a finite state space $Y = \{y_1, y_2, \dots, y_l\}$, and $\eta_k$, $k \ge 0$, is a Markov chain with states $h_k$ in a metric space $H$ and transition probabilities $P_k(h, G)$ at step $k$. We introduce the following notations:
$$A_i(t) := A(t, y_i), \quad B_i(t) := B(t, y_i), \quad \sigma_i(t) := \sigma(t, y_i), \quad G_{ik}(t) := G(t, y_i, h_k), \quad v_{ik} := v(y_i, h_k, x).$$
Substituting the quadratic form (28) into Equations (21) and (22), we derive equations for determining the optimal Lyapunov function $v_k^0(y, h, x)$ and the optimal control $u_k^0(t, x)$ for $t \in [t_k, t_{k+1})$. Using the WIO form (20), we find that
$$x^T(t)\, \frac{dG_{ik}(t)}{dt}\, x(t) + 2 \big[ A_i(t)\, x(t) + B_i(t)\, u(t) \big]^T G_{ik}(t)\, x(t) + \mathrm{Sp}\big( x^T(t)\, \sigma_i^T(t)\, G_{ik}(t)\, \sigma_i(t)\, x(t) \big) + x^T(t) \sum_{j \ne i}^{N} \Big[ K_{ij}^T G_{jk}(t) K_{ij} + \sum_{s=1}^{N} Q_s^T G_{jk}(t) Q_s - G_{ik}(t) \Big] q_{ij}\, x(t) + x^T(t) M_{ik}(t)\, x(t) + u^T(t) D_{ik}(t)\, u(t) = 0, \tag{29}$$
$$2 x^T(t)\, G_{ik}(t)\, B_i(t) + 2 u^T(t)\, D_{ik}(t) = 0. \tag{30}$$
Using (30), we derive the optimal control for $\xi(t) = y_i$ and $\eta_k = h_k$, $k \ge 0$:
$$u_{ik}^0(t, x) = -D_{ik}^{-1}(t)\, B_i^T(t)\, G_{ik}(t)\, x(t). \tag{31}$$
Given the matrix equality
$$2 x^T(t)\, G_{ik}(t)\, A_i(t)\, x(t) = x^T(t) \big( G_{ik}(t) A_i(t) + A_i^T(t) G_{ik}(t) \big)\, x(t),$$
eliminating $u_{ik}^0$ from (29) and setting the resulting quadratic form to zero, we obtain a system of matrix Riccati-type differential equations for determining the matrices $G_{ik}(t)$, $i = 1, 2, \dots, l$, $k \ge 0$, corresponding to the interval $[t_k, t_{k+1})$:
$$\frac{dG_{ik}(t)}{dt} + G_{ik}(t) A_i(t) + A_i^T(t) G_{ik}(t) - G_{ik}(t) B_i(t) D_{ik}^{-1}(t) B_i^T(t) G_{ik}(t) + \sigma_i^T(t) G_{ik}(t)\, \sigma_i(t) + \sum_{j \ne i}^{N} \Big[ K_{ij}^T G_{jk}(t) K_{ij} + \sum_{s=1}^{N} Q_s^T G_{jk}(t) Q_s - G_{ik}(t) \Big] q_{ij} + M_{ik}(t) = 0, \tag{32}$$
$$\lim_{t \to \infty} G_{ik}(t) = 0, \quad i = \overline{1, N}, \; k \ge 0. \tag{33}$$
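Because (32) and (33) rarely admit closed-form solutions, they can be integrated numerically backward in time from a terminal condition $G_i(T) = 0$ approximating (33). The sketch below does this for two regimes with $m = 2$; the matrices $A_i$, $B_i$, $\sigma_i$, the generator $q$, and the continuous-switching choice $K_{ij} = I$, $Q_s = 0$ are all illustrative assumptions:

```python
import numpy as np

m = 2
A = [np.array([[0.0, 1.0], [-1.0, -0.5]]),
     np.array([[0.5, 0.0], [0.0, -2.0]])]     # regime drifts A_1, A_2 (assumed)
B = [np.eye(m), np.eye(m)]                    # control matrices B_i
Sg = [0.3 * np.eye(m), 0.5 * np.eye(m)]       # diffusion matrices sigma_i
Kj = np.eye(m)                                # K_ij = I: continuous switching
q = np.array([[-1.0, 1.0], [2.0, -2.0]])      # generator of xi(t) (assumed)
M, Dinv = np.eye(m), np.linalg.inv(np.eye(m)) # cost weights of (27)

T, dt = 20.0, 1e-3
G = [np.zeros((m, m)), np.zeros((m, m))]      # terminal condition G_i(T) = 0
for _ in range(int(T / dt)):
    new = []
    for i in (0, 1):
        j = 1 - i
        rhs = (G[i] @ A[i] + A[i].T @ G[i]
               - G[i] @ B[i] @ Dinv @ B[i].T @ G[i]
               + Sg[i].T @ G[i] @ Sg[i]
               + q[i, j] * (Kj.T @ G[j] @ Kj - G[i])
               + M)
        new.append(G[i] + dt * rhs)           # one backward-in-time Euler step
    G = new
for i in (0, 1):
    print(f"eigenvalues of G_{i+1}:", np.linalg.eigvalsh(G[i]))
# Positive eigenvalues indicate Theorem 3 applies; the optimal feedback is then
# u_i(t, x) = -Dinv @ B[i].T @ G[i] @ x, as in (31).
```

If the backward flow settles to positive-definite limits, those limits are the stationary solutions used for the feedback synthesis; otherwise the stabilization problem has no solution of the assumed quadratic form.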
Theorem 3.
Suppose the system of matrix Equations (32) and (33) has positive-definite solutions of order $(m \times m)$:
$$G_{1k}(t) > 0, \quad G_{2k}(t) > 0, \quad \dots, \quad G_{lk}(t) > 0.$$
Then, the control defined by (31) provides a solution to the optimal stabilization problem for the linear stochastic system (23)–(25) with jump conditions (26) and optimality criterion (27).
Remark 2.
Sufficient conditions for the solvability of the Riccati-type Equations (32) and (33) are given in [25].

6. Model Example

To compare results, consider the example from [14]: the linear autonomous stochastic differential equation
$$dx(t) = \big( a(\xi(t))\, x(t) + b(\xi(t))\, u(t) \big)\, dt + \sigma(\xi(t))\, x(t)\, dw(t), \quad t \ge 0, \tag{34}$$
with perturbations
$$x(t_k) = x(t_k^-) + e^{\alpha k}\, \eta_k\, x(t_k^-), \tag{35}$$
where the breakpoints $t_k$ are defined as
$$t_k = 2 - 2^{1-k}, \quad k \ge 1,$$
with concentration point $t^* = \lim_{k \to \infty} t_k = 2$. Also define the non-random initial conditions
$$x(0) = 10, \quad \xi(0) = y_0 \in Y, \quad \eta_0 = 1. \tag{36}$$
In this autonomous case, the system (32) takes the following form [5]:
$$G_{ik} A_i + A_i^T G_{ik} - G_{ik} B_i D_{ik}^{-1} B_i^T G_{ik} + \sigma_i^T G_{ik} \sigma_i + \sum_{j \ne i}^{N} \Big[ K_{ij}^T G_{jk} K_{ij} + \sum_{s=1}^{N} Q_s^T G_{jk} Q_s - G_{ik} \Big] q_{ij} + M_{ik} = 0, \quad i = \overline{1, N}, \; k \ge 0. \tag{37}$$
Three cases of the parameters are considered, as in [14].
Case 1. Unstable system for $b \equiv 0$:
- if $\xi = 1$: $a = -1$, $\sigma = 0.3$, $b = 1$;
- if $\xi = 2$: $a = -0.5$, $\sigma = 2.1$, $b = 1$.
Case 2. Stable system for $b \equiv 0$:
- if $\xi = 1$: $a = -1$, $\sigma = 0.3$, $b = 1$;
- if $\xi = 2$: $a = -0.5$, $\sigma = 2$, $b = 1$.
Case 3. Unstable system with the parameter values from Case 2 and the impulse action
$$x(t_k) = x(t_k^-) + e^{\alpha k}\, \eta_k\, x(t_k^-).$$
The results of the synthesis of the optimal control (31) are visualized in Figure 1.
As can be seen, the optimal control stabilizes the unstable system in Case 1 and accelerates the decay of the stable solution in Case 2; in Case 3, the impulse action accumulating at the concentration point $t^* = 2$ cannot be compensated.
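To make the synthesis concrete, the sketch below solves the coupled scalar Riccati system (37) for Case 1 by fixed-point iteration and recovers the feedback gains (31). The switching intensities $q_{12}$, $q_{21}$, the weights $M = D = 1$, and the choices $K_{ij} = 1$, $Q_s = 0$ are our illustrative assumptions; $a_i$, $b_i$, $\sigma_i$ are the Case 1 values:

```python
import numpy as np

a, b, s = {1: -1.0, 2: -0.5}, {1: 1.0, 2: 1.0}, {1: 0.3, 2: 2.1}
q = {1: 0.5, 2: 4.0}              # switching intensities q_12, q_21 (assumed)
M = D = K = 1.0                   # cost weights and jump factor (assumed)

G = {1: 1.0, 2: 1.0}
for _ in range(200):              # fixed-point iteration on the scalar AREs
    for i, j in ((1, 2), (2, 1)):
        # (37) in scalar form: -c2*G^2 + c1*G + c0 = 0, with
        # c2 = b^2/D, c1 = 2a + s^2 - q_ij, c0 = q_ij*K^2*G_j + M
        c2 = b[i] ** 2 / D
        c1 = 2 * a[i] + s[i] ** 2 - q[i]
        c0 = q[i] * K ** 2 * G[j] + M
        G[i] = (c1 + np.sqrt(c1 ** 2 + 4 * c2 * c0)) / (2 * c2)  # positive root
gains = {i: -b[i] * G[i] / D for i in (1, 2)}  # feedback (31): u_i = gains[i]*x
print("G:", G, " gains:", gains)
```

The iteration is monotone and converges because each scalar equation has a unique positive root that grows sublinearly in the coupled value $G_j$; positive limits $G_1, G_2 > 0$ confirm, per Theorem 3, that the resulting linear feedback stabilizes Case 1.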

7. Discussion

Optimal control theory relies on several fundamental methods, one of the most prominent being the Lyapunov function method. This method, along with its various modifications, is extensively employed to address practical problems in numerous mathematical models, including stochastic differential equations. In this study, particular emphasis is placed on applying Lyapunov functions to stochastic differential equations with Markov switches, specifically addressing scenarios involving concentration points. This approach could be extended by incorporating additional assumptions about the switching mechanism, such as semi-Markov processes, where state durations do not necessarily follow an exponential distribution.
The paper also considers a model example based on a similar example from [14]. As can be seen from the simulation results, unstable systems can be stabilized; however, this is not possible in all cases, as illustrated in Case 3 of the model example. Thus, it remains an important issue to study the conditions for unconditional boundedness of solutions of the system (1)–(3).
Future research in this field will explore broader characteristics of the switching process ξ ( t ) and validate the theoretical results derived here through practical applications. Furthermore, the computational complexity of the algorithms proposed in Theorems 2 and 3 remains an area requiring further investigation, particularly in comparison to heuristic algorithms for optimal control estimation. Hence, subsequent research will include comparative analyses between the algorithms developed in this paper and heuristic methods. Further studies will primarily focus on linear systems, exploring necessary and sufficient conditions for stability and the existence of optimal controls.

8. Conclusions

In this paper, we have established sufficient conditions for ensuring stability in stochastic differential equations characterized by jump concentration points. Unlike most classical assumptions, which impose a strict minimal interval between jumps (i.e., | t k t k 1 | > Δ ), our study deliberately omits this condition, thus allowing for jump concentration scenarios.
The stability analysis performed leverages a sequence of Lyapunov functions $v_k(y, h, x)$, $k \ge 0$, whose properties guarantee the stability of the solutions to Equations (1)–(3). Under assumption (7), these Lyapunov functions can be constructed explicitly as
$$v_k(y, h, x) = d_k\, v_0(y, h, x),$$
where the constants $d_k = 1 + \sum_{m=1}^{k} \gamma_m < \infty$. Additionally, assumption (7) significantly relaxes the previously stringent condition (8) used in earlier studies [16]. Thus, the derived stability conditions for stochastic differential equations with jump concentration points combine conditions for systems without jumps ($g \equiv 0$) with constraints on the jump magnitudes.
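As a short check (ours, under assumption (7)), this construction indeed satisfies Definition 2: the constants $d_k$ are non-decreasing and bounded, so the monotonicity and limit conditions are inherited from $v_0$:
$$1 \le d_k \le d_{k+1} \le d_\infty := 1 + \sum_{m=1}^{\infty} \gamma_m < \infty \;\Longrightarrow\; v_k \le v_{k+1}, \qquad \bar v(r) \ge \bar v_0(r) \to +\infty \;\; (r \to \infty), \qquad \underline v(r) \le d_\infty\, \underline v_0(r) \to 0 \;\; (r \to 0).$$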
In the special case of linear stochastic differential equations, the stability conditions simplify to the existence of positive-definite solutions to Riccati-type matrix equations, similar to the classical cases. These conditions, derived from Equation (32), are sufficient but do not fully characterize all stable systems, as demonstrated by the examples in [5].
Future research directions will focus specifically on linear systems, aiming to define both necessary and sufficient stability conditions and determine the existence of optimal control solutions.

Author Contributions

Conceptualization, T.L., I.V.M. and P.V.N.; methodology, T.L. and I.V.M.; formal analysis, T.L., I.V.M. and P.V.N.; writing—original draft preparation, T.L., I.V.M., V.P.S. and P.V.N.; writing—review and editing, T.L., I.V.M., V.P.S. and P.V.N.; supervision, V.P.S. and P.V.N.; project administration, V.P.S. and P.V.N.; funding acquisition, T.L. and P.V.N. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Luxembourg National Research Fund grant C21/BM/15739125/DIOMEDES to T.L. and P.V.N. For the purpose of open access, and in fulfilment of the obligations arising from the grant agreement, the authors have applied a Creative Commons Attribution 4.0 International (CC BY 4.0) license to any Author Accepted Manuscript version arising from this submission.

Data Availability Statement

No new data were created or analyzed in this study.

Acknowledgments

We would like to acknowledge the administrations of the Luxembourg Institute of Health (LIH) and Luxembourg National Research Fund (FNR) for their support in organizing scientific contacts between research groups in Luxembourg and Ukraine, and Anna Golebiewska for support and fruitful discussions regarding the application of mathematical methods in cancer research.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SDE: Stochastic Differential Equation
WIO: Weak Infinitesimal Operator

References

  1. Kats, I.Y. Lyapunov Function Method in Problems of Stability and Stabilization of Random-Structure Systems; Izd. Uralsk. Gosakademii Putei Soobshcheniya: Yekaterinburg, Russia, 1998. (In Russian) [Google Scholar]
  2. Yasinskaya, L.I.; Lukashiv, T.O.; Yasinskiy, V.K. Stabilization of stochastic diffusive dynamical systems with impulse Markov switchings and parameters. Part II. Stabilization of dynamical systems of random structure with external Markov switchings. J. Autom. Inf. Sci. 2009, 41, 26–42. [Google Scholar] [CrossRef]
  3. Doob, J.L. Stochastic Processes; Wiley: New York, NY, USA, 1953. [Google Scholar]
  4. Dynkin, E.B. Markov Processes; Academic Press: New York, NY, USA, 1965. [Google Scholar]
  5. Lukashiv, T.; Litvinchuk, Y.; Malyk, I.V.; Golebiewska, A.; Nazarov, P.V. Stabilization of Stochastic Dynamical Systems of a Random Structure with Markov Switches and Poisson Perturbations. Mathematics 2023, 11, 582. [Google Scholar] [CrossRef]
  6. Øksendal, B.; Sulem, A.; Zhang, T. Optimal control of stochastic delay equations and time-advanced backward stochastic differential equations. Adv. Appl. Probab. 2011, 43, 572–596. [Google Scholar] [CrossRef]
  7. Øksendal, B.; Sulem, A. Forward–Backward Stochastic Differential Games and Stochastic Control under Model Uncertainty. J. Optim. Theory Appl. 2014, 161, 22–55. [Google Scholar] [CrossRef]
  8. Savku, E.; Weber, G.W. Stochastic differential games for optimal investment problems in a Markov regime-switching jump-diffusion market. Ann. Oper. Res. 2022, 312, 1171–1196. [Google Scholar] [CrossRef]
  9. Davis, M.; Burstein, G. A Deterministic Approach to Stochastic Optimal Control with Application to Anticipative Control. Stoch. Stoch. Rep. 1992, 40, 203–256. [Google Scholar] [CrossRef]
  10. Dhayal, R.; Malik, M.; Abbas, S. Solvability and optimal controls of non-instantaneous impulsive stochastic fractional differential equation of order q∈(1, 2). Stochastics 2021, 93, 780–802. [Google Scholar] [CrossRef]
  11. Li, X.; Sun, J.; Xiong, J. Linear Quadratic Optimal Control Problems for Mean-Field Backward Stochastic Differential Equations. Appl. Math. Optim. 2019, 80, 223–250. [Google Scholar] [CrossRef]
  12. Rosseel, E.; Wells, G. Optimal control with stochastic PDE constraints and uncertain controls. Comput. Methods Appl. Mech. Eng. 2012, 213–216, 152–167. [Google Scholar] [CrossRef]
  13. Yong, J. A Linear-Quadratic Optimal Control Problem for Mean-Field Stochastic Differential Equations. Siam J. Control Optim. 2013, 51, 2809–2838. [Google Scholar] [CrossRef]
  14. Lukashiv, T.; Malyk, I.V.; Chepeleva, M.; Nazarov, P.V. Stability of stochastic dynamic systems of a random structure with Markov switching in the presence of concentration points. AIMS Math. 2023, 8, 24418–24433. [Google Scholar] [CrossRef]
  15. Øksendal, B. Stochastic Differential Equations; Springer: New York, NY, USA, 2013. [Google Scholar]
  16. Lukashiv, T.; Malyk, I. Existence and Uniqueness of Solution of Stochastic Dynamic Systems with Markov Switching and Concentration Points. Int. J. Differ. Equ. 2017, 7958398. [Google Scholar] [CrossRef]
  17. Mao, X. Stochastic Differential Equations and Applications, 2nd ed.; Woodhead Publishing: Cambridge, UK, 2008. [Google Scholar]
  18. Skorohod, A.V. Asymptotic Methods in the Theory of Stochastic Differential Equations; American Mathematical Society: Providence, RI, USA, 1989. [Google Scholar]
  19. Hu, L.; Shi, P.; Huang, B. Stochastic stability and robust control for sampled-data systems with Markovian jump parameters. J. Math. Anal. Appl. 2006, 313, 504–517. [Google Scholar] [CrossRef]
  20. Lyapunov, A.M. The General Problem of Stability of Motion; Gostekhizdat: Moscow, Russia, 1958. (In Russian) [Google Scholar]
  21. Andreeva, E.A.; Kolmanovskii, V.B.; Shaikhet, L.E. Control of Hereditary Systems; Nauka: Moscow, Russia, 1992. (In Russian) [Google Scholar]
  22. Feng, X.; Loparo, K.A.; Ji, Y.; Chizek, H.J. Stochastic stability properties of jump linear systems. IEEE Trans. Autom. Control 1992, 37, 38–53. [Google Scholar] [CrossRef]
  23. Kloeden, P.E.; Platen, E. Numerical Solution of Stochastic Differential Equations; Springer: Berlin/Heidelberg, Germany, 1992. [Google Scholar]
  24. Lukashiv, T. One Form of Lyapunov Operator for Stochastic Dynamic System with Markov Parameters. J. Math. 2016, 2016, 1694935. [Google Scholar] [CrossRef]
  25. Antonyuk, S.V.; Byrka, M.F.; Gorbatenko, M.Y.; Lukashiv, T.O.; Malyk, I.V. Optimal Control of Stochastic Dynamic Systems of a Random Structure with Poisson Switches and Markov Switching. J. Math. 2020, 2020, 9457152. [Google Scholar] [CrossRef]
Figure 1. Examples of solution trajectories estimated by the Euler–Maruyama method (previously shown in [14]): (a) Case 1—unstable, (b) Case 2—stable, and (c) Case 3—unstable with an extreme growth at t = 2. Uncontrolled (red lines) and controlled (green lines) solutions with control given by (31). Blue marks indicate moments of impulse actions. Optimal control stabilizes the system’s trajectory.

