Article

Finite Time Stability and Optimal Control for Stochastic Dynamical Systems

School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0150, USA
* Author to whom correspondence should be addressed.
Axioms 2025, 14(10), 767; https://doi.org/10.3390/axioms14100767
Submission received: 30 September 2025 / Revised: 6 October 2025 / Accepted: 10 October 2025 / Published: 16 October 2025

Abstract

In real-world applications, finite time convergence to a desired Lyapunov stable equilibrium is often necessary. This notion of stability is known as finite time stability and refers to systems in which the state trajectory reaches an equilibrium in finite time. This paper explores the notion of finite time stability in probability within the context of nonlinear stochastic dynamical systems. Specifically, we introduce sufficient conditions based on Lyapunov methods, utilizing Lyapunov functions that satisfy scalar differential inequalities involving fractional powers for guaranteeing finite time stability in probability. Then, we address the finite time optimal control problem by developing a framework for designing optimal feedback control laws that achieve finite time stochastic stability of the closed-loop system using a Lyapunov function that also serves as the solution to the steady-state stochastic Hamilton–Jacobi–Bellman equation.

1. Introduction

The classical notions of asymptotic and exponential stability in dynamical systems theory involve system trajectories approaching a Lyapunov stable equilibrium state over an infinite time horizon. However, in many engineering and scientific applications, it is necessary for the system to reach a stable equilibrium within finite time rather than asymptotically. Achieving finite time stability in deterministic systems requires the closed-loop system dynamics to be non-Lipschitz, which can result in nonuniqueness of solutions in backward time. Nonetheless, it is possible to preserve forward time uniqueness of solutions for finite time convergent systems.
Sufficient conditions for forward time uniqueness in deterministic systems without the assumption of Lipschitz continuity of the system dynamics are given in [1]. For stochastic systems, refs. [2,3,4] provide criteria that guarantee the forward time uniqueness of solutions without requiring Lipschitz continuity. These works also show that if the dynamics are continuous and forward time uniqueness holds, then the system trajectories are almost surely continuous with respect to the system initial conditions, even in the absence of Lipschitz continuity of the drift and diffusion functions characterizing the stochastic dynamical system.
The concept of finite time stability, that is, convergence of the system trajectories to a Lyapunov stable equilibrium in finite time, was first introduced by Roxin [5] and further developed in [6,7] for time-invariant deterministic systems, as well as in [8,9,10] for time-varying deterministic systems. In particular, Lyapunov and converse Lyapunov theorems for finite time stability were established using Lyapunov functions that satisfy scalar differential inequalities with fractional powers, and it was shown that the regularity of the Lyapunov function depends on the properties of the settling time function, which characterizes the finite time convergence behavior of the system.
Even though extensions of finite time stability for stochastic systems have been addressed in the literature [11,12,13,14], several of these results contain errors. Specifically, as pointed out in [15], several definitions and the main result in [11] are incorrect. Moreover, the authors in [13] used the results of [11] to develop partial-state stabilization in finite time, propagating the errors of [11] in their work, whereas the proof of the main theorem of [12] used Jensen's inequality incorrectly, thus invalidating their result. Finally, ref. [14] failed to provide a bound on the expectation of the settling time function, which is crucial in providing a complete theory for finite time stability and stabilization for stochastic dynamical systems.
In this paper, we correct these oversights to present a self-contained theory for finite time stability in probability and build upon the framework established in [16,17] to address the problem of optimal finite time stabilization for stochastic nonlinear systems. Specifically, we ensure finite time stability in probability for the closed-loop system using a Lyapunov function that satisfies a scalar differential inequality that involves fractional powers. This Lyapunov function is shown to correspond to the steady-state solution of the stochastic Hamilton–Jacobi–Bellman equation, ensuring both finite time stability in probability and optimal performance. Finally, we also develop connections of our approach to inverse optimal control [18,19] by constructing a family of finite time stabilizing stochastic feedback laws that minimize a derived cost functional.

2. Mathematical Preliminaries

We will start by reviewing some basic results on nonlinear stochastic dynamical systems [20,21,22,23]. First, however, we require some notations and definitions. The notation, definitions, and mathematical preliminaries in this section are adopted from [17]. A probability space is a mathematical construct that provides a model for a random experiment and consists of the triple (Ω, F, P). The sample space Ω is the set of all possible outcomes of the experiment. The event space F is a collection of subsets of the sample space, where each subset represents an event. The event space F has the algebraic structure of a σ-algebra. The pair (Ω, F) is a measurable space, and the function P defines a probability measure on the σ-algebra F, assigning a probability to each event in the event space F. A complete probability space is one in which the σ-algebra F includes all the subsets of sets with probability measure zero.
A Borel set is a set in a topological space that is derived from open (or closed) sets through the repeated operation of countable unions, countable intersections, and relative complements. The Borel σ-algebra on A, denoted by B(A), is the smallest σ-algebra containing all the open sets in A. An R^n-valued random vector x is a measurable function x: Ω → R^n, that is, for every Borel set B ∈ B(R^n), its preimage x⁻¹(B) = {ω ∈ Ω : x(ω) ∈ B} belongs to F. A random variable is a scalar random vector. In general, given a random vector x and a σ-algebra E, we say that x is E-measurable if x⁻¹(B) ∈ E for all B ∈ B(R^n). Given an integrable random vector x and a sub-σ-algebra E ⊆ F, E[x] and E[x | E] denote, respectively, the expectation of the random vector x and the conditional expectation of x given E under the probability measure P.
A continuous-time stochastic process {x(t) : t ≥ 0} is a collection of random vectors defined on the probability space (Ω, F, P) indexed by the set R̄₊ of nonnegative real numbers. Occasionally, we write x(t, ω) for x(t) to denote the explicit dependence of the random variable x(t) on the outcome ω ∈ Ω. For every fixed time t ≥ 0, the random variable ω ↦ x(t, ω) assigns a vector to every outcome ω ∈ Ω, and for every fixed ω ∈ Ω, the mapping t ↦ x(t, ω) generates a sample path (or sample trajectory) of the stochastic process x(·), where, for convenience, we write x(·) to denote the stochastic process {x(t) : t ≥ 0}.
A (continuous-time) filtration {F_t : t ≥ 0} on (Ω, F, P) is a collection of sub-σ-fields of F indexed by R̄₊ such that F_s ⊆ F_t, 0 ≤ s ≤ t. A filtration is complete if every F_t contains all the sets that are contained in a P-null set. The stochastic process x(·) is progressively measurable with respect to {F_t : t ≥ 0} if, for every t ≥ 0, the map (s, ω) ↦ x(s, ω) defined on [0, t] × Ω is B([0, t]) × F_t-measurable. The stochastic process x(·) is said to be adapted with respect to {F_t : t ≥ 0}, or simply F_t-adapted, if x(t) is F_t-measurable for every t ≥ 0. An adapted stochastic process with right continuous (or left continuous) sample paths is progressively measurable [24].
The stochastic process {V(t) : t ≥ 0}, where V(t), t ≥ 0, is a random variable, is a martingale with respect to the filtration {F_t : t ≥ 0} if it is F_t-adapted, E[|V(t)|] < ∞, t ≥ 0, and E[V(t) | F_s] = V(s), 0 ≤ s < t. If we replace the equality in E[V(t) | F_s] = V(s) with "≤" (respectively, "≥"), then V(·) is a supermartingale (respectively, submartingale). A random variable τ: Ω → R̄₊ is called a stopping time of the filtration {F_t : t ≥ 0} if {ω ∈ Ω : τ(ω) ≤ t} ∈ F_t for all t ≥ 0. A stopping time τ is a bounded stopping time if the event {τ ≤ M}, where M ≥ 0 is a constant, has probability one. (For an additional discussion on stochastic processes, filtrations, martingales, and stopping times, see [24].)
In this paper, we consider stochastic dynamical systems G of the form
$dx(t) = f(x(t))\,dt + D(x(t))\,dw(t), \quad x(0) = x_0, \quad t \ge 0,$ (1)
where (1) is a stochastic differential equation. The stochastic process x(·) represents the system state, x₀ is a random system initial condition vector, and w(·) is a d-dimensional Brownian motion process. For every t ≥ 0, the random variable x(t) takes values in the state space R^n. The Borel measurable mappings f: R^n → R^n and D: R^n → R^{n×d} satisfy f(0) = 0 and D(0) = 0, and they are known as the system drift and diffusion functions. The stochastic differential Equation (1) is interpreted as a way of expressing the integral equation
$x(t) = x(0) + \int_0^t f(x(s))\,ds + \int_0^t D(x(s))\,dw(s), \quad x(0) = x_0, \quad t \ge 0,$ (2)
where the first integral in (2) is a Lebesgue integral and the second is an Itô integral [25].
Let (Ω, F, {F_t : t ≥ 0}, P) be a fixed complete filtered probability space, w(·) be an F_t-adapted Brownian motion, and x₀ be an F₀-measurable initial condition. A solution to (1) is an R^n-valued F_t-adapted process x(·) with continuous sample paths such that the integrals in (2) exist and (2) holds almost surely for all t ≥ 0. For a Brownian motion disturbance and initial condition given in a prescribed probability space, the solution to (2) is known as a strong solution [26]. In this paper, we focus on strong solutions, and we will simply use the term "solution" to refer to a strong solution. A solution to (1) is unique if, for any two solutions x₁(·) and x₂(·) that satisfy (1), x₁(t) = x₂(t) for all t ≥ 0 almost surely.
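Although the developments below are analytical, it is often useful to approximate sample paths of (1) numerically. The following is a minimal Euler–Maruyama sketch in Python; the drift and diffusion shown are illustrative placeholders (not from this paper) chosen to satisfy f(0) = 0 and D(0) = 0.

```python
import numpy as np

def euler_maruyama(f, D, x0, T, dt, rng):
    """Approximate one sample path of dx = f(x) dt + D(x) dw on [0, T]."""
    d = D(x0).shape[1]                             # number of Brownian components
    steps = int(T / dt)
    x = np.empty((steps + 1, len(x0)))
    x[0] = x0
    for k in range(steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=d)  # Brownian increments ~ N(0, dt)
        x[k + 1] = x[k] + f(x[k]) * dt + D(x[k]) @ dw
    return np.linspace(0.0, steps * dt, steps + 1), x

# Hypothetical system with f(0) = 0 and D(0) = 0:
f = lambda x: -x                        # drift
D = lambda x: 0.2 * np.diag(x)          # state-dependent (diagonal) diffusion
t, path = euler_maruyama(f, D, np.array([1.0, -0.5]), T=5.0,
                         dt=1e-3, rng=np.random.default_rng(0))
```

For the non-Lipschitz dynamics considered later in this paper, the step size near the origin matters, so the sketch should be read as an intuition aid rather than a convergent scheme.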
For a stochastic system of the form given by (1) with solution x(·), the (infinitesimal) generator 𝒜 of x(·) is an operator acting on functions V: R^n → R, and it is defined as ([22])
$\mathcal{A}V(x) = \lim_{t \to 0^+} \frac{\mathbb{E}^x[V(x(t))] - V(x)}{t}, \quad x \in \mathbb{R}^n,$ (3)
where E^x[·] denotes the expectation given that x is a fixed point in R^n. The set of functions V: R^n → R for which the limit in (3) exists is denoted by D_𝒜. If V ∈ C²(R^n) has compact support, where C^r(R^n) denotes the space of functions with r continuous derivatives, then V ∈ D_𝒜 and 𝒜V(x) = ℒV(x), where
$\mathcal{L}V(x) \triangleq V'(x) f(x) + \tfrac{1}{2}\,\mathrm{tr}\, D^{\mathrm{T}}(x) V''(x) D(x), \quad x \in \mathbb{R}^n,$ (4)
V′(x) ≜ ∂V(x)/∂x, and V″(x) ≜ ∂²V(x)/∂x² [22]. Note that the differential operator ℒ introduced in (4) is defined for every V ∈ C²(R^n), and it is characterized by the system drift and diffusion functions. With a minor abuse of terminology, we will refer to the differential operator ℒ as the (infinitesimal) generator of the system 𝒢.
If V ∈ C²(R^n), then it follows from Itô's formula [26] that the stochastic process {V(x(t)) : t ≥ 0} satisfies
$V(x(t)) = V(x(0)) + \int_0^t \mathcal{L}V(x(s))\,ds + \int_0^t V'(x(s)) D(x(s))\,dw(s), \quad t \ge 0.$ (5)
If the terms appearing in (5) are integrable and the Itô integral in (5) is a martingale, then it follows from (5) that
$\mathbb{E}[V(x(t))] = \mathbb{E}[V(x(0))] + \mathbb{E}\Big[\int_0^t \mathcal{L}V(x(s))\,ds\Big], \quad t \ge 0.$ (6)
A more general version of (6), in which t is replaced by a stopping time τ, is known as Dynkin's formula [22].
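The identity (6) can be illustrated by Monte Carlo simulation. In the sketch below (our construction, not from the paper), we take f(x) = −x, D(x) = σ diag(x), and V(x) = ‖x‖², for which (4) gives ℒV(x) = (σ² − 2)‖x‖², and compare the sample average of V(x(t)) against V(x(0)) plus the averaged time integral of ℒV along the paths.

```python
import numpy as np

sigma, x0, dt, T, N = 0.2, np.array([1.0, -0.5]), 1e-3, 2.0, 2000
rng = np.random.default_rng(1)

V = lambda x: float(x @ x)                 # V(x) = ||x||^2
LV = lambda x: (sigma**2 - 2.0) * V(x)     # generator (4) for f(x) = -x, D(x) = sigma*diag(x)

lhs = rhs = 0.0
for _ in range(N):                         # average over N sample paths
    x, integral = x0.copy(), 0.0
    for _ in range(int(T / dt)):
        integral += LV(x) * dt             # running integral of LV along the path
        x = x - x * dt + sigma * x * rng.normal(0.0, np.sqrt(dt), size=2)
    lhs += V(x) / N                        # estimates E[V(x(T))]
    rhs += (V(x0) + integral) / N          # estimates the right-hand side of (6)

print(lhs, rhs)  # the two estimates agree up to sampling/discretization error
```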
We say that a function g: R^n → R is of polynomial growth if there exist positive constants C and m such that
$|g(x)| \le C(1 + \|x\|^m), \quad x \in \mathbb{R}^n.$ (7)
For V ∈ C^r(R^n), r ∈ N, where N denotes the set of natural numbers, we write V ∈ C_p^r(R^n) if V and all its partial derivatives up to order r are of polynomial growth. As shown in [27], V ∈ C_p^r(R^n) is a sufficient condition for the integrability of the terms in (5). In this case, it can be shown that (3) implies (4). In this paper, we assume all Lyapunov functions V are of polynomial growth with r = 1.
Given a function V: R^n → R, we say that V is positive definite with respect to x_e ∈ R^n if V(x_e) = 0 and V(x) > 0, x ∈ R^n, x ≠ x_e. If x_e = 0, then V is positive definite with respect to the origin, or V is simply positive definite. Moreover, we say a function V is nonnegative definite if V(x) ≥ 0, x ∈ R^n. We say that V is negative definite if −V is positive definite. In addition, we say that V is radially unbounded if lim_{‖x‖→∞} V(x) = ∞.
As discussed in the Introduction, in order to achieve convergence in finite time for stochastic dynamical systems, the drift and diffusion functions characterizing the system dynamics need to be non-Lipschitzian, giving rise to nonuniqueness of solutions in backward time ([20], Lemma 5.3). Uniqueness of solutions in forward time, however, can be preserved in the case of finite time convergence. The next result establishes the existence and uniqueness of solutions for the stochastic differential Equation (1) with non-Lipschitzian drift and diffusion functions. For the statement of this result, B_δ(x_e), δ > 0, denotes the open ball centered at x_e with radius δ in the Euclidean norm.
Theorem 1
([4]). Consider the nonlinear stochastic dynamical system (1) with initial condition x₀ such that E[‖x₀‖^p] < ∞, p ∈ N, and assume that the following conditions hold:
(i)
Continuity. f: R^n → R^n and D: R^n → R^{n×d} are continuous.
(ii)
Linear growth. There exists a constant K > 0 such that, for all x ∈ R^n,
$\|f(x)\| + \|D(x)\| \le K(1 + \|x\|).$ (8)
(iii)
For every q ∈ N, there exist a strictly increasing, continuous, and concave function g_q: [0, ∞) → [0, ∞), as well as a constant c_q ≥ 0, such that, for all x, y ∈ B_q(0),
$\int_{0^+} \frac{d\sigma}{g_q(\sigma)} = \infty$
and
$2(x - y)^{\mathrm{T}}[f(x) - f(y)] + \|D(x) - D(y)\|^2 \le c_q\, g_q(\|x - y\|^2).$ (9)
Then, there exists a unique solution to (1). Furthermore,
$\mathbb{E}\Big[\sup_{0 \le s \le t} \|x(s)\|^p\Big] < \infty, \quad t \ge 0, \quad p \in \mathbb{N}.$ (10)
Assumption 1.
For the remainder of this paper, we assume that the conditions for existence and uniqueness given in Theorem 1 are satisfied for system (1).

3. Finite Time Stability for Stochastic Dynamical Systems

In this section, we introduce the notion of finite time stability in probability and present sufficient conditions for finite time stability of (1) using a Lyapunov function that satisfies a scalar differential inequality involving fractional powers. First, however, we require some additional definitions and results. We denote the solution {x(t) : t ≥ 0} to (1) with initial condition x(0) = x at time t by s(t, x, ω), and we define s_t ≜ s(t, ·, ω): R^n → R^n and s^x ≜ s(·, x, ·): R̄₊ × Ω → R^n, with s^x denoting the sample trajectory of (1). Thus, for every x ∈ R^n, there exists a trajectory defined for all t ≥ 0 and ω ∈ Ω satisfying the dynamical process (1) with initial condition x(0) = x. For simplicity of exposition, we write s(t, x) for s(t, x, ω) and s^x(t) for s^x(t, ω), omitting their dependence on ω.
The following definitions introduce the notions of a stochastic settling time and finite time stability in probability for stochastic dynamical systems. Here, we assume that the initial condition x₀ is a constant, and hence, whenever we write x₀ ∈ R^n, we mean that x₀ is a constant vector. In this case, we will find it convenient to introduce the notation P^{x₀}[·] and E^{x₀}[·] to denote the probability and expected value, respectively, given that the initial condition x(0) is the fixed point x₀ ∈ R^n almost surely.
Definition 1.
Consider the nonlinear stochastic dynamical system given by (1). The stochastic settling time T: R^n × Ω → R̄₊ ∪ {+∞} is a stopping time with respect to {F_t : t ≥ 0} defined as
$T(x, \omega) \triangleq \inf\{t \ge 0 : s(t, x, \omega) = 0\}.$ (11)
Note that if s(t, x, ω) ≠ 0, t ≥ 0, then T(x, ω) = +∞. For simplicity of exposition, we write T(x) for T(x, ω), omitting its dependence on ω.
Definition 2.
Consider the nonlinear stochastic dynamical system given by (1) and let T: R^n → R̄₊ ∪ {+∞} be as in Definition 1. Then, the zero solution x(·) ≡ 0 to (1) is finite time stable in probability if the following statements hold:
(i)
The zero solution x(·) ≡ 0 to (1) is Lyapunov stable in probability, that is, for every ε > 0, we have
$\lim_{x_0 \to 0} \mathbb{P}^{x_0}\Big[\sup_{t \ge 0} \|x(t)\| > \varepsilon\Big] = 0$ (12)
or, equivalently, for every ε > 0 and ρ ∈ (0, 1), there exists δ = δ(ρ, ε) > 0 such that, for all x₀ ∈ B_δ(0),
$\mathbb{P}^{x_0}\Big[\sup_{t \ge 0} \|x(t)\| > \varepsilon\Big] \le \rho.$ (13)
(ii)
For every ρ ∈ (0, 1), there exists δ = δ(ρ) > 0 such that if x₀ ∈ B_δ(0), then
$\mathbb{P}^{x_0}(T(x_0) < \infty) \ge 1 - \rho.$ (14)
  • The zero solution x(·) ≡ 0 to (1) is globally finite time stable in probability if P^{x₀}(T(x₀) < ∞) = 1 for all x₀ ∈ R^n.
Proposition 1.
Consider the nonlinear stochastic dynamical system (1). Assume that the zero solution x(·) ≡ 0 to (1) is globally finite time stable in probability, and let T: R^n → [0, ∞) be as in Definition 1. Then, for all x₀ ∈ R^n,
$\mathbb{P}^{x_0}\Big[\sup_{t \ge T(x_0)} \|s(t, x_0)\| = 0\Big] = 1.$ (15)
Proof. 
Since (1) is globally finite time stable in probability, T(x₀, ω) < ∞ and x(T(x₀, ω), ω) = 0 for all x₀ ∈ R^n and almost all ω ∈ Ω. Defining x̃: R̄₊ × Ω → R^n as
$\tilde{x}(t, \omega) \triangleq \begin{cases} x(t, \omega), & t \le T(x_0, \omega), \\ 0, & t > T(x_0, \omega), \end{cases}$ (16)
and using the fact that x(t, ω) is sample continuous in t, it follows that
$\lim_{t \to T(x_0, \omega)^-} \tilde{x}(t, \omega) = \tilde{x}(T(x_0, \omega), \omega) = 0 = \lim_{t \to T(x_0, \omega)^+} \tilde{x}(t, \omega),$
which implies that x̃(t, ω) is sample continuous in t. Since x(t, ω) is F_t-adapted, it follows that x̃(t, ω) is F_t-adapted. Clearly, x̃(t, ω) satisfies (2) for all t ≤ T(x₀, ω).
Next, using the fact that f(0) = 0 and D(0) = 0, it follows that, for all t > T(x₀, ω),
$x_0 + \int_0^t f(\tilde{x}(s, \omega))\,ds + \int_0^t D(\tilde{x}(s, \omega))\,dw(s) = \tilde{x}(T(x_0, \omega), \omega) + \int_{T(x_0, \omega)}^t f(\tilde{x}(s, \omega))\,ds + \int_{T(x_0, \omega)}^t D(\tilde{x}(s, \omega))\,dw(s) = \tilde{x}(T(x_0, \omega), \omega) = 0 = \tilde{x}(t, \omega),$
and hence, it follows that (2) is satisfied for all t ≥ 0. Now, since the solution to (1) is unique, x̃(t, ω) is the only solution to (1) with x(0) = x₀. Finally, the result follows by noting that this holds for almost all ω ∈ Ω. □
Proposition 1 implies that, for every ε > 0 and ρ ∈ (0, 1), there exists δ = δ(ρ, ε) > 0 such that, for all x₀ ∈ B_δ(0),
$\mathbb{P}^{x_0}\Big[\sup_{t \ge 0} \|x(t)\| > \varepsilon\Big] \le \rho$
and
$\mathbb{P}^{x_0}\Big[\sup_{0 \le t \le T(x_0)} \|x(t)\| > \varepsilon\Big] \le \rho,$
which are equivalent. Hence, it follows from Proposition 1 that if the zero solution x(·) ≡ 0 to (1) is globally finite time stable in probability, then it is globally asymptotically stable in probability. Thus, global finite time stability in probability is a stronger notion than global asymptotic stability in probability.
The following theorem, based on the results appearing in [14,28], gives sufficient conditions for stochastic finite time stability using a Lyapunov function involving a scalar differential inequality. For completeness, we give a self-contained proof of this result as it forms the foundation for all later developments in this paper. For the statement of this theorem, 1_𝒜 denotes the indicator function of the set 𝒜, that is,
$\mathbf{1}_{\mathcal{A}}(x) = \begin{cases} 1, & x \in \mathcal{A}, \\ 0, & x \notin \mathcal{A}. \end{cases}$ (17)
Theorem 2.
Let 𝒟 ⊆ R^n be an open set containing the origin. Consider the nonlinear stochastic dynamical system (1) and assume that there exist a two-times continuously differentiable function V: 𝒟 → R and constants a > 0 and α ∈ (0, 1) such that
$V(0) = 0,$ (18)
$V(x) > 0, \quad x \in \mathcal{D}, \quad x \ne 0,$ (19)
$\mathcal{L}V(x) \le -a (V(x))^{\alpha}, \quad x \in \mathcal{D}.$ (20)
Then, the zero solution x(·) ≡ 0 to (1) is finite time stable in probability. If, in addition, 𝒟 = R^n, V is radially unbounded, and (19) and (20) hold on R^n ∖ {0}, then the zero solution x(·) ≡ 0 to (1) is globally finite time stable in probability. Moreover, there exists a stochastic settling time T: R^n → [0, ∞) such that
$\mathbb{E}^{x_0}[T(x_0)] \le \frac{(V(x_0))^{1-\alpha}}{a(1-\alpha)}, \quad x_0 \in \mathbb{R}^n.$ (21)
Proof. 
Conditions (18)–(20) imply Lyapunov stability in probability by Theorem 2 of [17]. Thus, for every ε > 0 and ρ ∈ (0, 1) such that B̄_ε(0) ⊂ 𝒟, there exists δ = δ(ρ, ε) > 0 such that, for all x₀ ∈ B_δ(0),
$\mathbb{P}^{x_0}\Big[\sup_{t \ge 0} \|x(t)\| > \varepsilon\Big] \le \rho.$ (22)
Next, note that finite time stability in probability holds trivially for x₀ = 0. For all x₀ ∈ B_δ(0) ∖ {0}, define the stopping times τ_ε ≜ inf{t ≥ 0 : ‖s(t, x₀)‖ > ε}, τ_k ≜ inf{t ≥ 0 : ‖s(t, x₀)‖ ≤ 1/k}, τ_k(t) ≜ min{t, τ_k}, and τ_{k,ε}(t) ≜ min{t, τ_k, τ_ε}, where k ∈ N, k ≥ K, and ‖x₀‖ > 1/K.
Since (V(x))^{1−α} is two-times continuously differentiable for all x ∈ R^n such that ‖x‖ > 1/K, it follows from Itô's formula [22] that, for all t ≥ 0 and k ≥ K,
$(V(x(\tau_{k,\varepsilon}(t))))^{1-\alpha} = (V(x_0))^{1-\alpha} + \int_0^{\tau_{k,\varepsilon}(t)} (1-\alpha)(V(x(s)))^{-\alpha}\, V'(x(s)) f(x(s))\,ds + \int_0^{\tau_{k,\varepsilon}(t)} \tfrac{1}{2} \mathrm{tr}\Big[D^{\mathrm{T}}(x(s))\Big(-\alpha(1-\alpha)(V(x(s)))^{-\alpha-1}\, V'^{\mathrm{T}}(x(s)) V'(x(s)) + (1-\alpha)(V(x(s)))^{-\alpha}\, V''(x(s))\Big) D(x(s))\Big]\,ds + \int_0^t \mathbf{1}_{\{s \le \tau_k\}} \mathbf{1}_{\{s \le \tau_\varepsilon\}} (1-\alpha)(V(x(s)))^{-\alpha}\, V'(x(s)) D(x(s))\,dw(s).$ (23)
The process (t, ω) ↦ 1_{{t ≤ τ_k}} 1_{{t ≤ τ_ε}} (1−α)(V(x(t)))^{−α} V′(x(t)) D(x(t)) is B([0, ∞)) × F-measurable and F_t-adapted due to the measurability of the mappings involved and the properties of the process x(·). Now, since V is continuously differentiable and D is continuous, it follows that, for all t ≥ 0 and k ≥ K,
$\mathbb{E}^{x_0}\Big[\int_0^t \mathbf{1}_{\{s \le \tau_k\}} \mathbf{1}_{\{s \le \tau_\varepsilon\}} (1-\alpha)^2 \big\|(V(x(s)))^{-\alpha} V'(x(s)) D(x(s))\big\|^2\,ds\Big] \le (1-\alpha)^2 \max_{1/k \le \|x\| \le \varepsilon} \big\{\|(V(x))^{-\alpha} V'(x) D(x)\|^2\big\}\, t < \infty,$ (24)
and hence, by Corollary 3.2.6 of [22], the Itô integral in (23) is a martingale. Now, (23) and the properties of the expectation of a martingale yield
$\mathbb{E}^{x_0}\big[(V(x(\tau_{k,\varepsilon}(t))))^{1-\alpha}\big] = (V(x_0))^{1-\alpha} + \mathbb{E}^{x_0}\Big[\int_0^{\tau_{k,\varepsilon}(t)} (1-\alpha)(V(x(s)))^{-\alpha}\Big(V'(x(s)) f(x(s)) + \tfrac{1}{2} \mathrm{tr}\, D^{\mathrm{T}}(x(s)) V''(x(s)) D(x(s))\Big)\,ds\Big] - \mathbb{E}^{x_0}\Big[\int_0^{\tau_{k,\varepsilon}(t)} \tfrac{\alpha(1-\alpha)}{2} (V(x(s)))^{-\alpha-1}\, \mathrm{tr}\big(D^{\mathrm{T}}(x(s)) V'^{\mathrm{T}}(x(s)) V'(x(s)) D(x(s))\big)\,ds\Big]$
$\le (V(x_0))^{1-\alpha} + \mathbb{E}^{x_0}\Big[\int_0^{\tau_{k,\varepsilon}(t)} (1-\alpha)(V(x(s)))^{-\alpha}\, \mathcal{L}V(x(s))\,ds\Big].$ (25)
Note that the expectations in (25) are well defined because τ_{k,ε}(t) is a stopping time for exiting the bounded set S_k ≜ {x ∈ R^n : 1/k < ‖x‖ ≤ ε} and the integrands are continuous on S_k.
Next, it follows from (19), (20), and (25) that
$(V(x_0))^{1-\alpha} \ge \mathbb{E}^{x_0}\Big[\int_0^{\tau_{k,\varepsilon}(t)} a(1-\alpha)\,ds\Big] = a(1-\alpha)\, \mathbb{E}^{x_0}[\tau_{k,\varepsilon}(t)] \ge a(1-\alpha)\, \mathbb{E}^{x_0}\big[\mathbf{1}_{\{\tau_\varepsilon = \infty\}}\, \tau_k(t)\big].$ (26)
Since (26) holds for all t ≥ 0, it also holds for t = k ≥ K, implying that
$\mathbb{E}^{x_0}\big[\mathbf{1}_{\{\tau_\varepsilon = \infty\}}\, \tau_k(k)\big] \le \frac{(V(x_0))^{1-\alpha}}{a(1-\alpha)}, \quad k \ge K.$
We claim that
$\lim_{k \to \infty} \tau_k(k)\, \mathbf{1}_{\{\tau_\varepsilon = \infty\}} = T(x_0)\, \mathbf{1}_{\{\tau_\varepsilon = \infty\}}.$ (27)
To see this, note that, since the stopping times τ_k are increasing in k, lim_{k→∞} τ_k exists or is infinite for almost all ω ∈ Ω. Next, since x(·) is sample continuous, τ_k ≤ T(x₀) for all k ≥ K, which yields
$\lim_{k \to \infty} \tau_k \le T(x_0).$ (28)
Now, for almost all ω ∈ {τ_ε < ∞}, (27) holds trivially; moreover, for some ω̄ ∈ {τ_ε = ∞}, if lim_{k→∞} τ_k(ω̄) = ∞, then (28) implies (27). Alternatively, if lim_{k→∞} τ_k(ω̄) < ∞, then assume, ad absurdum, that lim_{k→∞} τ_k(ω̄) < T(x₀, ω̄). It follows from the definition of τ_k that
$\|s(\tau_k(\bar{\omega}), x_0, \bar{\omega})\| \le \frac{1}{k}, \quad k \ge K,$
which implies
$\lim_{k \to \infty} \|s(\tau_k(\bar{\omega}), x_0, \bar{\omega})\| = 0.$
Next, since s(t, x, ω) is sample continuous in t, we conclude
$s\Big(\lim_{k \to \infty} \tau_k(\bar{\omega}), x_0, \bar{\omega}\Big) = 0,$
which contradicts lim_{k→∞} τ_k(ω̄) < T(x₀, ω̄), and hence, (27) holds for almost all ω ∈ Ω.
Now, it follows from Fatou's lemma [29] that
$\mathbb{E}^{x_0}\big[\mathbf{1}_{\{\tau_\varepsilon = \infty\}}\, T(x_0)\big] \le \frac{(V(x_0))^{1-\alpha}}{a(1-\alpha)},$ (29)
which implies that
$\mathbb{P}(T(x_0) < \infty) \ge \mathbb{P}(\tau_\varepsilon = \infty) \ge 1 - \rho,$ (30)
proving finite time stability in probability.
To show global finite time stability in probability, for every x₀ ∈ R^n, define the stopping times τ_l ≜ inf{t ≥ 0 : ‖s(t, x₀)‖ ∉ (1/l, l)} and τ_l(t) ≜ min{t, τ_l}, where l ∈ N and l ≥ L, such that ‖x₀‖ ∈ (1/L, L). Using a similar argument as the one used to obtain (26), with τ_{k,ε}(t) replaced by τ_l(t), yields
$\mathbb{E}^{x_0}[\tau_l(t)] \le \frac{(V(x_0))^{1-\alpha}}{a(1-\alpha)}, \quad t \ge 0.$ (31)
Since (31) holds for all t ≥ 0, it also holds for t = l ≥ L, which implies
$\mathbb{E}^{x_0}[\tau_l(l)] \le \frac{(V(x_0))^{1-\alpha}}{a(1-\alpha)}, \quad l \ge L.$
Next, we claim that
$\lim_{l \to \infty} \tau_l(l) = T(x_0).$ (32)
Since the stopping times τ_l are increasing in l, lim_{l→∞} τ_l exists or is infinite for almost all ω ∈ Ω. Now, using an analogous argument as the one used to obtain (28) gives
$\lim_{l \to \infty} \tau_l \le T(x_0).$ (33)
Next, for almost all ω ∈ {lim_{l→∞} τ_l = ∞}, (33) implies (32). Alternatively, for some ω̄ ∈ {lim_{l→∞} τ_l < ∞}, assume, ad absurdum, that lim_{l→∞} τ_l(ω̄) < T(x₀, ω̄). Since (18)–(20) hold and V is radially unbounded, lim_{t→∞} x(t, ω̄) = 0 by Theorem 2 of [17]. Thus, there exists M(x₀, ω̄) > 0 such that ‖x(t, ω̄)‖ ≤ M(x₀, ω̄) for all t ≥ 0, and hence,
$\|s(\tau_l(\bar{\omega}), x_0, \bar{\omega})\| \le \frac{1}{l}, \quad l > \max\{L, M(x_0, \bar{\omega})\},$
which implies
$\lim_{l \to \infty} \|s(\tau_l(\bar{\omega}), x_0, \bar{\omega})\| = 0.$
Now, since s(t, x, ω) is sample continuous in t, we conclude
$s\Big(\lim_{l \to \infty} \tau_l(\bar{\omega}), x_0, \bar{\omega}\Big) = 0,$
which contradicts lim_{l→∞} τ_l(ω̄) < T(x₀, ω̄), and hence, (32) holds for almost all ω ∈ Ω.
Finally, it follows from Fatou's lemma [29] that
$\mathbb{E}^{x_0}[T(x_0)] \le \frac{(V(x_0))^{1-\alpha}}{a(1-\alpha)},$ (34)
and hence,
$\mathbb{P}(T(x_0) < \infty) = 1,$ (35)
which proves global finite time stability in probability. □
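To make the settling time bound (21) concrete, consider the hypothetical scalar system dx = −k sgn(x)|x|^γ dt + σ|x|^{(1+γ)/2} dw(t) with γ ∈ (0, 1) and 2k > σ². Taking V(x) = x², (4) gives ℒV(x) = −(2k − σ²)|x|^{1+γ} = −a(V(x))^α with a = 2k − σ² and α = (1+γ)/2, so Theorem 2 applies. The Monte Carlo sketch below (our construction, not from the paper) compares the empirical mean settling time, with a small absorption tolerance standing in for the origin, against the right-hand side of (21).

```python
import numpy as np

k, sigma, gamma, x0 = 1.0, 0.5, 0.5, 1.0
a, alpha = 2*k - sigma**2, (1 + gamma) / 2     # LV <= -a V^alpha for V(x) = x^2
dt, N, tol, t_max = 1e-4, 1000, 1e-3, 20.0
rng = np.random.default_rng(2)

times = []
for _ in range(N):
    x, t = x0, 0.0
    while abs(x) > tol and t < t_max:          # absorb paths near the origin;
        x += (-k * np.sign(x) * abs(x)**gamma * dt          # t_max truncates stragglers
              + sigma * abs(x)**alpha * rng.normal(0.0, np.sqrt(dt)))
        t += dt
    times.append(t)

bound = x0**(2 * (1 - alpha)) / (a * (1 - alpha))   # right-hand side of (21)
print(np.mean(times), "<=", bound)
```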

4. Stochastic Optimal Finite Time Stabilization

In the first part of this section, we provide connections between Lyapunov functions and nonquadratic cost evaluation. Specifically, we consider the problem of evaluating a nonlinear–nonquadratic performance measure that depends on the solution of the stochastic nonlinear dynamical system given by (1). In particular, we show that the nonlinear–nonquadratic performance measure
$J(x_0) \triangleq \mathbb{E}^{x_0}\Big[\int_0^{\infty} L(x(t))\,dt\Big],$ (36)
where L: R^n → R and x(t), t ≥ 0, satisfies (1) with x(0) = x₀, can be evaluated in a convenient form so long as (1) is related to an underlying Lyapunov function that proves finite time stability in probability of (1).
The following theorem generalizes Theorem 6 of [17] to finite time stability.
Theorem 3.
Consider the nonlinear stochastic dynamical system given by (1) with the nonlinear–nonquadratic performance measure (36), where x(·) is the solution to (1). Furthermore, assume that there exist a two-times continuously differentiable radially unbounded function V ∈ C_p¹(R^n) and constants a > 0 and α ∈ (0, 1) such that
$V(0) = 0,$ (37)
$V(x) > 0, \quad x \in \mathbb{R}^n, \quad x \ne 0,$ (38)
$\mathcal{L}V(x) \le -a (V(x))^{\alpha}, \quad x \in \mathbb{R}^n,$ (39)
$\mathcal{L}V^p(x) \le 0, \quad x \in \mathbb{R}^n,$ (40)
$L(x) + \mathcal{L}V(x) = 0, \quad x \in \mathbb{R}^n,$ (41)
where p > 1. Then, the zero solution x(·) ≡ 0 to (1) is globally finite time stable in probability. Moreover, there exists a stochastic settling time T: R^n → [0, ∞) such that
$\mathbb{E}^{x_0}[T(x_0)] \le \frac{(V(x_0))^{1-\alpha}}{a(1-\alpha)}, \quad x_0 \in \mathbb{R}^n,$ (42)
and
$J(x_0) = V(x_0).$ (43)
Proof. 
Global finite time stability in probability, along with the existence of a stochastic settling time T: R^n → [0, ∞) such that (42) holds, is a direct consequence of (37)–(39) and V being radially unbounded by Theorem 2. Using the fact that V is continuous and E[T(x₀)] < ∞ yields
$\lim_{t \to \infty} V(x(t)) = 0$ (44)
almost surely.
Next, we show that the stochastic process {∫₀ᵗ V′(x(s)) D(x(s)) dw(s) : t ≥ 0} is a martingale. To see this, first note that the process (t, ω) ↦ V′(x(t)) D(x(t)) is B([0, ∞)) × F-measurable and F_t-adapted because of the measurability of the mappings involved and the properties of the process x(·). Now, using Tonelli's theorem [29], it follows that, for all t ≥ 0,
$\mathbb{E}\Big[\int_0^t \|V'(x(s)) D(x(s))\|^2\,ds\Big] = \int_0^t \mathbb{E}\big[\|V'(x(s)) D(x(s))\|^2\big]\,ds \le \int_0^t \mathbb{E}\big[c(1 + \|x(s)\|^{\beta})\big]\,ds \le \int_0^t c\Big(1 + \mathbb{E}\Big[\sup_{0 \le s \le t} \|x(s)\|^{\beta}\Big]\Big)\,ds < \infty,$ (45)
for some positive constants c and β, and hence, by Corollary 3.2.6 of [22], the Itô integral
$\int_0^t V'(x(s)) D(x(s))\,dw(s), \quad t \ge 0,$
is a martingale. To arrive at (45), we used the fact that V ∈ C_p¹(R^n), the linear growth condition (8), and the finiteness of the expected value of the supremum of the moments of the system state (10). Note that the supremum in (45) exists because of the continuity of the sample paths of x(·).
Next, the measurability of the solution of (1) implies that V(x(t)) is F_t-adapted, and E[V(x(t))] exists since V ∈ C_p¹(R^n) and (10) holds. Now, using Itô's formula [22] and (39), we have, for every t > s ≥ 0 and almost every ω ∈ Ω,
$\mathbb{E}[V(x(t)) \mid \mathcal{F}_s] \le V(x(s)) + \mathbb{E}\Big[\int_s^t V'(x(r)) D(x(r))\,dw(r) \,\Big|\, \mathcal{F}_s\Big].$ (46)
Since the Itô integral in (46) is a martingale, (46) implies
$\mathbb{E}[V(x(t)) \mid \mathcal{F}_s] \le V(x(s)), \quad t > s \ge 0,$ (47)
which shows that the process {V(x(t)) : t ≥ 0} is a nonnegative supermartingale. An analogous argument implies that the process {V^p(x(t)) : t ≥ 0} is also a nonnegative supermartingale for any p > 1.
Next, using the fact that supermartingales have decreasing expectations gives
$\sup_{t \ge 0} \mathbb{E}^{x_0}[V^p(x(t))] \le \mathbb{E}^{x_0}[V^p(x_0)] = V^p(x_0) < \infty,$ (48)
which shows that the stochastic process {V(x(t)) : t ≥ 0} is uniformly integrable [22] (p. 323). Thus, by Doob's martingale convergence theorem [22], V(x(t)) converges in L¹, and hence,
$\lim_{t \to \infty} \mathbb{E}^{x_0}[V(x(t))] = \mathbb{E}^{x_0}\Big[\lim_{t \to \infty} V(x(t))\Big] = 0.$ (49)
Furthermore, it follows from (41) and Itô's formula that, for all t ≥ 0,
$\int_0^t L(x(s))\,ds = -\int_0^t \mathcal{L}V(x(s))\,ds = V(x_0) - V(x(t)) + \int_0^t V'(x(s)) D(x(s))\,dw(s).$ (50)
Taking the expected value on both sides of (50) and using the martingale property of the stochastic integral in (50) yields
$\mathbb{E}^{x_0}\Big[\int_0^t L(x(s))\,ds\Big] = V(x_0) - \mathbb{E}^{x_0}[V(x(t))].$ (51)
Now, taking the limit as t → ∞ and using (49) yields
$\lim_{t \to \infty} \mathbb{E}^{x_0}\Big[\int_0^t L(x(s))\,ds\Big] = V(x_0) - \lim_{t \to \infty} \mathbb{E}^{x_0}[V(x(t))] = V(x_0).$
Finally, note that
$J(x_0) = \mathbb{E}^{x_0}\Big[\lim_{t \to \infty} \int_0^t L(x(s))\,ds\Big] = \lim_{t \to \infty} \mathbb{E}^{x_0}\Big[\int_0^t L(x(s))\,ds\Big] = V(x_0),$ (52)
where the interchange of the limit with the expectation operator in the second equality in (52) follows from the Lebesgue monotone convergence theorem [30] by noting that, by (39) and (41), L(x) ≥ 0, x ∈ R^n, and hence, ∫₀ᵗ L(x(s)) ds, t ≥ 0, is monotonically increasing, and thus converges pointwise to lim_{t→∞} ∫₀ᵗ L(x(s)) ds. □
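As a numerical check of (43), take the same hypothetical scalar system used after Theorem 2. Condition (41) then forces L(x) = −ℒV(x) = a|x|^{1+γ}, and Theorem 3 predicts E^{x₀}[∫₀^∞ L(x(t)) dt] = V(x₀) = x₀². A Monte Carlo sketch of this check (our construction, not from the paper):

```python
import numpy as np

k, sigma, gamma, x0 = 1.0, 0.5, 0.5, 1.0
a, alpha = 2*k - sigma**2, (1 + gamma) / 2
dt, N, tol, t_max = 1e-4, 1000, 1e-3, 20.0
rng = np.random.default_rng(3)

costs = []
for _ in range(N):
    x, t, J = x0, 0.0, 0.0
    while abs(x) > tol and t < t_max:
        J += a * abs(x)**(1 + gamma) * dt          # L(x) = -LV(x), per (41)
        x += (-k * np.sign(x) * abs(x)**gamma * dt
              + sigma * abs(x)**alpha * rng.normal(0.0, np.sqrt(dt)))
        t += dt
    costs.append(J)

print(np.mean(costs), "vs V(x0) =", x0**2)   # (43): the two should be close
```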
Next, we use the framework developed in Theorem 3 to obtain a characterization of the stochastic optimal feedback controllers that guarantee closed-loop finite time stabilization in probability. Specifically, sufficient conditions for optimality are given in a form that corresponds to a steady-state version of the stochastic Hamilton–Jacobi–Bellman equation. To address the problem of characterizing finite time stabilizing feedback controllers, consider the nonlinear controlled stochastic dynamical system
$dx(t) = F(x(t), u(t))\,dt + D(x(t), u(t))\,dw(t), \quad x(0) = x_0, \quad u(\cdot) \in \mathcal{U}, \quad t \ge 0,$ (53)
where the stochastic process u(·) represents the control input and 𝒰 is the set of admissible control inputs. For every t ≥ 0, the random variables x(t) and u(t) take values in the state space R^n and the control space R^m, respectively. The mappings F: R^n × R^m → R^n and D: R^n × R^m → R^{n×d} satisfy (8) and (9), with f(x) = F(x, u) and D(x) = D(x, u), uniformly in u, and F(0, 0) = 0 and D(0, 0) = 0.
We assume that every u(·) ∈ 𝒰 is an R^m-valued Markov control process. An input process u(·) is a Markov control process if there exists a function ϕ: R̄₊ × R^n → R^m such that u(t) = ϕ(t, x(t)), t ≥ 0. Note that the class of Markov controls encompasses both time-varying inputs (i.e., possibly open-loop control input processes) and state-dependent inputs (i.e., possibly a state feedback control input u(t) = ϕ(x(t)), where ϕ: R^n → R^m is a feedback control law). Note that, given the feedback control law ϕ, the closed-loop system (53) takes the form
$dx(t) = F(x(t), \phi(x(t)))\,dt + D(x(t), \phi(x(t)))\,dw(t), \quad x(0) = x_0, \quad t \ge 0.$ (54)
The following theorem generalizes Theorem 7 of [17] to optimal finite time stabilization.
Theorem 4.
Consider the nonlinear stochastic dynamical system given by (53) with the nonlinear–nonquadratic performance measure
$J(x_0, u(\cdot)) \triangleq \mathbb{E}^{x_0}\Big[\int_0^{\infty} L(x(t), u(t))\,dt\Big],$ (55)
where x(·) is the solution to (53) with control input u(·). Furthermore, assume that there exist a two-times continuously differentiable function V ∈ C_p¹(R^n), constants a > 0 and α ∈ (0, 1), and a feedback control law ϕ: R^n → R^m such that
$V(0) = 0,$ (56)
$V(x) > 0, \quad x \in \mathbb{R}^n, \quad x \ne 0,$ (57)
$\phi(0) = 0,$ (58)
$\mathcal{L}V(x, \phi(x)) \le -a (V(x))^{\alpha}, \quad x \in \mathbb{R}^n,$ (59)
$\mathcal{L}V^p(x, \phi(x)) \le 0, \quad x \in \mathbb{R}^n,$ (60)
$H(x, \phi(x)) = 0, \quad x \in \mathbb{R}^n,$ (61)
$H(x, u) \ge 0, \quad x \in \mathbb{R}^n, \quad u \in \mathbb{R}^m,$ (62)
where p > 1 and
$H(x, u) \triangleq L(x, u) + \mathcal{L}V(x, u).$ (63)
Then, with the feedback control law u(·) = ϕ(x(·)), the closed-loop system (54) is globally finite time stable in probability. Moreover, there exists a stochastic settling time T: R^n → [0, ∞) such that
$\mathbb{E}^{x_0}[T(x_0)] \le \frac{(V(x_0))^{1-\alpha}}{a(1-\alpha)}, \quad x_0 \in \mathbb{R}^n,$ (64)
and
$J(x_0, \phi(x(\cdot))) = V(x_0), \quad x_0 \in \mathbb{R}^n.$ (65)
In addition, the feedback control law u(·) = ϕ(x(·)) minimizes (55) in the sense that
$J(x_0, \phi(x(\cdot))) = \min_{u(\cdot) \in \mathcal{S}(x_0)} J(x_0, u(\cdot)),$ (66)
where S(x₀) denotes the set of controllers given by
$\mathcal{S}(x_0) \triangleq \Big\{u(\cdot) : u(\cdot) \in \mathcal{U} \text{ and } x(\cdot) \text{ given by (53) is such that } \mathbb{E}^{x_0}\Big[\int_0^{\infty} |L(x(t), u(t))|\,dt\Big] < \infty \text{ and } \lim_{t \to \infty} \mathbb{E}^{x_0}[V(x(t))] = 0\Big\},$ (67)
where x₀ ∈ R^n and u(·) = ϕ(x(·)) ∈ S(x₀).
Proof. 
Global finite time stability in probability, along with the existence of a stochastic settling time T: R^n → [0, ∞) such that (64) holds, is a direct consequence of (56)–(59) by applying Theorem 2 to the closed-loop system (54).
To show that u(·) = ϕ(x(·)) ∈ S(x₀), note that, since ϕ is Borel measurable, ϕ(x(·)) is F_t-progressively measurable, and (59) and (61) imply that
$L(x, \phi(x)) \ge 0, \quad x \in \mathbb{R}^n.$ (68)
Thus,
$J(x_0, \phi(x(\cdot))) = \mathbb{E}^{x_0}\Big[\int_0^{\infty} L(x(t), \phi(x(t)))\,dt\Big] = \mathbb{E}^{x_0}\Big[\int_0^{\infty} |L(x(t), \phi(x(t)))|\,dt\Big] = V(x_0) < \infty.$
Now, using an analogous argument as in the proof of Theorem 3, it follows that
$\lim_{t \to \infty} \mathbb{E}^{x_0}[V(x(t))] = 0$
for u(·) = ϕ(x(·)), and hence, u(·) = ϕ(x(·)) ∈ S(x₀).
Next, let u(·) ∈ S(x₀) and note that, by Itô's lemma [22],
$V(x(t)) = V(x(0)) + \int_0^t \mathcal{L}V(x(s), u(s))\,ds + \int_0^t V'(x(s)) D(x(s), u(s))\,dw(s).$ (69)
It now follows, using an analogous argument as the one in the proof of Theorem 3, that the stochastic integral in (69) is a martingale, and hence,
$\mathbb{E}^{x_0}[V(x(t))] = V(x_0) + \mathbb{E}^{x_0}\Big[\int_0^t \mathcal{L}V(x(s), u(s))\,ds\Big],$ (70)
where E^{x₀}[V(x(t))] exists since V ∈ C_p¹(R^n) and (10) holds. Next, taking the limit as t → ∞ yields
$\lim_{t \to \infty} \mathbb{E}^{x_0}[V(x(t))] = V(x_0) + \lim_{t \to \infty} \mathbb{E}^{x_0}\Big[\int_0^t \mathcal{L}V(x(s), u(s))\,ds\Big].$ (71)
Since u(·) ∈ S(x₀), the control law satisfies lim_{t→∞} E^{x₀}[V(x(t))] = 0, and hence, it follows from (71) that
$V(x_0) = -\lim_{t \to \infty} \mathbb{E}^{x_0}\Big[\int_0^t \mathcal{L}V(x(s), u(s))\,ds\Big].$ (72)
Now, combining (62) and (72) yields
$V(x_0) \le \lim_{t \to \infty} \mathbb{E}^{x_0}\Big[\int_0^t L(x(s), u(s))\,ds\Big].$ (73)
Next, note that, for every t ≥ 0,
$\Big|\int_0^t L(x(s), u(s))\,ds\Big| \le \int_0^t |L(x(s), u(s))|\,ds \le \int_0^{\infty} |L(x(s), u(s))|\,ds,$ (74)
and, since u(·) ∈ S(x₀), E^{x₀}[∫₀^∞ |L(x(s), u(s))| ds] < ∞. Thus, it follows from the dominated convergence theorem [31] that
$\lim_{t \to \infty} \mathbb{E}^{x_0}\Big[\int_0^t L(x(s), u(s))\,ds\Big] = \mathbb{E}^{x_0}\Big[\int_0^{\infty} L(x(s), u(s))\,ds\Big].$ (75)
Finally, combining (65), (73), and (75) yields
$V(x_0) = J(x_0, \phi(x(\cdot))) \le J(x_0, u(\cdot)),$ (76)
which proves (66). □
Observe that (61) represents the steady-state form of the stochastic Hamilton–Jacobi–Bellman equation. To see this, recall that the general form of the stochastic Hamilton–Jacobi–Bellman equation is given by
$\frac{\partial V}{\partial t}(t, x) + \min_{u \in \mathbb{R}^m}\big[L(t, x, u) + \mathcal{L}V(t, x, u)\big] = 0, \quad t \ge 0, \quad x \in \mathbb{R}^n,$ (77)
which serves as the fundamental condition for optimal control in stochastic, time-dependent systems over either finite or infinite horizons [21]. When the system is time-invariant and considered over an infinite time horizon, the value function becomes stationary, that is, V(t, x) = V(x); as a result, (77) simplifies to (61) and (62). These equations ensure optimality within the class of admissible control policies S(x₀). Notably, it is not necessary to explicitly characterize the admissible set S(x₀), and the optimal feedback control law u = ϕ(x) is independent of the specific initial state x₀.
To guarantee that the closed-loop system given by (54) is globally finite time stable in probability, Theorem 4 requires that the value function V satisfy Conditions (56), (57), and (59). These requirements ensure that V serves as a Lyapunov function for the closed-loop system. However, these Lyapunov conditions are not necessary for establishing optimality. In particular, if V is twice continuously differentiable with V ∈ C_p¹(R^n) and the control signal u(·) = ϕ(x(·)) lies in the admissible set S(x₀), then satisfaction of (61) and (62) leads to the fulfillment of (65) and (66). It is also crucial to emphasize that, in contrast to the deterministic framework ([16], p. 857), establishing optimality in the stochastic setting necessitates an additional condition, namely, the transversality condition lim_{t→∞} E^{x₀}[V(x(t))] = 0 as stated in (67). (For more detailed discussions on this requirement, the reader is referred to [32], p. 323, [33], p. 125, and [34], p. 139.)

5. Inverse Optimal Stochastic Control for Nonlinear Affine Systems

In this section, we specialize Theorem 4 to nonlinear stochastic dynamical systems that are affine in the control. We develop nonlinear feedback controllers that minimize a nonlinear and nonquadratic performance criterion by selecting control laws such that the infinitesimal generator of a Lyapunov function for the closed-loop system satisfies (59), while providing sufficient conditions for the existence of solutions to the stochastic Hamilton–Jacobi–Bellman equation. As a result, we obtain a family of globally finite time stabilizing (in probability) controllers, each of which is parameterized by the cost functional being optimized.
The feedback control laws introduced here are derived from an inverse optimal stochastic control problem [16,18,19,35,36,37,38,39]. Instead of solving the stochastic steady-state Hamilton–Jacobi–Bellman equation directly for a given cost functional—an often complex task for higher dimensional systems—we define a family of stochastically finite time stabilizing controllers that optimize a derived cost functional. This approach offers flexibility in shaping the control laws. The performance integrand explicitly depends on the nonlinear system dynamics, the Lyapunov function for the closed-loop system, and the finite time stabilizing control laws. The coupling between these elements is governed by the stochastic Hamilton–Jacobi–Bellman equation. By adjusting the parameters in both the Lyapunov function and the performance criterion, this framework allows for the design of a class of globally finite time stabilizing (in probability) controllers that can meet specific performance and response requirements for the closed-loop system.
Consider the nonlinear affine in the control stochastic dynamical system given by
$dx(t) = [f(x(t)) + G(x(t)) u(t)]\,dt + D(x(t))\,dw(t), \quad x(0) = x_0, \quad u(\cdot) \in \mathcal{U}, \quad t \ge 0,$ (78)
where, for every t ≥ 0, the random variables x(t) and u(t) take values in the state space R^n and the control space R^m, respectively. The mappings f: R^n → R^n, G: R^n → R^{n×m}, and D: R^n → R^{n×d} satisfy (8) and (9), with f(x) replaced by f(x) + G(x)u, uniformly in u. Furthermore, we consider performance integrands L(x, u) of the form
$L(x, u) \triangleq L_1(x) + L_2(x) u + u^{\mathrm{T}} R_2(x) u,$ (79)
where L₁: R^n → R, L₂: R^n → R^{1×m}, and R₂: R^n → R^{m×m}, with R₂(x) > 0, x ∈ R^n, such that (55) becomes
$J(x_0, u(\cdot)) = \mathbb{E}^{x_0}\Big[\int_0^{\infty} \big[L_1(x(t)) + L_2(x(t)) u(t) + u^{\mathrm{T}}(t) R_2(x(t)) u(t)\big]\,dt\Big].$ (80)
Theorem 5.
Consider the nonlinear stochastic dynamical system given by (78) with the performance functional (80). Assume that there exist a two-times continuously differentiable function V ∈ C_p¹(R^n), constants a > 0 and α ∈ (0, 1), and a function L₂: R^n → R^{1×m} such that
$V(0) = 0,$ (81)
$V(x) > 0, \quad x \in \mathbb{R}^n, \quad x \ne 0,$ (82)
$L_2(0) = 0,$ (83)
$V'(x)\Big[f(x) - \tfrac{1}{2} G(x) R_2^{-1}(x) L_2^{\mathrm{T}}(x) - \tfrac{1}{2} G(x) R_2^{-1}(x) G^{\mathrm{T}}(x) V'^{\mathrm{T}}(x)\Big] + \tfrac{1}{2} \mathrm{tr}\, D^{\mathrm{T}}(x) V''(x) D(x) \le -a (V(x))^{\alpha}, \quad x \in \mathbb{R}^n.$ (84)
Then, the closed-loop system
$dx(t) = [f(x(t)) + G(x(t)) \phi(x(t))]\,dt + D(x(t))\,dw(t), \quad x(0) = x_0, \quad t \ge 0,$ (85)
is globally finite time stable in probability with the feedback control law
$\phi(x) = -\tfrac{1}{2} R_2^{-1}(x)\big[V'(x) G(x) + L_2(x)\big]^{\mathrm{T}}.$ (86)
Moreover, there exists a stochastic settling time T: R^n → [0, ∞) such that
$\mathbb{E}^{x_0}[T(x_0)] \le \frac{(V(x_0))^{1-\alpha}}{a(1-\alpha)}, \quad x_0 \in \mathbb{R}^n.$ (87)
Furthermore, if
$\mathcal{L}V^p(x, \phi(x)) \le 0, \quad x \in \mathbb{R}^n,$ (88)
where p > 1, then the performance functional (80), with
$L_1(x) = \phi^{\mathrm{T}}(x) R_2(x) \phi(x) - V'(x) f(x) - \tfrac{1}{2} \mathrm{tr}\, D^{\mathrm{T}}(x) V''(x) D(x),$ (89)
is minimized in the sense of (66), and (65) holds.
Proof. 
The result is a direct consequence of Theorem 4 with F(x, u) = f(x) + G(x)u, D(x, u) = D(x), and L(x, u) = L₁(x) + L₂(x)u + uᵀR₂(x)u. Specifically, with (79), we have
$H(x, u) = L_1(x) + L_2(x) u + u^{\mathrm{T}} R_2(x) u + V'(x)[f(x) + G(x) u] + \tfrac{1}{2} \mathrm{tr}\, D^{\mathrm{T}}(x) V''(x) D(x).$
Now, (86) is obtained by setting ∂H/∂u = 0. With (86), it follows that (84) implies (59).
Next, since V is twice continuously differentiable and x = 0 is a local minimizer of V, it follows that V′(0) = 0; hence, since by (83) L₂(0) = 0, it follows that ϕ(0) = 0, which implies (58). Next, with L₁(x) given by (89), ϕ(x) is given by (86) and (61) holds. Since H(x, u) = H(x, u) − H(x, ϕ(x)) = [u − ϕ(x)]ᵀ R₂(x) [u − ϕ(x)] ≥ 0 and R₂(x) is positive definite for all x ∈ R^n, (62) holds. The result now follows from Theorem 4. □
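Computationally, the feedback law (86) is an explicit formula once V′, G, R₂, and L₂ are available. A minimal helper follows, where the callable names and signatures are our own conventions:

```python
import numpy as np

def inverse_optimal_feedback(gradV, G, R2, L2):
    """Build phi(x) = -1/2 R2(x)^{-1} [V'(x) G(x) + L2(x)]^T, as in (86).

    gradV(x): row vector V'(x) of shape (n,); G(x): (n, m); R2(x): (m, m);
    L2(x): (m,). Returns a callable phi with phi(x) of shape (m,).
    """
    def phi(x):
        row = gradV(x) @ G(x) + L2(x)              # V'(x) G(x) + L2(x)
        return -0.5 * np.linalg.solve(R2(x), row)  # solve instead of inverting R2
    return phi
```

Using a linear solve rather than forming R₂⁻¹(x) explicitly is the standard numerically stable choice.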
Remark 1.
It is important to note that the inverse optimal control design approach provides a framework for constructing the Lyapunov function for the closed-loop system that serves as an optimal value function and, as shown in [16,18,39], achieves desired stability margins. Specifically, nonlinear inverse optimal controllers that minimize a meaningful (in the terminology of [18,39]) nonlinear–nonquadratic performance criterion involving a nonlinear–nonquadratic, nonnegative-definite function of the state and a quadratic positive-definite function of the feedback control are shown to possess gain and sector margin guarantees to input nonlinearities in the conic sector (1/2, ∞).
Example 1.
Consider the axisymmetric spacecraft with stochastic disturbances given by ([40], p. 753)
$d\omega_1(t) = [I_{23}\,\omega_3\,\omega_2(t) + u_1(t)]\,dt + \sigma\,\omega_1(t)\,dw_1(t), \quad \omega_1(0) = \omega_{10}, \quad t \ge 0,$ (90)
$d\omega_2(t) = [-I_{23}\,\omega_3\,\omega_1(t) + u_2(t)]\,dt + \sigma\,\omega_2(t)\,dw_2(t), \quad \omega_2(0) = \omega_{20},$ (91)
where I₂₃ ≜ (I₃ − I₂)/I₁; I₁, I₂, and I₃ are the principal moments of inertia of the spacecraft such that 0 < I₁ = I₂ < I₃; ω₁(·), ω₂(·), and ω₃ ∈ R are the angular velocities of the spacecraft in the body frame, with the spin-axis angular velocity ω₃ constant since the spacecraft is axisymmetric; u₁ and u₂ are the spacecraft control moments; w₁(·) and w₂(·) are standard Wiener processes capturing perturbations in the system dynamics; and σ ∈ R.
For this example, we seek state feedback control laws [u₁, u₂]ᵀ = ϕ(x), where x = [x₁, x₂]ᵀ = [ω₁, ω₂]ᵀ, such that the performance measure
$J(x_0, u(\cdot)) = \mathbb{E}^{x_0}\Big[\int_0^{\infty}\big[L_1(x(t)) + 2[I_{23}\,\omega_3\,\omega_2(t) + \sigma^2 \omega_1(t)] u_1(t) + 2[-I_{23}\,\omega_3\,\omega_1(t) + \sigma^2 \omega_2(t)] u_2(t) + u_1^2(t) + u_2^2(t)\big]\,dt\Big]$ (92)
is minimized in the sense of (66). Note that, in this case, R₂(x) = I₂ (the 2 × 2 identity matrix) and
$L_2(x) = 2\big[I_{23}\,\omega_3\,\omega_2 + \sigma^2 \omega_1, \ -I_{23}\,\omega_3\,\omega_1 + \sigma^2 \omega_2\big].$ (93)
Here, we apply Theorem 5 to find an inverse optimal feedback controller such that the spacecraft (90) and (91) is globally finite time stable in probability and (92) is minimized. To accomplish this, consider the Lyapunov function candidate given by
$V(x) = \mu (\omega_1^2 + \omega_2^2)^{2/3},$ (94)
where μ > 0. Note that V(x) satisfies (81) and (82). Furthermore, note that
$\mathcal{L}V(x, \phi(x)) = V'(x)\Big[f(x) - \tfrac{1}{2} G(x) R_2^{-1}(x) L_2^{\mathrm{T}}(x) - \tfrac{1}{2} G(x) R_2^{-1}(x) G^{\mathrm{T}}(x) V'^{\mathrm{T}}(x)\Big] + \tfrac{1}{2} \mathrm{tr}\, D^{\mathrm{T}}(x) V''(x) D(x) = \tfrac{4\mu}{3}(\omega_1^2 + \omega_2^2)^{-1/3}\,[\omega_1, \ \omega_2] \begin{bmatrix} -\sigma^2 \omega_1 - \tfrac{2\mu}{3}(\omega_1^2 + \omega_2^2)^{-1/3}\omega_1 \\ -\sigma^2 \omega_2 - \tfrac{2\mu}{3}(\omega_1^2 + \omega_2^2)^{-1/3}\omega_2 \end{bmatrix} + \tfrac{2\mu}{9}\sigma^2 (\omega_1^2 + \omega_2^2)^{2/3} = -\tfrac{8\mu^2}{9}(\omega_1^2 + \omega_2^2)^{1/3} - \tfrac{10\mu}{9}\sigma^2 (\omega_1^2 + \omega_2^2)^{2/3} \le -\tfrac{8\mu^2}{9}(\omega_1^2 + \omega_2^2)^{1/3} = -\tfrac{8\mu^{3/2}}{9}(V(x))^{1/2}, \quad x \in \mathbb{R}^2,$
which shows that (84) is satisfied with a = 8μ^{3/2}/9 and α = 1/2. Hence, it follows from Theorem 5 that the zero solution x(·) ≡ 0 of the closed-loop system (90) and (91) is globally finite time stable in probability with the feedback control law given by
$\phi(x) = -\tfrac{1}{2} R_2^{-1}(x)\big[V'(x) G(x) + L_2(x)\big]^{\mathrm{T}} = \begin{bmatrix} -\tfrac{2\mu}{3}(\omega_1^2 + \omega_2^2)^{-1/3}\omega_1 - I_{23}\,\omega_3\,\omega_2 - \sigma^2 \omega_1 \\ -\tfrac{2\mu}{3}(\omega_1^2 + \omega_2^2)^{-1/3}\omega_2 + I_{23}\,\omega_3\,\omega_1 - \sigma^2 \omega_2 \end{bmatrix}.$ (95)
Next, note that
$\mathcal{L}V^{3/2}(x, \phi(x)) = \mu^{3/2}\Big([2\omega_1, \ 2\omega_2]\begin{bmatrix} I_{23}\,\omega_3\,\omega_2 \\ -I_{23}\,\omega_3\,\omega_1 \end{bmatrix} + [2\omega_1, \ 2\omega_2]\begin{bmatrix} -\tfrac{2\mu}{3}(\omega_1^2 + \omega_2^2)^{-1/3}\omega_1 - I_{23}\,\omega_3\,\omega_2 - \sigma^2 \omega_1 \\ -\tfrac{2\mu}{3}(\omega_1^2 + \omega_2^2)^{-1/3}\omega_2 + I_{23}\,\omega_3\,\omega_1 - \sigma^2 \omega_2 \end{bmatrix} + \sigma^2 \omega_1^2 + \sigma^2 \omega_2^2\Big) = \mu^{3/2}\Big(-\tfrac{4\mu}{3}(\omega_1^2 + \omega_2^2)^{-1/3}\omega_1^2 - \tfrac{4\mu}{3}(\omega_1^2 + \omega_2^2)^{-1/3}\omega_2^2 - \sigma^2 \omega_1^2 - \sigma^2 \omega_2^2\Big) \le 0, \quad x \in \mathbb{R}^2,$ (96)
which shows that (88) is satisfied with p = 3/2. Hence, it follows from Theorem 5 that the performance functional (92), with
$L_1(x) = \phi^{\mathrm{T}}(x) R_2(x) \phi(x) - V'(x) f(x) - \tfrac{1}{2} \mathrm{tr}\, D^{\mathrm{T}}(x) V''(x) D(x) = \Big[\tfrac{2\mu}{3}(\omega_1^2 + \omega_2^2)^{-1/3}\omega_1 + I_{23}\,\omega_3\,\omega_2 + \sigma^2 \omega_1\Big]^2 + \Big[\tfrac{2\mu}{3}(\omega_1^2 + \omega_2^2)^{-1/3}\omega_2 - I_{23}\,\omega_3\,\omega_1 + \sigma^2 \omega_2\Big]^2 - \tfrac{2\mu}{9}\sigma^2 (\omega_1^2 + \omega_2^2)^{2/3},$ (97)
is minimized in the sense of (66).
For x₀ = [2, 3]ᵀ, I₂₃ = 1, ω₃ = 1, μ = 2, and σ = 0.5, Figure 1 and Figure 2 show the mean, along with one standard deviation (shaded area), of the closed-loop system states and the control inputs over 10⁴ sample paths.
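The sample statistics in Figures 1 and 2 can be reproduced approximately by Euler–Maruyama simulation of the closed-loop system (90) and (91) under the feedback law (95). The model parameters below match the example; the step size, horizon, and the small regularization of the fractional power at the origin are our assumptions.

```python
import numpy as np

I23, w3, mu, sigma = 1.0, 1.0, 2.0, 0.5
x0 = np.array([2.0, 3.0])
dt, T, N, eps = 1e-3, 5.0, 10_000, 1e-12
rng = np.random.default_rng(4)

def phi(X):
    """Feedback law (95), applied row-wise to an (N, 2) array of states.

    eps regularizes the fractional power (w1^2 + w2^2)^(-1/3) at the origin.
    """
    w1, w2 = X[:, 0], X[:, 1]
    r = (w1**2 + w2**2 + eps) ** (-1.0 / 3.0)
    return np.column_stack([-2*mu/3 * r * w1 - I23*w3*w2 - sigma**2 * w1,
                            -2*mu/3 * r * w2 + I23*w3*w1 - sigma**2 * w2])

X = np.tile(x0, (N, 1))                      # N sample paths propagated together
means, stds = [x0.copy()], [np.zeros(2)]
for _ in range(int(T / dt)):
    U = phi(X)
    drift = np.column_stack([I23*w3*X[:, 1], -I23*w3*X[:, 0]]) + U
    dW = rng.normal(0.0, np.sqrt(dt), size=(N, 2))
    X = X + drift * dt + sigma * X * dW      # diffusion D(x) = sigma * diag(x)
    means.append(X.mean(axis=0)); stds.append(X.std(axis=0))
# means/stds trace the curves and shaded bands shown in Figures 1 and 2.
```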

6. Conclusions

In this paper, we formulated an optimal control problem for finite time stochastic stabilization and derived sufficient conditions for designing a nonlinear feedback controller that ensures finite time stability in probability of the closed-loop system. The approach is based on a steady-state, stochastic Hamilton–Jacobi–Bellman framework, where the notion of optimality is associated with a Lyapunov function that satisfies a scalar differential inequality involving fractional powers. Furthermore, we developed inverse optimal feedback controllers for affine nonlinear stochastic systems. In future research, we will extend this framework to develop finite time stability in probability for semi-Markov jump systems [41], as well as for Itô diffusion processes with Markovian switching [42]. In addition, we will address the problem of fixed time optimal stabilization, as well as develop extensions to hybrid stochastic finite time and fixed time optimal control.

Author Contributions

R.C.: Conceptualization, Formal Analysis, Software, Visualization, and Writing—Original Draft. W.M.H.: Conceptualization, Formal Analysis, Writing—Review and Editing, Supervision, and Funding Acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported, in part, by the Air Force Office of Scientific Research under Grant FA9550-25-1-0184.

Data Availability Statement

No data were used for the research described in this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Agarwal, R.; Lakshmikantham, V. Uniqueness and Nonuniqueness Criteria for Ordinary Differential Equations; World Scientific: Singapore, 1993. [Google Scholar]
  2. Yamada, T.; Watanabe, S. On the Uniqueness of Solutions of Stochastic Differential Equations. J. Math. Kyoto Univ. 1971, 11, 155–167. [Google Scholar] [CrossRef]
  3. Watanabe, S.; Yamada, T. On the Uniqueness of Solutions of Stochastic Differential Equations II. J. Math. Kyoto Univ. 1971, 11, 553–563. [Google Scholar] [CrossRef]
  4. Situ, R. Theory of Stochastic Differential Equations with Jumps and Applications; Springer: New York, NY, USA, 2005. [Google Scholar]
  5. Roxin, E. On finite stability in control systems. Rend. Circ. Mat. Palermo 1966, 15, 273–282. [Google Scholar] [CrossRef]
  6. Bhat, S.P.; Bernstein, D.S. Finite-Time Stability of Continuous Autonomous Systems. SIAM J. Control Optim. 2000, 38, 751–766. [Google Scholar] [CrossRef]
  7. Bhat, S.P.; Bernstein, D.S. Geometric homogeneity with applications to finite-time stability. Math. Control. Signals Syst. 2005, 17, 101–127. [Google Scholar] [CrossRef]
  8. Moulay, E.; Perruquetti, W. Finite time stability conditions for non-autonomous continuous systems. Int. J. Control 2008, 81, 797–803. [Google Scholar] [CrossRef]
  9. Haddad, W.M.; Nersesov, S.G.; Du, L. Finite-time stability for time-varying nonlinear dynamical systems. In Proceedings of the 2008 American Control Conference, Seattle, WA, USA, 11–13 June 2008; pp. 4135–4139. [Google Scholar] [CrossRef]
  10. Haddad, W.M.; L’Afflitto, A. Finite-Time Partial Stability, Stabilization, and Optimal Feedback Control. J. Frankl. Inst. 2015, 352, 2329–2357. [Google Scholar] [CrossRef]
  11. Chen, W.; Jiao, L.C. Finite-time stability theorem of stochastic nonlinear systems. Automatica 2010, 46, 2105–2108. [Google Scholar] [CrossRef]
  12. Yin, J.; Khoo, S.; Man, Z.; Yu, X. Finite-time stability and instability of stochastic nonlinear systems. Automatica 2011, 47, 2671–2677. [Google Scholar] [CrossRef]
  13. Rajpurohit, T.; Haddad, W.M. Stochastic finite-time partial stability, partial-state stabilization, and finite-time optimal feedback control. Math. Control. Signals Syst. 2017, 29, 10. [Google Scholar] [CrossRef]
  14. Zhang, W.; Yao, L. New results on finite-time stability and instability theorems for stochastic nonlinear time-varying systems. Sci. China Inf. Sci. 2025, 68, 122203. [Google Scholar] [CrossRef]
  15. Yin, J.; Khoo, S. Comments on “Finite-time stability theorem of stochastic nonlinear systems” [Automatica 46 (2010) 2105–2108]. Automatica 2011, 47, 1542–1543. [Google Scholar] [CrossRef]
  16. Haddad, W.M.; Chellaboina, V. Nonlinear Dynamical Systems and Control: A Lyapunov-Based Approach; Princeton University Press: Princeton, NJ, USA, 2008. [Google Scholar]
  17. Lanchares, M.; Haddad, W.M. Nonlinear Optimal Control for Stochastic Dynamical Systems. Mathematics 2024, 12, 1–30. [Google Scholar] [CrossRef]
  18. Freeman, R.; Kokotovic, P. Inverse optimality in robust stabilization. SIAM J. Control Optim. 1996, 34, 1365–1391. [Google Scholar] [CrossRef]
  19. Deng, H.; Krstic, M. Stochastic nonlinear stabilization–Part II: Inverse optimality. Syst. Contr. Lett. 1997, 32, 151–159. [Google Scholar] [CrossRef]
  20. Khasminskii, R. Stochastic Stability of Differential Equations, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  21. Arnold, L. Stochastic Differential Equations: Theory and Applications; Wiley-Interscience: New York, NY, USA, 1974. [Google Scholar]
  22. Øksendal, B. Stochastic Differential Equations: An Introduction with Applications, 6th ed.; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  23. Mao, X. Stochastic Differential Equations and Applications, 2nd ed.; Woodhead Publishing: Cambridge, UK, 2007. [Google Scholar]
  24. Le Gall, J.F. Brownian Motion, Martingales, and Stochastic Calculus; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
  25. Gard, T.C. Introduction to Stochastic Differential Equations; Marcel Dekker: New York, NY, USA, 1988. [Google Scholar]
  26. Klebaner, F.C. Introduction to Stochastic Calculus with Applications, 3rd ed.; Imperial College Press: London, UK, 2012. [Google Scholar]
  27. Lanchares, M.; Haddad, W.M. Dissipative Stochastic Dynamical Systems. Syst. Control Lett. 2023, 172, 105451. [Google Scholar] [CrossRef]
  28. Chen, W.; Jiao, L.C. Correspondence: Authors’ reply to “Comments on ‘Finite-time stability theorem of stochastic nonlinear systems [Automatica 46 (2010) 2105-2108]”’. Automatica 2011, 47, 1544–1545. [Google Scholar] [CrossRef]
  29. Billingsley, P. Probability and Measure, Anniversary edition; John Wiley & Sons: Hoboken, NJ, USA, 2012. [Google Scholar]
  30. Apostol, T.M. Mathematical Analysis; Addison-Wesley: Reading, MA, USA, 1974. [Google Scholar]
  31. Shreve, S. Stochastic Calculus for Finance II: Continuous-Time Models; Springer: New York, NY, USA, 2004. [Google Scholar]
  32. Kushner, H. Introduction to Stochastic Control; Holt, Rinehart and Winston: New York, NY, USA, 1971. [Google Scholar]
  33. Chang, F.R. Stochastic Optimization in Continuous Time; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  34. Fleming, W.H.; Soner, H.M. Controlled Markov Processes and Viscosity Solutions, 2nd ed.; Springer: New York, NY, USA, 2006. [Google Scholar]
  35. Moylan, P.; Anderson, B. Nonlinear regulator theory and an inverse optimal control problem. IEEE Trans. Autom. Control 1973, 18, 460–465. [Google Scholar] [CrossRef]
  36. Molinari, B. The stable regulator problem and its inverse. IEEE Trans. Autom. Control 1973, 18, 454–459. [Google Scholar] [CrossRef]
  37. Jacobson, D.H. Extensions of Linear-Quadratic Control Optimization and Matrix Theory; Academic Press: New York, NY, USA, 1977. [Google Scholar]
  38. Jacobson, D.H.; Martin, D.H.; Pachter, M.; Geveci, T. Extensions of Linear-Quadratic Control Theory; Springer: Berlin/Heidelberg, Germany, 1980. [Google Scholar]
  39. Sepulchre, R.; Jankovic, M.; Kokotovic, P. Constructive Nonlinear Control; Springer: London, UK, 1997. [Google Scholar]
  40. Wie, B. Space Vehicle Dynamics and Control; American Institute of Aeronautics and Astronautics: Reston, VA, USA, 1998. [Google Scholar]
  41. Liu, J.; Colaneri, P.; Bolzern, P.; Li, Z.Y. Asymptotic stability analysis of nonlinear stochastic semi-Markov jump systems. Int. J. Robust Nonlinear Control 2023, 33, 8615–8630. [Google Scholar] [CrossRef]
  42. Ning, Z.; Zhang, L.; Lam, J. Stability and stabilization of a class of stochastic switching systems with lower bound of sojourn time. Automatica 2018, 92, 18–28. [Google Scholar] [CrossRef]
Figure 1. System states versus time for Example 1. The bold lines show the average states over 10⁴ sample paths, whereas the shaded area shows one standard deviation from the average.
Figure 2. Control inputs versus time for Example 1. The bold lines show the average inputs over 10⁴ sample paths, whereas the shaded area shows one standard deviation from the average.