Abstract
In real-world applications, finite time convergence to a desired Lyapunov stable equilibrium is often necessary. This notion of stability is known as finite time stability and refers to systems whose state trajectories reach an equilibrium in finite time. This paper explores the notion of finite time stability in probability within the context of nonlinear stochastic dynamical systems. Specifically, we introduce sufficient Lyapunov-based conditions, utilizing Lyapunov functions that satisfy scalar differential inequalities involving fractional powers, for guaranteeing finite time stability in probability. We then address the finite time optimal control problem by developing a framework for designing optimal feedback control laws that achieve finite time stochastic stability of the closed-loop system using a Lyapunov function that also serves as the solution to the steady-state stochastic Hamilton–Jacobi–Bellman equation.
MSC:
93D40; 37H30; 93E20; 49L12
1. Introduction
The classical notions of asymptotic and exponential stability in dynamical systems theory involve system trajectories approaching a Lyapunov stable equilibrium state over an infinite time horizon. However, in many engineering and scientific applications, it is necessary for the system to reach a stable equilibrium within finite time rather than asymptotically. Achieving finite time stability in deterministic systems requires the closed-loop system dynamics to be non-Lipschitz, which can result in nonuniqueness of solutions in backward time. Nonetheless, it is possible to preserve forward time uniqueness of solutions for finite time convergent systems.
Sufficient conditions for forward time uniqueness in deterministic systems without the assumption of Lipschitz continuity of the system dynamics are given in [1]. For stochastic systems, refs. [2,3,4] provide criteria that guarantee forward time uniqueness of solutions without requiring Lipschitz continuity. These works also show that if the dynamics are continuous and forward time uniqueness holds, then the system trajectories are almost surely continuous with respect to the system initial conditions, even in the absence of Lipschitz continuity of the drift and diffusion functions characterizing the stochastic dynamical system.
The concept of finite time stability, that is, convergence of the system trajectories to a Lyapunov stable equilibrium in finite time, was first introduced by Roxin [5] and further developed in [6,7] for time-invariant deterministic systems, as well as in [8,9,10] for time-varying deterministic systems. In particular, Lyapunov and converse Lyapunov theorems for finite time stability were established using Lyapunov functions that satisfy scalar differential inequalities with fractional powers, and it was shown that the regularity of the Lyapunov function depends on the properties of the settling time function, which characterizes the finite time convergence behavior of the system.
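To fix ideas, the following standard scalar comparison argument (ours, but consistent with the deterministic theory in [6]) shows how a fractional-power differential inequality forces convergence in finite time.

```latex
% If a nonnegative function V(t) satisfies the scalar differential inequality
%   dV/dt <= -c V^alpha,  c > 0,  alpha in (0,1),
% then separating variables and integrating while V > 0 gives
%   V(t)^{1-alpha} <= V(0)^{1-alpha} - c(1-alpha) t,
% so V vanishes no later than the finite settling time T below;
% for alpha = 1 the same computation only gives exponential decay.
\[
  \dot{V} \le -c\,V^{\alpha}
  \;\Longrightarrow\;
  T \le \frac{V(0)^{1-\alpha}}{c\,(1-\alpha)} .
\]
```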
Even though extensions of finite time stability to stochastic systems have been addressed in the literature [11,12,13,14], several of these results contain errors. Specifically, as pointed out in [15], several definitions and the main result in [11] are incorrect. Moreover, the authors in [13] used the results of [11] to develop partial-state stabilization in finite time, propagating the errors of [11] in their work, whereas the proof of the main theorem of [12] used Jensen's inequality incorrectly, thus invalidating their result. Finally, ref. [14] failed to provide a bound on the expectation of the settling time function, which is crucial for a complete theory of finite time stability and stabilization for stochastic dynamical systems.
In this paper, we correct these oversights to present a self-contained theory for finite time stability in probability and build upon the framework established in [16,17] to address the problem of optimal finite time stabilization for stochastic nonlinear systems. Specifically, we ensure finite time stability in probability for the closed-loop system using a Lyapunov function that satisfies a scalar differential inequality that involves fractional powers. This Lyapunov function is shown to correspond to the steady-state solution of the stochastic Hamilton–Jacobi–Bellman equation, ensuring both finite time stability in probability and optimal performance. Finally, we also develop connections of our approach to inverse optimal control [18,19] by constructing a family of finite time stabilizing stochastic feedback laws that minimize a derived cost functional.
2. Mathematical Preliminaries
We will start by reviewing some basic results on nonlinear stochastic dynamical systems [20,21,22,23]. First, however, we require some notations and definitions. The notation, definitions, and mathematical preliminaries in this section are adopted from [17]. A probability space is a mathematical construct that provides a model for a random experiment and consists of the triple $(\Omega, \mathcal{F}, \mathbb{P})$. The sample space $\Omega$ is the set of all possible outcomes of the experiment. The event space $\mathcal{F}$ is a collection of subsets of the sample space, where each subset represents an event; the event space has the algebraic structure of a σ-algebra. The pair $(\Omega, \mathcal{F})$ is a measurable space, and the function $\mathbb{P} : \mathcal{F} \to [0, 1]$ defines a probability measure on the σ-algebra $\mathcal{F}$, assigning a probability to each event in the event space $\mathcal{F}$. A complete probability space is one in which the σ-algebra $\mathcal{F}$ includes all subsets of sets of probability measure zero.
A Borel set is a set in a topological space that is derived from open (or closed) sets through the repeated operations of countable union, countable intersection, and relative complement. The Borel σ-algebra on $\mathbb{R}^n$, denoted by $\mathcal{B}(\mathbb{R}^n)$, is the smallest σ-algebra containing all the open sets in $\mathbb{R}^n$. An $\mathbb{R}^n$-valued random vector $x$ is a measurable function $x : \Omega \to \mathbb{R}^n$; that is, for every Borel set $B \in \mathcal{B}(\mathbb{R}^n)$, its preimage $x^{-1}(B) = \{\omega \in \Omega : x(\omega) \in B\}$ belongs to $\mathcal{F}$. A random variable is a scalar random vector. In general, given a random vector $x$ and a σ-algebra $\mathcal{G} \subseteq \mathcal{F}$, we say that $x$ is $\mathcal{G}$-measurable if $x^{-1}(B) \in \mathcal{G}$ for all $B \in \mathcal{B}(\mathbb{R}^n)$. Given an integrable random vector $x$ and a sub-σ-algebra $\mathcal{G} \subseteq \mathcal{F}$, $\mathbb{E}[x]$ and $\mathbb{E}[x \,|\, \mathcal{G}]$ denote, respectively, the expectation of the random vector $x$ and the conditional expectation of $x$ given $\mathcal{G}$ under the probability measure $\mathbb{P}$.
A continuous-time stochastic process $\{x(t)\}_{t \ge 0}$ is a collection of random vectors defined on the probability space $(\Omega, \mathcal{F}, \mathbb{P})$ indexed by the set of nonnegative real numbers. Occasionally, we write $x(t, \omega)$ for $x(t)$ to denote the explicit dependence of the random variable $x(t)$ on the outcome $\omega \in \Omega$. For every fixed time $t \ge 0$, the random variable $x(t)$ assigns a vector $x(t, \omega) \in \mathbb{R}^n$ to every outcome $\omega \in \Omega$, and for every fixed $\omega \in \Omega$, the mapping $t \mapsto x(t, \omega)$ generates a sample path (or sample trajectory) of the stochastic process, where, for convenience, we write $x(\cdot)$ to denote the stochastic process $\{x(t)\}_{t \ge 0}$.
A (continuous-time) filtration $\{\mathcal{F}_t\}_{t \ge 0}$ on $(\Omega, \mathcal{F}, \mathbb{P})$ is a collection of sub-σ-fields of $\mathcal{F}$ indexed by $t \ge 0$ such that $\mathcal{F}_s \subseteq \mathcal{F}_t$, $0 \le s \le t$. A filtration is complete if every $\mathcal{F}_t$ contains all the sets that are contained in a $\mathbb{P}$-null set. The stochastic process $x(\cdot)$ is progressively measurable with respect to $\{\mathcal{F}_t\}_{t \ge 0}$ if, for every $t \ge 0$, the map $(s, \omega) \mapsto x(s, \omega)$ defined on $[0, t] \times \Omega$ is $\mathcal{B}([0, t]) \otimes \mathcal{F}_t$-measurable. The stochastic process $x(\cdot)$ is said to be adapted with respect to $\{\mathcal{F}_t\}_{t \ge 0}$, or it is simply $\mathcal{F}_t$-adapted, if $x(t)$ is $\mathcal{F}_t$-measurable for every $t \ge 0$. An adapted stochastic process with right continuous (or left continuous) sample paths is progressively measurable [24].
The stochastic process $x(\cdot)$, where $x(t)$, $t \ge 0$, is a random variable, is a martingale with respect to the filtration $\{\mathcal{F}_t\}_{t \ge 0}$ if it is $\mathcal{F}_t$-adapted, $\mathbb{E}[\|x(t)\|] < \infty$, $t \ge 0$, and $\mathbb{E}[x(t) \,|\, \mathcal{F}_s] = x(s)$ almost surely for all $0 \le s \le t$. If we replace the equality in $\mathbb{E}[x(t) \,|\, \mathcal{F}_s] = x(s)$ with "≤" (respectively, "≥"), then $x(\cdot)$ is a supermartingale (respectively, submartingale). A random variable $\tau : \Omega \to [0, \infty]$ is called a stopping time of the filtration $\{\mathcal{F}_t\}_{t \ge 0}$ if $\{\omega \in \Omega : \tau(\omega) \le t\} \in \mathcal{F}_t$ for all $t \ge 0$. A stopping time $\tau$ is a bounded stopping time if the event $\{\tau \le c\}$, where $c > 0$ is a constant, has a probability of one. (For an additional discussion on stochastic processes, filtrations, martingales, and stopping times, see [24].)
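As a concrete illustration of these notions (standard facts, not specific to this paper), a Brownian motion is a martingale with respect to its natural filtration, and first exit times of adapted sample-continuous processes are stopping times.

```latex
% For a standard Brownian motion w(.) adapted to its natural filtration,
% independence and zero mean of the increment w(t) - w(s) give, for 0 <= s <= t,
%   E[w(t) | F_s] = E[w(t) - w(s) | F_s] + w(s) = w(s),
% so w(.) is a martingale. Moreover, for eps > 0, the first exit time
%   tau_eps = inf{ t >= 0 : |w(t)| >= eps }
% satisfies {tau_eps <= t} in F_t, since this event is determined by the
% path of w on [0, t]; hence tau_eps is a stopping time.
\[
  \mathbb{E}[\,w(t)\mid \mathcal{F}_s\,] = w(s), \qquad
  \tau_{\varepsilon} \triangleq \inf\{t \ge 0 : |w(t)| \ge \varepsilon\}.
\]
```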
In this paper, we consider stochastic dynamical systems of the form
$$dx(t) = f(x(t))\,dt + D(x(t))\,dw(t), \quad x(0) = x_0, \quad t \ge 0, \tag{1}$$
where (1) is a stochastic differential equation. The stochastic process $x(t)$, $t \ge 0$, represents the system state, $x_0$ is a random system initial condition vector, and $w(\cdot)$ is a d-dimensional Brownian motion process. For every $t \ge 0$, the random variable $x(t)$ takes values in the state space $\mathbb{R}^n$. The Borel measurable mappings $f : \mathbb{R}^n \to \mathbb{R}^n$ and $D : \mathbb{R}^n \to \mathbb{R}^{n \times d}$ satisfy $f(0) = 0$ and $D(0) = 0$, and they are known as the system drift and diffusion functions. The stochastic differential equation (1) is interpreted as a way of expressing the integral equation
$$x(t) = x_0 + \int_0^t f(x(s))\,ds + \int_0^t D(x(s))\,dw(s), \quad t \ge 0, \tag{2}$$
where the first integral in (2) is a Lebesgue integral and the second is an Itô integral [25].
Let $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, \mathbb{P})$ be a fixed complete filtered probability space, $w(\cdot)$ be an $\mathcal{F}_t$-adapted Brownian motion, and $x_0$ be an $\mathcal{F}_0$-measurable initial condition. A solution to (1) is an $\mathbb{R}^n$-valued $\mathcal{F}_t$-adapted process $x(\cdot)$ with continuous sample paths such that the integrals in (2) exist and (2) holds almost surely for all $t \ge 0$. For a Brownian motion disturbance and initial condition given in a prescribed probability space, the solution to (2) is known as a strong solution [26]. In this paper, we focus on strong solutions, and we will simply use the term "solution" to refer to a strong solution. A solution to (1) is unique if, for any two solutions $x(\cdot)$ and $\hat{x}(\cdot)$ that satisfy (1), $x(t) = \hat{x}(t)$ for all $t \ge 0$ almost surely.
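To make the notion of a solution concrete, the following minimal Python sketch approximates sample paths of (1) using the Euler–Maruyama scheme; the scalar drift and diffusion shown are illustrative placeholders of ours, not the systems studied in this paper.

```python
import numpy as np

def euler_maruyama(f, D, x0, T=5.0, dt=1e-3, rng=None):
    """Approximate one sample path of dx = f(x) dt + D(x) dw on [0, T]."""
    rng = np.random.default_rng() if rng is None else rng
    n_steps = int(T / dt)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment ~ N(0, dt)
        x[k + 1] = x[k] + f(x[k]) * dt + D(x[k]) * dw
    return x

# Illustrative (placeholder) non-Lipschitz drift and diffusion:
f = lambda x: -2.0 * np.sign(x) * np.abs(x) ** (1.0 / 3.0)
D = lambda x: 0.5 * np.abs(x) ** (2.0 / 3.0)
path = euler_maruyama(f, D, x0=1.0)
```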
For a stochastic system of the form given by (1) with solution $x(\cdot)$, the (infinitesimal) generator of $x(\cdot)$ is an operator acting on a function $V : \mathbb{R}^n \to \mathbb{R}$, and it is defined as ([22])
$$\mathcal{A}V(x) \triangleq \lim_{t \to 0^+} \frac{\mathbb{E}^{x}[V(x(t))] - V(x)}{t}, \tag{3}$$
where $\mathbb{E}^{x}$ denotes the expectation given that $x(0) = x$ is a fixed point in $\mathbb{R}^n$. The set of functions for which the limit in (3) exists is denoted by $\mathcal{D}(\mathcal{A})$. If $V \in C^2(\mathbb{R}^n)$ has compact support, where $C^r(\mathbb{R}^n)$ denotes the space of functions with r continuous derivatives, then $V \in \mathcal{D}(\mathcal{A})$ and $\mathcal{A}V = \mathcal{L}V$, where
$$\mathcal{L}V(x) \triangleq \frac{\partial V(x)}{\partial x}\,f(x) + \frac{1}{2}\operatorname{tr}\left[D^{\mathsf{T}}(x)\,\frac{\partial^{2} V(x)}{\partial x^{2}}\,D(x)\right], \tag{4}$$
$x \in \mathbb{R}^n$ [22]. Note that the differential operator $\mathcal{L}$ introduced in (4) is defined for every two-times continuously differentiable function V, and it is characterized by the system drift and diffusion functions. With a minor abuse in terminology, we will refer to the differential operator $\mathcal{L}$ as the (infinitesimal) generator of the system (1).
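For instance, evaluating (4) for the quadratic function $V(x) = x^{\mathsf{T}}x$ gives a particularly transparent expression.

```latex
% With V(x) = x^T x we have dV/dx = 2 x^T and d^2 V/dx^2 = 2 I_n, so (4) reduces to
\[
  \mathcal{L}V(x)
  = 2\,x^{\mathsf{T}} f(x) + \operatorname{tr}\!\left[D^{\mathsf{T}}(x) D(x)\right]
  = 2\,x^{\mathsf{T}} f(x) + \|D(x)\|_{\mathrm{F}}^{2},
\]
% where the Frobenius-norm term quantifies the contribution of the diffusion.
```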
If $V \in C^2(\mathbb{R}^n)$, then it follows from Itô's formula [26] that the stochastic process $V(x(t))$, $t \ge 0$, satisfies
$$V(x(t)) = V(x(0)) + \int_0^t \mathcal{L}V(x(s))\,ds + \int_0^t \frac{\partial V(x(s))}{\partial x}\,D(x(s))\,dw(s), \quad t \ge 0. \tag{5}$$
If the terms appearing in (5) are integrable and the Itô integral in (5) is a martingale, then it follows from (5) that
$$\mathbb{E}[V(x(t))] = \mathbb{E}[V(x(0))] + \mathbb{E}\left[\int_0^t \mathcal{L}V(x(s))\,ds\right], \quad t \ge 0. \tag{6}$$
A more general version of (6), in which t is replaced by a stopping time, is known as Dynkin's formula [22].
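For reference, Dynkin's formula reads as follows (see [22]): for V as above and a stopping time τ satisfying suitable integrability conditions (e.g., $\mathbb{E}^{x}[\tau] < \infty$),

```latex
\[
  \mathbb{E}^{x}\!\left[V(x(\tau))\right]
  = V(x) + \mathbb{E}^{x}\!\left[\int_{0}^{\tau} \mathcal{L}V(x(s))\,ds\right],
\]
```

and this is the identity invoked repeatedly in Section 3 with τ chosen as an exit or settling time.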
We say that a function $V : \mathbb{R}^n \to \mathbb{R}$ is of polynomial growth if there exist positive constants C and m such that
$$|V(x)| \le C\left(1 + \|x\|^{m}\right), \quad x \in \mathbb{R}^n. \tag{7}$$
For $r \in \mathbb{N}$, where $\mathbb{N}$ denotes the set of natural numbers, we say that V is of polynomial growth of order r if V and all its partial derivatives up to order r are of polynomial growth. As shown in [27], polynomial growth of order 2 is a sufficient condition for the integrability of the terms in (5). In this case, it can be shown that the limit in (3) is given by the differential operator (4). In this paper, we assume all Lyapunov functions V are of polynomial growth of order 2.
Given a function $V : \mathcal{D} \subseteq \mathbb{R}^n \to \mathbb{R}$, we say that V is positive definite with respect to $x_{\mathrm e} \in \mathcal{D}$ if $V(x_{\mathrm e}) = 0$ and $V(x) > 0$ for all $x \in \mathcal{D}$, $x \ne x_{\mathrm e}$. If $x_{\mathrm e} = 0$, then V is positive definite with respect to the origin, or V is simply positive definite. Moreover, we say a function V is nonnegative definite if $V(x) \ge 0$ for all $x \in \mathcal{D}$. We say that V is negative definite if $-V$ is positive definite. In addition, we say that V is radially unbounded if $V(x) \to \infty$ as $\|x\| \to \infty$.
As discussed in the Introduction, in order to achieve convergence in finite time for stochastic dynamical systems, the drift and diffusion functions characterizing the system dynamics need to be non-Lipschitzian, giving rise to nonuniqueness of solutions in backward time ([20], Lemma 5.3). Uniqueness of solutions in forward time, however, can be preserved in the case of finite time convergence. The next result establishes the existence and uniqueness of solutions for the stochastic differential equation (1) with non-Lipschitzian drift and diffusion functions. For the statement of this result, $\mathcal{B}_r(x)$, $x \in \mathbb{R}^n$, $r > 0$, denotes the open ball centered at x with radius r in the Euclidean norm.
Theorem 1
([4]). Consider the nonlinear stochastic dynamical system (1) with initial condition $x(0) = x_0$ such that $\mathbb{E}[\|x_0\|^2] < \infty$, and assume that the following conditions hold:
- (i)
- Continuity. $f(\cdot)$ and $D(\cdot)$ are continuous.
- (ii)
- Linear growth. There exists a constant $L > 0$ such that, for all $x \in \mathbb{R}^n$,
$$\|f(x)\| \le L\left(1 + \|x\|\right), \tag{8}$$
and
$$\|D(x)\|_{\mathrm{F}} \le L\left(1 + \|x\|\right). \tag{9}$$
- (iii)
- For every $r > 0$, there exist a strictly increasing, continuous, and concave function $\rho_r : [0, \infty) \to [0, \infty)$ satisfying $\rho_r(0) = 0$ and $\int_{0^+} du/\rho_r(u) = \infty$, as well as a constant $L_r > 0$ such that, for all $x, y \in \mathcal{B}_r(0)$,
$$2\langle x - y, f(x) - f(y)\rangle \le L_r\,\rho_r\!\left(\|x - y\|^{2}\right)$$
and
$$\|D(x) - D(y)\|_{\mathrm{F}}^{2} \le L_r\,\rho_r\!\left(\|x - y\|^{2}\right).$$

Then, there exists a unique solution $x(t)$, $t \ge 0$, to (1) satisfying
$$\mathbb{E}\left[\sup_{0 \le s \le t}\|x(s)\|^{2}\right] < \infty, \quad t \ge 0. \tag{10}$$
Assumption 1.
For the remainder of this paper, we assume that the conditions for existence and uniqueness given in Theorem 1 are satisfied for system (1).
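As a simple illustration of ours (not taken from [4]), the scalar system below has drift and diffusion that are continuous, of linear growth, and non-Lipschitz at the origin; for such one-dimensional systems, pathwise uniqueness follows from classical Yamada–Watanabe-type results [2,3].

```latex
% Scalar system with non-Lipschitz coefficients, k > 0:
\[
  dx(t) = -k\,\mathrm{sign}(x(t))\,|x(t)|^{1/3}\,dt
          + \sigma\,|x(t)|^{2/3}\,dw(t), \qquad x(0) = x_0 .
\]
% (i)  Both coefficients are continuous and vanish at x = 0.
% (ii) |f(x)| <= k(1 + |x|) and |D(x)| <= sigma(1 + |x|), so linear growth holds.
% The drift is decreasing (a monotonicity condition), and the diffusion is
% Holder continuous with exponent 2/3 > 1/2, which is the classical
% Yamada-Watanabe regime for pathwise uniqueness.
```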
3. Finite Time Stability for Stochastic Dynamical Systems
In this section, we introduce the notion of finite time stability in probability and present sufficient conditions for finite time stability of (1) using a Lyapunov function that satisfies a scalar differential inequality involving fractional powers. First, however, we require some additional definitions and results. We denote the solution to (1) with initial condition $x(0) = x_0$ by $x(t)$, $t \ge 0$, and we write $x(t, \omega)$ for the corresponding sample trajectory of (1). Thus, for every $\omega \in \Omega$, there exists a trajectory $t \mapsto x(t, \omega)$ defined for all $t \ge 0$ and satisfying the dynamical process (1) with initial condition $x(0) = x_0$. For simplicity of exposition, we write $x(t)$ for $x(t, \omega)$, omitting its dependence on $\omega \in \Omega$.
The following definitions introduce the notions of a stochastic settling time and finite time stability in probability for stochastic dynamical systems. Here, we assume that the initial condition $x_0$ is a constant, and hence, whenever we write $x(0) = x_0$, we mean that $x_0 \in \mathbb{R}^n$ is a constant vector. In this case, we will find it convenient to introduce the notation $\mathbb{P}_{x_0}(\cdot)$ and $\mathbb{E}_{x_0}[\cdot]$ to denote the probability and expected value, respectively, given that the initial condition is the fixed point $x_0$ almost surely.
Definition 1.
Consider the nonlinear stochastic dynamical system given by (1). The stochastic settling time $\tau : \Omega \to [0, \infty]$ is a stopping time with respect to the filtration $\{\mathcal{F}_t\}_{t \ge 0}$, and it is defined as
$$\tau(\omega) \triangleq \inf\{t \ge 0 : x(t, \omega) = 0\}, \quad \omega \in \Omega,$$
where $\inf \emptyset = \infty$.
Note that if $x(t, \omega) \ne 0$ for all $t \ge 0$, then $\tau(\omega) = \infty$. For simplicity of exposition, we write τ for τ(ω), omitting its dependence on $\omega \in \Omega$.
Definition 2.
The zero solution $x(t) \equiv 0$ to (1) is finite time stable in probability if the following statements hold:
- (i)
- Lyapunov stability in probability. For every $\varepsilon > 0$, $\lim_{x_0 \to 0} \mathbb{P}_{x_0}\left(\sup_{t \ge 0} \|x(t)\| > \varepsilon\right) = 0$ or, equivalently, for every $\varepsilon > 0$ and $\rho \in (0, 1)$ there exists $\delta = \delta(\varepsilon, \rho) > 0$ such that, for all $\|x_0\| < \delta$,
$$\mathbb{P}_{x_0}\left(\sup_{t \ge 0} \|x(t)\| > \varepsilon\right) \le \rho.$$
- (ii)
- Finite time convergence in probability. For every $\rho \in (0, 1)$, there exists $\delta = \delta(\rho) > 0$ such that if $\|x_0\| < \delta$, then
$$\mathbb{P}_{x_0}\left(\tau < \infty\right) \ge 1 - \rho.$$
The zero solution to (1) is globally finite time stable in probability if, in addition, $\mathbb{P}_{x_0}(\tau < \infty) = 1$ for all $x_0 \in \mathbb{R}^n$.
Proposition 1.
Proof.
Proposition 1 implies that, for every $\varepsilon > 0$ and $\rho \in (0, 1)$, there exists $\delta > 0$ such that, for all $\|x_0\| < \delta$,
$$\mathbb{P}_{x_0}\left(\sup_{t \ge 0} \|x(t)\| > \varepsilon\right) \le \rho$$
and
$$\mathbb{P}_{x_0}\left(\lim_{t \to \infty} x(t) = 0\right) \ge 1 - \rho.$$
Hence, it follows from Proposition 1 that if the zero solution to (1) is globally finite time stable in probability, then it is globally asymptotically stable in probability. Thus, global finite time stability in probability is a stronger notion than global asymptotic stability in probability.
The following theorem, based on the results appearing in [14,28], gives sufficient conditions for stochastic finite time stability using a Lyapunov function involving a scalar differential inequality. For completeness, we give a self-contained proof of this result as it forms the foundation for all later developments in this paper. For the statement of this theorem, $\mathbf{1}_{\mathcal{S}}(\cdot)$ denotes the indicator function of the set $\mathcal{S} \subseteq \mathbb{R}^n$, that is, $\mathbf{1}_{\mathcal{S}}(x) = 1$ if $x \in \mathcal{S}$ and $\mathbf{1}_{\mathcal{S}}(x) = 0$ otherwise.
Theorem 2.
Let $\mathcal{D} \subseteq \mathbb{R}^n$ be an open subset containing the origin. Consider the nonlinear stochastic dynamical system (1) and assume that there exist a two-times continuously differentiable function $V : \mathcal{D} \to \mathbb{R}$ and constants $c > 0$ and $\alpha \in (0, 1)$ such that
$$V(0) = 0, \tag{18}$$
$$V(x) > 0, \quad x \in \mathcal{D}, \quad x \ne 0, \tag{19}$$
$$\mathcal{L}V(x) \le -c\,(V(x))^{\alpha}, \quad x \in \mathcal{D}, \quad x \ne 0. \tag{20}$$
Then, the zero solution $x(t) \equiv 0$ to (1) is finite time stable in probability. If, in addition, $\mathcal{D} = \mathbb{R}^n$, V is radially unbounded, and (19) and (20) hold on $\mathbb{R}^n$, then the zero solution to (1) is globally finite time stable in probability. Moreover, there exists a stochastic settling time τ such that
$$\mathbb{E}_{x_0}[\tau] \le \frac{(V(x_0))^{1-\alpha}}{c\,(1-\alpha)}$$
for all $x_0$ in a neighborhood of the origin, with the bound holding for all $x_0 \in \mathbb{R}^n$ in the global case.
Proof.
Conditions (18)–(20) imply Lyapunov stability in probability by Theorem 2 of [17]. Thus, for every $\rho \in (0, 1)$ and $\varepsilon > 0$ such that $\overline{\mathcal{B}_\varepsilon(0)} \subset \mathcal{D}$, there exists $\delta = \delta(\varepsilon, \rho) > 0$ such that, for all $x_0 \in \mathcal{B}_\delta(0)$,
$$\mathbb{P}_{x_0}\left(\sup_{t \ge 0} \|x(t)\| > \varepsilon\right) \le \rho.$$
Next, note that finite time stability in probability holds trivially for . For all , define the stopping times , , , and , where , and .
Since is two-times continuously differentiable for all such that , it follows from Itô’s formula [22] that, for all and ,
The process is -measurable and -adapted due to the measurability of the mappings involved and the properties of the processes . Now, since V is continuously differentiable and D is continuous, it follows that, for all and ,
and hence, by Corollary 3.2.6 of [22], the Itô integral in (23) is a martingale. Now, (23) and the properties of the expectation of a martingale yield
Note that the expectations in (25) are well defined because is a stopping time for exiting the bounded set and the integrands are continuous in .
Next, it follows from (19), (20), and (25) that
Since (26) holds for all , it also holds for , implying that
We claim that
To see this, note that, since the stopping times are increasing with k, exists or is infinite for almost all . Next, since is sample continuous, for all , which yields
Now, for almost all , (27) holds trivially; moreover, for some , if , then (28) implies (27). Alternatively, if , then assume that, ad absurdum, . It follows from the definition of that
which implies
Next, since is sample continuous in t, we conclude
which contradicts , and hence, (27) holds for almost all .
Now, it follows from Fatou’s lemma [29] that
which implies that
proving finite time stability in probability.
To show global finite time stability in probability, for every , define the stopping times and , where and , such that . Using a similar argument as the one used to obtain (26) with , replacing yields
Since (31) holds for all , it also holds for , which implies
Next, we claim that
Since the stopping times are increasing with l, exists or is infinite for almost all . Now, using an analogous argument as the one used to obtain (28) gives
Next, for almost all , (33) implies (32). Alternatively, for some , assume that, ad absurdum, . Since (18)−(20) hold and V is radially unbounded, applies by Theorem 2 of [17]. Thus, there exists such that for all , and hence,
which implies
Now, since is sample continuous in t, we conclude
which contradicts , and hence, (32) holds for almost all .
Finally, it follows from Fatou’s lemma [29] that
and hence,
which proves global finite time stability in probability. □
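To illustrate Theorem 2, consider the following scalar example of ours with $\gamma \in (0, 1)$, $k > 0$, and noise intensity σ satisfying $\sigma^2 < 2k$.

```latex
% Consider dx = -k sign(x)|x|^gamma dt + sigma |x|^{(1+gamma)/2} dw and take
% V(x) = x^2, so that V'(x) = 2x and V''(x) = 2. Then
%   LV(x) = 2 x f(x) + D(x)^2
%         = -2k |x|^{1+gamma} + sigma^2 |x|^{1+gamma}
%         = -(2k - sigma^2) (x^2)^{(1+gamma)/2}
%         = -c (V(x))^alpha,
% with c = 2k - sigma^2 > 0 and alpha = (1+gamma)/2 in (0,1). Hence, (18)-(20)
% hold on R with V radially unbounded, and Theorem 2 gives global finite time
% stability in probability with expected settling time bound
\[
  \mathbb{E}_{x_0}[\tau]
  \le \frac{(V(x_0))^{1-\alpha}}{c\,(1-\alpha)}
  = \frac{2\,|x_0|^{\,1-\gamma}}{(2k - \sigma^{2})\,(1-\gamma)} .
\]
```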
4. Stochastic Optimal Finite Time Stabilization
In the first part of this section, we provide connections between Lyapunov functions and nonquadratic cost evaluation. Specifically, we consider the problem of evaluating a nonlinear–nonquadratic performance measure that depends on the solution of the stochastic nonlinear dynamical system given by (1). In particular, we show that the nonlinear–nonquadratic performance measure
$$J(x_0) \triangleq \mathbb{E}_{x_0}\left[\int_0^{\infty} L(x(t))\,dt\right], \tag{36}$$
where $L : \mathbb{R}^n \to \mathbb{R}$ and $x(t)$, $t \ge 0$, satisfies (1) with $x(0) = x_0$, can be evaluated in a convenient form so long as (1) is related to an underlying Lyapunov function that proves finite time stability in probability of (1).
The following theorem generalizes Theorem 6 of [17] to finite time stability.
Theorem 3.
Consider the nonlinear stochastic dynamical system given by (1) with the nonlinear–nonquadratic performance measure (36), where $x(t)$, $t \ge 0$, is the solution to (1). Furthermore, assume that there exist a two-times continuously differentiable radially unbounded function $V : \mathbb{R}^n \to \mathbb{R}$ and constants $c > 0$ and $\alpha \in (0, 1)$ such that
$$V(0) = 0, \tag{37}$$
$$V(x) > 0, \quad x \in \mathbb{R}^n, \quad x \ne 0, \tag{38}$$
$$\mathcal{L}V(x) \le -c\,(V(x))^{\alpha}, \quad x \in \mathbb{R}^n, \quad x \ne 0, \tag{39}$$
$$L(x) \ge 0, \quad x \in \mathbb{R}^n, \tag{40}$$
$$L(x) + \mathcal{L}V(x) = 0, \quad x \in \mathbb{R}^n, \tag{41}$$
where $L(\cdot)$ is the performance integrand in (36). Then, the zero solution $x(t) \equiv 0$ to (1) is globally finite time stable in probability. Moreover, there exists a stochastic settling time τ such that
$$\mathbb{E}_{x_0}[\tau] \le \frac{(V(x_0))^{1-\alpha}}{c\,(1-\alpha)}, \quad x_0 \in \mathbb{R}^n, \tag{42}$$
and
$$J(x_0) = V(x_0), \quad x_0 \in \mathbb{R}^n. \tag{43}$$
Proof.
Global finite time stability in probability along with the existence of a stochastic settling time τ such that (42) holds are a direct consequence of (37)–(39) and V being radially unbounded by Theorem 2. Using the fact that V is continuous with $V(0) = 0$ and $x(t) \to 0$ as $t \to \tau$ yields $\lim_{t \to \tau} V(x(t)) = 0$ almost surely.
Next, we show that the stochastic process is a martingale. To see this, first note that the process is -measurable and -adapted because of the measurability of the mappings involved and the properties of the process . Now, using Tonelli’s theorem [29] it follows that, for all ,
for some positive constants and , and hence, by Corollary 3.2.6 of [22] the Itô integral
is a martingale. To arrive at (45) we used the fact that , the linear growth condition (8), and the finiteness of the expected value of the supremum of the moments of the system state (10). Note that the supremum in (45) exists because of the continuity of the sample paths of .
Next, the measurability of the solution of (1) implies that is -adapted and exists since and (10) holds. Now, using Itô’s formula [22] and (39) we have, for every and almost every ,
Since the Itô integral in (46) is a martingale, (46) implies
which shows that the process is a nonnegative supermartingale. An analogous argument implies that the process is also a nonnegative supermartingale for any .
Next, using the fact that supermartingales have decreasing expectations gives
which shows that the stochastic process is uniformly integrable [22] (p. 323). Thus, by Doob’s martingale convergence theorem [22], converges in , and hence,
Furthermore, it follows from (41) and Itô’s formula that, for all ,
Taking the expected value operator on both sides of (50) and using the martingale property of the stochastic integral in (50) yields
Now, taking the limit as $t \to \infty$ and using (49) yields (43), completing the proof. □
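In outline, and under the integrability assumptions above, the cost evaluation in Theorem 3 rests on the identity $L(x) = -\mathcal{L}V(x)$ together with Dynkin's formula (a condensed restatement of the argument, not an additional result):

```latex
\[
  \mathbb{E}_{x_0}\!\left[\int_{0}^{t} L(x(s))\,ds\right]
  = -\,\mathbb{E}_{x_0}\!\left[\int_{0}^{t} \mathcal{L}V(x(s))\,ds\right]
  = V(x_0) - \mathbb{E}_{x_0}[V(x(t))]
  \;\longrightarrow\; V(x_0) \quad \text{as } t \to \infty,
\]
% since E_{x0}[V(x(t))] -> 0 by the L^1 convergence established above,
% which yields J(x_0) = V(x_0).
```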
Next, we use the framework developed in Theorem 3 to obtain a characterization of the stochastic optimal feedback controllers that guarantee closed-loop finite time stabilization in probability. Specifically, sufficient conditions for optimality are given in a form that corresponds to a steady-state version of the stochastic Hamilton–Jacobi–Bellman equation. To address the problem of characterizing finite time stabilizing feedback controllers, consider the nonlinear controlled stochastic dynamical system
$$dx(t) = F(x(t), u(t))\,dt + D(x(t), u(t))\,dw(t), \quad x(0) = x_0, \quad t \ge 0, \tag{53}$$
where the stochastic process $u(t)$, $t \ge 0$, represents the control input and $\mathcal{U}$ is the set of admissible control inputs. For every $t \ge 0$, the random variables $x(t)$ and $u(t)$ take values in the state space $\mathbb{R}^n$ and the control space $\mathbb{R}^m$, respectively. The mappings $F : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n$ and $D : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^{n \times d}$ satisfy (8) and (9), with $f(x)$ replaced by $F(x, u)$ and $D(x)$ replaced by $D(x, u)$, uniformly in u, and $F(0, 0) = 0$ and $D(0, 0) = 0$.
We assume that every $u(\cdot) \in \mathcal{U}$ is an $\mathbb{R}^m$-valued Markov control process. An input process is a Markov control process if there exists a function $v : [0, \infty) \times \mathbb{R}^n \to \mathbb{R}^m$ such that $u(t) = v(t, x(t))$, $t \ge 0$. Note that the class of Markov controls encompasses both time-varying inputs (i.e., possibly open-loop control input processes) and state-dependent inputs (i.e., possibly a state feedback control input $u(t) = \phi(x(t))$, where $\phi : \mathbb{R}^n \to \mathbb{R}^m$ is a feedback control law). Note that, given the feedback control law $\phi(\cdot)$, the closed-loop system (53) takes the form
$$dx(t) = F(x(t), \phi(x(t)))\,dt + D(x(t), \phi(x(t)))\,dw(t), \quad x(0) = x_0, \quad t \ge 0. \tag{54}$$
The following theorem generalizes Theorem 7 of [17] to optimal finite time stabilization.
Theorem 4.
Consider the nonlinear stochastic dynamical system given by (53) with the nonlinear–nonquadratic performance measure
$$J(x_0, u(\cdot)) \triangleq \mathbb{E}_{x_0}\left[\int_0^{\infty} L(x(t), u(t))\,dt\right], \tag{55}$$
where $x(t)$, $t \ge 0$, is the solution to (53) with control input $u(\cdot) \in \mathcal{U}$. Furthermore, assume that there exist a two-times continuously differentiable function $V : \mathbb{R}^n \to \mathbb{R}$, constants $c > 0$ and $\alpha \in (0, 1)$, and a feedback control law $\phi : \mathbb{R}^n \to \mathbb{R}^m$ such that
$$V(0) = 0, \tag{56}$$
$$V(x) > 0, \quad x \in \mathbb{R}^n, \quad x \ne 0, \tag{57}$$
$$\phi(0) = 0, \tag{58}$$
$$\mathcal{L}_{\phi}V(x) \le -c\,(V(x))^{\alpha}, \quad x \in \mathbb{R}^n, \quad x \ne 0, \tag{59}$$
$$V(x) \to \infty \ \text{as} \ \|x\| \to \infty, \tag{60}$$
$$H(x, \phi(x)) = 0, \quad x \in \mathbb{R}^n, \tag{61}$$
$$H(x, u) \ge 0, \quad (x, u) \in \mathbb{R}^n \times \mathbb{R}^m, \tag{62}$$
where
$$\mathcal{L}_u V(x) \triangleq \frac{\partial V(x)}{\partial x}\,F(x, u) + \frac{1}{2}\operatorname{tr}\left[D^{\mathsf{T}}(x, u)\,\frac{\partial^{2} V(x)}{\partial x^{2}}\,D(x, u)\right]$$
and the Hamiltonian $H(x, u) \triangleq L(x, u) + \mathcal{L}_u V(x)$, with $\mathcal{L}_{\phi}V(x) \triangleq \mathcal{L}_u V(x)|_{u = \phi(x)}$.
Then, with the feedback control law $u = \phi(x)$, the closed-loop system (54) is globally finite time stable in probability. Moreover, there exists a stochastic settling time τ such that
$$\mathbb{E}_{x_0}[\tau] \le \frac{(V(x_0))^{1-\alpha}}{c\,(1-\alpha)}, \quad x_0 \in \mathbb{R}^n, \tag{64}$$
and
$$J(x_0, \phi(x(\cdot))) = V(x_0), \quad x_0 \in \mathbb{R}^n. \tag{65}$$
In addition, the feedback control law $\phi(\cdot)$ minimizes (55) in the sense that
$$J(x_0, \phi(x(\cdot))) = \min_{u(\cdot) \in \mathcal{S}(x_0)} J(x_0, u(\cdot)), \tag{66}$$
where $\mathcal{S}(x_0)$ denotes the set of controllers given by
$$\mathcal{S}(x_0) \triangleq \left\{u(\cdot) \in \mathcal{U} : \lim_{t \to \infty} \mathbb{E}[V(x(t))] = 0\right\}, \tag{67}$$
where $x(t)$, $t \ge 0$, is the solution to (53) with control input $u(\cdot)$ and $x(0) = x_0$.
Proof.
Global finite time stability in probability along with the existence of a stochastic settling time such that (64) holds are a direct consequence of (56)−(59) by applying Theorem 2 to the closed-loop system (54).
To show that , note that since is Borel measurable, is -progressively measurable and (59) and (61) imply that
Thus,
Now, using an analogous argument as in the proof of Theorem 3, it follows that
for , and hence, .
Next, let , and note that, by Itô’s lemma [22],
It now follows using an analogous argument as the one in the proof of Theorem 3 that the stochastic integral in (69) is a martingale, and hence,
where exists since and (10) holds. Next, taking the limit as yields
Since , the control law satisfies , and hence, it follows from (71) that
Now, combining (62) and (72) yields (66), completing the proof. □
Observe that (61) represents the steady-state form of the stochastic Hamilton–Jacobi–Bellman equation. To see this, recall that the general form of the stochastic Hamilton–Jacobi–Bellman equation is given by
$$0 = \frac{\partial V(t, x)}{\partial t} + \min_{u \in \mathbb{R}^m}\left[L(x, u) + \frac{\partial V(t, x)}{\partial x}\,F(x, u) + \frac{1}{2}\operatorname{tr}\left(D^{\mathsf{T}}(x, u)\,\frac{\partial^{2} V(t, x)}{\partial x^{2}}\,D(x, u)\right)\right], \tag{77}$$
which serves as the fundamental condition for optimal control in stochastic, time-dependent systems over either finite or infinite horizons [21]. When the system is time-invariant and considered over an infinite time horizon, the value function becomes stationary, that is, $V(t, x) = V(x)$; as a result, (77) simplifies to (61) and (62). These equations ensure optimality within the class of admissible control policies $\mathcal{S}(x_0)$. Notably, it is not necessary to explicitly characterize the admissible set $\mathcal{S}(x_0)$, and the optimal feedback control law is independent of the specific initial state $x_0$.
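Concretely, setting $\partial V/\partial t = 0$ in (77) yields the stationary relations used in Theorem 4 (a sketch of the reduction in the notation introduced above):

```latex
% With V = V(x), the time derivative drops out of (77) and
%   0 = min_u [ L(x,u) + L_u V(x) ] = min_u H(x,u),
% which is equivalent to the pair of conditions
\[
  H(x, \phi(x)) = 0
  \qquad\text{and}\qquad
  H(x, u) \ge 0, \quad (x, u) \in \mathbb{R}^{n} \times \mathbb{R}^{m},
\]
% i.e., (61) and (62), with the optimal feedback u = phi(x) attaining the minimum.
```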
To guarantee that the closed-loop system given by (54) is globally finite time stable in probability, Theorem 4 requires that the value function V satisfy Conditions (56), (57), and (59). These requirements ensure that V serves as a Lyapunov function for the closed-loop system. However, these Lyapunov conditions are not necessary for establishing optimality. In particular, if V is twice continuously differentiable and of polynomial growth, and if the control signal lies in the admissible set $\mathcal{S}(x_0)$, then satisfaction of (61) and (62) leads to the fulfillment of (65) and (66). It is also crucial to emphasize that, in contrast to the deterministic framework ([16], p. 857), establishing optimality in the stochastic setting necessitates an additional condition, namely, the transversality condition stated in (67). (For more detailed discussions on this requirement, the reader is referred to [32], p. 323, [33], p. 125, and [34], p. 139.)
5. Inverse Optimal Stochastic Control for Nonlinear Affine Systems
In this section, we specialize Theorem 4 to nonlinear stochastic dynamical systems that are affine in the control. We develop nonlinear feedback controllers that minimize a nonlinear–nonquadratic performance criterion. This is achieved by selecting control laws such that the infinitesimal generator of a Lyapunov function for the closed-loop system satisfies (59) while providing sufficient conditions for the existence of solutions to the stochastic Hamilton–Jacobi–Bellman equation. As a result, we obtain a family of globally finite time stabilizing (in probability) controllers, each of which is parameterized by the cost functional being optimized.
The feedback control laws introduced here are derived from an inverse optimal stochastic control problem [16,18,19,35,36,37,38,39]. Instead of solving the stochastic steady-state Hamilton–Jacobi–Bellman equation directly for a given cost functional—an often complex task for higher dimensional systems—we define a family of stochastically finite time stabilizing controllers that optimize a derived cost functional. This approach offers flexibility in shaping the control laws. The performance integrand explicitly depends on the nonlinear system dynamics, the Lyapunov function for the closed-loop system, and the finite time stabilizing control laws. The coupling between these elements is governed by the stochastic Hamilton–Jacobi–Bellman equation. By adjusting the parameters in both the Lyapunov function and the performance criterion, this framework allows for the design of a class of globally finite time stabilizing (in probability) controllers that can meet specific performance and response requirements for the closed-loop system.
Consider the nonlinear stochastic dynamical system affine in the control given by
$$dx(t) = \left[f(x(t)) + G(x(t))u(t)\right]dt + D(x(t))\,dw(t), \quad x(0) = x_0, \quad t \ge 0, \tag{78}$$
where, for every $t \ge 0$, the random variables $x(t)$ and $u(t)$ take values in the state space $\mathbb{R}^n$ and the control space $\mathbb{R}^m$, respectively. The mappings $f : \mathbb{R}^n \to \mathbb{R}^n$, $G : \mathbb{R}^n \to \mathbb{R}^{n \times m}$, and $D : \mathbb{R}^n \to \mathbb{R}^{n \times d}$ satisfy (8) and (9) uniformly in u. Furthermore, we consider performance integrands of the form
$$L(x, u) = L_1(x) + L_2(x)u + u^{\mathsf{T}}R_2(x)u, \tag{79}$$
where $L_1 : \mathbb{R}^n \to \mathbb{R}$, $L_2 : \mathbb{R}^n \to \mathbb{R}^{1 \times m}$, and $R_2 : \mathbb{R}^n \to \mathbb{R}^{m \times m}$, with $R_2(x)$ positive definite for all $x \in \mathbb{R}^n$, such that (55) becomes
$$J(x_0, u(\cdot)) = \mathbb{E}_{x_0}\left[\int_0^{\infty}\left(L_1(x(t)) + L_2(x(t))u(t) + u^{\mathsf{T}}(t)R_2(x(t))u(t)\right)dt\right]. \tag{80}$$
Theorem 5.
Consider the nonlinear stochastic dynamical system given by (78) with performance functional (80). Assume that there exist a two-times continuously differentiable radially unbounded function $V : \mathbb{R}^n \to \mathbb{R}$, constants $c > 0$ and $\alpha \in (0, 1)$, and a function $L_2 : \mathbb{R}^n \to \mathbb{R}^{1 \times m}$ such that
$$V(0) = 0, \tag{81}$$
$$V(x) > 0, \quad x \in \mathbb{R}^n, \quad x \ne 0, \tag{82}$$
$$L_2(0) = 0, \tag{83}$$
$$\frac{\partial V(x)}{\partial x}\left[f(x) + G(x)\phi(x)\right] + \frac{1}{2}\operatorname{tr}\left[D^{\mathsf{T}}(x)\,\frac{\partial^{2} V(x)}{\partial x^{2}}\,D(x)\right] \le -c\,(V(x))^{\alpha}, \quad x \in \mathbb{R}^n, \quad x \ne 0, \tag{84}$$
where $\phi(\cdot)$ is given by (86) below. Then the closed-loop system
$$dx(t) = \left[f(x(t)) + G(x(t))\phi(x(t))\right]dt + D(x(t))\,dw(t), \quad x(0) = x_0, \quad t \ge 0, \tag{85}$$
is globally finite time stable in probability with feedback control law
$$\phi(x) = -\tfrac{1}{2}R_2^{-1}(x)\left[L_2^{\mathsf{T}}(x) + G^{\mathsf{T}}(x)\left(\frac{\partial V(x)}{\partial x}\right)^{\!\mathsf{T}}\right]. \tag{86}$$
Moreover, there exists a stochastic settling time τ such that
$$\mathbb{E}_{x_0}[\tau] \le \frac{(V(x_0))^{1-\alpha}}{c\,(1-\alpha)}, \quad x_0 \in \mathbb{R}^n. \tag{87}$$
Furthermore, if
$$L_1(x) = \phi^{\mathsf{T}}(x)R_2(x)\phi(x) - \frac{\partial V(x)}{\partial x}\,f(x) - \frac{1}{2}\operatorname{tr}\left[D^{\mathsf{T}}(x)\,\frac{\partial^{2} V(x)}{\partial x^{2}}\,D(x)\right], \quad x \in \mathbb{R}^n, \tag{88}$$
where $\phi(\cdot)$ is given by (86), then the performance functional (80), with
$$L(x, u) = L_1(x) + L_2(x)u + u^{\mathsf{T}}R_2(x)u, \tag{89}$$
is minimized in the sense of (66), and (65) holds.
Proof.
The result is a direct consequence of Theorem 4 with $F(x, u) = f(x) + G(x)u$, $D(x, u) = D(x)$, and $L(x, u)$ given by (89). Specifically, with (79), we have
$$H(x, u) = L_1(x) + L_2(x)u + u^{\mathsf{T}}R_2(x)u + \frac{\partial V(x)}{\partial x}\left[f(x) + G(x)u\right] + \frac{1}{2}\operatorname{tr}\left[D^{\mathsf{T}}(x)\,\frac{\partial^{2} V(x)}{\partial x^{2}}\,D(x)\right].$$
Now, (86) is obtained by setting $\partial H/\partial u = 0$. With (86), it follows that (84) implies (59).
Next, since V is twice continuously differentiable and $x = 0$ is a local minimizer of V, it follows that $\partial V(0)/\partial x = 0$; hence, since $L_2(0) = 0$ by (83), it follows that $\phi(0) = 0$, which implies (58). Next, with $L_1(\cdot)$ given by (88), the Hamiltonian evaluated at $u = \phi(x)$ given by (86) vanishes, and hence, (61) holds. Since $H(x, u) = H(x, \phi(x)) + (u - \phi(x))^{\mathsf{T}}R_2(x)(u - \phi(x))$ and $R_2(x)$ is positive definite for all $x \in \mathbb{R}^n$, (62) holds. The result now follows from Theorem 4. □
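To see how the feedback law arises, note that (86) is precisely the stationary point of the Hamiltonian in u under the quadratic cost structure (79) (a short derivation, in the notation above):

```latex
% Stationarity of H(x,u) in u: with L(x,u) = L1(x) + L2(x) u + u^T R2(x) u,
%   dH/du = L2^T(x) + 2 R2(x) u + G^T(x) (dV/dx)^T = 0,
% and solving for u (R2(x) is positive definite, hence invertible) gives
\[
  \phi(x)
  = -\tfrac{1}{2} R_{2}^{-1}(x)
    \left[ L_{2}^{\mathsf{T}}(x)
    + G^{\mathsf{T}}(x)\left(\frac{\partial V(x)}{\partial x}\right)^{\!\mathsf{T}} \right],
\]
% with second-order condition d^2H/du^2 = 2 R2(x) > 0 confirming a minimum.
```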
Remark 1.
It is important to note that the inverse optimal control design approach provides a framework for constructing the Lyapunov function for the closed-loop system that serves as an optimal value function and, as shown in [16,18,39], achieves desired stability margins. Specifically, nonlinear inverse optimal controllers that minimize a meaningful (in the terminology of [18,39]) nonlinear–nonquadratic performance criterion involving a nonlinear–nonquadratic, nonnegative-definite function of the state and a quadratic positive-definite function of the feedback control are shown to possess gain and sector margin guarantees to input nonlinearities in the conic sector $(\tfrac{1}{2}, \infty)$.
Example 1.
Consider the axisymmetric spacecraft with stochastic disturbances given by [40] (p. 753)
where $I_1$, $I_2$, and $I_3$ are the principal moments of inertia of the spacecraft such that $I_1 = I_2$; $x_1$, $x_2$, and $x_3$ are the angular velocities of the spacecraft in the body frame; $u_1$ and $u_2$ are the spacecraft control moments; and $w_1(\cdot)$ and $w_2(\cdot)$ are standard Wiener processes capturing perturbations in the system dynamics.
For this example, we seek state feedback control laws $u = \phi(x)$, where $\phi = [\phi_1, \phi_2]^{\mathsf{T}}$, such that the performance measure
is minimized in the sense of (66). Note that, in this case, and
Here, we apply Theorem 5 to find an inverse optimal feedback controller such that the spacecraft (90) and (91) is globally finite time stable in probability and (92) is minimized. To accomplish this, consider the Lyapunov function candidate given by
Note that V satisfies (81) and (82). Furthermore, note that
which shows that (84) is satisfied. Hence, it follows from Theorem 5 that the zero solution of the closed-loop system (90) and (91) is globally finite time stable in probability with the feedback control law given by
Next, note that
which shows that (88) is satisfied with . Hence, it follows from Theorem 5 that the performance functional (92), with
is minimized in the sense of (66).
For the chosen system parameters and initial conditions, Figure 1 and Figure 2 show the mean, along with a one standard deviation band (shaded area), of the closed-loop system states and the control inputs over the simulated sample paths.
Figure 1.
System states versus time for Example 1. The bold lines show the average states over the simulated sample paths, whereas the shaded area shows one standard deviation from the average.
Figure 2.
Control inputs versus time for Example 1. The bold lines show the average control inputs over the simulated sample paths, whereas the shaded area shows one standard deviation from the average.
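Plots such as those in Figures 1 and 2 can be reproduced in outline with the following Monte Carlo sketch; the scalar closed-loop drift, diffusion, and parameters below are placeholders of ours, to be replaced by the closed-loop spacecraft dynamics and controller of this example.

```python
import numpy as np
import matplotlib.pyplot as plt

def simulate_paths(drift, diffusion, x0, T=10.0, dt=1e-3, n_paths=200, rng=None):
    """Euler-Maruyama sample paths of a scalar closed-loop SDE (placeholder dynamics)."""
    rng = np.random.default_rng(0) if rng is None else rng
    n_steps = int(T / dt)
    X = np.empty((n_paths, n_steps + 1))
    X[:, 0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)  # independent Brownian increments
        X[:, k + 1] = X[:, k] + drift(X[:, k]) * dt + diffusion(X[:, k]) * dw
    return np.linspace(0.0, T, n_steps + 1), X

# Placeholder closed-loop dynamics (stand-ins for the controlled spacecraft):
drift = lambda x: -2.0 * np.sign(x) * np.abs(x) ** (1.0 / 3.0)
diffusion = lambda x: 0.3 * np.abs(x) ** (2.0 / 3.0)

t, X = simulate_paths(drift, diffusion, x0=1.0)
mean, std = X.mean(axis=0), X.std(axis=0)
plt.plot(t, mean, lw=2)                                 # average over sample paths
plt.fill_between(t, mean - std, mean + std, alpha=0.3)  # one standard deviation band
plt.xlabel("time"); plt.ylabel("state"); plt.show()
```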
6. Conclusions
In this paper, we formulated an optimal control problem for finite time stochastic stabilization and derived sufficient conditions for designing nonlinear feedback controllers that ensure finite time stability in probability of the closed-loop system. The approach is based on a steady-state stochastic Hamilton–Jacobi–Bellman framework, where the notion of optimality is associated with a Lyapunov function that satisfies a scalar differential inequality involving fractional powers. Furthermore, we developed inverse optimal feedback controllers for affine nonlinear stochastic systems. In future research, we will extend this framework to finite time stability in probability for semi-Markov jump systems [41], as well as for Itô diffusion processes with Markovian switching [42]. In addition, we will address the problem of fixed time optimal stabilization, as well as develop extensions to hybrid stochastic finite time and fixed time optimal control.
Author Contributions
R.C.: Conceptualization, Formal Analysis, Software, Visualization, and Writing—Original Draft. W.M.H.: Conceptualization, Formal Analysis, Writing—Review and Editing, Supervision, and Funding Acquisition. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported, in part, by the Air Force Office of Scientific Research under Grant FA9550-25-1-0184.
Data Availability Statement
No data were used for the research described in this article.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Agarwal, R.; Lakshmikantham, V. Uniqueness and Nonuniqueness Criteria for Ordinary Differential Equations; World Scientific: Singapore, 1993. [Google Scholar]
- Yamada, T.; Watanabe, S. On the Uniqueness of Solutions of Stochastic Differential Equations. J. Math. Kyoto Univ. 1971, 11, 155–167. [Google Scholar] [CrossRef]
- Watanabe, S.; Yamada, T. On the Uniqueness of Solutions of Stochastic Differential Equations II. J. Math. Kyoto Univ. 1971, 11, 553–563. [Google Scholar] [CrossRef]
- Situ, R. Theory of Stochastic Differential Equations with Jumps and Applications; Springer: New York, NY, USA, 2005. [Google Scholar]
- Roxin, E. On finite stability in control systems. Rend. Circ. Mat. Palermo 1966, 15, 273–282. [Google Scholar] [CrossRef]
- Bhat, S.P.; Bernstein, D.S. Finite-Time Stability of Continuous Autonomous Systems. SIAM J. Control Optim. 2000, 38, 751–766. [Google Scholar] [CrossRef]
- Bhat, S.P.; Bernstein, D.S. Geometric homogeneity with applications to finite-time stability. Math. Control. Signals Syst. 2005, 17, 101–127. [Google Scholar] [CrossRef]
- Moulay, E.; Perruquetti, W. Finite time stability conditions for non-autonomous continuous systems. Int. J. Control 2008, 81, 797–803. [Google Scholar] [CrossRef]
- Haddad, W.M.; Nersesov, S.G.; Du, L. Finite-time stability for time-varying nonlinear dynamical systems. In Proceedings of the 2008 American Control Conference, Seattle, WA, USA, 11–13 June 2008; pp. 4135–4139. [Google Scholar] [CrossRef]
- Haddad, W.M.; L’Afflitto, A. Finite-Time Partial Stability, Stabilization, and Optimal Feedback Control. J. Frankl. Inst. 2015, 352, 2329–2357. [Google Scholar] [CrossRef]
- Chen, W.; Jiao, L.C. Finite-time stability theorem of stochastic nonlinear systems. Automatica 2010, 46, 2105–2108. [Google Scholar] [CrossRef]
- Yin, J.; Khoo, S.; Man, Z.; Yu, X. Finite-time stability and instability of stochastic nonlinear systems. Automatica 2011, 47, 2671–2677. [Google Scholar] [CrossRef]
- Rajpurohit, T.; Haddad, W.M. Stochastic finite-time partial stability, partial-state stabilization, and finite-time optimal feedback control. Math. Control. Signals Syst. 2017, 29, 10. [Google Scholar] [CrossRef]
- Zhang, W.; Yao, L. New results on finite-time stability and instability theorems for stochastic nonlinear time-varying systems. Sci. China Inf. Sci. 2025, 68, 122203. [Google Scholar] [CrossRef]
- Yin, J.; Khoo, S. Comments on “Finite-time stability theorem of stochastic nonlinear systems” [Automatica 46 (2010) 2105–2108]. Automatica 2011, 47, 1542–1543. [Google Scholar] [CrossRef]
- Haddad, W.M.; Chellaboina, V. Nonlinear Dynamical Systems and Control: A Lyapunov-Based Approach; Princeton University Press: Princeton, NJ, USA, 2008. [Google Scholar]
- Lanchares, M.; Haddad, W.M. Nonlinear Optimal Control for Stochastic Dynamical Systems. Mathematics 2024, 12, 1–30. [Google Scholar] [CrossRef]
- Freeman, R.; Kokotovic, P. Inverse optimality in robust stabilization. SIAM J. Control Optim. 1996, 34, 1365–1391. [Google Scholar] [CrossRef]
- Deng, H.; Krstic, M. Stochastic nonlinear stabilization–Part II: Inverse optimality. Syst. Contr. Lett. 1997, 32, 151–159. [Google Scholar] [CrossRef]
- Khasminskii, R. Stochastic Stability of Differential Equations, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
- Arnold, L. Stochastic Differential Equations: Theory and Applications; Wiley-Interscience: New York, NY, USA, 1974. [Google Scholar]
- Øksendal, B. Stochastic Differential Equations: An Introduction with Applications, 6th ed.; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
- Mao, X. Stochastic Differential Equations and Applications, 2nd ed.; Woodhead Publishing: Cambridge, UK, 2007. [Google Scholar]
- Le Gall, J.F. Brownian Motion, Martingales, and Stochastic Calculus; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
- Gard, T.C. Introduction to Stochastic Differential Equations; Marcel Dekker: New York, NY, USA, 1988. [Google Scholar]
- Klebaner, F.C. Introduction to Stochastic Calculus with Applications, 3rd ed.; Imperial College Press: London, UK, 2012. [Google Scholar]
- Lanchares, M.; Haddad, W.M. Dissipative Stochastic Dynamical Systems. Syst. Control Lett. 2023, 172, 105451. [Google Scholar] [CrossRef]
- Chen, W.; Jiao, L.C. Correspondence: Authors’ reply to “Comments on ‘Finite-time stability theorem of stochastic nonlinear systems [Automatica 46 (2010) 2105-2108]”’. Automatica 2011, 47, 1544–1545. [Google Scholar] [CrossRef]
- Billingsley, P. Probability and Measure, Anniversary edition; John Wiley & Sons: Hoboken, NJ, USA, 2012. [Google Scholar]
- Apostol, T.M. Mathematical Analysis; Addison-Wesley: Reading, MA, USA, 1974. [Google Scholar]
- Shreve, S. Stochastic Calculus for Finance II: Continuous-Time Models; Springer: New York, NY, USA, 2004. [Google Scholar]
- Kushner, H. Introduction to Stochastic Control; Holt, Rinehart and Winston: New York, NY, USA, 1971. [Google Scholar]
- Chang, F.R. Stochastic Optimization in Continuous Time; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
- Fleming, W.H.; Soner, H.M. Controlled Markov Processes and Viscosity Solutions, 2nd ed.; Springer: New York, NY, USA, 2006. [Google Scholar]
- Moylan, P.; Anderson, B. Nonlinear regulator theory and an inverse optimal control problem. IEEE Trans. Autom. Control 1973, 18, 460–465. [Google Scholar] [CrossRef]
- Molinari, B. The stable regulator problem and its inverse. IEEE Trans. Autom. Control 1973, 18, 454–459. [Google Scholar] [CrossRef]
- Jacobson, D.H. Extensions of Linear-Quadratic Control Optimization and Matrix Theory; Academic Press: New York, NY, USA, 1977. [Google Scholar]
- Jacobson, D.H.; Martin, D.H.; Pachter, M.; Geveci, T. Extensions of Linear-Quadratic Control Theory; Springer: Berlin/Heidelberg, Germany, 1980. [Google Scholar]
- Sepulchre, R.; Jankovic, M.; Kokotovic, P. Constructive Nonlinear Control; Springer: London, UK, 1997. [Google Scholar]
- Wie, B. Space Vehicle Dynamics and Control; American Institute of Aeronautics and Astronautics: Reston, VA, USA, 1998. [Google Scholar]
- Liu, J.; Colaneri, P.; Bolzern, P.; Li, Z.Y. Asymptotic stability analysis of nonlinear stochastic semi-Markov jump systems. Int. J. Robust Nonlinear Control 2023, 33, 8615–8630. [Google Scholar] [CrossRef]
- Ning, Z.; Zhang, L.; Lam, J. Stability and stabilization of a class of stochastic switching systems with lower bound of sojourn time. Automatica 2018, 92, 18–28. [Google Scholar] [CrossRef]