Abstract
This paper presents a comprehensive framework addressing optimal nonlinear analysis and feedback control synthesis for nonlinear stochastic dynamical systems. The focus lies on establishing connections between stochastic Lyapunov theory and stochastic Hamilton–Jacobi–Bellman theory within a unified perspective. We demonstrate that the closed-loop nonlinear system’s asymptotic stability in probability is ensured through a Lyapunov function, identified as the solution to the steady-state form of the stochastic Hamilton–Jacobi–Bellman equation. This dual assurance guarantees both stochastic stability and optimality. Additionally, optimal feedback controllers for affine nonlinear systems are developed using an inverse optimality framework tailored to the stochastic stabilization problem. Furthermore, the paper derives stability margins for optimal and inverse optimal stochastic feedback regulators. Gain, sector, and disk margin guarantees are established for nonlinear stochastic dynamical systems controlled by nonlinear optimal and inverse optimal Hamilton–Jacobi–Bellman controllers.
Keywords:
Lyapunov theory; stochastic optimal control; inverse optimality; relative stability margins
MSC:
37H05; 37H30; 93E20; 49L12
1. Introduction
Under specific circumstances, nonlinear controllers offer notable advantages over linear controllers. This is particularly evident when dealing with nonlinear plant dynamics and/or system measurements [1,2,3,4], nonadditive or non-Gaussian plant/measurement disturbances, nonquadratic performance measures [5,6,7,8,9], uncertain plant models [10,11,12], or constrained control signals/state amplitudes [13,14]. In [15], the current status of deterministic continuous-time nonlinear-nonquadratic optimal control is surveyed, emphasizing the use of Lyapunov functions for stability and optimality (see [15,16,17] and the references therein).
Expanding on the findings of [15,16,18,19], this paper introduces a framework for analyzing and designing feedback controllers for nonlinear stochastic dynamical systems. Specifically, it addresses a feedback stochastic optimal control problem with a nonlinear-nonquadratic performance measure over an infinite horizon. The key is the connection between the performance measure and a Lyapunov function, ensuring asymptotic stability in probability for the nonlinear closed-loop system. The framework establishes the groundwork for extending linear-quadratic control to nonlinear-nonquadratic problems in stochastic dynamical systems.
The focus lies on the role of the Lyapunov function in ensuring stochastic stability and its seamless connection to the steady-state solution of the stochastic Hamilton–Jacobi–Bellman equation, characterizing the optimal nonlinear feedback controller. To simplify the solution of the stochastic steady-state Hamilton–Jacobi–Bellman equation, the paper adopts an approach of parameterizing a family of stochastically stabilizing controllers. This corresponds to addressing an inverse optimal stochastic control problem [20,21,22,23,24,25,26].
The inverse optimal control design approach constructs the Lyapunov function for the closed-loop system, serving as an optimal value function. It achieves desired stability margins, particularly for nonlinear inverse optimal controllers minimizing a meaningful nonlinear-nonquadratic performance criterion. The paper derives stability margins for optimal and inverse optimal nonlinear stochastic feedback regulators, considering gain, sector, and disk margin guarantees. These guarantees are obtained for nonlinear stochastic dynamical systems controlled by nonlinear optimal and inverse optimal Hamilton–Jacobi–Bellman controllers.
Furthermore, the paper establishes connections between stochastic stability margins, stochastic meaningful inverse optimality, and stochastic dissipativity [27,28], showcasing the equivalence between stochastic dissipativity and optimality for stochastic dynamical systems. Specifically, utilizing extended Kalman–Yakubovich–Popov conditions characterizing stochastic dissipativity, our optimal feedback control law satisfies a return difference inequality predicated on the infinitesimal generator of a controlled Markov diffusion process, connecting optimality to stochastic dissipativity with a specific quadratic supply rate. This integrated framework provides a comprehensive understanding of optimal nonlinear control strategies for stochastic dynamical systems, encompassing stability, optimality, and dissipativity considerations.
2. Mathematical Preliminaries
We start by reviewing some basic results on nonlinear stochastic dynamical systems [29,30,31,32]. First, however, we require some definitions. A sample space $\Omega$ is the set of possible outcomes of an experiment. Given a sample space $\Omega$, a $\sigma$-algebra $\mathcal{F}$ on $\Omega$ is a collection of subsets of $\Omega$ such that $\Omega \in \mathcal{F}$; if $E \in \mathcal{F}$, then $\Omega \setminus E \in \mathcal{F}$; and if $E_1, E_2, \ldots \in \mathcal{F}$, then $\cup_{i=1}^{\infty} E_i \in \mathcal{F}$. The pair $(\Omega, \mathcal{F})$ is called a measurable space and a probability measure $\mathbb{P}$ defined on $(\Omega, \mathcal{F})$ is a function $\mathbb{P} : \mathcal{F} \to [0, 1]$ such that $\mathbb{P}(\Omega) = 1$. Furthermore, if $E_i \in \mathcal{F}$, $i \in \mathbb{N}$, and $E_i \cap E_j = \emptyset$, $i \neq j$, then $\mathbb{P}(\cup_{i=1}^{\infty} E_i) = \sum_{i=1}^{\infty} \mathbb{P}(E_i)$. The triple $(\Omega, \mathcal{F}, \mathbb{P})$ is called a probability space. The subsets of $\Omega$ belonging to $\mathcal{F}$ are called $\mathcal{F}$-measurable sets. A probability space is complete if every subset of every null set is measurable.
The $\sigma$-algebra generated by the open sets in $\mathbb{R}^n$, denoted by $\mathcal{B}^n$, is called the Borel σ-algebra and the elements of $\mathcal{B}^n$ are called Borel sets. Given the probability space $(\Omega, \mathcal{F}, \mathbb{P})$, a random variable $x$ is a mapping $x : \Omega \to \mathbb{R}^n$ such that $x^{-1}(B) \in \mathcal{F}$ for all Borel sets $B \in \mathcal{B}^n$. That is, x is $\mathcal{F}$-measurable. A stochastic process $\{x(t)\}_{t \in T}$ is a collection of random variables defined on the complete probability space $(\Omega, \mathcal{F}, \mathbb{P})$ indexed by the set $T$ that take values on a common measurable space $(\mathbb{R}^n, \mathcal{B}^n)$. Since $T = [0, \infty)$, we say that $\{x(t)\}_{t \geq 0}$ is a continuous-time stochastic process.
Occasionally, we write $x(t, \omega)$ for $x(t)$ to denote the explicit dependence of the random variable on the outcome $\omega \in \Omega$. For every fixed time $t \in T$, the random variable $x(t)$ assigns a vector $x(t, \omega)$ to every outcome $\omega \in \Omega$, and for every fixed $\omega \in \Omega$, the mapping $t \mapsto x(t, \omega)$ generates a sample path of the stochastic process $\{x(t)\}_{t \geq 0}$, where for convenience we write $x(t)$ to denote the stochastic process $\{x(t)\}_{t \geq 0}$. In this paper, $T = [0, \infty)$ and $x(t)$, $t \geq 0$, takes values in $\mathbb{R}^n$.
A filtration $\{\mathcal{F}_t\}_{t \geq 0}$ on $(\Omega, \mathcal{F}, \mathbb{P})$ is a collection of sub-$\sigma$-fields of $\mathcal{F}$, indexed by $t \geq 0$, such that $\mathcal{F}_s \subseteq \mathcal{F}_t$, $0 \leq s \leq t$. A filtration is complete if $\mathcal{F}_0$ contains the $\mathbb{P}$-negligible sets. The stochastic process $x(\cdot)$ is progressively measurable with respect to $\{\mathcal{F}_t\}_{t \geq 0}$ if, for every $t \geq 0$, the map $(s, \omega) \mapsto x(s, \omega)$ defined on $[0, t] \times \Omega$ is $\mathcal{B}([0, t]) \otimes \mathcal{F}_t$-measurable, where $\mathcal{B}([0, t])$ denotes the Borel $\sigma$-algebra on $[0, t]$. The stochastic process $x(\cdot)$ is said to be adapted with respect to $\{\mathcal{F}_t\}_{t \geq 0}$, or simply $\mathcal{F}_t$-adapted, if $x(t)$ is $\mathcal{F}_t$-measurable for every $t \geq 0$. An adapted stochastic process with right continuous (or left continuous) sample paths is progressively measurable [33]. We say that a stochastic process satisfies the Markov property if the conditional probability distribution of the future states of the stochastic process depends only on the present state.
The stochastic process $x(\cdot)$ is a martingale with respect to the filtration $\{\mathcal{F}_t\}_{t \geq 0}$ if it is $\mathcal{F}_t$-adapted, $\mathbb{E}[|x(t)|] < \infty$, $t \geq 0$, and $\mathbb{E}[x(t) \mid \mathcal{F}_s] = x(s)$, $0 \leq s \leq t$, where $\mathbb{E}$ denotes expectation and $\mathbb{E}[\,\cdot \mid \mathcal{F}_s]$ denotes conditional expectation given $\mathcal{F}_s$. If we replace the equality in $\mathbb{E}[x(t) \mid \mathcal{F}_s] = x(s)$ with “≤” (respectively, “≥”), then $x(\cdot)$ is a supermartingale (respectively, submartingale). For an additional discussion on stochastic processes, filtrations, and martingales, see [33].
In this paper, we consider controlled stochastic dynamical systems $\mathcal{G}$ of the form
$$\mathrm{d}x(t) = F(x(t), u(t))\,\mathrm{d}t + D(x(t), u(t))\,\mathrm{d}w(t), \quad x(0) = x_0, \quad t \geq 0, \qquad (1)$$
$$y(t) = h(x(t), u(t)), \qquad (2)$$
where (1) is a stochastic differential equation and (2) is an output equation. The stochastic processes $x(t)$, $u(t)$, and $y(t)$, $t \geq 0$, represent the system state, input, and output, respectively. Here, $\mathcal{U}$ is a set of admissible inputs that contains the input processes that can be applied to the system, $x_0$ is a random system initial condition vector, and $w(\cdot)$ is a d-dimensional Brownian motion process. For every $t \geq 0$, the random variables $x(t)$, $u(t)$, and $y(t)$ take values in the state space $\mathbb{R}^n$, the control space $\mathbb{R}^m$, and the output space $\mathbb{R}^l$, respectively. The measurable mappings $F : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n$, $D : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^{n \times d}$, and $h : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^l$ are known as the system drift, diffusion, and output functions.
The stochastic differential Equation (1) is interpreted as a way of expressing the integral equation
$$x(t) = x_0 + \int_0^t F(x(s), u(s))\,\mathrm{d}s + \int_0^t D(x(s), u(s))\,\mathrm{d}w(s), \quad t \geq 0, \qquad (3)$$
where the first integral in (3) is a Lebesgue integral and the second integral is an Itô integral [34]. When considering processes whose initial condition is a fixed deterministic point rather than a distribution, we will find it convenient to introduce the notation $x^{s, x_s}(t)$ to denote the solution process at time t when the initial condition at time s is the fixed point $x_s$ almost surely. Similarly, $\mathbb{P}^{s, x_s}$ and $\mathbb{E}^{s, x_s}$ denote probability and expected value, respectively, given that the initial condition at time s is the fixed point $x_s$ almost surely.
Let $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \geq 0}, \mathbb{P})$ be a fixed complete filtered probability space, $w(\cdot)$ be a $\mathcal{F}_t$-adapted Brownian motion, $u(\cdot)$ be a $\mathbb{R}^m$-valued $\mathcal{F}_t$-progressively measurable input process, and $x_0$ be a $\mathcal{F}_0$-measurable initial condition. A solution to (1) with input $u(\cdot)$ is a $\mathbb{R}^n$-valued $\mathcal{F}_t$-adapted process $x(\cdot)$ with continuous sample paths such that the integrals in (3) exist and (3) holds almost surely (a.s.) for all $t \geq 0$. For a Brownian motion disturbance, input process, and initial condition given in a prescribed probability space, the solution to (3) is known as a strong solution [35]. In this paper, we focus on strong solutions, and we will simply use the term “solution” to refer to a strong solution. A solution to (1) is unique if, for any two solutions $x_1(\cdot)$ and $x_2(\cdot)$ that satisfy (1), $x_1(t) = x_2(t)$ for all $t \geq 0$ almost surely.
We assume that every $u \in \mathcal{U}$ is a $\mathbb{R}^m$-valued Markov control process. An input process $u(\cdot)$ is a Markov control process if there exists a function $\phi : [0, \infty) \times \mathbb{R}^n \to \mathbb{R}^m$ such that $u(t) = \phi(t, x(t))$, $t \geq 0$. Note that the class of Markov controls encompasses both time-varying inputs (i.e., possibly open-loop control input processes $u(t) = \phi(t)$) as well as state-dependent inputs (i.e., possibly a state feedback control input $u(t) = \phi(x(t))$, where $\phi : \mathbb{R}^n \to \mathbb{R}^m$ is a feedback control law). If $u(\cdot)$ is a Markov control process, then the stochastic differential Equation (1) is an Itô diffusion, and if its solution is unique, then the solution is a Markov process.
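Although the developments below are analytical, sample paths of (1) are easily approximated numerically, which is useful for validating candidate controllers. The following Python sketch (not part of the original development; F, D, and the Markov control law u are placeholders to be supplied) implements the standard Euler–Maruyama discretization of the integral Equation (3). Under the Lipschitz and growth conditions of Theorem 1 below, the scheme converges strongly to the unique solution as the step size tends to zero.

```python
import numpy as np

def euler_maruyama(F, D, x0, T, dt, rng, u=lambda t, x: np.zeros(1)):
    """Approximate one sample path of dx = F(x,u) dt + D(x,u) dw on [0, T].

    F(x, u) returns an n-vector, D(x, u) an n-by-d matrix, and u(t, x) is a
    Markov control law; x0 is a deterministic initial condition.
    """
    d = D(x0, u(0.0, x0)).shape[1]
    steps = int(T / dt)
    t = np.linspace(0.0, steps * dt, steps + 1)
    x = np.empty((steps + 1, len(x0)))
    x[0] = x0
    for k in range(steps):
        uk = u(t[k], x[k])
        dw = rng.normal(0.0, np.sqrt(dt), size=d)  # Brownian increment over [t_k, t_k + dt]
        x[k + 1] = x[k] + F(x[k], uk) * dt + D(x[k], uk) @ dw
    return t, x
```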
For an Itô diffusion system $\mathcal{G}$ with solution $x(\cdot)$, the (infinitesimal) generator $\mathcal{A}$ of $x(\cdot)$ is an operator acting on the continuous function $V : \mathbb{R}^n \to \mathbb{R}$ and is defined as ([31])
$$\mathcal{A}V(x) \triangleq \lim_{t \downarrow 0} \frac{\mathbb{E}^{x}[V(x(t))] - V(x)}{t}, \quad x \in \mathbb{R}^n. \qquad (4)$$
The set of functions $V$ for which the limit in (4) exists is denoted by $\mathcal{D}(\mathcal{A})$. If $V \in \mathrm{C}^2(\mathbb{R}^n)$ has compact support, where $\mathrm{C}^2(\mathbb{R}^n)$ denotes the space of two-times continuously differentiable functions on $\mathbb{R}^n$, then $V \in \mathcal{D}(\mathcal{A})$ and $\mathcal{A}V = \mathcal{L}V$, where
$$\mathcal{L}V(x, u) \triangleq V'(x) F(x, u) + \frac{1}{2}\,\mathrm{tr}\, D^{\mathrm{T}}(x, u) V''(x) D(x, u), \qquad (5)$$
and where we write $V'(x)$ for the gradient of V at x and $V''(x)$ for the Hessian of V at x.
Note that the differential operator $\mathcal{L}$ introduced in (5) is defined for every $V \in \mathrm{C}^2(\mathbb{R}^n)$ and is characterized by the system drift and diffusion functions. We will refer to the differential operator $\mathcal{L}$ as the (infinitesimal) generator of the system $\mathcal{G}$. However, if discontinuous control inputs in the state variables are considered, then the concept of the extended generator [36] should be used.
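To make the action of the generator (5) concrete, the following sketch (illustrative only; V_grad, V_hess, F, and D are placeholders for the gradient and Hessian of V and the system drift and diffusion) evaluates LV pointwise and checks it against a hand computation for the scalar system dx = −x dt + σx dw with V(x) = x², for which (5) gives LV(x) = (σ² − 2)x².

```python
import numpy as np

def generator(V_grad, V_hess, F, D):
    """Return the map (x, u) -> LV(x,u) = V'(x) F(x,u) + (1/2) tr[D^T V''(x) D]."""
    def LV(x, u):
        Dxu = D(x, u)
        return V_grad(x) @ F(x, u) + 0.5 * np.trace(Dxu.T @ V_hess(x) @ Dxu)
    return LV

# Scalar check: dx = -x dt + sigma*x dw with V(x) = x^2.
sigma = 0.5
LV = generator(lambda x: 2 * x,                       # V'(x)
               lambda x: 2 * np.eye(1),               # V''(x)
               lambda x, u: -x,                       # drift
               lambda x, u: sigma * x.reshape(1, 1))  # diffusion (n = d = 1)
print(LV(np.array([1.0]), None))  # (sigma^2 - 2) * 1 = -1.75
```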
If $V \in \mathrm{C}^2(\mathbb{R}^n)$, then it follows from Itô’s formula [35] that the stochastic process $V(x(t))$, $t \geq 0$, satisfies
$$V(x(t)) = V(x(0)) + \int_0^t \mathcal{L}V(x(s), u(s))\,\mathrm{d}s + \int_0^t V'(x(s)) D(x(s), u(s))\,\mathrm{d}w(s), \quad t \geq 0. \qquad (6)$$
If the terms appearing in (6) are integrable and the Itô integral in (6) is a martingale, then it follows from (6) that
$$\mathbb{E}[V(x(t))] = \mathbb{E}[V(x(0))] + \mathbb{E}\left[ \int_0^t \mathcal{L}V(x(s), u(s))\,\mathrm{d}s \right], \quad t \geq 0. \qquad (7)$$
The next result is standard and establishes existence and uniqueness of solutions for the controlled Itô diffusion system (1).
Theorem 1
([32]). Consider the stochastic dynamical system (1) with initial condition $x_0$ such that $\mathbb{E}[\|x_0\|^2] < \infty$. Let $u(\cdot) \in \mathcal{U}$ be a Markov control process given by $u(t) = \phi(t, x(t))$, $t \geq 0$, such that the following conditions hold:
- (i)
- Local Lipschitz continuity. For every $N > 0$, there exists a constant $K_N > 0$ such that
$$\|F(x, \phi(t, x)) - F(y, \phi(t, y))\| + \|D(x, \phi(t, x)) - D(y, \phi(t, y))\| \leq K_N \|x - y\| \qquad (8)$$
for every $x, y \in \mathbb{R}^n$ with $\|x\| \leq N$, $\|y\| \leq N$, and every $t \geq 0$.
- (ii)
- Linear growth. There exists a constant $K > 0$ such that, for all $t \geq 0$ and $x \in \mathbb{R}^n$,
$$\|F(x, \phi(t, x))\|^2 + \|D(x, \phi(t, x))\|^2 \leq K\left(1 + \|x\|^2\right). \qquad (9)$$
Then, there exists a unique solution $x(\cdot)$ to (1) with input $u(\cdot)$. Furthermore, for every $T > 0$,
$$\mathbb{E}\left[ \sup_{0 \leq t \leq T} \|x(t)\|^2 \right] < \infty. \qquad (10)$$
Assumption 1.
For the remainder of the paper we assume that the conditions for existence and uniqueness given in Theorem 1 are satisfied for the system (1) and (2).
3. Stability Theory for Stochastic Dynamical Systems
Given a feedback control law $\phi : \mathbb{R}^n \to \mathbb{R}^m$, the closed-loop system (1) takes the form
$$\mathrm{d}x(t) = F(x(t), \phi(x(t)))\,\mathrm{d}t + D(x(t))\,\mathrm{d}w(t), \quad x(0) = x_0, \quad t \geq 0, \qquad (11)$$
where, for convenience, we have defined the closed-loop drift function $F(x) \triangleq F(x, \phi(x))$ and we have omitted the dependence of D on its second parameter so that $D(x) \triangleq D(x, \phi(x))$. In this case, the infinitesimal generator of the closed-loop system (11) is given by
$$\mathcal{L}V(x) = V'(x) F(x, \phi(x)) + \frac{1}{2}\,\mathrm{tr}\, D^{\mathrm{T}}(x) V''(x) D(x), \quad x \in \mathbb{R}^n. \qquad (12)$$
Next, we define the notion of stochastic stability for the closed-loop system (11). An equilibrium point of (11) is a point $x_{\mathrm{e}} \in \mathbb{R}^n$ such that $F(x_{\mathrm{e}}, \phi(x_{\mathrm{e}})) = 0$ and $D(x_{\mathrm{e}}) = 0$. If $x_{\mathrm{e}}$ is an equilibrium point of (11), then the constant stochastic process $x(t) \equiv x_{\mathrm{e}}$ is a solution of (11) with initial condition $x_0 = x_{\mathrm{e}}$. The following definition introduces several notions of stability in probability for the equilibrium solution $x(t) \equiv x_{\mathrm{e}}$ of the stochastic dynamical system (11). Here, the initial condition $x_0$ is assumed to be a constant, and hence, whenever we write $x(0) = x_0$ we mean that $x_0 \in \mathbb{R}^n$ is a constant vector. It is important to note that if we assume that $x_0$ is a random vector, then we replace $x(0) = x_0$ with $x(0) = x_0$ almost surely in Definition 1. As shown in ([32], p. 111), this is without loss of generality in addressing stochastic stability of an equilibrium point.
Definition 1
([29,32]). The equilibrium solution $x(t) \equiv x_{\mathrm{e}}$ to (11) is Lyapunov stable in probability if, for every $\varepsilon > 0$,
$$\lim_{x_0 \to x_{\mathrm{e}}} \mathbb{P}^{x_0}\left( \sup_{t \geq 0} \|x(t) - x_{\mathrm{e}}\| > \varepsilon \right) = 0. \qquad (13)$$
Equivalently, the equilibrium solution $x(t) \equiv x_{\mathrm{e}}$ to (11) is Lyapunov stable in probability if, for every $\varepsilon > 0$ and $\rho \in (0, 1)$, there exists $\delta = \delta(\varepsilon, \rho) > 0$ such that, for all $\|x_0 - x_{\mathrm{e}}\| < \delta$,
$$\mathbb{P}^{x_0}\left( \sup_{t \geq 0} \|x(t) - x_{\mathrm{e}}\| > \varepsilon \right) \leq \rho. \qquad (14)$$
The equilibrium solution $x(t) \equiv x_{\mathrm{e}}$ to (11) is asymptotically stable in probability if it is Lyapunov stable in probability and
$$\lim_{x_0 \to x_{\mathrm{e}}} \mathbb{P}^{x_0}\left( \lim_{t \to \infty} x(t) = x_{\mathrm{e}} \right) = 1. \qquad (15)$$
Equivalently, the equilibrium solution $x(t) \equiv x_{\mathrm{e}}$ to (11) is asymptotically stable in probability if it is Lyapunov stable in probability and, for every $\rho \in (0, 1)$, there exists $\delta = \delta(\rho) > 0$ such that if $\|x_0 - x_{\mathrm{e}}\| < \delta$, then
$$\mathbb{P}^{x_0}\left( \lim_{t \to \infty} x(t) = x_{\mathrm{e}} \right) \geq 1 - \rho. \qquad (16)$$
The equilibrium solution $x(t) \equiv x_{\mathrm{e}}$ to (11) is globally asymptotically stable in probability if it is Lyapunov stable in probability and, for all $x_0 \in \mathbb{R}^n$,
$$\mathbb{P}^{x_0}\left( \lim_{t \to \infty} x(t) = x_{\mathrm{e}} \right) = 1. \qquad (17)$$
The equilibrium solution $x(t) \equiv x_{\mathrm{e}}$ to (11) is exponentially p-stable in probability if there exist scalars $\alpha > 0$, $\beta > 0$, and $p > 0$, and $\delta > 0$ such that if $\|x_0 - x_{\mathrm{e}}\| < \delta$, then
$$\mathbb{E}^{x_0}\left[ \|x(t) - x_{\mathrm{e}}\|^p \right] \leq \alpha \|x_0 - x_{\mathrm{e}}\|^p e^{-\beta t}, \quad t \geq 0. \qquad (18)$$
If, in addition, (18) holds for all $x_0 \in \mathbb{R}^n$, then the equilibrium solution $x(t) \equiv x_{\mathrm{e}}$ to (11) is globally exponentially p-stable in probability. Finally, if $p = 2$, we say that the equilibrium solution $x(t) \equiv x_{\mathrm{e}}$ to (11) is globally exponentially mean square stable in probability.
We now provide sufficient conditions for local and global asymptotic stability in probability for the nonlinear stochastic dynamical system (11).
Theorem 2
([29]). Let $\mathcal{D} \subseteq \mathbb{R}^n$ be an open subset containing the point $x_{\mathrm{e}}$. Consider the nonlinear stochastic dynamical system (11) and assume that there exists a two-times continuously differentiable function $V : \mathcal{D} \to \mathbb{R}$ such that
$$V(x_{\mathrm{e}}) = 0, \qquad (19)$$
$$V(x) > 0, \quad x \in \mathcal{D}, \quad x \neq x_{\mathrm{e}}, \qquad (20)$$
$$\mathcal{L}V(x) \leq 0, \quad x \in \mathcal{D}. \qquad (21)$$
The equilibrium solution $x(t) \equiv x_{\mathrm{e}}$ to (11) is then Lyapunov stable in probability. If, in addition,
$$\mathcal{L}V(x) < 0, \quad x \in \mathcal{D}, \quad x \neq x_{\mathrm{e}}, \qquad (22)$$
then the equilibrium solution $x(t) \equiv x_{\mathrm{e}}$ to (11) is asymptotically stable in probability. Finally, if, in addition, $\mathcal{D} = \mathbb{R}^n$ and V is radially unbounded, then the equilibrium solution $x(t) \equiv x_{\mathrm{e}}$ to (11) is globally asymptotically stable in probability.
Finally, the next result gives a Lyapunov theorem for global exponential stability in probability.
Theorem 3
([29]). Consider the nonlinear stochastic dynamical system (11) and assume that there exist a two-times continuously differentiable function $V : \mathbb{R}^n \to \mathbb{R}$ and scalars $\alpha, \beta, \gamma, p > 0$ such that
$$\alpha \|x - x_{\mathrm{e}}\|^p \leq V(x) \leq \beta \|x - x_{\mathrm{e}}\|^p, \quad x \in \mathbb{R}^n, \qquad (23)$$
$$\mathcal{L}V(x) \leq -\gamma V(x), \quad x \in \mathbb{R}^n. \qquad (24)$$
Then the equilibrium solution $x(t) \equiv x_{\mathrm{e}}$ to (11) is globally exponentially p-stable in probability.
4. Dissipativity Theory for Stochastic Dynamical Systems
In this section, we recall several key results from [28] on stochastic dissipativity that are necessary for several results of this paper. For the dynamical system $\mathcal{G}$ given by (1) and (2), a function $r : \mathbb{R}^m \times \mathbb{R}^l \to \mathbb{R}$ is called a supply rate if, for all $t \geq s \geq 0$ and all input–output pairs $u(\cdot)$, $y(\cdot)$ of $\mathcal{G}$,
$$\int_s^t |r(u(\sigma), y(\sigma))|\,\mathrm{d}\sigma < \infty \quad \mathrm{a.s.} \qquad (25)$$
Definition 2
([28]). A nonlinear stochastic dynamical system $\mathcal{G}$ given by (1) and (2) is stochastically dissipative with respect to the supply rate r if there exists a nonnegative-definite measurable function $V_{\mathrm{s}} : \mathbb{R}^n \to \mathbb{R}$, called a storage function, such that the stochastic process $V_{\mathrm{s}}(x(t)) - \int_0^t r(u(\sigma), y(\sigma))\,\mathrm{d}\sigma$, $t \geq 0$, is a supermartingale, where $x(\cdot)$ is the solution to (1) with $u \in \mathcal{U}$. In this case, for all $t \geq s \geq 0$,
$$\mathbb{E}\left[ V_{\mathrm{s}}(x(t)) - \int_s^t r(u(\sigma), y(\sigma))\,\mathrm{d}\sigma \,\Big|\, \mathcal{F}_s \right] \leq V_{\mathrm{s}}(x(s)), \qquad (26)$$
or, equivalently, since (25) holds,
$$\mathbb{E}\left[ V_{\mathrm{s}}(x(t)) \right] - \mathbb{E}\left[ V_{\mathrm{s}}(x(s)) \right] \leq \mathbb{E}\left[ \int_s^t r(u(\sigma), y(\sigma))\,\mathrm{d}\sigma \right]. \qquad (27)$$
The next result shows that if the system storage function is two-times continuously differentiable, then, under certain regularity conditions, stochastic dissipativity given by the energetic dissipation inequality in expectation (27) can be characterized by the infinitesimal generator $\mathcal{L}$.
Theorem 4
([28]). Consider the nonlinear stochastic dynamical system $\mathcal{G}$ given by (1) and (2). Let $V_{\mathrm{s}} \in \mathrm{C}^2(\mathbb{R}^n)$ be nonnegative definite and let r be a supply rate for $\mathcal{G}$. Assume that, for all $u \in \mathcal{U}$, the stochastic process $\int_0^t V_{\mathrm{s}}'(x(s)) D(x(s), u(s))\,\mathrm{d}w(s)$, $t \geq 0$, is a martingale and $\mathbb{E}\big[\int_0^t |\mathcal{L}V_{\mathrm{s}}(x(s), u(s))|\,\mathrm{d}s\big] < \infty$, $t \geq 0$. Furthermore, assume that, for every $x \in \mathbb{R}^n$ and $u_0 \in \mathbb{R}^m$, there exists an input $u(\cdot) \in \mathcal{U}$, with $u(0) = u_0$, such that, with input $u(\cdot)$ and deterministic initial condition $x(0) = x$, the mappings $t \mapsto \mathbb{E}^{x}[\mathcal{L}V_{\mathrm{s}}(x(t), u(t))]$ and $t \mapsto \mathbb{E}^{x}[r(u(t), y(t))]$ are continuous at $t = 0$. Then, $\mathcal{G}$ is stochastically dissipative with respect to the supply rate r and with the storage function $V_{\mathrm{s}}$ if and only if, for every $x \in \mathbb{R}^n$ and $u \in \mathbb{R}^m$,
$$\mathcal{L}V_{\mathrm{s}}(x, u) \leq r(u, h(x, u)). \qquad (28)$$
The next theorem shows that the regularity conditions needed in Theorem 4 to characterize dissipativity using the power balance inequality (28) are satisfied for a broad class of stochastic dynamical systems. For the statement of this result, we say that a function $V : \mathbb{R}^n \to \mathbb{R}$ is of polynomial growth if there exist positive constants C and m such that
$$|V(x)| \leq C\left(1 + \|x\|^m\right), \quad x \in \mathbb{R}^n. \qquad (29)$$
For $V \in \mathrm{C}^r(\mathbb{R}^n)$, we say that V is of polynomial growth up to order r if V and all its partial derivatives up to order r are of polynomial growth.
Theorem 5
([28]). Consider the nonlinear stochastic dynamical system given by (1) and (2). Let , let be a nonnegative definite function, and let the set of admissible inputs be a set of Markov control processes such that, for every with , there exist a positive constant and a continuous function such that
Assume that, for every $u_0 \in \mathbb{R}^m$, the constant input $u(t) \equiv u_0$ belongs to $\mathcal{U}$ and the mapping $t \mapsto \mathbb{E}[r(u(t), y(t))]$ is continuous on $[0, \infty)$. Then, r is a supply rate of $\mathcal{G}$, and the stochastic process $V_{\mathrm{s}}(x(t)) - \int_0^t r(u(\sigma), y(\sigma))\,\mathrm{d}\sigma$, $t \geq 0$, is integrable for every $u \in \mathcal{U}$. Furthermore, $\mathcal{G}$ is stochastically dissipative with respect to the supply rate r and with the storage function $V_{\mathrm{s}}$ if and only if (28) holds.
Theorem 4 gives an equivalent characterization of stochastic dissipativity, as defined by the energetic (i.e., supermartingale) Definition 2, in terms of the power balance inequality (28). The energetic (i.e., supermartingale) definition of dissipativity requires the verification of (26), which is sample-path dependent and can be difficult to verify in practice, whereas (28) is an algebraic condition for dissipativity involving a local power balance inequality using the system drift and diffusion functions of the stochastic dynamical system. This equivalence holds under the regularity conditions stated in Theorem 4.
Assumption 2.
For the rest of the paper we assume that the regularity conditions for the equivalence between the supermartingale definition of dissipativity (27) and the power balance inequality (28) are satisfied. That is, we assume that the system given by (1) and (2) is stochastically dissipative if and only if (28) holds. Note that Theorem 5 gives sufficient conditions for the regularity conditions to hold by imposing polynomial growth constraints on the storage and supply rate functions.
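As a minimal numerical sanity check of the power balance characterization (28), consider the hypothetical scalar system dx = (−x + u)dt + σx dw with output y = x, candidate storage function V_s(x) = x², and supply rate r(u, y) = 2uy; this system is not taken from the paper and is used for illustration only. Here (28) reduces to (σ² − 2)x² ≤ 0, so stochastic dissipativity with respect to this supply rate holds precisely when σ² ≤ 2, which the sketch below verifies on a grid.

```python
import numpy as np

# Scalar test system: dx = (-x + u) dt + sigma*x dw, y = x,
# storage V_s(x) = x^2, supply rate r(u, y) = 2*u*y.
sigma = 1.0
xs = np.linspace(-5.0, 5.0, 201)
us = np.linspace(-5.0, 5.0, 201)
X, U = np.meshgrid(xs, us)
LVs = 2 * X * (-X + U) + sigma**2 * X**2  # generator of V_s, cf. (5)
r = 2 * U * X                             # supply rate with y = x
# (28) holds on the grid iff (sigma^2 - 2) x^2 <= 0, i.e., iff sigma^2 <= 2.
print("power balance inequality (28) holds:", np.all(LVs <= r + 1e-12))
```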
5. Connections between Stability Analysis and Nonlinear-Nonquadratic Performance Evaluation
In this section, we provide connections between stochastic Lyapunov functions and nonlinear-nonquadratic performance evaluation. Specifically, we present sufficient conditions for stability and performance for a given nonlinear stochastic dynamical system with a nonlinear-nonquadratic performance measure. As in deterministic theory [15,16], the cost functional can be explicitly evaluated as long as it is related to an underlying Lyapunov function. For the following result, let $L : \mathbb{R}^n \to \mathbb{R}$ and $V : \mathbb{R}^n \to \mathbb{R}$ be such that $L(0) = 0$ and $V(0) = 0$.
Theorem 6.
Consider the nonlinear stochastic dynamical system given by (11) with nonlinear-nonquadratic performance measure
$$J(x_0) \triangleq \mathbb{E}^{x_0}\left[ \int_0^{\infty} L(x(t))\,\mathrm{d}t \right], \qquad (31)$$
where $x(\cdot)$ is the solution to (11). Furthermore, assume that there exists a two-times continuously differentiable radially unbounded function $V : \mathbb{R}^n \to \mathbb{R}$ such that
$$V(0) = 0, \qquad (32)$$
$$V(x) > 0, \quad x \in \mathbb{R}^n, \quad x \neq 0, \qquad (33)$$
$$\mathcal{L}V(x) < 0, \quad x \in \mathbb{R}^n, \quad x \neq 0, \qquad (34)$$
$$L(x) + \mathcal{L}V(x) = 0, \quad x \in \mathbb{R}^n. \qquad (35)$$
Then the zero solution $x(t) \equiv 0$ to (11) is globally asymptotically stable in probability and
$$J(x_0) = V(x_0), \quad x_0 \in \mathbb{R}^n.$$
Proof.
Conditions (32)–(34) are a restatement of (19)–(21). This, along with V being radially unbounded, imply that the zero solution of (11) is globally asymptotically stable in probability by Theorem 2.
Next, we show that the stochastic process is a martingale. To see this, first note that the process is -measurable and -adapted because of the measurability of the mappings involved and the properties of the process . Now, using Tonelli’s theorem [37] it follows that, for all ,
for some positive constants and , and hence, the Itô integral
is a martingale. To arrive at (37) we used the fact that , the linear growth condition (9), and the finiteness of the expected value of the supremum of the moments of the system state (10). Note that the supremum in (37) exists because of the continuity of the sample paths of .
It follows from (35) and Itô’s lemma [31] that, for all ,
Taking the expected value operator on both sides of (38) and using the martingale property of the stochastic integral in (38) yields
where exists since and (10) holds. Now, taking the limit as yields
where we used the fact that global asymptotic stability in probability implies that is a nonnegative supermartingale and almost surely [29], and, by Theorem 5.1 of [29],
Finally, note that
where the interchanging of the integration with the expectation operator in (40) follows from the Lebesgue monotone convergence theorem [38] by noting that , is monotone increasing, and hence, converges pointwise to , and noting that, by (34) and (35), . □
Next, we specialize Theorem 6 to linear stochastic systems. For this result, let $A \in \mathbb{R}^{n \times n}$, let $\sigma \in \mathbb{R}$, and let $R \in \mathbb{R}^{n \times n}$ be a positive-definite matrix.
Corollary 1.
Consider the linear stochastic dynamical system with multiplicative noise given by
$$\mathrm{d}x(t) = A x(t)\,\mathrm{d}t + \sigma x(t)\,\mathrm{d}w(t), \quad x(0) = x_0, \quad t \geq 0, \qquad (41)$$
and with quadratic performance measure
$$J(x_0) = \mathbb{E}^{x_0}\left[ \int_0^{\infty} x^{\mathrm{T}}(t) R x(t)\,\mathrm{d}t \right]. \qquad (42)$$
Furthermore, assume that there exists a positive-definite matrix $P \in \mathbb{R}^{n \times n}$ such that
$$0 = A^{\mathrm{T}} P + P A + \sigma^2 P + R. \qquad (43)$$
Then, the zero solution $x(t) \equiv 0$ to (41) is globally asymptotically stable in probability and
$$J(x_0) = x_0^{\mathrm{T}} P x_0, \quad x_0 \in \mathbb{R}^n. \qquad (44)$$
Proof.
The result is a direct consequence of Theorem 6 with $F(x) = Ax$, $D(x) = \sigma x$, $L(x) = x^{\mathrm{T}} R x$, and $V(x) = x^{\mathrm{T}} P x$. Specifically, conditions (32) and (33), and V being two-times continuously differentiable and radially unbounded, are trivially satisfied. Now,
$$\mathcal{L}V(x) = x^{\mathrm{T}}\left( A^{\mathrm{T}} P + P A \right) x + \sigma^2 x^{\mathrm{T}} P x = -x^{\mathrm{T}} R x,$$
and hence, it follows from (43) that conditions (34) and (35) hold. Thus, all the conditions of Theorem 6 are satisfied. □
Note that (43) is a Lyapunov equation, and hence, for every positive-definite matrix R, there exists a positive-definite matrix P satisfying (43) as long as $A + \frac{\sigma^2}{2} I_n$ is Hurwitz, that is, as long as the eigenvalues of A have real part less than $-\sigma^2/2$. Thus, a continuous-time linear stochastic system driven by a multiplicative Wiener process is globally asymptotically stable in probability if the spectral abscissa of A is less than $-\sigma^2/2$.
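Numerically, (43) is an ordinary Lyapunov equation in the shifted matrix A + (σ²/2)Iₙ, so standard solvers apply directly. The following sketch (the data A, σ, and R are hypothetical) computes P with SciPy and verifies the residual.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # eigenvalues -1 and -2 (hypothetical data)
sigma = 0.8                                # spectral abscissa -1 < -sigma^2/2 = -0.32
R = np.eye(2)

# (43), A^T P + P A + sigma^2 P + R = 0, is a Lyapunov equation in
# A_s = A + (sigma^2/2) I, which is Hurwitz here.
A_s = A + 0.5 * sigma**2 * np.eye(2)
P = solve_continuous_lyapunov(A_s.T, -R)   # solves A_s^T P + P A_s = -R
print(np.linalg.eigvals(P))                # positive, so P > 0
print(A.T @ P + P @ A + sigma**2 * P + R)  # residual of (43), ~ 0
```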
6. Optimal Nonlinear Feedback Control for Stochastic Systems
In this section, we consider a control problem involving a notion of optimality with respect to a nonlinear-nonquadratic cost functional. We use the results developed in Theorem 6 to characterize optimal feedback controllers that guarantee closed-loop global stabilization in probability. Specifically, sufficient conditions for optimality are given in a form that corresponds to a steady-state version of the stochastic Hamilton–Jacobi–Bellman equation. For the following result, let $L : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$ and $V : \mathbb{R}^n \to \mathbb{R}$ be such that $L(0, 0) = 0$ and $V(0) = 0$.
Theorem 7.
Consider the nonlinear stochastic dynamical system given by (1) with nonlinear-nonquadratic performance measure
$$J(x_0, u(\cdot)) \triangleq \mathbb{E}^{x_0}\left[ \int_0^{\infty} L(x(t), u(t))\,\mathrm{d}t \right], \qquad (45)$$
where $x(\cdot)$ is the solution to (1) with control input $u(\cdot)$. Furthermore, assume that there exists a two-times continuously differentiable, radially unbounded function $V : \mathbb{R}^n \to \mathbb{R}$, and a feedback control law $\phi : \mathbb{R}^n \to \mathbb{R}^m$ such that
$$V(0) = 0, \qquad (46)$$
$$V(x) > 0, \quad x \in \mathbb{R}^n, \quad x \neq 0, \qquad (47)$$
$$\phi(0) = 0, \qquad (48)$$
$$\mathcal{L}V(x, \phi(x)) < 0, \quad x \in \mathbb{R}^n, \quad x \neq 0, \qquad (49)$$
$$H(x, \phi(x)) = 0, \quad x \in \mathbb{R}^n, \qquad (50)$$
$$H(x, u) \geq 0, \quad (x, u) \in \mathbb{R}^n \times \mathbb{R}^m, \qquad (51)$$
where
$$H(x, u) \triangleq L(x, u) + \mathcal{L}V(x, u). \qquad (52)$$
Then, with the feedback control $u(\cdot) = \phi(x(\cdot))$, the zero solution $x(t) \equiv 0$ of the closed-loop system (11) is globally asymptotically stable in probability, and
$$J(x_0, \phi(x(\cdot))) = V(x_0), \quad x_0 \in \mathbb{R}^n. \qquad (53)$$
In addition, the feedback control $u(\cdot) = \phi(x(\cdot))$ minimizes (45) in the sense that
$$J(x_0, \phi(x(\cdot))) = \min_{u(\cdot) \in \mathcal{S}(x_0)} J(x_0, u(\cdot)), \qquad (54)$$
where $\mathcal{S}(x_0)$ denotes the set of controllers given by
$$\mathcal{S}(x_0) \triangleq \left\{ u(\cdot) \in \mathcal{U} : \lim_{t \to \infty} \mathbb{E}^{x_0}\left[ V(x(t)) \right] = 0 \right\}, \qquad (55)$$
where $x(\cdot)$ is the solution to (1) with $x(0) = x_0$ and admissible control input $u(\cdot)$.
Proof.
Global asymptotic stability in probability is a direct consequence of (46)–(49) by applying Theorem 6 to the closed-loop system (11). Furthermore, using (50), (53) is a restatement of (37) as applied to the closed-loop system.
To show that , note that (49) and (50) imply that . Thus,
Now, using an analogous argument as in the proof of Theorem 6, it follows that
for , and hence, .
Next, let , and note that, by Itô’s lemma [31],
Now, it can be shown that the stochastic integral in (57) is a martingale using a similar argument as the one given in the proof of Theorem 6. Hence,
where exists since and (10) holds. Next, taking the limit as yields
Since , the control law satisfies , and hence, it follows from (59) that
Now, combining (51) and (60) yields
Note that (50) is the steady-state version of the stochastic Hamilton–Jacobi–Bellman equation. To see this, recall that the stochastic Hamilton–Jacobi–Bellman equation is given by
$$0 = \frac{\partial V}{\partial t}(t, x) + \min_{u \in \mathbb{R}^m}\left[ L(x, u) + \frac{\partial V}{\partial x}(t, x) F(x, u) + \frac{1}{2}\,\mathrm{tr}\, D^{\mathrm{T}}(x, u) \frac{\partial^2 V}{\partial x^2}(t, x) D(x, u) \right], \qquad (65)$$
which characterizes the optimal control for stochastic time-varying systems on a finite or infinite time interval [30]. For infinite horizon time-invariant systems, $\frac{\partial V}{\partial t}(t, x) \equiv 0$, and hence, (65) collapses to (50) and (51), which guarantee optimality with respect to the set of admissible controllers $\mathcal{S}(x_0)$. Note that an explicit characterization of the set $\mathcal{S}(x_0)$ is not required and the optimal stabilizing feedback control law $\phi$ is independent of the initial condition $x_0$.
In order to ensure global asymptotic stability in probability of the closed-loop system (11), Theorem 7 requires that V satisfy (46), (47), and (49), which implies that V is a Lyapunov function for the closed-loop system (11). However, for optimality V need not satisfy (46), (47), and (49). Specifically, if V is a two-times continuously differentiable function such that and , then (50) and (51) imply (53) and (54). It is important to note here that, unlike deterministic theory ([16], p. 857), to ascertain that a control is optimal we require the additional transversality condition appearing in (55); see ([40], p. 337), ([41], p. 125), and ([42], p. 139) for further details.
Even though for an optimal controller the transversality condition in (55) is satisfied, it involves a sample-path-dependent condition that can be difficult to verify for an arbitrary control input $u(\cdot) \in \mathcal{U}$. The next theorem circumvents this problem by requiring additional restrictions on the cost integrand L and the Lyapunov function V.
Theorem 8.
Consider the nonlinear stochastic dynamical system given by (1) with the nonlinear-nonquadratic performance measure (45), where $L : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$ satisfies
$$L(x, u) \geq \gamma \|x\|^p, \quad (x, u) \in \mathbb{R}^n \times \mathbb{R}^m, \qquad (66)$$
for some positive constants γ and p. Assume that there exist a two-times continuously differentiable function $V : \mathbb{R}^n \to \mathbb{R}$ and a control law $\phi : \mathbb{R}^n \to \mathbb{R}^m$ such that (48), (50), and (51) hold and, for positive constants α and β,
$$\alpha \|x\|^p \leq V(x) \leq \beta \|x\|^p, \quad x \in \mathbb{R}^n. \qquad (67)$$
Then, with the feedback control $u(\cdot) = \phi(x(\cdot))$, the zero solution $x(t) \equiv 0$ of the closed-loop system (11) is globally exponentially p-stable in probability and (53) holds. In addition, the feedback control $u(\cdot) = \phi(x(\cdot))$ minimizes (45) in the sense that
$$J(x_0, \phi(x(\cdot))) = \min_{u(\cdot) \in \overline{\mathcal{S}}(x_0)} J(x_0, u(\cdot)), \qquad (68)$$
where $\overline{\mathcal{S}}(x_0)$ denotes the set of controllers given by
$$\overline{\mathcal{S}}(x_0) \triangleq \left\{ u(\cdot) \in \mathcal{U} : J(x_0, u(\cdot)) < \infty \right\}, \qquad (69)$$
and $x(\cdot)$ is the solution to (1) with $x(0) = x_0$ and control input $u(\cdot)$.
Proof.
Global exponential p-stability in probability is a direct consequence of (66), (67), and (50) by applying Theorem 3 to the closed-loop system (11). To show (53), (68), and $\overline{\mathcal{S}}(x_0) \subseteq \mathcal{S}(x_0)$, first note that Theorem 7 holds. Therefore, we need only show that, with (66) and (67), $\overline{\mathcal{S}}(x_0) \subseteq \mathcal{S}(x_0)$. That is, any input with finite cost (and hence, belonging to $\overline{\mathcal{S}}(x_0)$) automatically satisfies the transversality condition (and hence, belongs to $\mathcal{S}(x_0)$).
Next, we specialize Theorem 8 to linear stochastic dynamical systems and provide connections to the stochastic optimal linear-quadratic regulator problem with multiplicative noise. For the following result, let $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $\sigma \in \mathbb{R}$, and let $R_1 \in \mathbb{R}^{n \times n}$ and $R_2 \in \mathbb{R}^{m \times m}$ be given positive definite matrices.
Corollary 2.
Consider the linear controlled stochastic dynamical system with multiplicative noise given by
$$\mathrm{d}x(t) = \left( A x(t) + B u(t) \right)\mathrm{d}t + \sigma x(t)\,\mathrm{d}w(t), \quad x(0) = x_0, \quad t \geq 0, \qquad (76)$$
and with quadratic performance measure
$$J(x_0, u(\cdot)) = \mathbb{E}^{x_0}\left[ \int_0^{\infty} \left( x^{\mathrm{T}}(t) R_1 x(t) + u^{\mathrm{T}}(t) R_2 u(t) \right) \mathrm{d}t \right]. \qquad (77)$$
Furthermore, assume that there exists a positive-definite matrix $P \in \mathbb{R}^{n \times n}$ such that
$$0 = A^{\mathrm{T}} P + P A + \sigma^2 P + R_1 - P B R_2^{-1} B^{\mathrm{T}} P. \qquad (78)$$
Then, with the feedback control $u = \phi(x) = -R_2^{-1} B^{\mathrm{T}} P x$, the zero solution $x(t) \equiv 0$ to (76) is globally exponentially mean-square stable in probability and
$$J(x_0, \phi(x(\cdot))) = x_0^{\mathrm{T}} P x_0. \qquad (79)$$
Furthermore,
$$J(x_0, \phi(x(\cdot))) = \min_{u(\cdot) \in \overline{\mathcal{S}}(x_0)} J(x_0, u(\cdot)), \qquad (80)$$
where $\overline{\mathcal{S}}(x_0)$ is the set of controllers defined in (69) for (76) and $x_0 \in \mathbb{R}^n$.
Proof.
The result is a direct consequence of Theorem 8 with $F(x, u) = Ax + Bu$, $D(x) = \sigma x$, $L(x, u) = x^{\mathrm{T}} R_1 x + u^{\mathrm{T}} R_2 u$, and $V(x) = x^{\mathrm{T}} P x$. Specifically, (66) is satisfied with $\gamma = \lambda_{\min}(R_1)$ and $p = 2$. Moreover, V is a two-times continuously differentiable function that satisfies $V(0) = 0$ and (67) with $\alpha = \lambda_{\min}(P)$, $\beta = \lambda_{\max}(P)$, and $p = 2$. Furthermore, condition (48) is trivially satisfied. Next, it follows from (78) that $H(x, \phi(x)) = 0$, showing that (50) holds. Finally, $H(x, u) = (u - \phi(x))^{\mathrm{T}} R_2 (u - \phi(x)) \geq 0$ so that all of the conditions of Theorem 8 are satisfied. □
The optimal feedback control law $\phi(x) = -R_2^{-1} B^{\mathrm{T}} P x$ in Corollary 2 is derived using the properties of H as defined in Theorem 7. Specifically, since $H(x, u) = x^{\mathrm{T}} R_1 x + u^{\mathrm{T}} R_2 u + \mathcal{L}V(x, u)$ is quadratic and strictly convex in u, it follows that $\frac{\partial^2 H}{\partial u^2}(x, u) = 2 R_2 > 0$. Now, $\frac{\partial H}{\partial u}(x, u) = 0$ gives the unique global minimum of H. Hence, since $\phi(x)$ minimizes $H(x, u)$ it follows that $\phi(x)$ satisfies $\frac{\partial H}{\partial u}(x, \phi(x)) = 0$ or, equivalently, $\phi(x) = -R_2^{-1} B^{\mathrm{T}} P x$.
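With the scalar multiplicative noise structure used in Corollary 2, (78) coincides with the standard regulator Riccati equation for the shifted matrix A + (σ²/2)Iₙ, so the optimal gain can be computed with off-the-shelf tools. The sketch below uses hypothetical data and SciPy's algebraic Riccati solver.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [1.0, -1.0]])  # hypothetical data for (76)
B = np.array([[0.0], [1.0]])
sigma = 0.5
R1, R2 = np.eye(2), np.eye(1)

# (78) is the standard ARE in A_s = A + (sigma^2/2) I:
#   0 = A_s^T P + P A_s + R1 - P B R2^{-1} B^T P.
A_s = A + 0.5 * sigma**2 * np.eye(2)
P = solve_continuous_are(A_s, B, R1, R2)
K = np.linalg.solve(R2, B.T @ P)         # optimal feedback phi(x) = -K x
print(K)
print(np.linalg.eigvals(A_s - B @ K))    # Hurwitz: closed loop is mean-square stable
```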
7. Inverse Optimal Stochastic Control
In this section, we specialize Theorem 7 to systems that are affine in the control. Specifically, we devise nonlinear feedback controllers within a stochastic optimal control framework, aiming to minimize a nonlinear-nonquadratic performance criterion. This is achieved by selecting the controller in such a way that the infinitesimal generator of the Lyapunov function is negative definite along the sample trajectories of the closed-loop system. We also establish sufficient conditions for the existence of asymptotically stabilizing solutions (in probability) to the stochastic Hamilton–Jacobi–Bellman equation. Consequently, these findings present a family of globally stabilizing controllers, parameterized by the minimized cost functional.
The controllers developed in this section are based on an inverse optimal stochastic control problem [20,21,22,23,24,25,26]. To simplify the solution of the stochastic steady-state Hamilton–Jacobi–Bellman equation, we do not attempt to minimize a given cost functional. Instead, we parameterize a family of stochastically stabilizing controllers that minimize a derived cost functional, offering flexibility in defining the control law. The performance integrand explicitly depends on the nonlinear system dynamics, the Lyapunov function for the closed-loop system, and the stabilizing feedback control law. This coupling is introduced through the stochastic Hamilton–Jacobi–Bellman equation. Therefore, by adjusting parameters in the Lyapunov function and the performance integrand, the proposed framework can characterize a class of globally stabilizing controllers in probability, meeting specified constraints on the closed-loop system response.
Consider the nonlinear stochastic affine in the control dynamical system given by
$$\mathrm{d}x(t) = \left( f(x(t)) + G(x(t)) u(t) \right)\mathrm{d}t + D(x(t))\,\mathrm{d}w(t), \quad x(0) = x_0, \quad t \geq 0, \qquad (81)$$
where $f : \mathbb{R}^n \to \mathbb{R}^n$ satisfies $f(0) = 0$, $G : \mathbb{R}^n \to \mathbb{R}^{n \times m}$, and $D : \mathbb{R}^n \to \mathbb{R}^{n \times d}$ satisfies $D(0) = 0$. Furthermore, we consider performance integrands L of the form
$$L(x, u) = L_1(x) + L_2(x) u + u^{\mathrm{T}} R_2(x) u, \quad (x, u) \in \mathbb{R}^n \times \mathbb{R}^m, \qquad (82)$$
where $L_1 : \mathbb{R}^n \to \mathbb{R}$, $L_2 : \mathbb{R}^n \to \mathbb{R}^{1 \times m}$, and $R_2 : \mathbb{R}^n \to \mathbb{P}^m$, and where $\mathbb{P}^m$ denotes the set of $m \times m$ positive definite matrices, so that (45) becomes
$$J(x_0, u(\cdot)) = \mathbb{E}^{x_0}\left[ \int_0^{\infty} \left( L_1(x(t)) + L_2(x(t)) u(t) + u^{\mathrm{T}}(t) R_2(x(t)) u(t) \right) \mathrm{d}t \right]. \qquad (83)$$
Theorem 9.
Consider the nonlinear controlled affine stochastic dynamical system (81) with performance measure (83). Assume that there exists a two-times continuously differentiable, radially unbounded function and a function such that
Then the zero solution $x(t) \equiv 0$ of the closed-loop system
$$\mathrm{d}x(t) = \left( f(x(t)) + G(x(t)) \phi(x(t)) \right)\mathrm{d}t + D(x(t))\,\mathrm{d}w(t), \quad x(0) = x_0, \quad t \geq 0, \qquad (89)$$
is globally asymptotically stable in probability with the feedback control law
$$\phi(x) = -\frac{1}{2} R_2^{-1}(x) \left[ L_2(x) + V'(x) G(x) \right]^{\mathrm{T}}, \qquad (90)$$
and the performance measure (83), with
$$L_1(x) = \phi^{\mathrm{T}}(x) R_2(x) \phi(x) - V'(x) f(x) - \frac{1}{2}\,\mathrm{tr}\, D^{\mathrm{T}}(x) V''(x) D(x), \qquad (91)$$
is minimized in the sense that
$$J(x_0, \phi(x(\cdot))) = \min_{u(\cdot) \in \mathcal{S}(x_0)} J(x_0, u(\cdot)), \quad x_0 \in \mathbb{R}^n. \qquad (92)$$
Finally,
$$J(x_0, \phi(x(\cdot))) = V(x_0), \quad x_0 \in \mathbb{R}^n. \qquad (93)$$
Proof.
The result is a direct consequence of Theorem 7 with $F(x, u) = f(x) + G(x) u$, $D(x, u) = D(x)$, and $L(x, u) = L_1(x) + L_2(x) u + u^{\mathrm{T}} R_2(x) u$. Specifically, with (82) the Hamiltonian has the form
$$H(x, u) = L_1(x) + L_2(x) u + u^{\mathrm{T}} R_2(x) u + V'(x)\left( f(x) + G(x) u \right) + \frac{1}{2}\,\mathrm{tr}\, D^{\mathrm{T}}(x) V''(x) D(x).$$
Now, the feedback control law (90) is obtained by setting $\frac{\partial H}{\partial u} = 0$. With (90), it follows that (84), (85), and (88) imply (46), (47), and (49), respectively. Next, since V is two-times continuously differentiable and $x = 0$ is a local minimum of V, it follows that $V'(0) = 0$, and hence, since by assumption $L_2(0) = 0$, it follows that $\phi(0) = 0$, which implies (48). Next, with $L_1$ given by (91) and $\phi$ given by (90), (50) holds. Finally, since $H(x, u) = H(x, u) - H(x, \phi(x)) = (u - \phi(x))^{\mathrm{T}} R_2(x) (u - \phi(x))$ and $R_2(x)$ is positive definite for all $x \in \mathbb{R}^n$, condition (51) holds. The result now follows as a direct consequence of Theorem 7. □
Note that (88) is equivalent to
$$\mathcal{L}V(x, \phi(x)) = V'(x)\left[ f(x) + G(x)\phi(x) \right] + \frac{1}{2}\,\mathrm{tr}\, D^{\mathrm{T}}(x) V''(x) D(x) < 0, \quad x \in \mathbb{R}^n, \quad x \neq 0, \qquad (94)$$
with $\phi(x)$ given by (90). Furthermore, conditions (84), (85), and (94) ensure that V is a Lyapunov function for the closed-loop system (89). As outlined in [16], it is crucial to acknowledge that the function $L_2$ present in the integrand of the performance measure (82) is an arbitrary function of x constrained only by conditions (86) and (88). Therefore, $L_2$ offers versatility in the selection of the control law.
With $L_1$ given by (91) and $\phi$ given by (90), L is given by
$$L(x, u) = (u - \phi(x))^{\mathrm{T}} R_2(x) (u - \phi(x)) - V'(x) f(x) - V'(x) G(x) u - \frac{1}{2}\,\mathrm{tr}\, D^{\mathrm{T}}(x) V''(x) D(x). \qquad (95)$$
Since $R_2(x) > 0$, $x \in \mathbb{R}^n$, the first term on the right-hand side of (95) is nonnegative, whereas (94) implies that, for $u = \phi(x)$, the second, third, and fourth terms collectively are positive. Thus, it follows that
$$L(x, u) \geq -\mathcal{L}V(x, u), \quad (x, u) \in \mathbb{R}^n \times \mathbb{R}^m, \qquad (96)$$
which shows that L may be negative. As a result, there may exist a control input u for which the performance measure (83) is negative. However, if the control $u(\cdot)$ is a regulation controller, that is, $u(\cdot) \in \mathcal{S}(x_0)$, then it follows from (92) and (93) that
$$J(x_0, u(\cdot)) \geq V(x_0) = J(x_0, \phi(x(\cdot))). \qquad (97)$$
Furthermore, in this case, substituting $u = \phi(x)$ into (95) yields
$$L(x, \phi(x)) = -\mathcal{L}V(x, \phi(x)), \qquad (98)$$
which, by (94), is positive.
Example 1.
To illustrate the utility of Theorem 9, we showcase an example involving global stabilization of a stochastic version of the Lorenz equations [43]. These equations model fluid convection and are known to exhibit chaotic behavior. To construct inverse optimal controllers for the controlled Lorenz stochastic dynamical system, consider the system
where , and . Note that (99)–(101) can be written in the form of (81) with
In order to design an inverse optimal control law for the controlled Lorenz stochastic dynamical system (99)–(101), consider the quadratic Lyapunov function candidate given by
where and . Now, letting , , , , and , where , it follows that
satisfies (88); that is,
Hence, the feedback control law given by (90) globally stabilizes the controlled Lorenz dynamical system (99)–(101). Furthermore, the performance functional (83), with
is minimized in the sense of (92).
Figure 1 shows the mean along with the standard deviation of 1000 system sample paths with parameters , , and . ▵
Figure 1.
Controlled system states versus time. The bold lines show the average states over 1000 sample paths, whereas the shaded area shows one standard deviation from the average.
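Since the specific weights and noise intensities used in Example 1 are not reproduced above, the following Monte Carlo sketch simulates one plausible instantiation of a controlled stochastic Lorenz system. Every numerical choice here is an assumption made for illustration: the classical parameters σ = 10, ρ = 28, β = 8/3, multiplicative noise D(x) = σ_w x, full actuation G(x) = I₃ with L₂ = 0 and R₂ = I₃, and V(x) = k‖x‖², for which (90) reduces to φ(x) = −kx.

```python
import numpy as np

rng = np.random.default_rng(0)
sig, rho, beta = 10.0, 28.0, 8.0 / 3.0  # classical Lorenz parameters (assumed)
sigma_w, k = 0.5, 20.0                  # assumed noise intensity and control gain

def f(X):  # Lorenz drift, vectorized over the rows of X (one row per sample path)
    return np.stack([sig * (X[:, 1] - X[:, 0]),
                     X[:, 0] * (rho - X[:, 2]) - X[:, 1],
                     X[:, 0] * X[:, 1] - beta * X[:, 2]], axis=1)

phi = lambda X: -k * X  # (90) with G = I3, L2 = 0, R2 = I3, V(x) = k*||x||^2 (assumed)

dt, steps, paths = 1e-3, 10000, 1000
X = np.tile([1.0, 1.0, 1.0], (paths, 1))
for _ in range(steps):
    dw = rng.normal(0.0, np.sqrt(dt), size=(paths, 1))  # one Brownian motion per path
    X = X + (f(X) + phi(X)) * dt + sigma_w * X * dw     # multiplicative noise D(x) = sigma_w*x
print("mean:", X.mean(axis=0), "std:", X.std(axis=0))   # both near zero once stabilized
```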
The next theorem is similar to Theorem 9 and is included here as it provides the basis for our stability margin results given in the next sections.
Theorem 10.
Consider the nonlinear controlled affine stochastic dynamical system (81) with performance measure (83). Assume that there exists a two-times continuously differentiable, radially unbounded function such that
Then the zero solution of the closed-loop system
is globally asymptotically stable in probability with the feedback control law
and the performance functional (83) is minimized in the sense that
Finally,
Proof.
The proof is identical to the proof of Theorem 9 and, hence, is omitted. □
8. Relative Stability Margins for Optimal Nonlinear Stochastic Regulators
In this section, we establish relative stability margins for both optimal and inverse optimal nonlinear stochastic feedback regulators. Specifically, we derive sufficient conditions ensuring gain, sector, and disk margin guarantees for nonlinear stochastic dynamical systems under the control of nonlinear optimal and inverse optimal Hamilton–Jacobi–Bellman controllers. These controllers aim to minimize a nonlinear-nonquadratic performance criterion that includes cross-weighting terms. In the scenario where the cross-weighting term in the performance criterion is omitted, our findings align with the gain, sector, and disk margins derived for the deterministic optimal control problem outlined in [25].
Alternatively, by retaining the cross-terms in the performance criterion and specializing the optimal nonlinear-nonquadratic problem to a stochastic linear-quadratic problem featuring a multiplicative noise disturbance, our results recover the corresponding gain and phase margins for the deterministic linear-quadratic optimal control problem as presented in [44]. Despite the observed degradation of gain, sector, and disk margins due to the inclusion of cross-weighting terms, the added flexibility afforded by these terms enables the assurance of optimal and inverse optimal nonlinear controllers that can exhibit superior transient performance as compared to meaningful inverse optimal controllers.
To develop relative stability margins for nonlinear stochastic regulators consider the nonlinear stochastic dynamical system given by
where satisfies , , satisfies , and is an admissible feedback controller such that is globally asymptotically stable in probability with , with a nonlinear-nonquadratic performance criterion
where , , and are given such that , , and .
Next, we define the relative stability margins for given by (114) and (115). Specifically, let , , and consider the negative feedback interconnection of and given in Figure 2, where is either a linear operator , a nonlinear static operator , or a nonlinear dynamic operator with input and output . Furthermore, we assume that in the nominal case the nominal closed-loop system is globally asymptotically stable in probability.
Figure 2.
Multiplicative input uncertainty of and input operator .
Definition 3
([16]). Let be such that . Then the nonlinear stochastic dynamical system given by (114) and (115) is said to have a gain margin if the negative feedback interconnection of and is globally asymptotically stable in probability for all , where , .
Definition 4
([16]). Let be such that . Then the nonlinear stochastic dynamical system given by (114) and (115) is said to have a sector margin if the negative feedback interconnection of and is globally asymptotically stable in probability for all nonlinearities such that , , and , for all , .
Definition 5.
A nonlinear stochastic dynamical system $\mathcal{G}$ is asymptotically zero-state observable if $u(t) \to 0$ and $y(t) \to 0$ as $t \to \infty$ imply $x(t) \to 0$ as $t \to \infty$.
For the next two definitions, we assume that the system and the nonlinear operator are asymptotically zero-state observable.
Definition 6
([16]). Let be such that . Then the nonlinear stochastic dynamical system given by (114) and (115) is said to have a disk margin if the negative feedback interconnection of and Δ is globally asymptotically stable in probability for all dynamic operators Δ such that Δ is stochastically dissipative with respect to the supply rate and with a two-times continuously differentiable, positive definite storage function, where , , and such that .
Definition 7
([16]). Let be such that . Then the nonlinear stochastic dynamical system given by (114) and (115) is said to have a structured disk margin if the negative feedback interconnection of and Δ is globally asymptotically stable in probability for all dynamic operators Δ such that , and , , is stochastically dissipative with respect to the supply rate and with a two-times continuously differentiable, positive definite storage function, where , , and such that .
Note that if has a disk margin , then has gain and sector margins .
The following lemma is needed for developing the main results of this section.
Lemma 1.
Proof.
Next, we present disk margins for the nonlinear-nonquadratic optimal regulator given by Theorem 10. We consider the case in which , , is a constant diagonal matrix and the case in which it is not a constant diagonal matrix.
Theorem 11.
Consider the nonlinear stochastic dynamical system given by (114) and (115), where ϕ is the stochastically stabilizing feedback control law given by (111) and where is a two-times continuously differentiable, radially unbounded function that satisfies (105)–(109). Assume that is asymptotically zero-state observable. If the matrix , where , , and there exists such that and (118) is satisfied, then the nonlinear stochastic dynamical system has a structured disk margin . If, in addition, and there exists such that and (118) is satisfied, then the nonlinear stochastic dynamical system has a disk margin .
Proof.
Note that it follows from Lemma 1 that
Hence, with the storage function , it follows from that is stochastically dissipative with respect to the supply rate . Now, the result is a direct consequence of Definitions 6 and 7, and the stochastic version of Corollary 6.2 given in [16] with and . □
For the next result, define
where is such that and .
Theorem 12.
Consider the nonlinear stochastic dynamical system given by (114) and (115), where ϕ is the stochastically stabilizing feedback control law given by (111) and where is a two-times continuously differentiable, radially unbounded function that satisfies (105)–(109). Assume that is asymptotically zero-state observable. If there exists such that and (118) is satisfied, then the nonlinear stochastic system has a disk margin , where .
Proof.
It follows from Lemma 1 that
Thus, with the storage function , is stochastically dissipative with respect to the supply rate . The result now is a direct consequence of Definition 6 and the stochastic version of Corollary 6.2 given in [16] with and . □
Next, we provide a result that guarantees sector and gain margins for the case in which , , is diagonal.
Theorem 13.
Consider the nonlinear stochastic dynamical system given by (114) and (115), where ϕ is the stochastically stabilizing feedback control law given by (111) and where is a two-times continuously differentiable, radially unbounded function that satisfies (105)–(109). Furthermore, let , where , , . If there exists such that and
then the nonlinear stochastic dynamical system has a sector (and hence, gain) margin .
Proof.
Let , where is a static nonlinearity such that , , and , for all , , where and ; or, equivalently, , for all , . In this case, the closed-loop system (114) and (115) with is given by
Next, consider the two-times continuously differentiable, radially unbounded Lyapunov function candidate V satisfying (105)–(109) and let denote the Lyapunov infinitesimal generator of the closed-loop system (126). Now, it follows from (109) and (125) that
which, by Theorem 2, implies that the closed-loop system (126) is globally asymptotically stable in probability for all such that , , . Hence, given by (114) and (115) has sector (and hence, gain) margins . □
It is important to note that Theorem 13 also holds in the case where (125) is replaced with (118) and with the additional assumption that (118) is radially unbounded. To see this, note that (127) can be written as
where
In this case, the result follows from Theorem 3.1 of [45]. Furthermore, note that in the case where , , is diagonal, Theorem 13 guarantees larger gain and sector margins than the gain and sector margin guarantees provided by Theorem 12. However, Theorem 13 does not provide disk margin guarantees.
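The gain margin guarantees of this section can also be probed empirically. The following sketch (a hypothetical scalar example, not the system of Theorem 13) simulates dx = (x + u)dt + σx dw under the perturbed feedback u = κφ(x) with φ(x) = −3x; a direct computation shows the closed loop is mean-square stable for all κ > (2 + σ²)/6 ≈ 0.35, consistent with a gain margin interval of the form discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, dt, steps, paths = 0.3, 1e-3, 20000, 200
for kappa in [0.6, 1.0, 2.0, 5.0]:  # perturbed loop gains
    x = np.ones(paths)
    for _ in range(steps):
        u = kappa * (-3.0 * x)      # perturbation of the nominal feedback
        x = x + (x + u) * dt + sigma * x * rng.normal(0.0, np.sqrt(dt), paths)
    # d E[x^2]/dt = (2(1 - 3*kappa) + sigma^2) E[x^2] < 0 iff kappa > (2 + sigma^2)/6
    print(f"kappa = {kappa}: mean |x(T)| = {np.abs(x).mean():.2e}")
```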
9. Nonlinear Stochastic Feedback Regulators with Relative Stability Margins Guarantees
In this section, we give sufficient conditions that guarantee that a given nonlinear feedback controller has prespecified disk, sector, and gain margins.
Proposition 1.
Let and let be a positive-definite matrix. Consider the nonlinear stochastic dynamical system given by (114) and (115), where ϕ is a stochastically stabilizing feedback control law. Then there exist functions , , and such that , is a two-times continuously differentiable, radially unbounded function such that , , , , and, for all ,
if and only if there exists a two-times continuously differentiable, radially unbounded function such that , , , , and
Proof.
If there exist functions , , and such that and (129) and (130) are satisfied, then it follows from Lemma 1 that (131) is satisfied.
Conversely, if (131) is satisfied, then with , , and , it follows from the stochastic version of Theorem 5.6 of [16] that, for all ,
The result now follows with
and . □
Note that if (129) and (130) are satisfied, then it follows from Theorem 9 that the feedback control law minimizes the cost functional (83). Hence, Proposition 1 provides necessary and sufficient conditions for optimality of a given stochastically stabilizing feedback control law with prespecified disk margin guarantees.
The following result presents specific disk margin guarantees for inverse optimal controllers.
Theorem 14.
Let be given. Consider the nonlinear stochastic dynamical system given by (114) and (115), where ϕ is a stochastically stabilizing feedback control law. Assume that is asymptotically zero-state observable, there exist functions and such that is a two-times continuously differentiable, radially unbounded function, , , and
Then the nonlinear stochastic dynamical system has a disk margin , where and and are given by (123). Furthermore, with the feedback control law ϕ the performance functional
is minimized in the sense that
Proof.
The result is a direct consequence of Theorems 9 and 12 with and . Specifically, in this case, all the conditions of Theorem 9 are trivially satisfied. Furthermore, note that (135) is equivalent to (118). The result now follows from Theorems 9 and 12. □
The next result provides sufficient conditions that guarantee that a given nonlinear feedback controller has prespecified gain and sector margins.
Theorem 15.
Let be given. Consider the asymptotically zero-state observable nonlinear stochastic dynamical system given by (114) and (115), where ϕ is a stochastically stabilizing feedback control law. Assume there exist functions , where , , , and is a two-times continuously differentiable, radially unbounded function, and satisfies (132)–(135). Then the nonlinear stochastic dynamical system has a sector (and hence, gain) margin . Furthermore, with the feedback control law ϕ the performance functional (136) is minimized in the sense that
Proof.
The result is a direct consequence of Theorems 9 and 13 with the proof being identical to the proof of Theorem 14. □
10. Optimal Linear-Quadratic Stochastic Control
In this section, we specialize Theorems 11 and 12 to the case of linear stochastic systems with multiplicative disturbance noise. Specifically, consider the stabilizable stochastic linear system given by
where , , , and , and assume that is detectable and the system (139) and (140) is asymptotically stable in probability with the feedback or, equivalently, is Hurwitz, where . Furthermore, assume that K is an optimal regulator that minimizes the quadratic performance functional given by
where , , and are such that , , and is observable. In this case, it follows from Theorem 9 with , , , , , , and that the optimal control law K is given by , where is the solution to the algebraic regulator Riccati equation given by
The following results provide guarantees of disk, sector, and gain margins for the system (139) and (140). We assume that is asymptotically zero-state observable.
Corollary 3.
Proof.
Corollary 4.
Proof.
The gain margins specified in Corollary 4 precisely match those presented in [44] for deterministic linear-quadratic optimal regulators incorporating cross-weighting terms in the performance criterion. Additionally, as Corollary 4 ensures structured disk margins of , it implies that the system possesses a phase margin defined as follows:
or equivalently,
In the scenario where , deduced from (144), it follows that . Consequently, Corollary 4 ensures a phase margin of in each input–output channel. Additionally, stipulating leads to the conclusion, based on Corollary 4, that the system described by (139) and (140) possesses a gain and sector margin of .
11. Stability Margins and Meaningful Inverse Optimality
In this section, we establish explicit links between stochastic stability margins, stochastic meaningful inverse optimality, and stochastic dissipativity, focusing on a specific quadratic supply rate. More precisely, we derive a stochastic counterpart to the classical return difference inequality for continuous-time systems with continuously differentiable flows [21,46] in the context of stochastic dynamical systems. Furthermore, we establish connections between stochastic dissipativity and optimality for stochastic nonlinear controllers. Notably, we demonstrate the equivalence between stochastic dissipativity and optimality in the realm of stochastic dynamical systems. Specifically, we illustrate that an optimal nonlinear feedback controller , satisfying a return difference condition based on the infinitesimal generator of a controlled Markov diffusion process, is equivalent to the stochastic dynamical system—with input u and output —being stochastically dissipative with respect to a supply rate expressed as .
Here, we assume that is nonnegative for all , which, in the terminology of [25,47], corresponds to a meaningful cost functional. Furthermore, we assume and , , and is radially unbounded. In this case, we establish connections between stochastic dissipativity and optimality for nonlinear stochastic controllers. The first result specializes Theorem 10 to the case in which .
Theorem 16.
Consider the nonlinear stochastic dynamical system (114) with performance functional (83) with and , . Assume that there exists a two-times continuously differentiable, radially unbounded function such that
Then the zero solution of the closed-loop system
is globally asymptotically stable in probability with the feedback control law
and the performance functional (83) is minimized in the sense that
Finally,
Proof.
The proof is similar to the proof of Theorem 9, and hence, is omitted. □
Next, we show that for a given nonlinear stochastic dynamical system given by (114) and (115), there exists an equivalence between optimality and stochastic dissipativity. For the following result we assume that for a given nonlinear stochastic system (114), if there exists a feedback control law that minimizes the performance functional (83) with , , and , , then there exists a two-times continuously differentiable, radially unbounded function such that (149) is satisfied.
Theorem 17.
Consider the nonlinear stochastic dynamical system given by (114) and (115). The feedback control law is optimal with respect to a performance functional (82) with , , and , , if and only if the nonlinear stochastic system is stochastically dissipative with respect to the supply rate and has a two-times continuously differentiable positive-definite, radially unbounded storage function .
Proof.
If the control law is optimal with respect to a performance functional (82) with , , and , , then, by assumption, there exists a two-times continuously differentiable, radially unbounded function such that (149) is satisfied. Hence, it follows from Proposition 1 that
which implies that is stochastically dissipative with respect to the supply rate .
Conversely, if is stochastically dissipative with respect to the supply rate and has a two-times continuously differentiable positive-definite storage function , then, with , , , , and , it follows from the stochastic version of Theorem 5.6 of [16] that there exists a function such that and, for all ,
Now, the result follows from Theorem 16 with . □
The next result gives disk and structured disk margins for the nonlinear stochastic dynamical system given by (114) and (115).
Corollary 5.
Consider the nonlinear stochastic dynamical system given by (114) and (115), where ϕ is the stochastically stabilizing feedback control law given by (111) with and where is a two-times continuously differentiable, radially unbounded function that satisfies (105)–(109). Assume that is asymptotically zero state observable. Furthermore, assume , where , , and , . Then the nonlinear stochastic dynamical system has a structured disk margin . If, in addition, , then the nonlinear stochastic system has a disk margin
Proof.
The result is a direct consequence of Theorem 11. Specifically, if , , and , then (118) is trivially satisfied for all . Now, the result follows immediately by letting . □
Finally, we provide sector and gain margins for the nonlinear stochastic dynamical system given by (114) and (115).
Corollary 6.
Consider the nonlinear stochastic dynamical system given by (114) and (115), where ϕ is a stochastically stabilizing feedback control law given by (111) with and where is a two-times continuously differentiable, radially unbounded function that satisfies (105)–(109). Furthermore, assume , where , , , and , . Then the nonlinear stochastic dynamical system has a sector (and hence, gain) margin .
Proof.
The result is a direct consequence of Theorem 13. Specifically, if , , and , then (125) is trivially satisfied for all . Now, the result follows immediately by letting . □
12. Conclusions
In this paper, we merged stochastic Lyapunov theory with stochastic Hamilton–Jacobi–Bellman theory to provide explicit connections between stability and optimality of nonlinear stochastic regulators. The proposed approach involves utilizing a steady-state stochastic Hamilton–Jacobi–Bellman framework to characterize optimal nonlinear feedback controllers wherein the notion of optimality is directly linked to a specified Lyapunov function, guaranteeing stability in probability for the closed-loop system. The derived results are then employed to establish inverse optimal feedback controllers for both affine nonlinear stochastic systems and linear stochastic systems.
Moreover, leveraging the concepts of stochastic stability and stochastic dissipativity theory, we developed sufficient conditions for gain, sector, and disk margin guarantees. These conditions apply to nonlinear stochastic dynamical systems controlled by both nonlinear optimal and inverse optimal regulators minimizing a nonlinear-nonquadratic performance criterion. Furthermore, we established connections between stochastic dissipativity and optimality for nonlinear stochastic systems. The proposed framework provides the foundation for extending linear-quadratic control for stochastic dynamical systems to nonlinear-nonquadratic problems.
Author Contributions
M.L.: Conceptualization, Formal analysis, Software, Visualization, Writing—original draft. W.M.H.: Conceptualization, Formal analysis, Writing—review and editing, Supervision, Funding Acquisition. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported in part by the Air Force Office of Scientific Research under Grant FA9550-20-1-0038.
Data Availability Statement
No data were used for the research described in the article.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Baumann, W.T.; Rugh, W.J. Feedback control of nonlinear systems by extended linearization. IEEE Trans. Autom. Control 1986, 31, 40–46.
- Wang, J.; Rugh, W.J. Feedback linearization families for nonlinear systems. IEEE Trans. Autom. Control 1987, 32, 935–941.
- Blueschke, D.; Blueschke-Nikolaeva, V.; Neck, R. Approximately optimal control of nonlinear dynamic stochastic problems with learning: The OPTCON algorithm. Algorithms 2021, 14, 181.
- Zhang, Y.; Li, S.; Liao, L. Near-optimal control of nonlinear dynamical systems: A brief survey. Annu. Rev. Control 2019, 47, 71–80.
- Rekasius, Z.V. Suboptimal design of intentionally nonlinear controllers. IEEE Trans. Autom. Control 1964, 9, 380–386.
- Bass, R.; Webber, R. Optimal nonlinear feedback control derived from quartic and higher-order performance criteria. IEEE Trans. Autom. Control 1966, 11, 448–454.
- Speyer, J. A nonlinear control law for a stochastic infinite time problem. IEEE Trans. Autom. Control 1976, 21, 560–564.
- Shaw, L. Nonlinear control of linear multivariable systems via state-dependent feedback gains. IEEE Trans. Autom. Control 1979, 24, 108–112.
- Salehi, S.V.; Ryan, E. On optimal nonlinear feedback regulation of linear plants. IEEE Trans. Autom. Control 1982, 27, 1260–1264.
- Leitmann, G. On the efficacy of nonlinear control in uncertain linear systems. ASME J. Dyn. Syst. Meas. Control 1981, 102, 95–102.
- Petersen, I.R. Nonlinear versus linear control in the direct output feedback stabilization of linear systems. IEEE Trans. Autom. Control 1985, 30, 799–802.
- Barmish, B.R.; Galimidi, A.R. Robustness of Luenberger observers: Linear systems stabilized via non-linear control. Automatica 1986, 22, 413–423.
- Ryan, E.P. Optimal feedback control of saturating systems. Int. J. Control 1982, 35, 531–534.
- Blanchini, F. Feedback control for linear time-invariant systems with state and control bounds in the presence of disturbances. IEEE Trans. Autom. Control 1990, 35, 1231–1234.
- Bernstein, D.S. Nonquadratic cost and nonlinear feedback control. Int. J. Robust Nonlinear Control 1993, 3, 211–229.
- Haddad, W.M.; Chellaboina, V. Nonlinear Dynamical Systems and Control: A Lyapunov-Based Approach; Princeton University Press: Princeton, NJ, USA, 2008.
- Kushner, H.J. A partial history of the early development of continuous-time nonlinear stochastic systems theory. Automatica 2014, 50, 303–334.
- Lanchares, M.; Haddad, W.M. Nonlinear–nonquadratic optimal and inverse optimal control for discrete-time stochastic dynamical systems. Int. J. Robust Nonlinear Control 2022, 32, 1487–1509.
- Haddad, W.M.; Lanchares, M. Dissipativity, inverse optimal control, and stability margins for nonlinear discrete-time stochastic feedback regulators. Int. J. Control 2023, 96, 2133–2145.
- Molinari, B. The stable regulator problem and its inverse. IEEE Trans. Autom. Control 1973, 18, 454–459.
- Moylan, P.J.; Anderson, B.D.O. Nonlinear regulator theory and an inverse optimal control problem. IEEE Trans. Autom. Control 1973, 18, 460–465.
- Jacobson, D.H. Extensions of Linear-Quadratic Control Optimization and Matrix Theory; Academic Press: New York, NY, USA, 1977.
- Jacobson, D.H.; Martin, D.H.; Pachter, M.; Geveci, T. Extensions of Linear-Quadratic Control Theory; Springer: Berlin/Heidelberg, Germany, 1980.
- Freeman, R.A.; Kokotović, P.V. Inverse optimality in robust stabilization. SIAM J. Control Optim. 1996, 34, 1365–1391.
- Sepulchre, R.; Jankovic, M.; Kokotovic, P. Constructive Nonlinear Control; Springer: London, UK, 1997.
- Deng, H.; Krstic, M. Stochastic nonlinear stabilization–Part II: Inverse optimality. Syst. Control Lett. 1997, 32, 151–159.
- Rajpurohit, T.; Haddad, W.M. Dissipativity theory for nonlinear stochastic dynamical systems. IEEE Trans. Autom. Control 2017, 62, 1684–1699.
- Lanchares, M.; Haddad, W.M. Dissipative stochastic dynamical systems. Syst. Control Lett. 2023, 172, 105451.
- Khasminskii, R. Stochastic Stability of Differential Equations, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2012.
- Arnold, L. Stochastic Differential Equations: Theory and Applications; Wiley-Interscience: New York, NY, USA, 1974.
- Øksendal, B. Stochastic Differential Equations: An Introduction with Applications, 6th ed.; Springer: Berlin/Heidelberg, Germany, 2013.
- Mao, X. Stochastic Differential Equations and Applications, 2nd ed.; Woodhead Publishing: Cambridge, UK, 2007.
- Le Gall, J.-F. Brownian Motion, Martingales, and Stochastic Calculus; Springer: Berlin/Heidelberg, Germany, 2016.
- Gard, T.C. Introduction to Stochastic Differential Equations; Marcel Dekker: New York, NY, USA, 1988.
- Klebaner, F.C. Introduction to Stochastic Calculus with Applications, 3rd ed.; Imperial College Press: London, UK, 2012.
- Stroock, D.W.; Varadhan, S.R.S. Multidimensional Diffusion Processes; Springer: Berlin/Heidelberg, Germany, 2014.
- Billingsley, P. Probability and Measure, Anniversary ed.; John Wiley & Sons: Hoboken, NJ, USA, 2012.
- Apostol, T.M. Mathematical Analysis; Addison-Wesley: Reading, MA, USA, 1974.
- Shreve, S. Stochastic Calculus for Finance II: Continuous-Time Models; Springer: New York, NY, USA, 2004.
- Kushner, H. Introduction to Stochastic Control; Holt, Rinehart and Winston: New York, NY, USA, 1971.
- Chang, F.-R. Stochastic Optimization in Continuous Time; Cambridge University Press: Cambridge, UK, 2004.
- Fleming, W.H.; Soner, H.M. Controlled Markov Processes and Viscosity Solutions, 2nd ed.; Springer: New York, NY, USA, 2006.
- Wan, C.-J.; Bernstein, D.S. Nonlinear feedback control with global stabilization. Dyn. Control 1995, 5, 321–346.
- Chung, D.; Kang, T.; Lee, J. Stability robustness of LQ optimal regulators for the performance index with cross-product terms. IEEE Trans. Autom. Control 1994, 39, 1698–1702.
- Mao, X. Stochastic versions of the LaSalle theorem. J. Differ. Equations 1999, 153, 175–195.
- Chellaboina, V.; Haddad, W.M. Stability margins of nonlinear optimal regulators with nonquadratic performance criteria involving cross-weighting terms. Syst. Control Lett. 2000, 39, 71–78.
- Freeman, R.; Kokotović, P. Robust Nonlinear Control Design: State-Space and Lyapunov Techniques; Birkhauser: Boston, MA, USA, 1996.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).