Systematic Analysis and Design of Control Systems Based on Lyapunov’s Direct Method

Abstract: This paper deals with systematic approaches to the analysis of stability properties and to controller design for nonlinear dynamical systems. Numerical methods based on sum-of-squares decomposition and algebraic methods based on quantifier elimination are used. Starting from Lyapunov's direct method, these techniques yield conditions for the automatic verification of (control) Lyapunov functions as well as for the constructive determination of control laws.


Introduction
Stability is one of the most important properties of control systems. When designing a controller, the stability of a stable plant must be preserved, and an unstable plant must be stabilized via feedback.
In the case of nonlinear systems, stability cannot always be decided based on linearization. Moreover, a stability statement based on linearization is at best locally valid. For global and semi-global stability statements, one typically employs Lyapunov's second method [1,2]. The difficulty of this approach lies in finding a suitable Lyapunov candidate function and proving the corresponding definiteness conditions. This proof is typically established through adept estimations, which require significant expertise.
In its basic form, Lyapunov's second method allows statements about the stability of equilibrium points of an autonomous system [3][4][5]. Input-to-state stability extends this approach to systems with inputs [6][7][8]. While the (classical) Lyapunov function allows statements about stability, a control Lyapunov function makes statements about stabilizability [9,10].
In applying Lyapunov's direct method as well as its control generalizations, one is confronted, as mentioned above, with difficulties, namely, the choice of an appropriate candidate function and the proof of appropriate definiteness.
Nevertheless, there are some methods with the help of which it is possible to determine Lyapunov functions under certain circumstances. Such methods are, for example, the Krasovskii method, the variable gradient method, or the choice of physically motivated Lyapunov candidates. For more detailed information and examples, the reader is referred to [3,4]. A comprehensive introduction to passivity and its use for finding Lyapunov functions can be found in [11].
Part of the difficulties associated with Lyapunov's direct method can be circumvented by using vector Lyapunov functions [12][13][14]. In control engineering applications, cascaded or recursive design approaches, such as backstepping, have become established [11,15]. Instead of analytical computations, numerical approximation methods [16] or neural networks in conjunction with machine learning can sometimes be used to determine control Lyapunov functions [17,18].
In recent years, two methods have been developed that systematically address the Lyapunov definiteness check. It has been shown that both sum-of-squares decomposition and quantifier elimination are suitable for finding appropriate Lyapunov functions. These results have been extended to various problems, such as other stability properties [19], passivity [20], static output feedback [21][22][23], or the determination of the region of attraction [24] and invariant sets [25,26]. However, both methods apply only to systems with polynomial descriptions, so a large number of problems cannot be considered at first. To circumvent this restriction, a polynomializing transformation can be introduced, which significantly enlarges the class of systems that can be studied. In this paper, we recapitulate the results on stability and input-to-state stability, respectively, and show how control Lyapunov function approaches can be used to generate stable and input-to-state stable closed-loop systems.
For that purpose, the paper is organized as follows: Section 2 outlines the mathematical methods used. Section 3 is devoted to stability analysis. Section 4 deals with control Lyapunov functions, which can be applied directly to the design of stabilizing state feedback. Section 5 illustrates the determination of a Lyapunov function for the non-polynomial Furuta pendulum and of a control Lyapunov function for a chemical process. In Section 6, the results are summarized and discussed, and an outlook on possible follow-up research is given. Finally, we draw conclusions in Section 7.

Methods
In the case of nonlinear systems, one difficulty is that the analysis and design methods available from control engineering and systems theory are often not constructive but come in the form of existence statements. The definiteness conditions that arise in Lyapunov methods can often only be ensured by clever estimations, which in turn require a deep understanding of the system and apply only to the specific problem at hand.
In this section, two different methods are presented, which can be used to check or guarantee definiteness conditions for given approaches, namely quantifier elimination and sum-of-squares decomposition. Both approaches have been successfully used to estimate regions of attraction with Lyapunov methods [25,27,28], but are essentially limited to polynomial systems. Therefore, a rational transformation is first introduced, which provides an extension for the treatment of non-polynomial systems [19].

Polynomialization
In the following subsections, two mathematical methods are presented to test Lyapunov approaches systematically concerning their definiteness. However, both quantifier elimination and sum-of-squares decomposition apply only to polynomials and thus to polynomial system descriptions. This is a relatively severe limitation since a large number of problems are described by non-polynomial terms. For example, trigonometric functions frequently arise in the modeling of mechanical systems. Chemical or process engineering models often exhibit exponential functions (due to the Arrhenius equation). To overcome this limitation, an algorithm for the polynomialization of non-polynomial system descriptions is presented below (see Algorithm 1). Thereby, systems of the form (1) are considered, where the f ijk (x) are elementary functions or nested elementary functions, respectively. By elementary functions we mean, in this paper, a fixed set of functions, each of which is the solution of a rational ordinary differential equation. The algorithm terminates since the system (1) consists of finitely many functions f ijk and each of these functions is described by an ordinary differential equation of finite order.
The constraints arise, on the one hand, from the domains of the introduced variables and, on the other hand, from conditions derived from the substitution (see Example 1 and Section 5). Here, T, G 1 and G 2 are column vectors of functions for which the corresponding equation and inequality conditions hold row-wise. Algorithm 1 generally produces a rational system. However, polynomial descriptions are necessary for the introduced analysis tools. Therefore, let N(z 1 , z 2 ) be the principal denominator of f 1 (z 1 , z 2 ) and f 2 (z 1 , z 2 ). Thus, the functions N f 1 and N f 2 are polynomials if N(z 1 , z 2 ) is polynomial. Moreover, N(z 1 , z 2 ) > 0, ∀(z 1 , z 2 ) ∈ D 1 × D 2 must hold, i.e., the sign of the principal denominator must not change. Thus, N f 1 and f 1 as well as N f 2 and f 2 always have the same sign. The procedure of this recasting process is now illustrated with an example taken from ([11], Example 3.45).

Example 1.
To illustrate the transformation process, consider the system from ([11], Example 3.45), which is stabilized using a control law; the resulting feedback yields a closed-loop system with an asymptotically stable equilibrium point x = 0.
For the polynomialization, the new variables z 1 = x and z 2 = √(x 2 + 1) = √(z 2 1 + 1) are introduced. With this substitution, we obtain a transformed system that has a polynomial vector field. It must be noted that z 2 ≥ 1 holds. Moreover, the definition of z 2 gives rise to the condition z 2 = √(z 2 1 + 1). This condition is not in polynomial form and therefore cannot be analyzed by the methods considered here. However, it is possible to convert it into polynomial conditions. Summarizing, one obtains in ż a polynomial vector field with polynomial constraints.
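The mechanics of this substitution can be checked with a computer algebra system. The scalar system used below, ẋ = −x + √(x² + 1) − 1, is an assumed placeholder (not the system from [11]); it merely demonstrates how the substitution z 1 = x, z 2 = √(x² + 1) and multiplication by the principal denominator N = z 2 yield polynomial right-hand sides:

```python
import sympy as sp

# Assumed placeholder system xdot = -x + sqrt(x**2 + 1) - 1, used only to
# illustrate the recasting mechanics; z1 = x, z2 = sqrt(x**2 + 1).
x = sp.symbols('x', real=True)
z1, z2 = sp.symbols('z1 z2', positive=True)

f = -x + sp.sqrt(x**2 + 1) - 1
z2_of_x = sp.sqrt(x**2 + 1)

# chain rule: z2dot = dz2/dx * xdot; then rewrite everything in (z1, z2)
z1dot = f.subs(x, z1).subs(sp.sqrt(z1**2 + 1), z2)
z2dot = (sp.diff(z2_of_x, x) * f).subs(x, z1).subs(sp.sqrt(z1**2 + 1), z2)

# multiplying by the principal denominator N(z) = z2 > 0 gives polynomials
N = z2
p1 = sp.expand(sp.cancel(N * z1dot))
p2 = sp.expand(sp.cancel(N * z2dot))
print(p1, p2)  # both polynomial in (z1, z2)
```

Multiplying through by N does not destroy the sign information, since N = z 2 ≥ 1 on the constraint set, mirroring the argument for N f 1 and N f 2 above.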

Quantifier Elimination
Many stability criteria can be expressed in the prenex form (13) with the quantifiers Q i ∈ { ∃, ∀ }. Here, F(X, Y) is a quantifier-free formula that results from the Boolean combination of atomic formulas built from polynomial equations and inequalities in f (x 1 , . . . , x k , y 1 , . . . , y l ). An expression of the form (13) is called a prenex formula with the quantified variables X = (x 1 , . . . , x k ) and the free variables Y = (y 1 , . . . , y l ). Formulas of the prenex form (13) are not very helpful for system analysis or design because they are not constructive. Rather, we are interested in the solution set expressed in the free variables Y. To this end, one exploits the fact that the quantifier-free formula F(X, Y) appearing in (13) describes a semialgebraic set, i.e., a subset of the real vector space R n with n = k + l characterized by finitely many polynomial equations and inequalities. By the Tarski-Seidenberg theorem [31,32], the projections of such a semialgebraic set onto the subspaces R n−1 , R n−2 , . . . , R 1 = R along the coordinate axes are again semialgebraic sets. This projection property does not hold in general for purely algebraic sets described only by a finite number of polynomial equations.
Example 2. The equation xy = 1 describes an algebraic and thus also semialgebraic subset of R 2 , which geometrically represents a hyperbola. The projection of (x, y) from R 2 onto x in R yields the inequalities x < 0 ∨ x > 0 and thus describes a semialgebraic but not an algebraic set.
Exploiting the projection property of semialgebraic sets, any prenex Formula (13) can be transformed into an equivalent quantifier-free formula. This transformation is called quantifier elimination (QE) [33]. In this process, after the projection from R n into R 1 , the equations and inequalities occurring in F(X, Y) are evaluated in R 1 and the results are projected back into R n . In doing so, one can check step by step, for every single quantified variable x j with j = 1, . . . , k, whether the respective quantifier-free formula is always satisfied in the case of the universal quantifier (∀) or satisfied for at least one value in the case of the existential quantifier (∃). In quantifier elimination, all quantified variables are eliminated and an equivalent quantifier-free formula in the free variables is obtained. If all variables are quantified, a decision problem results, which can only yield "true" or "false" as a result.

Example 3.
Prefixing the quantifier-free Formula (15) from Example 2 with the universal quantifier with respect to x, we obtain the prenex formula (17). The projection (16) of (15) onto the x-axis does not contain the point x = 0, so (15) does not hold for all x ∈ R. Consequently, the Formula (17) is "false". If, on the other hand, we complement (15) with the existential quantifier ∃x, then the solution set is nonempty because of (16). The back projection from x > 0 to y > 0 and from x < 0 to y < 0 yields the quantifier-free formula y ≠ 0 equivalent to (18).
The computation of the corresponding quantifier-free equivalent can be performed by different algorithms, for example, the cylindrical algebraic decomposition (CAD) [34], virtual substitution [35], and approaches based on the classification of real roots (RRC) of polynomials [36,37]. Numerous software tools are now available to perform quantifier elimination [38][39][40]. For further details, please refer to the review article [41].
Example 4. Consider the prenex formula ∀x : ax 2 + bx + c > 0 (19) with the quantified variable x and the free variables a, b, c. The quantifier elimination yields the quantifier-free formula (a > 0 ∧ b 2 − 4ac < 0) ∨ (a = 0 ∧ b = 0 ∧ c > 0) equivalent to (19).
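The equivalence asserted by this standard QE example can be spot-checked numerically. The quantifier-free formula below is the textbook QE result for ∀x : ax² + bx + c > 0; the sampling grid and coefficient ranges are arbitrary choices for the check:

```python
import random

# Quantifier-free equivalent of  forall x: a*x**2 + b*x + c > 0
def quantifier_free(a, b, c):
    return (a > 0 and b*b - 4*a*c < 0) or (a == 0 and b == 0 and c > 0)

# Sampled version of the prenex formula (a finite grid can only refute,
# never prove, the "forall" -- hence only the sound direction is tested).
def prenex_on_grid(a, b, c, grid):
    return all(a*x*x + b*x + c > 0 for x in grid)

grid = [k / 10 for k in range(-500, 501)]
random.seed(0)
for _ in range(300):
    a, b, c = (random.randint(-3, 3) for _ in range(3))
    if quantifier_free(a, b, c):
        assert prenex_on_grid(a, b, c, grid)
print("quantifier-free formula consistent with sampled prenex formula")
```

Note that a QE tool such as CAD produces this quantifier-free formula exactly, whereas the sampling above only illustrates the equivalence.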

Sum-of-Squares Decomposition
One way to numerically implement the definiteness tests associated with Lyapunov methods is the sum-of-squares (SOS) decomposition, which checks whether a given polynomial can be decomposed into a sum of squares. From such a representation, definiteness properties can be read off, which are needed for the stability conditions. Let P be the set of all polynomials in the considered variables. Then, (20) denotes the set of SOS polynomials. From the definition (20) it follows immediately that every polynomial p ∈ S is positive semidefinite. The converse does not hold, because the so-called Motzkin polynomial [42] is nonnegative but cannot be represented as a sum of squares [43]. A more detailed consideration of the differences between positive definite and SOS polynomials can be found in [44]. Each SOS polynomial can be expressed in the form p(x) = m T (x) Q m(x) with a positive semidefinite matrix Q and a vector m(x) consisting of all monomials up to degree 1/2 deg(p(x)) [45]. The search for a positive semidefinite matrix Q can be transformed into a semidefinite optimization problem [45]. Efficient tools exist for solving such semidefinite programs: SeDuMi [46], CSDP [47], SDPNAL [48], SDPNAL+ [49], SDPA [50], CDCS [51], or SDPT3 [52]. The transformation of an SOS problem into a semidefinite program is possible with the toolbox SOSTOOLS under MATLAB (see Figure 1). For this purpose, the respective conditions formulated with SOS polynomials are defined under MATLAB. These can subsequently be transformed into a semidefinite program using SOSTOOLS, which can then be solved with one of the previously mentioned tools. The solutions obtained in this way are thereupon transformed back into a solution of the original SOS constraints by SOSTOOLS. Through this interplay of the individual tools, an efficient solution of SOS programs is possible. The resulting semidefinite programs can in principle be solved in polynomial time.
However, the size of these programs grows rapidly with the dimension of the state space, because the Gram matrix Q of a polynomial in n variables of degree 2d has dimension (n+d choose d) × (n+d choose d). Therefore, when applying the sum-of-squares decomposition, thoughtful problem formulations are not only useful but necessary.
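As a minimal illustration of the representation p(x) = m T (x) Q m(x), the following sketch verifies a hand-constructed Gram matrix for p(x) = (x² + x + 1)² = x⁴ + 2x³ + 3x² + 2x + 1; in practice, Q is found by a semidefinite solver such as those listed above rather than by hand:

```python
import numpy as np

# p(x) = x**4 + 2x**3 + 3x**2 + 2x + 1 = (x**2 + x + 1)**2
# monomial vector m(x) = [1, x, x**2]; Gram matrix Q = all-ones (rank 1, PSD)
Q = np.ones((3, 3))

def p(x):
    return x**4 + 2*x**3 + 3*x**2 + 2*x + 1

def gram_form(x):
    m = np.array([1.0, x, x**2])
    return m @ Q @ m

# Q is positive semidefinite: its eigenvalues are (0, 0, 3)
assert np.all(np.linalg.eigvalsh(Q) > -1e-12)

# m^T Q m reproduces p at sample points, certifying p(x) >= 0 for all x
for x in np.linspace(-3.0, 3.0, 25):
    assert abs(p(x) - gram_form(x)) < 1e-9
print("SOS certificate verified")
```

The PSD check on Q together with the identity p = m T Q m is exactly the certificate that an SOS solver returns.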

Lyapunov's Direct Method
In this section, we present how the previously introduced methods can be used to test nonlinear systems with respect to different stability properties. In the following, autonomous time-invariant systems of the form ẋ = f (x) (23) are considered. Here, x(t) ∈ R n is the state of the system (23) and f : R n → R n is a (locally) Lipschitz vector field. For polynomial systems, the Lipschitz property follows from the mean value theorem of differential calculus. According to the Picard-Lindelöf theorem, the Lipschitz condition guarantees the (local) existence and uniqueness of the solution of the differential Equation (23) [4,55].
To formulate the stability conditions, functions of class K and class K ∞ are introduced.
For systems of the form (23), the stability of the equilibrium x = 0 can be analyzed using the direct method of Lyapunov [3,4].
Theorem 1 (Lyapunov's direct method). Let x = 0 be an equilibrium of (23) with a locally Lipschitz vector field f and U = {x ∈ R n , |x| < a}. If there exist a continuously differentiable function V and class K functions α̲, ᾱ defined on [0, a) such that α̲(|x|) ≤ V(x) ≤ ᾱ(|x|) and V̇(x) ≤ 0 (27) hold for all x ∈ U, then the equilibrium is (locally) stable in the sense of Lyapunov. If, in addition, there exists a function α of class K defined on [0, a) such that V̇(x) ≤ −α(|x|) is satisfied, then x = 0 is a (locally) asymptotically stable equilibrium. If α, α̲, ᾱ ∈ K ∞ also holds, then the equilibrium is globally asymptotically stable.
In the condition (27), the (total) time derivative V̇ of V along the solutions of the system (23) is given by the Lie derivative L f V of the scalar field V in the direction of the vector field f [4]. A function V satisfying the conditions of Theorem 1 is called a Lyapunov function. Given a polynomial system (23), the procedures described in Sections 2.2 and 2.3 can be used directly to verify the conditions of Theorem 1. For a system (2)-(6) polynomialized according to Section 2.1, some adjustments are necessary ([30], Proposition 4): Theorem 2 (Lyapunov conditions for a recasted system). Let a system (2)-(6) with the principal denominator N(z) be given. Moreover, let z 2,0 = T(0) hold. If there exist a polynomial function Ṽ(z), column vectors of polynomial functions µ 1 (z), µ 2 (z), and column vectors of SOS polynomials σ 1 (z), σ 2 (z) with suitable dimensions such that the conditions (28)-(30) hold, then x = 0 is a stable equilibrium of the original system (1).
The proof of this theorem is given in [30]. The conditions presented in Theorem 2 establish only stability, not asymptotic stability. This stems from the fact that the condition (30) requires only positive semidefiniteness. To test for asymptotic stability, a negative definite term −φ(z 1 , z 2 ), with φ(z 1 , T(z 1 )) > 0, can be added to the condition (30). This guarantees the corresponding negative definiteness of the time derivative.
To apply the theorem, the functions Ṽ, µ 1 , µ 2 , σ 1 , σ 2 and φ are chosen in the form of multivariate polynomials of suitable order. The numerical determination of the coefficients of these polynomials is then possible using the methods presented in Section 2.3. The quantifier elimination described in Section 2.2 makes it possible to check the conditions from Theorem 1 by extending the resulting prenex formula with the constraints (G 1 , G 2 ) resulting from the substitution.
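For a purely polynomial system, the check suggested by Theorem 1 can be sketched symbolically. The system and the Lyapunov candidate below are illustrative assumptions, not taken from the paper:

```python
import sympy as sp

# Assumed polynomial system: x1' = -x1**3 + x2, x2' = -x1 - x2**3,
# with the Lyapunov candidate V = x1**2 + x2**2.
x1, x2 = sp.symbols('x1 x2', real=True)
f = sp.Matrix([-x1**3 + x2, -x1 - x2**3])
V = x1**2 + x2**2

# time derivative along solutions: Vdot = (grad V) . f, cf. condition (27)
Vdot = sp.expand((sp.Matrix([V]).jacobian([x1, x2]) * f)[0])
print(Vdot)  # -2*x1**4 - 2*x2**4: negative definite, so x = 0 is
             # globally asymptotically stable (V is radially unbounded)
```

For polynomials of unknown coefficients, the same computation produces the expressions whose definiteness is then certified by SOS programming or QE.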

Input-to-State Stability
Input-to-state stability (ISS) [7,8] extends the property of global asymptotic stability to systems of the form ẋ = F(x, u) (31) with an equilibrium x = 0 for u = 0, i.e., F(0, 0) = 0. The input u(t) ∈ R can represent either a manipulated variable or a disturbance. Let the input-dependent vector field F be continuous and locally Lipschitz in the first argument. Input-to-state stability is a generalization of asymptotic stability in which the influence of the input u is additionally taken into account. In this paper, input-to-state stability is introduced directly via the existence of a corresponding ISS-Lyapunov function [7].

Definition 1 (ISS-Lyapunov function).
A continuously differentiable function V : R n → [0, ∞) is called an ISS-Lyapunov function for a system (31) if α̲(|x|) ≤ V(x) ≤ ᾱ(|x|) and |x| ≥ γ(|u|) =⇒ V̇(x, u) ≤ −α(|x|) are satisfied for all x ∈ R n and u ∈ R, with α̲, ᾱ ∈ K ∞ and α, γ ∈ K.
For u(t) ≡ 0, we obtain the asymptotic stability of the equilibrium at the origin. Alternatively, ISS-Lyapunov functions can also be defined by a so-called "dissipation" characterization, in which V̇(x, u) ≤ −α(|x|) + γ(|u|) must be fulfilled for all x ∈ R n and u ∈ R with α̲, ᾱ, α, γ ∈ K ∞ .
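The dissipation characterization can be made concrete on a one-dimensional example. The system ẋ = −x + u and the comparison functions below are assumptions chosen for illustration:

```python
import sympy as sp

# Assumed scalar system x' = -x + u with V = x**2/2.
# Dissipation form: Vdot <= -alpha(|x|) + gamma(|u|)
# with alpha(s) = gamma(s) = s**2/2, both of class K_infinity.
x, u = sp.symbols('x u', real=True)
V = x**2 / 2
Vdot = sp.diff(V, x) * (-x + u)          # = -x**2 + x*u

slack = sp.expand(-x**2/2 + u**2/2 - Vdot)
print(sp.factor(slack))                   # (x - u)**2/2 >= 0, so the bound holds
```

The nonnegative slack term is exactly the kind of expression an SOS program would be asked to certify.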

Input-to-State Stability of the Recasted System
First, we consider the ISS property for polynomial systems. Afterward, these results are extended to non-polynomial systems.
For polynomial input-dependent vector fields F of the form (31), the statements (34) and (35) from Lemma 1 can be translated into sufficient SOS conditions (36)-(38). An ISS analysis for such polynomial systems was performed by Ichihara [57]. Here, an SOS-compatible formulation for the comparison functions must be found, since every ISS condition is based on them. The following theorem [57] provides a possible SOS condition for K ∞ functions.

Theorem 3. A univariate real even polynomial without a constant term, with at least one coefficient c 2i ≠ 0, belongs to the class K ∞ if and only if the condition (40) holds for all s ∈ R.
It may seem unusual that the condition (40) should be satisfied for all s, as functions of norms are considered. This requirement follows from the SOS procedure: it is verified that (40) is a sum of squares, which must therefore also hold for negative s.
To apply these considerations to transformed or polynomialized systems, the resulting peculiarities have to be taken into account. A function α * (s) that is a K ∞ function in the transformed coordinates is, in general, not a K ∞ function in the original coordinates x. The first requirement to be ensured is that a K ∞ function passes through the coordinate origin, which can be achieved with (41). Moreover, the resulting function is not a function of |x| but of |z| = |(x, T(x))|. Thus, the function defined in (41) guarantees monotonicity and radial unboundedness in the transformed, but not in the original, coordinates. The terms depending on T(x) must therefore be bounded from above or below by terms depending on |x| to overcome this problem. In the condition (36), a lower bound on the function ᾱ must be determined. It can be shown that ᾱ(x, 0) is an adequate comparison function for the conditions (36) and (38) in the original coordinates if ᾱ(z 1 , z 2 ) is a comparison function in the transformed variables [19,54]. For the condition (37), however, an upper estimate is needed, which cannot be given in general but is straightforward to determine in most cases (cf. [19], Table 2). Furthermore, if the polynomialization process is applied to systems of the form F(x, u), one obtains, analogously to (2) and (3), a polynomial system description (42) and (43). With these considerations, the following theorem can be formulated [19,54].
Theorem 4 (ISS of the recasted system). Let the system (42) and (43) and functions T(z 1 ), G 1 (z), G 2 (z) and N(z) be given. If there exist a polynomial function Ṽ(z), column vectors of polynomial functions µ 1 (z), µ 2 (z), column vectors of SOS polynomials σ 1 (z), σ 2 (z) with suitable dimensions, and class K ∞ functions ᾱ * , α̲ * and α * such that the corresponding conditions hold, then the original system (31) is ISS, provided that a corresponding upper estimate for ᾱ exists.
An application of this theorem can be found in ( [19] Example 3).

Control Lyapunov Function
Consider the system (31) with an equilibrium at the origin. In the controller design, the question arises whether this equilibrium can be stabilized with a state feedback (48). This problem leads to the concept of the control Lyapunov function. In this generalization of the Lyapunov function, the focus is not on stability but on stabilizability ([10], Theorem 4.1).

Theorem 5 (Artstein's Theorem).
Let the function V : R n → [0, ∞) be continuously differentiable, positive definite, and radially unbounded. A system of the form (31) is stabilizable with a state feedback (48) if and only if

holds.
A function V satisfying the conditions of Theorem 5 is called a control Lyapunov function (CLF). For an input-affine system (50) with vector fields f , g : R n → R n , the function V has along the system dynamics the time derivative V̇(x, u) = L f V(x) + L g V(x) u.
Thus, the condition (49) can be expressed in the form (51). The condition (51) can also be specified via the expression (52): if no access to the system dynamics is possible via the control input because L g V(x) = 0, the drift dynamics must themselves be stable (L f V(x) < 0); see, e.g., [11,58]. Once a control Lyapunov function has been found for a given system (50), the system can be stabilized via a state feedback (48) of the form (53) with a = L f V(x) and b = L g V(x). To justify this, consider the time derivative of V along the solutions of the closed-loop system, which is negative definite. Thus, the closed-loop equilibrium x = 0 is globally asymptotically stable. The state feedback (53) is called Sontag's formula [9,58]. Similar formulas are also obtained via other approaches, such as inverse or modified optimal control [59][60][61], if the control Lyapunov function is known.
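Sontag's formula is easy to exercise numerically. The scalar plant below (ẋ = x³ + u with CLF V = x²/2, hence a = L f V = x⁴ and b = L g V = x) is an assumed example, not one from the paper:

```python
import math

# Sontag's formula for a scalar input: u = -(a + sqrt(a**2 + b**4))/b for b != 0
def sontag(a, b):
    if b == 0.0:
        return 0.0
    return -(a + math.sqrt(a*a + b**4)) / b

# assumed unstable plant x' = x**3 + u, CLF V = x**2/2
def simulate(x0, dt=1e-3, steps=4000):
    x = x0
    for _ in range(steps):
        a, b = x**4, x                  # a = LfV, b = LgV
        x += dt * (x**3 + sontag(a, b))
    return x

# closed loop reduces to x' = -x*sqrt(x**4 + 1): the state decays to 0
print(abs(simulate(0.8)), abs(simulate(-0.5)))
```

Along the closed loop, V̇ = −√(a² + b⁴) < 0 whenever x ≠ 0, which is exactly the negative definiteness argument made above.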
The question now arises whether a CLF V exists and, respectively, how such a function can be found. For this purpose, one sets up an approach V(x, q) for a positive definite function with a parameter vector q, e.g., as a quadratic form V(x, q) = x T P(q) x with P(q) = P T (q) ≻ 0, where the vector q contains the free entries of the positive definite matrix P. The positive definiteness of the function V is then ensured [41] via the Courant-Fischer theorem (minimum-maximum principle). As more general approaches, functions with even powers can be used [57].
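By the Courant-Fischer argument cited above, the positive definiteness test for a quadratic candidate reduces to an eigenvalue check on P(q). A minimal numeric sketch (the matrices are arbitrary examples):

```python
import numpy as np

# V(x, q) = x^T P(q) x is positive definite iff all eigenvalues of the
# symmetric matrix P(q) are positive (Courant-Fischer / min-max principle).
def is_positive_definite(P):
    P = np.asarray(P, dtype=float)
    return bool(np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0))

P_good = [[2.0, 0.5], [0.5, 1.0]]   # both eigenvalues positive
P_bad  = [[1.0, 2.0], [2.0, 1.0]]   # eigenvalues 3 and -1
print(is_positive_definite(P_good), is_positive_definite(P_bad))
```

In the QE setting, the same requirement appears as polynomial inequalities on the entries of q (leading principal minors positive).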
For a positive definite candidate function V(x, q) and a system of the form (31), the condition (49) can be expressed by the prenex formula ∃q ∀x ∃u : x ≠ 0 =⇒ L F V(x, u, q) < 0 (54), which can thus be addressed by quantifier elimination. Here, the statement (54) formulates a decision problem that verifies the suitability of the approach V(x, q) as a CLF.
For an input-affine system (50), the condition (52) leads to the prenex formula (55). If, after quantifier elimination, it is evident that the conditions (54) or (55) are satisfied, the entries of the parameter vector q can be chosen as free variables and the admissible parameter values can be determined step by step. This is performed by successively removing the corresponding existential quantifiers of the individual elements of the vector q and performing quantifier elimination. In each case, conditions on the now free variables arise. If a parameter configuration is chosen that satisfies these conditions, a CLF is obtained. Building on the CLF thus determined, a stabilizing feedback can then be designed using the law (53).
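The condition (52) can also be checked symbolically for a fixed structure before invoking QE. The two-dimensional system below and the candidate V = x1² + q x2² are assumptions for illustration:

```python
import sympy as sp

# Assumed input-affine system: x1' = -x1 + x2**2, x2' = u,
# CLF candidate V = x1**2 + q*x2**2 with q > 0.
x1, x2 = sp.symbols('x1 x2', real=True)
q = sp.symbols('q', positive=True)
f = sp.Matrix([-x1 + x2**2, 0])
g = sp.Matrix([0, 1])
V = x1**2 + q*x2**2
gradV = sp.Matrix([V]).jacobian([x1, x2])

LfV = (gradV * f)[0]
LgV = (gradV * g)[0]

# L_g V = 2*q*x2 vanishes only on the line x2 = 0; there
# L_f V = -2*x1**2 < 0 for x1 != 0, so V is a CLF for every q > 0.
on_kernel = sp.simplify(LfV.subs(x2, 0))
print(LgV, on_kernel)
```

QE automates precisely this reasoning when the set {x : L g V(x) = 0} has no such simple description.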
In principle, the concept of the control Lyapunov function can also be applied with SOS methods. For this purpose, SOS conditions for the construction of a CLF can be formulated [62,63] based on the Positivstellensatz ([64], Theorem 4.2.2). However, this approach only allows the verification of a concrete candidate V. If a CLF is to be constructed in this way, and thus the coefficients of V are to be treated as decision variables, bilinear constraints arise, which significantly complicate the computation [62,63,65].

ISS Control Lyapunov Function
The concept of the control Lyapunov function can also be applied to ISS with respect to a disturbance. For this purpose, consider a system which, in addition to the control input u, is also influenced by a disturbance w. In the input-affine case, the system can be described in the form (56) with vector fields f , g 1 , g 2 : R n → R n . Definition 1 can be generalized accordingly [7,66,67].
Analogously to the condition (52), the condition (57) can be represented via an equivalent formulation ([66], Section II).
Once a corresponding ISS-CLF has been found, the feedback (53) with a = L f V(x) + |L g 1 V(x)| γ −1 (|x|) and b = L g 2 V(x) generates a closed loop that is ISS with respect to the disturbance w.
To determine adequate feedback employing quantifier elimination, one uses a positive definite approach function V(x, q) and a suitable approach for the comparison function γ(|w|, c) with parameter vectors q and c. The suitability of the chosen approach functions can then be determined by applying quantifier elimination to the prenex formula ∃q, c ∀x ∀w ∃u : |x| ≥ γ(|w|, c) =⇒ L f V(x, q) + L g 1 V(x, q) w + L g 2 V(x, q) u < 0 or, respectively, to the corresponding reformulation ∃q, c ∀x : . . . . Further constraints on the parameters may have to be considered, for example, to ensure positive definiteness of V or monotonicity of γ. For example, q > 0 must hold for V(x) = qx 2 , x ∈ R, to be positive definite.

Furuta Pendulum
In this example, the Furuta pendulum [30,54] is considered. The basic structure of this pendulum is shown in Figure 2. We focus on the stability analysis of this system and assume that the arm of length l A rotates with a constant angular velocity ω A . Under these assumptions, we obtain a state-space model in which the angle x 1 = ϕ P and the angular velocity x 2 = ϕ̇ P = ω P of the pendulum are the state variables. For ω A ≤ √(g/l P ), the points (x 1 , x 2 ) = (kπ, 0), k ∈ Z, are the equilibria of the system. In the following, the equilibrium (0, 0) is to be tested for stability. For this, the system must be polynomialized both for the sum-of-squares decomposition and for quantifier elimination. For this purpose, the new variables z 1 = x 1 , z 2 = x 2 , z 3 = sin(x 1 ), z 4 = cos(x 1 ) are introduced, which leads to a polynomial state-space model with an algebraic constraint. To prove the stability of the system with Theorem 2, candidate functions for Ṽ, µ 1 , µ 2 and φ have to be chosen. For µ 1 and µ 2 , a polynomial consisting of all monomials up to order 2 is applied. As a first step, Ṽ(z) = q 1 z 2 1 + q 2 z 2 2 + q 3 z 2 3 + q 4 z 2 4 + q 5 , consisting of the squared variables and an absolute term q 5 , is chosen as the approach for the function Ṽ. To satisfy the condition (28), q 4 + q 5 = 0 must hold, which is considered as an equality constraint in the SOS program. To ensure positive definiteness of Ṽ, the function φ(z) = c 1 z 2 1 + c 2 z 2 2 + c 3 (1 − z 4 ) with c 1 , c 2 , c 3 ≥ 0.2 is also chosen. If one applies Theorem 2 using SOSTOOLS with these initial functions, the problem cannot be solved. If, on the other hand, one introduces an additional term q 6 z 4 into the approach Ṽ(z) = q 1 z 2 1 + q 2 z 2 2 + q 3 z 2 3 + q 4 z 2 4 + q 5 + q 6 z 4 , one obtains a solvable SOS program. In this case, the equation q 4 + q 5 + q 6 = 0 must also be taken into account.
In the original x-coordinates, the relation Ṽ(x = 0) = q 4 cos 2 (0) + q 5 + q 6 cos(0) results, and thus for (28) the condition q 4 + q 5 + q 6 = 0 must hold. With the parameters ω A = 1 s −1 , g = 9.81 m s −2 and l P = 0.1 m, this leads to the solution Ṽ(z) ≈ 0.78z 2 in original coordinates. The numerical values were rounded to two decimal places. It is noticeable that this function no longer contains the term x 2 1 , since the associated coefficient is almost zero. For the time derivative, we obtain V̇ = 0, which is consistent with the physical notion. Thus, the system is stable but not asymptotically stable.
With an additional friction term −dz 2 , asymptotic stability can be obtained through continuous energy extraction. This results in a modified state-space model. Using the same procedure as in the frictionless case and d = 0.1 s −1 , a Lyapunov function is calculated whose time derivative is V̇(x) = L f V(x) ≈ −0.16x 2 2 . Using LaSalle's invariance principle [4,68,69], one can conclude asymptotic stability.
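The energy-based analysis of both cases can be reproduced symbolically. The model below, ẋ 1 = x 2 , ẋ 2 = sin(x 1 )(ω A 2 cos(x 1 ) − g/l P ) − d x 2 , and the energy-like candidate are assumptions reconstructing the setup described here (d = 0 gives the frictionless case; positive definiteness near the origin additionally needs ω A 2 ≤ g/l P ):

```python
import sympy as sp

# Assumed Furuta-type model with constant arm velocity wA; d is viscous friction.
x1, x2 = sp.symbols('x1 x2', real=True)
wA, g, lP, d = sp.symbols('wA g lP d', positive=True)

# energy-like candidate: kinetic term plus effective potential
V = x2**2/2 + (g/lP)*(1 - sp.cos(x1)) - (wA**2/2)*sp.sin(x1)**2

def Vdot(friction):
    f1 = x2
    f2 = sp.sin(x1)*(wA**2*sp.cos(x1) - g/lP) - friction*x2
    return sp.simplify(sp.diff(V, x1)*f1 + sp.diff(V, x2)*f2)

print(Vdot(0))   # 0: stability, but no asymptotic stability
print(Vdot(d))   # -d*x2**2: with LaSalle, asymptotic stability follows
```

This mirrors the SOS result: the time derivative vanishes without friction and becomes −d x 2 2 with it.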
As an alternative to determining a Lyapunov function with SOS programming, QE can also be used. In this specific case, this can be performed using the prenex formula ∃q 1 , . . . , q 6 ∀z 1 , . . . , z 4 : (z 1 ≠ 0 ∨ z 2 ≠ 0) . . . . The application of the QE process confirms the validity of the decision problem formulated in this manner, as anticipated by the SOS analysis. Consequently, the Lyapunov approach Ṽ is deemed suitable. During the SOS considerations, it was observed that the term q 1 z 2 1 does not appear in the resulting Lyapunov function. Therefore, this term is not considered further, and we focus on the investigation of the approach Ṽ(z) = q 1 z 2 2 + q 2 z 2 3 + q 3 z 2 4 + q 4 z 4 + q 5 . In order to establish a concrete Lyapunov function, the existential quantifiers can be eliminated successively for each q i . Since q 3 + q 4 + q 5 = 0 must hold, q 3 = 2, q 4 = −3 and q 5 = 1 can be chosen as an example. With these assumptions, the result for the prenex formula ∃q 1 , q 2 ∀z 1 , . . . , z 4 : (64) is also "true", so the assumptions made are valid. If, in addition, the existential quantifiers at q 1 and q 2 are removed, these become free variables, and a quantifier-free formula is obtained for the frictionless system which, with q 1 = 5/327 and q 2 = 649/327, leads to a Lyapunov function Ṽ with time derivative V̇(x) = 0. If instead q 3 = 2, q 4 = −5 and q 5 = 3 are applied, (64) results in an equivalent quantifier-free formula from which, with q 1 = 25/981 and q 2 = 1937/981, we obtain another Lyapunov function Ṽ. This function also has the time derivative V̇(x) = 0, in accordance with the physical notion. With the two Lyapunov functions thus computed, the time derivatives V̇(x) = −(1/327) x 2 2 and V̇(x) = −(5/981) x 2 2 , respectively, are obtained for the system with friction. This example illustrates the systematic differences in the application of the two methods presented.
In the SOS method, the functions µ_i, σ_i, and φ introduce additional degrees of freedom, which have to be selected via candidate functions. From the assumptions made for the individual functions, an optimization problem is formulated and solved by a corresponding numerical algorithm. This entails a limited accuracy of the solution and requires a subsequent critical interpretation of the results. With QE, on the other hand, the constraints can be considered directly, and the additional functions µ_i, σ_i, and φ are omitted. Additionally, the generated results are mathematically exact. However, the algorithms used to solve QE problems are extremely computationally expensive, so they can only be reasonably applied to a limited class of problems.
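The numerical core of the SOS method can be sketched in a few lines: a polynomial p is a sum of squares iff p = zᵀQz for a positive semidefinite Gram matrix Q over a monomial vector z, so SOS verification reduces to a semidefiniteness check. A minimal sketch with a fixed, hand-chosen Gram matrix (a real SOS solver searches over all valid Q via a semidefinite program):

```python
import numpy as np

# p(x, y) = x^2 - 2*x*y + 2*y^2 with monomial basis z = (x, y)
# corresponds to the Gram matrix Q below, since p = z^T Q z.
Q = np.array([[1.0, -1.0],
              [-1.0, 2.0]])

# Q is positive semidefinite, hence p is a sum of squares;
# indeed p = (x - y)^2 + y^2.
eigvals = np.linalg.eigvalsh(Q)
print(bool(eigvals.min() >= 0))  # True
```

In SOS tools, Q is a decision variable of the semidefinite program rather than fixed by hand, which is exactly where the limited numerical accuracy mentioned above enters.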

Van de Vusse-Reaction
The van de Vusse reaction A → B → C, 2A → D with reactants A, B, C, D in an isothermal stirred tank reactor can be described by the differential equations given in [70,71]. Here, c_A and c_B denote the concentrations of reactants A and B in the reactor, c_Af denotes the concentration of reactant A in the feed, F denotes the feed rate, V denotes the reactor volume, and k₁, k₂, k₃ denote the reaction rate constants. The model normalized around the operating point studied in ([71], Section 5.2) is of the form (50). The state variables x₁ and x₂ describe deviations from the target concentrations. The system is affected by the input u (inflow rate). As a candidate CLF, we chose (66) with q > 0. Together with the system (65), one obtains the Lie derivatives. To evaluate (55), we obtain the quantifier-free expression (67). Here, the positive definiteness of (66) is achieved via the condition q > 0. The remaining terms correspond to the expression (55). To check whether the approach (66) is suitable as a CLF for the system (65), all variables (x₁, x₂, q) must be quantified. Once all variables are quantified, the QE process leads only to the statements "true" or "false", and a decision problem results. In the case considered here, this problem can be solved by the prenex formula (68): ∃q ∀x₁ ∀x₂ : condition (67), which can be transformed by QE to the statement "true". Thus, the suitability of the approach (66) as a CLF is proven.
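For orientation, the van de Vusse kinetics can be simulated directly. The sketch below uses the standard model structure; the values of c_Af, k₁, k₂, k₃, and the dilution rate u = F/V are illustrative assumptions, not the parameters of [70,71].

```python
# Van de Vusse kinetics (standard structure; parameter values are
# illustrative, not those used in the paper):
#   dc_A/dt = u*(c_Af - c_A) - k1*c_A - k3*c_A**2
#   dc_B/dt = -u*c_B + k1*c_A - k2*c_B,    u = F/V (dilution rate)
def van_de_vusse(c, u, c_Af=10.0, k1=50.0, k2=100.0, k3=10.0):
    cA, cB = c
    return (u * (c_Af - cA) - k1 * cA - k3 * cA**2,
            -u * cB + k1 * cA - k2 * cB)

# Forward-Euler simulation at a constant dilution rate
dt, c = 1e-5, (2.0, 1.5)
for _ in range(200_000):  # 2 s of simulated time
    dcA, dcB = van_de_vusse(c, u=25.0)
    c = (c[0] + dt * dcA, c[1] + dt * dcB)
print(c)  # converges to the steady state (2.5, 1.0) for u = 25
```

The steady state follows from u(c_Af − c_A) = k₁c_A + k₃c_A² and c_B = k₁c_A/(u + k₂); the normalized model (65) describes deviations from such an operating point.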
To determine a CLF, an admissible value for q must be computed. This is performed by removing the existential quantifier ∃q from Formula (68) before the QE process. The removal of the existential quantifier leads to the prenex formula (69): ∀x₁ ∀x₂ : condition (67).
Formula (69) can be transformed by a QE process into an equivalent quantifier-free expression (70) in the now free variable q. The cubic polynomial appearing in this expression has the real roots q₁ ≈ −30.959, q₂ ≈ −11.375, and q₃ ≈ 15.334. Thus, for 0 < q < q₃, condition (70) is satisfied. Hence, in this admissible parameter interval, (66) is a control Lyapunov function. The equilibrium x = 0 of the system (65) can therefore be stabilized, for example, with the feedback (53). Figure 3 shows the phase plane of the controlled system, where the parameter q = 10 was used for the control Lyapunov function.
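One widely used CLF-based stabilizer of the kind referenced by (53) is Sontag's universal formula, sketched below as a generic illustration (the text does not state whether (53) is exactly this formula). Given the Lie derivatives a = L_fV(x) and b = L_gV(x), it returns a control u with a + b·u < 0 whenever b ≠ 0.

```python
import math

def sontag_feedback(LfV, LgV, tol=1e-12):
    """Sontag's universal formula for a single-input system
    x' = f(x) + g(x)*u with CLF V: given a = L_f V(x), b = L_g V(x),
    return u such that dV/dt = a + b*u = -sqrt(a**2 + b**4) < 0
    whenever b != 0."""
    if abs(LgV) < tol:
        return 0.0
    return -(LfV + math.sqrt(LfV**2 + LgV**4)) / LgV

# Where V would otherwise grow (a = 3 > 0), the feedback counteracts it:
a, b = 3.0, 1.0
u = sontag_feedback(a, b)
print(a + b * u < 0)  # True: the closed-loop derivative is negative
```

The CLF property guarantees that b = 0 implies a < 0 away from the equilibrium, so the branch returning u = 0 is harmless there.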

Discussion
This paper addresses systematic and constructive Lyapunov approaches to stability analysis and control design for both polynomial and non-polynomial dynamical systems. The fundamental methodologies, namely sum-of-squares decomposition and quantifier elimination, are briefly introduced, and their application to the corresponding control engineering problems is demonstrated. The paper explains how the limitation to polynomial systems can be overcome through a polynomializing transformation, leading to distinct approaches for the two methods. In the case of sum-of-squares decomposition, the constraints arising from the transformation are accounted for by a multiplier approach within the optimization problem, which introduces additional degrees of freedom. Although SOS programs can in principle be solved in polynomial time, these degrees of freedom result in extremely large optimization problems. Thus, the application of the methodology is mostly limited to systems of dimension 6 to 8.
Quantifier elimination, on the other hand, incorporates the arising constraints directly through Boolean combination. However, the algorithmic implementations of quantifier elimination are computationally intensive; in the worst case, the computational effort grows doubly exponentially with the number of variables. It has been shown that, thanks to the development of new algorithms and software tools in recent years, a wide spectrum of problems can nevertheless be treated. In particular, very good results could be obtained by combining different tools. Nevertheless, the inherent number of variables and the system dimensions can present insurmountable hurdles. According to our assessment, the application of quantifier elimination is limited to systems of dimension 2 to 6. In return, quantifier elimination allows the treatment of more general systems, e.g., those with non-smooth nonlinearities such as the absolute value or the sign function.
The practical application of the presented approaches is exemplified through the Furuta pendulum and the van de Vusse reaction. Furthermore, it has been previously demonstrated that SOS methods can also be employed to investigate incremental stability properties [19,54]. The acquired knowledge will be expanded to encompass other stability properties in future research, such as output-state stability [72], input-output-state stability [73], and integral input-state stability [74]. Additionally, the focus extends to design approaches, including forthcoming work on backstepping and sliding-mode methods. The above limitations concerning the dimensions of the systems can potentially be relaxed by combining our approaches with successive design techniques such as backstepping [11].

Conclusions
Lyapunov's second method, published more than 100 years ago, represented a pioneering approach to the analysis of nonlinear systems. Starting from the stability analysis of an equilibrium point, this approach has been extended in a variety of ways to control engineering problems.
The main disadvantage of Lyapunov-type theorems is that they are not constructive: the practical use of Lyapunov methods usually requires a considerable understanding of the system under consideration. Numerous approaches have been developed in recent decades to alleviate this difficulty. In this article, two further approaches that enable the practical use of Lyapunov methods have been discussed.