Torsion Discriminance for Stability of Linear Time-Invariant Systems

This paper proposes a new approach to describe the stability of linear time-invariant systems via the torsion $\tau(t)$ of the state trajectory. For a system $\dot{r}(t)=Ar(t)$ where $A$ is invertible, we show that (1) if there exists a measurable set $E_1$ with positive Lebesgue measure, such that $r(0)\in E_1$ implies that $\lim\limits_{t\to+\infty}\tau(t)\neq0$ or $\lim\limits_{t\to+\infty}\tau(t)$ does not exist, then the zero solution of the system is stable; (2) if there exists a measurable set $E_2$ with positive Lebesgue measure, such that $r(0)\in E_2$ implies that $\lim\limits_{t\to+\infty}\tau(t)=+\infty$, then the zero solution of the system is asymptotically stable. Furthermore, we establish a relationship between the $i$th curvature $(i=1,2,\cdots)$ of the trajectory and the stability of the zero solution when $A$ is similar to a real diagonal matrix.


Introduction
It is well known that Lyapunov [1] laid the foundation of stability theory. Linear systems are the most basic and most widely used objects of study, and their theory has been developed over a long period. However, the traditional methods rely heavily on linear algebra, and few results have been obtained from a geometric point of view.
Curvature and torsion are important concepts in differential geometry. In [2], the authors calculated the curvature and torsion of the state trajectories r(t) of two- and three-dimensional linear time-invariant systems ṙ(t) = Ar(t) and related them to the stability of the system. Furthermore, in [3] the authors used the definition of the higher curvatures of curves in R^n given in [4] to obtain the relationship between the first curvature of the state trajectory and the stability of the n-dimensional linear system.
In this paper, we will describe the stability of the zero solutions of linear time-invariant systems in arbitrary dimension by using the torsion, namely, the second curvature.
Our main results are as follows.
Theorem 1. Suppose that ṙ(t) = Ar(t) is a linear time-invariant system, where A is similar to an n × n real diagonal matrix, r(t) ∈ R^n, and ṙ(t) is the derivative of r(t). Denote by κ_i(t) (i = 1, 2, · · · ) the ith curvature of the trajectory of a solution r(t). We have (1) if there exists a measurable set E ⊆ R^n whose Lebesgue measure is greater than 0, such that r(0) ∈ E implies that lim_{t→+∞} κ_i(t) ≠ 0 or lim_{t→+∞} κ_i(t) does not exist, then the zero solution of the system is stable; (2) if A is invertible, then under the assumptions of (1), the zero solution of the system is asymptotically stable.
Theorem 2. Suppose that ṙ(t) = Ar(t) is a linear time-invariant system, where A is an n × n invertible real matrix, and r(t) ∈ R^n. Denote by τ(t) the torsion of the trajectory of a solution r(t). We have (1) if there exists a measurable set E_1 ⊆ R^n whose Lebesgue measure is greater than 0, such that r(0) ∈ E_1 implies that lim_{t→+∞} τ(t) ≠ 0 or lim_{t→+∞} τ(t) does not exist, then the zero solution of the system is stable; (2) if there exists a measurable set E_2 ⊆ R^n whose Lebesgue measure is greater than 0, such that r(0) ∈ E_2 implies that lim_{t→+∞} τ(t) = +∞, then the zero solution of the system is asymptotically stable.
The paper is organized as follows. In Section 2, we review some basic concepts and propositions. In Section 3, we study the relationship between the ith curvature (i = 1, 2, · · · ) of the trajectory and the stability of the zero solution of the system when the system matrix is similar to a real diagonal matrix, and we prove Theorem 1. In Section 4, we establish a relationship between the torsion of the trajectory and the stability of the zero solution of the system, and complete the proof of Theorem 2. Two examples are given in Section 5. Finally, Section 6 concludes the paper.

Preliminaries
Throughout this paper, all vectors will be written as column vectors, and ∥x∥ will denote the Euclidean norm of x = (x_1, x_2, · · · , x_n)^T ∈ R^n, namely, ∥x∥ = √(∑_{i=1}^n x_i^2). The vector r^{(i)}(t) denotes the ith derivative of the vector r(t). We denote by det A the determinant of a matrix A. The eigenvalues of a matrix A are denoted by λ_i(A) (i = 1, 2, · · · , n), and the set of eigenvalues of A is denoted by σ(A). The degree of a polynomial f(t) is denoted by deg(f(t)).

Stability of Linear Time-Invariant Systems
Definition 1 ([5]). The system of ordinary differential equations ṙ(t) = Ar(t) (1) is called a linear time-invariant system, where A is an n × n real constant matrix, r(t) ∈ R^n, and ṙ(t) is the derivative of r(t).

Proposition 1 ([5]). The initial value problem ṙ(t) = Ar(t), r(0) = r_0 has a unique solution given by r(t) = e^{tA} r_0, where e^{tA} = ∑_{k=0}^∞ t^k A^k / k!. The curve r(t) is called the trajectory of the system (2) with the initial value r_0 ∈ R^n.
Definition 2 ([6,7]). The solution r(t) ≡ 0 of the differential equations (1) is called the zero solution of the linear time-invariant system. If for every constant ε > 0, there exists a δ = δ(ε) > 0, such that ∥r(0)∥ < δ implies that ∥r(t)∥ < ε for all t ∈ [0, +∞), where r(t) is a solution of (1), then we say that the zero solution of system (1) is stable. If the zero solution is not stable, then we say that it is unstable.
Suppose that the zero solution of system (1) is stable, and there exists a δ̃ (0 < δ̃ ≤ δ), such that ∥r(0)∥ < δ̃ implies that lim_{t→+∞} r(t) = 0; then we say that the zero solution of system (1) is asymptotically stable.
Proposition 2 ([6]). The zero solution of system (1) is stable if and only if all eigenvalues of the matrix A have nonpositive real parts and those eigenvalues with zero real parts are simple roots of the minimal polynomial of A.
The zero solution of system (1) is asymptotically stable if and only if all eigenvalues of the matrix A have negative real parts, namely, Re{λ_i(A)} < 0 (i = 1, 2, · · · , n).
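Proposition 2 reduces stability testing to inspecting eigenvalue real parts. As a small illustration (ours, not part of the paper; the function names are ours), the following Python sketch checks asymptotic stability of a 2 × 2 system by solving the characteristic polynomial directly.

```python
import cmath

def eigenvalues_2x2(A):
    """Roots of the characteristic polynomial
    lambda^2 - tr(A)*lambda + det(A) = 0 of a 2x2 real matrix."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def is_asymptotically_stable(A):
    """Proposition 2: asymptotically stable iff every Re(lambda) < 0."""
    return all(lam.real < 0 for lam in eigenvalues_2x2(A))

print(is_asymptotically_stable([[-1.0, 3.0], [0.0, -2.0]]))  # True (eigenvalues -1, -2)
print(is_asymptotically_stable([[0.0, 1.0], [-1.0, 0.0]]))   # False (eigenvalues +-i)
```

For merely stable (non-asymptotic) cases one would additionally have to check that the purely imaginary eigenvalues are simple roots of the minimal polynomial, which this sketch does not attempt.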

Proposition 3 ([6]). Suppose that A and B are two n × n real matrices, and A is similar to B, namely, there exists an n × n real invertible matrix P such that A = P^{−1}BP. For system (1), let v(t) = Pr(t). Then the system after the transformation becomes v̇(t) = Bv(t). (4)
System (4) is said to be equivalent to system (1), and v(t) = Pr(t) is called an equivalence transformation.
Definition 3. For a curve r(t) in R^3 such that ṙ(t) and r̈(t) are linearly independent, the functions κ(t) = ∥ṙ(t) × r̈(t)∥ / ∥ṙ(t)∥^3 and τ(t) = det(ṙ(t), r̈(t), r^{(3)}(t)) / ∥ṙ(t) × r̈(t)∥^2 are called the curvature and torsion of the curve r(t), respectively.
Gluck [4] gave a definition of higher curvatures of curves in R n , which is a generalization of curvature and torsion. Here we omit the definition of higher curvatures and review their calculation formulas directly.
In this paper, V_i(t) denotes the i-dimensional volume of the i-dimensional parallelotope with the vectors ṙ(t), r̈(t), · · · , r^{(i)}(t) as edges, and we adopt the convention that V_0(t) = 1.
We can express V_i(t) in terms of the derivatives of r(t) with respect to t. In fact, we have the following result.
By Proposition 5 and Proposition 6, we obtain the expression of each curvature of a curve r(t) in R^n in terms of the coordinates of the derivatives of r(t). In particular, if ṙ(t) and r̈(t) are linearly independent, then the torsion of r(t) satisfies τ(t) = V_3(t) / V_2^2(t). (5)
On the other hand, if V_2(t) ≡ 0, namely ṙ(t) and r̈(t) are linearly dependent for all t, then we adopt the convention that τ(t) ≡ 0. The function V_2(t) will be examined in detail in Section 4.2.
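Since V_i^2(t) equals the Gram determinant of the first i derivatives, the curvature κ(t) = V_2(t)/V_1^3(t) and the torsion τ(t) = V_3(t)/V_2^2(t) can be evaluated numerically. The following Python sketch (our illustration; the helper names are ours) checks these volume formulas on the circular helix r(t) = (cos t, sin t, t), whose curvature and torsion both equal 1/2.

```python
import math

def gram_det(vectors):
    """Determinant of the Gram matrix G_ij = <v_i, v_j>; V_k^2 equals this."""
    k = len(vectors)
    G = [[sum(x * y for x, y in zip(u, v)) for v in vectors] for u in vectors]
    det = 1.0
    for i in range(k):                      # Gaussian elimination with pivoting
        p = max(range(i, k), key=lambda row: abs(G[row][i]))
        if abs(G[p][i]) < 1e-15:
            return 0.0
        if p != i:
            G[i], G[p] = G[p], G[i]
            det = -det
        det *= G[i][i]
        for row in range(i + 1, k):
            f = G[row][i] / G[i][i]
            for col in range(i, k):
                G[row][col] -= f * G[i][col]
    return det

# Circular helix r(t) = (cos t, sin t, t): kappa = tau = 1/2 for every t.
t = 0.7
r1 = (-math.sin(t), math.cos(t), 1.0)    # r'(t)
r2 = (-math.cos(t), -math.sin(t), 0.0)   # r''(t)
r3 = (math.sin(t), -math.cos(t), 0.0)    # r'''(t)

V1 = math.sqrt(gram_det([r1]))
V2 = math.sqrt(gram_det([r1, r2]))
V3 = math.sqrt(gram_det([r1, r2, r3]))

kappa = V2 / V1 ** 3    # first curvature
tau = V3 / V2 ** 2      # torsion (second curvature), unsigned
print(round(kappa, 6), round(tau, 6))  # 0.5 0.5
```

Note that this Gram-determinant torsion is unsigned, which suffices for the limiting arguments in this paper.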

Relationship Between the Curvatures of Two Equivalent Systems
Wang et al. [3] established a relationship between the curvatures of the trajectories of two equivalent systems. In fact, let a curve r(t) be the trajectory of system (2), and suppose that for each t, the vectors ṙ(t), r̈(t), · · · , r^{(m)}(t) are linearly independent. Then we can define the curvatures κ_{r,1}(t), κ_{r,2}(t), · · · , κ_{r,m−1}(t) of the curve r(t), and we have the following result.

Real Jordan Canonical Form
Proposition 8 ([7,9]). Let A be an n × n real matrix. Then A is similar to a block diagonal real matrix (6), where (1) for k ∈ {1, 2, · · · , p}, the numbers λ_k = a_k + √−1 b_k and λ̄_k = a_k − √−1 b_k (a_k, b_k ∈ R, and b_k > 0) are complex eigenvalues of A, and (2) for j ∈ {p + 1, p + 2, · · · , r}, the number λ_j is a real eigenvalue of A. The matrix (6) is called the real Jordan canonical form of A.

Real Diagonal Matrix
In this section, we study the case that the system matrix is similar to a real diagonal matrix, and prove Theorem 1. From Proposition 4 and Proposition 7, we only need to focus on the case that A is a real diagonal matrix, and prove Proposition 9.
Proposition 9. Suppose that ṙ(t) = Ar(t) is a linear time-invariant system, where A is an n × n real diagonal matrix, and r(t) ∈ R^n. Denote by κ_i(t) (i = 1, 2, · · · ) the ith curvature of the trajectory of a solution r(t). Then for any given initial value r(0) ∈ S, we have (1) if lim_{t→+∞} κ_i(t) ≠ 0 or lim_{t→+∞} κ_i(t) does not exist, then the zero solution of the system is stable; (2) if A is invertible, and lim_{t→+∞} κ_i(t) ≠ 0 or lim_{t→+∞} κ_i(t) does not exist, then the zero solution of the system is asymptotically stable.
Wang et al. [3] proved the case i = 1. Here we give a complete proof of the proposition.
Proof. (1) Suppose that A is an n × n real diagonal matrix, namely, A = diag(λ_1, λ_2, · · · , λ_n). Hence we have r(t) = e^{tA} r(0) = (e^{λ_1 t} r_1(0), e^{λ_2 t} r_2(0), · · · , e^{λ_n t} r_n(0))^T, and the coordinates of the derivatives of r(t) are r^{(p)}(t) = (λ_1^p e^{λ_1 t} r_1(0), λ_2^p e^{λ_2 t} r_2(0), · · · , λ_n^p e^{λ_n t} r_n(0))^T, where p = 1, 2, · · · . Then by Proposition 6, we obtain the expression of V_k^2(t). We see that if the eigenvalues λ_{i_1}, λ_{i_2}, · · · , λ_{i_k} of A are non-zero and distinct, then a term of the form C e^{2(∑_{p=1}^k λ_{i_p}) t} will appear in the expression of V_k^2(t), where C is a constant depending on the eigenvalues and the initial value, and C > 0.
By Proposition 5, the square of the ith curvature κ_i^2(t) is expressed in terms of the volumes V_j(t). Now, we consider the limit of κ_i(t) as t → +∞ by comparing the exponents of e in the numerator and denominator of κ_i^2(t). Let ∆_1 and ∆_2 denote the maximum values of α in the terms of the form e^{αt} in the numerator and denominator of κ_i^2(t), respectively. Then by (7) and (8), comparing ∆_1 with ∆_2, it follows that lim_{t→+∞} κ_i(t) equals 0, +∞, or a positive constant C depending on the initial value r(0) = r_0 (with r_j(0) ≠ 0 for j = 1, 2, · · · , n).
Here we notice that for any given real diagonal matrix A, if for one initial value r(0) ∈ R^n satisfying ∏_{j=1}^n r_j(0) ≠ 0 we have lim_{t→+∞} κ_i(t) = 0 (or +∞, or a constant C > 0, respectively), then for an arbitrary r(0) ∈ R^n satisfying ∏_{j=1}^n r_j(0) ≠ 0, we still have lim_{t→+∞} κ_i(t) = 0 (or +∞, or a constant C > 0, respectively). Noting that A is a real diagonal matrix, by Proposition 2, the zero solution of the system (1) is stable if and only if λ_i(A) ≤ 0 (i = 1, 2, · · · , n). If the zero solution of the system is unstable, then A has a positive eigenvalue, and the limit lim_{t→+∞} κ_i(t) = 0 exists. By contraposition, if lim_{t→+∞} κ_i(t) ≠ 0 or lim_{t→+∞} κ_i(t) does not exist, then the zero solution of the system is stable.
(2) Suppose that A is invertible, and lim_{t→+∞} κ_i(t) ≠ 0 or lim_{t→+∞} κ_i(t) does not exist. Then 0 is not an eigenvalue of A, and by (1) the zero solution of the system is stable; hence all eigenvalues of the diagonal matrix A are negative. By Proposition 2, the zero solution of the system is asymptotically stable. Now, we proceed to the proof of Theorem 1.
Proof of Theorem 1. Suppose that the linear time-invariant system ṙ(t) = Ar(t) is equivalent to a system v̇(t) = Bv(t), where B is a real diagonal matrix, A = P^{−1}BP, and v(t) = Pr(t) is the equivalence transformation. Then by Proposition 7, we have (10). We define S̃ by (11). Note that we can regard any given n × n invertible matrix P as an invertible linear transformation P : R^n → R^n, and the Lebesgue measure of R^n \ S̃ is 0. If there exists a measurable set E ⊆ R^n whose Lebesgue measure is greater than 0, such that r(0) ∈ E implies that lim_{t→+∞} κ_{r,i}(t) ≠ 0 or lim_{t→+∞} κ_{r,i}(t) does not exist, then by (10) and (11), there exists an initial value r(0) ∈ E ∩ S̃ for which the trajectory v(t) = Pr(t) satisfies the assumptions of Proposition 9; thus by Proposition 9, the zero solution of the system v̇(t) = Bv(t) is stable, and then by Proposition 4, the zero solution of the system ṙ(t) = Ar(t) is also stable, which proves Theorem 1 (1).
Since A is similar to B, the matrix A is invertible if and only if B is invertible. The method of the proof of (1) works for (2), which completes the proof of Theorem 1.
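The limiting behavior behind Proposition 9 can be observed numerically. The sketch below (our illustration; the helper name is ours) evaluates the first curvature κ_1(t) = V_2(t)/V_1^3(t) along a trajectory of an unstable diagonal system with eigenvalues 1, 2, 3: as the trajectory escapes to infinity and straightens out, κ_1(t) decays to 0, consistent with the contrapositive of Theorem 1 (1).

```python
import math

def curvature1_diag(lams, r0, t):
    """kappa_1(t) = V_2(t) / V_1(t)^3 for the trajectory
    r(t) = (e^{lam_j t} r_j(0)) of a diagonal system."""
    d1 = [l * math.exp(l * t) * x for l, x in zip(lams, r0)]      # r'(t)
    d2 = [l * l * math.exp(l * t) * x for l, x in zip(lams, r0)]  # r''(t)
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    V1sq = dot(d1, d1)
    V2sq = V1sq * dot(d2, d2) - dot(d1, d2) ** 2  # 2x2 Gram determinant
    return math.sqrt(max(V2sq, 0.0)) / V1sq ** 1.5

lams, r0 = (1.0, 2.0, 3.0), (1.0, 1.0, 1.0)  # unstable: all eigenvalues > 0
for t in (0.0, 2.0, 5.0, 8.0):
    print(t, curvature1_diag(lams, r0, t))   # kappa_1(t) decreases toward 0
```

Asymptotically κ_1^2(t) behaves like a constant times e^{−8t} here, since the dominant exponents in V_2^2(t) and V_1^6(t) are 10 and 18, respectively.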

Relationship between Torsion and Stability
In this section, we give the proof of Theorem 2, which establishes a relationship between the torsion of the trajectory and the stability of the zero solution of the system. From Propositions 4, 7, and 8, we only need to focus on the case that A is an invertible matrix in real Jordan canonical form (6), and prove the following result.
Proposition 10. Suppose that ṙ(t) = Ar(t) is a linear time-invariant system, where A is an n × n invertible matrix in real Jordan canonical form (6), and r(t) ∈ R^n. Denote by τ(t) the torsion of the trajectory of a solution r(t). Then for any given r(0) ∈ S, we have (1) if lim_{t→+∞} τ(t) ≠ 0 or lim_{t→+∞} τ(t) does not exist, then the zero solution of the system is stable; (2) if lim_{t→+∞} τ(t) = +∞, then the zero solution of the system is asymptotically stable.
In order to study the matrices in real Jordan canonical form (6), we first consider the blocks of the forms J_p(λ) and C_m(a, b), where λ, a, b ∈ R, b > 0, and Λ = ( a  b ; −b  a ). Part of this subsection goes back to [3].
(2) For a C_m(a, b) block, a direct calculation gives the expression (19) of e^{t C_m(a,b)}, where R = ( cos bt  sin bt ; − sin bt  cos bt ).
Substituting (19) into r(t) = e^{t C_m(a,b)} r(0), we obtain the expressions (20) and (21) of the coordinates of r(t). By (21), each coordinate can be written as e^{at} times a linear combination of terms t^β B_φ(t), where each B_φ(t) is a bounded function.
Substituting (12) and (18) into r^{(s)}(t) = C_m^s(a, b) r(t) for s = 1, 2, 3, combined with (20), we see that the coordinates of the derivatives of r(t) are
ṙ_{i,1}(t) = a r_{i,1}(t) + b r_{i,2}(t) + r_{i+1,1}(t) = e^{at} {a T_{m;i,1}(t) + b T_{m;i,2}(t) + T_{m;i+1,1}(t)},
ṙ_{i,2}(t) = −b r_{i,1}(t) + a r_{i,2}(t) + r_{i+1,2}(t) = e^{at} {−b T_{m;i,1}(t) + a T_{m;i,2}(t) + T_{m;i+1,2}(t)},
where we adopt the convention that if i > m, then r_{i,j}(t) = 0 (j = 1, 2). It should be noted that in the following subsections we will consider the case where A has more than one block of the form J_p(λ) or C_m(a, b). Hence, when P_{p;k}(t), T_{m;i,1}(t), and T_{m;i,2}(t) appear in the following, the r_{k+l}(0) in (16) should be understood as the coordinate of r(t) corresponding to the (k + l)th row of the diagonal block corresponding to P_{p;k}(t), and the r_{2i+2k−1}(0) and r_{2i+2k}(0) in (21) should be understood as the coordinates of r(t) corresponding to the (2i + 2k − 1)th and (2i + 2k)th rows of the diagonal block corresponding to T_{m;i,1}(t) and T_{m;i,2}(t), respectively.
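The polynomial-times-exponential structure of these coordinates is easy to verify numerically. The sketch below (our illustration, with our own names) writes out the closed-form solution for a single J_2(λ) block and confirms by a finite difference that it satisfies ṙ(t) = J_2(λ) r(t).

```python
import math

# For a single Jordan block J_2(lam), the solution has the
# polynomial-times-exponential coordinates discussed above:
#   r_1(t) = e^{lam t} (r_1(0) + t r_2(0)),   r_2(t) = e^{lam t} r_2(0).
def r(t, lam, r0):
    return [math.exp(lam * t) * (r0[0] + t * r0[1]),
            math.exp(lam * t) * r0[1]]

def rdot_exact(t, lam, r0):
    """The right-hand side J_2(lam) r(t)."""
    x = r(t, lam, r0)
    return [lam * x[0] + x[1], lam * x[1]]

lam, r0, t, h = -1.0, [1.0, 2.0], 0.8, 1e-6
fd = [(a - b) / (2 * h) for a, b in zip(r(t + h, lam, r0), r(t - h, lam, r0))]
err = max(abs(u - v) for u, v in zip(fd, rdot_exact(t, lam, r0)))
print(err < 1e-8)  # True: the closed form satisfies r'(t) = J_2(lam) r(t)
```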

Function V 2 (t)
By Proposition 6, we obtain the expression (24) of V_2^2(t). Considering the form of the expression of the torsion τ(t), it is necessary to make a detailed analysis of the function V_2(t).

Lemma 1. Suppose that ṙ(t) = Ar(t) is a linear time-invariant system, where A is an n × n matrix in real Jordan canonical form, and r(t) ∈ R^n. The function V_2(t) is given by (24). Then for any given r(0) ∈ S, we have (1) V_2(t) ≡ 0 only in the trivial cases listed in (25), where z ∈ {0, 1, · · · , n}; (2) if V_2(t) ≢ 0, then there exists a T > 0, such that V_2(t) > 0 for all t > T.

Proof. Suppose A is an n × n matrix in real Jordan canonical form.
(a) If A has a diagonal block C_m(a, b) (without loss of generality, we assume that this C_m(a, b) block is the first diagonal block of A), then by (20), (22), (23), and the analysis of Section 4.4 of [3], we obtain the expression (26) of V_2^2(t), where the leading constant is positive, and the B_φ(t) (φ = 0, 1, · · · , 4m − 5) are bounded functions. It follows that there exists a T > 0, such that V_2^2(t) > 0 for all t > T.
From Lemma 1, we know that except for the two trivial cases in (25), we have V_2(t) > 0 when t is sufficiently large; that is to say, there exists a T > 0 such that the expression (5) of the torsion τ(t) is valid for all t > T, which avoids a great deal of potential trouble when we consider the limit of τ(t) as t → +∞ in the proof of Theorem 2.

Function V 3 (t)
The function V_3(t) is given by Proposition 6. In fact, V_3^2(t) can be expanded as a sum of squared 3 × 3 determinants, as in (29). By (17) and (23), we see that all coordinates of r^{(s)}(t) (s = 1, 2, 3) can be expressed in the form (28), where r^{(s)}_{i;k}(t) denotes the coordinate of r^{(s)}(t) corresponding to the kth row of the ith diagonal block of A. Hence V_3^2(t) is a linear combination of terms of the form e^{αt} G(t), where G(t) is a linear combination of terms of the form t^β B_γ(t), with each B_γ(t) a bounded function.
In the remainder of this paper, set M = max_{λ∈σ(A)} Re(λ). Then by (28) and (29), we obtain ∆_1 ≤ 6M (30), where ∆_1 denotes the maximum value of α in the terms of the form e^{αt} t^β B_γ(t) in V_3^2(t).

Proof of Theorem 2 (1)
In order to prove Theorem 2 (1), we only need to prove Proposition 10 (1). In this subsection, we will discuss the two cases in which the zero solution of the system is unstable, and show that lim_{t→+∞} τ(t) = 0 in both. In fact, we will prove Lemma 2 and Lemma 3.
Lemma 2. Under the assumptions of Proposition 10, if M > 0, then lim_{t→+∞} τ(t) = 0.
Proof. By (5), τ^2(t) = V_3^2(t) / V_2^4(t), where the functions V_3^2(t) and V_2^4(t) are both linear combinations of terms of the form e^{αt} t^β B_γ(t), where each B_γ(t) is a bounded function. We will prove lim_{t→+∞} τ(t) = 0 in the following cases. For simplicity, let t > 0.
(a) If A has a diagonal block C_m(M, b), then by (26), (29), and (30), we obtain the expressions of V_2^4(t) and V_3^2(t), where the constant C > 0, all B_φ(t) and B_ψ(t) are bounded functions, the function F(t) is a linear combination of terms of the form t^β B_γ(t), and R(t) is a linear combination of terms of the form e^{αt} t^β B_γ(t), where α < 6M, and each B_γ(t) is a bounded function. Hence we obtain lim_{t→+∞} τ(t) = 0.
(b) If A has a diagonal block J_p(M) (p ≥ 2), then we obtain the analogous expressions, where f(t) is a polynomial satisfying f(t) > 0 and deg(f(t)) = 4(p − 2), the function F(t) is a linear combination of terms of the form t^β B_γ(t), and R(t) is a linear combination of terms of the form e^{αt} t^β B_γ(t), where α < 6M, and each B_γ(t) is a bounded function. Hence we obtain lim_{t→+∞} τ(t) = 0.
(c) If the only diagonal blocks of A whose eigenvalues have real part M are J_1(M) blocks, then we should consider the eigenvalues whose real parts are less than M. In fact, suppose two J_1(M) diagonal blocks lie in the ith and jth rows of A, respectively. Then the corresponding 2 × 2 determinant vanishes, which means this term makes no contribution to the value of V_2^2(t). In addition, note that J_1(0) diagonal blocks in A do not affect the value of τ(t). We define N = max{Re(λ) | λ ∈ σ̃(A)\{M}}, where σ̃(A) denotes the set of eigenvalues of A excluding the zero eigenvalues in J_1(0) blocks.
(c1) Suppose that A has a diagonal block C_m(N, b). Let r_M(t) denote the coordinate of r(t) corresponding to the row of a diagonal block J_1(M) of A, and let r_{N,1}(t), r_{N,2}(t) denote the coordinates of r(t) corresponding to the first and second rows of the diagonal block C_m(N, b) of A, respectively. Then by (22) and (23), we obtain the expression (32), where the leading constant is positive and the functions B_ψ(t) are bounded.
(c2) Suppose that A has a diagonal block J_p(N). Let r_M(t) denote the coordinate of r(t) corresponding to the row of a diagonal block J_1(M) of A, and r_N(t) the coordinate of r(t) corresponding to the first row of the diagonal block J_p(N) of A. Then by (16) and (17), we have ṙ_N(t) = e^{Nt} (N P_{p;1}(t) + P_{p;2}(t)) and r̈_N(t) = e^{Nt} (N^2 P_{p;1}(t) + 2N P_{p;2}(t) + P_{p;3}(t)), and we obtain the expression (33), where the constants C, Ĉ > 0, and all B_φ(t) and B_ψ(t) are bounded functions.
By (c1) and (c2), we can give the expression of V_2^2(t) in case (c). In fact, suppose that C_{m_1}(N, b_1), · · · , C_{m_k}(N, b_k) and J_{p_1}(N), · · · , J_{p_l}(N) (m_1 ≥ m_2 ≥ · · · ≥ m_k, and p_1 ≥ p_2 ≥ · · · ≥ p_l) are all the diagonal blocks whose eigenvalues satisfy Re(λ) = N. Then by (24), (32), and (33), we obtain (34), where the constant C > 0, and each B_φ(t) is a bounded function.
In what follows, ∆_1 and ∆_2 denote the maximum values of α in the terms of the form e^{αt} t^β B_γ(t) in V_3^2(t) and V_2^4(t), respectively. Then by (34), we obtain the value of ∆_2. In the determinant of (29), we can see that at most one row corresponds to a diagonal block with eigenvalue M, and the real parts of the eigenvalues of the diagonal blocks corresponding to the other two rows are not greater than N, otherwise the determinant vanishes in V_3^2(t). Hence we obtain the expression (35), where the constant C̃ > 0, each B_ψ(t) is a bounded function, the function F(t) is a linear combination of terms of the form t^β B_γ(t), and R(t) is a linear combination of terms of the form e^{αt} t^β B_γ(t), where α < ∆_1, and each B_γ(t) is a bounded function. Hence we obtain lim_{t→+∞} τ(t) = 0. Note that (a), (b), and (c) cover all cases that satisfy M > 0, which completes the proof.
Now we give Lemma 3.
Lemma 3. Under the assumptions of Proposition 10, if M = 0 and the zero solution of the system is unstable, then lim_{t→+∞} τ(t) = 0.
Proof. Suppose M = 0. Since A is invertible, instability means that A has a diagonal block C_m(0, b) with m ≥ 2. Then from (30), we have ∆_1 ≤ 0. From (24) and (26), we have ∆_2 = 0.
If ∆_1 < ∆_2 = 0, then we have lim_{t→+∞} τ(t) = 0. If ∆_1 = ∆_2 = 0, then in order to obtain the limit of τ(t) as t → +∞, we need to compare the highest powers of t among the terms of the form e^{0t} t^β B_γ(t) in the numerator and denominator of τ^2(t). Let Γ_1 and Γ_2 denote the maximum values of β in the terms of the form e^{0t} t^β B_γ(t) in V_3^2(t) and V_2^4(t), respectively. Then we have (36). In fact, by (21) and (23), for a diagonal block C_m(0, b) (m ≥ 2), the functions T_{m;1,1}(t) and T_{m;1,2}(t) can reach the highest power m − 1 of t, namely t^{m−1}; thus the r^{(s)}_{1,1}(t) and r^{(s)}_{1,2}(t) (s = 1, 2, 3) corresponding to the first two rows of C_m(0, b) (m ≥ 2) can reach the highest power m − 1 of t. Hence by (28) and (29), we obtain (36). In addition, by (24) and (26), we obtain the expansions of V_3^2(t) and V_2^4(t), where the constant C > 0, all B_φ(t) and B_ψ(t) are bounded functions, and R(t) is a linear combination of terms of the form e^{αt} t^β B_γ(t), where α < 0, and each B_γ(t) is a bounded function. Hence we obtain lim_{t→+∞} τ(t) = 0.
Lemma 2 and Lemma 3 show that under the assumptions of Proposition 10, if the zero solution of the system is unstable, then lim_{t→+∞} τ(t) = 0. That is to say, Proposition 10 (1) is proved, and thus Theorem 2 (1) is proved.

Proof of Theorem 2 (2)
We have proved Proposition 10 (1); in order to prove Proposition 10 (2), we only need to prove the following lemma.
Lemma 4. Under the assumptions of Proposition 10, suppose that the zero solution of the system is stable but not asymptotically stable. Then A is similar to a block diagonal matrix diag(C_1(0, b_1), · · · , C_1(0, b_s), Ã), where all eigenvalues of Ã have negative real parts, and lim_{t→+∞} τ(t) ≠ +∞.
Proof. (1) If s = 1, then by (24) and (26), we have ∆_2 = 0. In the determinant of (29), we can see that at most two rows correspond to the diagonal block C_1(0, b_1), and the real part of the eigenvalue of the diagonal block corresponding to the other row is negative. Hence ∆_1 < 0. It follows that lim_{t→+∞} τ(t) = 0.

Remark
In Theorem 2 and Proposition 10, the condition that A is invertible cannot be removed. In fact, we have the following two examples.
Nevertheless, since det A = 0, we cannot obtain stability from lim_{t→+∞} τ(t) ≠ 0. In fact, noting that A is a matrix in real Jordan canonical form which has a diagonal block J_2(0), we know that the zero solution of the system is unstable.
(2) For another singular matrix A in real Jordan canonical form, a direct calculation gives lim_{t→+∞} τ(t) = +∞ for any given r(0) ∈ S. Nevertheless, since det A = 0, the zero solution of the system is not asymptotically stable.
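The role of invertibility can also be seen very concretely. The sketch below (our own hypothetical instance, not the matrix used in the paper) takes A = [[0, 1], [0, 0]], a single J_2(0) block: the trajectory is a straight line, so all curvatures vanish and carry no stability information, while ∥r(t)∥ grows without bound.

```python
# Hypothetical singular system (our example): A = [[0, 1], [0, 0]] is a
# single J_2(0) block.  The solution is r(t) = (r1(0) + t r2(0), r2(0)),
# a straight line, so every curvature of the trajectory vanishes, yet the
# zero solution is unstable: ||r(t)|| grows without bound.
def r(t, r0):
    return (r0[0] + t * r0[1], r0[1])

r0 = (1.0, 1.0)
norms = [sum(c * c for c in r(t, r0)) ** 0.5 for t in (0.0, 10.0, 100.0)]
print(norms)  # strictly increasing
```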

Examples
In this section, we give two examples, which correspond to Theorem 1 and Theorem 2, respectively.

Example 2
We consider a popular model in classical mechanics, the coupled oscillator (cf. [10]). Two masses P and Q are attached by springs. Assume that the masses are identical, i.e., m_P = m_Q = m, but the spring constants are different, as shown in Figure 2. Let x_P be the displacement of P from its equilibrium and x_Q the displacement of Q from its equilibrium. Holding Q fixed and moving P gives the restoring force on P; holding P fixed and moving Q gives the coupling force on P. Thus by Newton's second law we obtain the equation of motion of P, and similarly that of Q. Introducing the two variables v_P = ẋ_P and v_Q = ẋ_Q, these equations are equivalent to a first-order linear system (37), which for simplicity we denote by ṙ(t) = Ar(t). Set E = {r(0) ∈ R^4 | (r_1^2(0) − r_2^2(0))(r_3^2(0) − r_4^2(0)) ≠ 0}.
Then the Lebesgue measure of E satisfies m(E) = +∞. By direct calculation, the torsion τ(t) of the trajectory r(t) is a periodic function and lim t→+∞ τ(t) does not exist for any r(0) ∈ E. Hence by Theorem 2, the zero solution of the system (37) is stable.
The graph of the function τ 2 (t) is shown in Figure 3.
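The stability conclusion can also be cross-checked via Proposition 2. The sketch below is our illustration, with our own assumed parameters (unit masses and stiffness matrix K = [[−2, 1], [1, −2]], not necessarily the constants of the example): for a matrix of the block form A = [[0, I], [K, 0]], the eigenvalues of A are the square roots of the eigenvalues of K, and here they are all simple and purely imaginary, so the zero solution is stable but not asymptotically stable.

```python
import cmath

# Assumed parameters (ours): unit masses with
#   x_P'' = -2 x_P + x_Q,   x_Q'' = x_P - 2 x_Q.
# With r = (x_P, x_Q, v_P, v_Q) the system matrix has the block form
# A = [[0, I], [K, 0]], so eigenvalues of A are square roots of those of K.
K = [[-2.0, 1.0], [1.0, -2.0]]
(a, b), (c, d) = K
tr, det = a + d, a * d - b * c
disc = cmath.sqrt(tr * tr - 4 * det)
mus = [(tr + disc) / 2, (tr - disc) / 2]   # eigenvalues of K: -1 and -3

eigs = []
for mu in mus:
    s = cmath.sqrt(mu)
    eigs += [s, -s]                        # eigenvalues of A: +-i, +-i*sqrt(3)

# All four eigenvalues are simple with zero real part, so by Proposition 2
# the zero solution is stable but not asymptotically stable.
print(all(abs(e.real) < 1e-12 for e in eigs))  # True
```

The two distinct frequencies 1 and √3 are the normal-mode frequencies of this assumed configuration, which is consistent with the torsion τ(t) having no limit along trajectories that excite both modes.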

Conclusions and Future Work
The main contribution of this paper is to further develop the geometric description of the stability of linear time-invariant systems in arbitrary dimension. Unlike traditional methods based on linear algebra, we focus on the curvatures of the trajectory curves. Specifically, we prove the main results of this paper, Theorem 1 and Theorem 2. For the case where A is similar to a real diagonal matrix, Theorem 1 gives a relationship between the ith curvature (i = 1, 2, · · · ) of the trajectory and the stability of the zero solution of the system ṙ(t) = Ar(t). Further, Theorem 2 establishes a torsion discriminance for the stability of the system in the case where A is invertible.
For each theorem, we give an example to illustrate the result. In particular, we use the coupled oscillator as an example of the torsion discriminance.
In the future, we will continue to use geometric methods to describe the properties of other kinds of control systems.