Article

Torsion Discriminance for Stability of Linear Time-Invariant Systems

School of Mathematics and Statistics, Beijing Institute of Technology, Beijing 100081, China
*
Author to whom correspondence should be addressed.
Mathematics 2020, 8(3), 386; https://doi.org/10.3390/math8030386
Submission received: 12 February 2020 / Revised: 6 March 2020 / Accepted: 7 March 2020 / Published: 10 March 2020

Abstract

This paper extends the former approaches to describe the stability of $n$-dimensional linear time-invariant systems via the torsion $\tau(t)$ of the state trajectory. For a system $\dot{r}(t)=Ar(t)$ where $A$ is invertible, we show that (1) if there exists a measurable set $E_1$ with positive Lebesgue measure, such that $r(0)\in E_1$ implies that $\lim_{t\to+\infty}\tau(t)\neq 0$ or $\lim_{t\to+\infty}\tau(t)$ does not exist, then the zero solution of the system is stable; (2) if there exists a measurable set $E_2$ with positive Lebesgue measure, such that $r(0)\in E_2$ implies that $\lim_{t\to+\infty}\tau(t)=+\infty$, then the zero solution of the system is asymptotically stable. Furthermore, we establish a relationship between the $i$th curvature ($i=1,2,\dots$) of the trajectory and the stability of the zero solution when $A$ is similar to a real diagonal matrix.
MSC:
53A04; 93C05; 93D05; 93D20

1. Introduction

It is well known that Lyapunov [1] laid the foundation of stability theory. Linear systems are the most basic and most widely used objects of study, and their theory has been developed over a long period. However, the traditional methods rely heavily on linear algebra, and few results have been obtained from a geometric point of view.
Curvature and torsion are important concepts in differential geometry. In [2], the authors calculated the curvature and torsion of the state trajectories $r(t)$ of two- and three-dimensional linear time-invariant systems $\dot r(t)=Ar(t)$ and related them to the stability of these systems. Furthermore, in [3] the authors used the definition of the higher curvatures of curves in $\mathbb{R}^n$ given in [4] to obtain a relationship between the first curvature of the state trajectory and the stability of the $n$-dimensional linear system.
In this paper, we will describe the stability of the zero solutions of linear time-invariant systems in arbitrary dimension by using the torsion, namely, the second curvature.
Our main results are as follows.
Theorem 1.
Suppose that $\dot r(t)=Ar(t)$ is a linear time-invariant system, where $A$ is similar to an $n\times n$ real diagonal matrix, $r(t)\in\mathbb{R}^n$, and $\dot r(t)$ is the derivative of $r(t)$. Denote by $\kappa_i(t)$ ($i=1,2,\dots$) the $i$th curvature of the trajectory of a solution $r(t)$. We have
$(1)$ if there exists a measurable set $E\subset\mathbb{R}^n$ whose Lebesgue measure is greater than 0, such that $r(0)\in E$ implies that $\lim_{t\to+\infty}\kappa_i(t)\neq 0$ or $\lim_{t\to+\infty}\kappa_i(t)$ does not exist, then the zero solution of the system is stable;
$(2)$ if $A$ is invertible, then under the assumptions of $(1)$, the zero solution of the system is asymptotically stable.
Theorem 2.
Suppose that $\dot r(t)=Ar(t)$ is a linear time-invariant system, where $A$ is an $n\times n$ invertible real matrix, and $r(t)\in\mathbb{R}^n$. Denote by $\tau(t)$ the torsion of the trajectory of a solution $r(t)$. We have
$(1)$ if there exists a measurable set $E_1\subset\mathbb{R}^n$ whose Lebesgue measure is greater than 0, such that $r(0)\in E_1$ implies that $\lim_{t\to+\infty}\tau(t)\neq 0$ or $\lim_{t\to+\infty}\tau(t)$ does not exist, then the zero solution of the system is stable;
$(2)$ if there exists a measurable set $E_2\subset\mathbb{R}^n$ whose Lebesgue measure is greater than 0, such that $r(0)\in E_2$ implies that $\lim_{t\to+\infty}\tau(t)=+\infty$, then the zero solution of the system is asymptotically stable.
The paper is organized as follows. In Section 2, we review some basic concepts and propositions. In Section 3, we study the relationship between the ith curvature ( i = 1 , 2 , ) of the trajectory and the stability of the zero solution of the system when the system matrix is similar to a real diagonal matrix, and we prove Theorem 1. In Section 4, we establish a relationship between the torsion of the trajectory and the stability of the zero solution of the system, and complete the proof of Theorem 2. Two examples are given in Section 5. Finally, Section 6 concludes the paper.

2. Preliminaries

Throughout this paper, all vectors will be written as column vectors, and $\|x\|$ will denote the Euclidean norm of $x=(x_1,x_2,\dots,x_n)^T\in\mathbb{R}^n$, namely, $\|x\|=\sqrt{\sum_{i=1}^{n}x_i^2}$. The vector $r^{(i)}(t)$ denotes the $i$th derivative of the vector $r(t)$. We denote by $\det A$ the determinant of the matrix $A$. The eigenvalues of the matrix $A$ are denoted by $\lambda_i(A)$ ($i=1,2,\dots,n$), and the set of eigenvalues of $A$ is denoted by $\sigma(A)$. The degree of a polynomial $f(t)$ is denoted by $\deg(f(t))$.

2.1. Stability of Linear Time-Invariant Systems

Definition 1
([5]). The system of ordinary differential equations
$$\dot r(t)=Ar(t)$$
is called a linear time-invariant system, where $A$ is an $n\times n$ real constant matrix, $r(t)\in\mathbb{R}^n$, and $\dot r(t)$ is the derivative of $r(t)$.
Proposition 1
([5]). The initial value problem
$$\dot r(t)=Ar(t),\qquad r(0)=r_0,$$
has a unique solution given by
$$r(t)=e^{tA}r_0,$$
where $e^{tA}=\sum_{k=0}^{\infty}\frac{t^kA^k}{k!}$.
The curve $r(t)$ is called the trajectory of the system (2) with the initial value $r_0\in\mathbb{R}^n$.
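As an illustration (not part of the original text), the solution formula of Proposition 1 can be evaluated numerically with SciPy's matrix exponential; the matrix $A$ and the initial value $r_0$ below are arbitrary choices made only for this sketch.

```python
import numpy as np
from scipy.linalg import expm

# Arbitrary illustrative data (not taken from the paper).
A = np.array([[-1.0,  2.0,  0.0],
              [-2.0, -1.0,  0.0],
              [ 0.0,  0.0, -0.5]])
r0 = np.array([1.0, 0.0, 1.0])

def trajectory(A, r0, t):
    """Unique solution r(t) = exp(tA) r0 of the IVP r'(t) = A r(t), r(0) = r0."""
    return expm(t * A) @ r0

print(trajectory(A, r0, 1.0))
```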
Definition 2
([6,7]). The solution $r(t)\equiv 0$ of the differential equation (1) is called the zero solution of the linear time-invariant system. If for every constant $\varepsilon>0$ there exists a $\delta=\delta(\varepsilon)>0$ such that $\|r(0)\|<\delta$ implies that $\|r(t)\|<\varepsilon$ for all $t\in[0,+\infty)$, where $r(t)$ is a solution of (1), then we say that the zero solution of system (1) is stable. If the zero solution is not stable, then we say that it is unstable.
Suppose that the zero solution of system (1) is stable, and that there exists a $\tilde\delta$ ($0<\tilde\delta\leq\delta$) such that $\|r(0)\|<\tilde\delta$ implies that $\lim_{t\to+\infty}r(t)=0$; then we say that the zero solution of system (1) is asymptotically stable.
Proposition 2
([6]). The zero solution of system (1) is stable if and only if all eigenvalues of matrix A have nonpositive real parts and those eigenvalues with zero real parts are simple roots of the minimal polynomial of A.
The zero solution of system (1) is asymptotically stable if and only if all eigenvalues of matrix $A$ have negative real parts, namely, $\mathrm{Re}\{\lambda_i(A)\}<0$ ($i=1,2,\dots,n$).
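The eigenvalue test of Proposition 2 is easy to automate. The following sketch is an illustration added to this copy, with ad hoc numerical tolerances; the condition that purely imaginary eigenvalues be simple roots of the minimal polynomial is checked equivalently by comparing geometric and algebraic multiplicities.

```python
import numpy as np

def classify_zero_solution(A, tol=1e-9):
    """Classify the zero solution of r'(t) = A r(t) following Proposition 2.

    Returns 'asymptotically stable', 'stable', or 'unstable'. Eigenvalues whose real
    part is within tol of zero are treated as purely imaginary; such an eigenvalue must
    be semisimple, which is checked via geometric multiplicity == algebraic multiplicity.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals = np.linalg.eigvals(A)
    if np.all(eigvals.real < -tol):
        return "asymptotically stable"
    if np.any(eigvals.real > tol):
        return "unstable"
    for lam in eigvals:
        if abs(lam.real) <= tol:
            alg = np.sum(np.abs(eigvals - lam) <= 1e-6)            # algebraic multiplicity
            geo = n - np.linalg.matrix_rank(A - lam * np.eye(n))   # geometric multiplicity
            if geo < alg:
                return "unstable"
    return "stable"

print(classify_zero_solution([[0.0, 1.0], [-1.0, 0.0]]))  # harmonic oscillator: stable
print(classify_zero_solution([[0.0, 1.0], [0.0, 0.0]]))   # a J_2(0) block: unstable
```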
Proposition 3
([6]). Suppose that $A$ and $B$ are two $n\times n$ real matrices, and $A$ is similar to $B$, namely, there exists an $n\times n$ real invertible matrix $P$ such that $A=P^{-1}BP$. For system (1), let $v(t)=Pr(t)$. Then the system after the transformation becomes
$$\dot v(t)=Bv(t).$$
System (4) is said to be equivalent to system (1), and v ( t ) = P r ( t ) is called an equivalence transformation.
Proposition 4
([6]). Let A and B be two n × n real matrices, and A is similar to B. Then the zero solution of the system r ˙ ( t ) = A r ( t ) is (asymptotically) stable if and only if the zero solution of the system v ˙ ( t ) = B v ( t ) is (asymptotically) stable.

2.2. Curvatures of Curves in R n

Definition 3
([8]). Let $r:[0,+\infty)\to\mathbb{R}^3$ be a smooth curve. The functions
$$\kappa(t)=\frac{\|\dot r(t)\times\ddot r(t)\|}{\|\dot r(t)\|^3},\qquad \tau(t)=\frac{\big(\dot r(t),\ddot r(t),\dddot r(t)\big)}{\|\dot r(t)\times\ddot r(t)\|^2}$$
(where $(\cdot,\cdot,\cdot)$ denotes the scalar triple product)
are called the curvature and torsion of the curve r ( t ) , respectively.
Gluck [4] gave a definition of higher curvatures of curves in R n , which is a generalization of curvature and torsion. Here we omit the definition of higher curvatures and review their calculation formulas directly.
In this paper, $V_i(t)$ denotes the $i$-dimensional volume of the $i$-dimensional parallelotope with the vectors $\dot r(t),\ddot r(t),\dots,r^{(i)}(t)$ as edges, and we adopt the convention that $V_0(t)=1$.
Proposition 5
([4]). Let $r:[0,+\infty)\to\mathbb{R}^n$ be a smooth curve, and $\dot r(t)\neq 0$ for all $t\in[0,+\infty)$. Suppose that for each $t\in[0,+\infty)$, the vectors $\dot r(t),\ddot r(t),\dots,r^{(m)}(t)$ ($m\leq n$) are linearly independent. Then the $i$th curvature of the curve $r(t)$ is
$$\kappa_i(t)=\frac{V_{i-1}(t)\,V_{i+1}(t)}{V_1(t)\,V_i^2(t)}\qquad(i=1,2,\dots,m-1).$$
In [4], according to the definition of the curvatures of curves in $\mathbb{R}^n$, we have $\kappa_i(s)\geq 0$ for $i=1,2,\dots,m-1$.
If $r(t)$ is a smooth curve in $\mathbb{R}^3$, and $\dot r(t)$, $\ddot r(t)$, $\dddot r(t)$ are linearly independent, then we have the Frenet–Serret formulas (cf. [8]), where $\kappa_1(s)=\kappa(s)$ and $\kappa_2(s)=|\tau(s)|$, which means that the first and second curvatures are generalizations of the curvature and torsion of curves in $\mathbb{R}^3$, respectively. In the remainder of this paper, we write $\kappa(t)$ instead of $\kappa_1(t)$ and $\tau(t)$ instead of $\kappa_2(t)$ for simplicity.
We can give V i ( t ) by the derivatives of r ( t ) with respect to t. In fact, we have the following result.
Proposition 6
([3]). Write $r^{(i)}(t)=\big(r_1^{(i)}(t),r_2^{(i)}(t),\dots,r_n^{(i)}(t)\big)^T$. We have
$$V_k^2(t)=\sum_{1\le i_1<i_2<\cdots<i_k\le n}\begin{vmatrix}\dot r_{i_1}(t)&\ddot r_{i_1}(t)&\cdots&r_{i_1}^{(k)}(t)\\ \dot r_{i_2}(t)&\ddot r_{i_2}(t)&\cdots&r_{i_2}^{(k)}(t)\\ \vdots&\vdots&&\vdots\\ \dot r_{i_k}(t)&\ddot r_{i_k}(t)&\cdots&r_{i_k}^{(k)}(t)\end{vmatrix}^2.$$
By Propositions 5 and 6, we can express every curvature of a curve $r(t)$ in $\mathbb{R}^n$ in terms of the coordinates of the derivatives of $r(t)$. In particular, if $\dot r(t)$ and $\ddot r(t)$ are linearly independent, then the torsion of $r(t)$ satisfies
$$\tau(t)=\frac{V_3(t)}{V_2^2(t)}=\frac{\sqrt{\displaystyle\sum_{1\le i<j<k\le n}\begin{vmatrix}\dot r_i(t)&\ddot r_i(t)&\dddot r_i(t)\\ \dot r_j(t)&\ddot r_j(t)&\dddot r_j(t)\\ \dot r_k(t)&\ddot r_k(t)&\dddot r_k(t)\end{vmatrix}^2}}{\displaystyle\sum_{1\le p<q\le n}\begin{vmatrix}\dot r_p(t)&\ddot r_p(t)\\ \dot r_q(t)&\ddot r_q(t)\end{vmatrix}^2}.$$
On the other hand, if $V_2(t)\equiv 0$, namely, $\dot r(t)$ and $\ddot r(t)$ are linearly dependent for all $t$, then we adopt the obvious convention that $\tau(t)\equiv 0$. The function $V_2(t)$ will be examined in detail in Section 4.2.
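For numerical experiments, the volumes $V_k(t)$ can be obtained from Gram determinants of the derivatives $r^{(i)}(t)=A^ir(t)$, and the curvatures then follow from Proposition 5 ($i=2$ gives the torsion). The sketch below is an illustration added to this copy; the function names and the test data are ours.

```python
import numpy as np
from scipy.linalg import expm

def V(A, r0, t, k):
    """k-dimensional volume of the parallelotope spanned by r'(t), ..., r^(k)(t),
    where r(t) = exp(tA) r0, so r^(i)(t) = A^i r(t).  By convention V(..., 0) == 1."""
    if k == 0:
        return 1.0
    r = expm(t * A) @ r0
    D = np.column_stack([np.linalg.matrix_power(A, i) @ r for i in range(1, k + 1)])
    return np.sqrt(max(np.linalg.det(D.T @ D), 0.0))

def curvature_i(A, r0, t, i):
    """i-th curvature of the trajectory (Proposition 5); i = 2 is the torsion."""
    return V(A, r0, t, i - 1) * V(A, r0, t, i + 1) / (V(A, r0, t, 1) * V(A, r0, t, i) ** 2)

# Example with arbitrary data: torsion of the trajectory at t = 1.
A = np.array([[-1.0, 1.0, 0.0], [-1.0, -1.0, 0.0], [0.0, 0.0, -2.0]])
print(curvature_i(A, np.array([1.0, 1.0, 1.0]), 1.0, 2))
```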

2.3. Relationship Between the Curvatures of Two Equivalent Systems

Wang et al. [3] established a relationship between the curvatures of the trajectories of two equivalent systems. In fact, let a curve $r(t)$ be the trajectory of system (2), and suppose that for each $t$, the vectors $\dot r(t),\ddot r(t),\dots,r^{(m)}(t)$ are linearly independent. Then we can define the curvatures $\kappa_{r,1}(t),\kappa_{r,2}(t),\dots,\kappa_{r,m-1}(t)$ of the curve $r(t)$, and we have the following result.
Proposition 7
([3]). Suppose that a linear time-invariant system $\dot r(t)=Ar(t)$ is equivalent to a system $\dot v(t)=Bv(t)$, where $A=P^{-1}BP$, and $v(t)=Pr(t)$ is the equivalence transformation. Let $\kappa_{r,i}(t)$ and $\kappa_{v,i}(t)$ be the $i$th ($i=1,2,\dots,m-1$) curvatures of the trajectories $r(t)$ and $v(t)$, respectively. Then we have
$$\lim_{t\to+\infty}\kappa_{r,i}(t)=0\iff\lim_{t\to+\infty}\kappa_{v,i}(t)=0,\qquad \lim_{t\to+\infty}\kappa_{r,i}(t)=+\infty\iff\lim_{t\to+\infty}\kappa_{v,i}(t)=+\infty,$$
$$\kappa_{r,i}(t)\ \text{is a bounded function}\iff\kappa_{v,i}(t)\ \text{is a bounded function}.$$

2.4. Real Jordan Canonical Form

Proposition 8
([7,9]). Let A be an n × n real matrix. Then A is similar to a block diagonal real matrix
$$\begin{pmatrix}C_{n_1}(a_1,b_1)&&&&&&\\&C_{n_2}(a_2,b_2)&&&&&\\&&\ddots&&&&\\&&&C_{n_p}(a_p,b_p)&&&\\&&&&J_{n_{p+1}}(\lambda_{p+1})&&\\&&&&&\ddots&\\&&&&&&J_{n_r}(\lambda_r)\end{pmatrix},$$
where
$(1)$ for $k\in\{1,2,\dots,p\}$, the numbers $\lambda_k=a_k+\sqrt{-1}\,b_k$ and $\bar\lambda_k=a_k-\sqrt{-1}\,b_k$ ($a_k,b_k\in\mathbb{R}$, and $b_k>0$) are complex eigenvalues of $A$, and
$$C_{n_k}(a_k,b_k)=\begin{pmatrix}\Lambda_k&I_2&&\\&\Lambda_k&\ddots&\\&&\ddots&I_2\\&&&\Lambda_k\end{pmatrix}_{2n_k\times 2n_k},$$
where $\Lambda_k=\begin{pmatrix}a_k&b_k\\-b_k&a_k\end{pmatrix}$, $I_2=\begin{pmatrix}1&0\\0&1\end{pmatrix}$;
$(2)$ for $j\in\{p+1,p+2,\dots,r\}$, the number $\lambda_j$ is a real eigenvalue of $A$, and
$$J_{n_j}(\lambda_j)=\begin{pmatrix}\lambda_j&1&&\\&\lambda_j&\ddots&\\&&\ddots&1\\&&&\lambda_j\end{pmatrix}_{n_j\times n_j}.$$
The matrix (6) is called the real Jordan canonical form of A.
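For later numerical experiments, the two kinds of diagonal blocks can be generated as follows; this is a small helper added for illustration only (the function names J and C are ours).

```python
import numpy as np

def J(p, lam):
    """Jordan block J_p(lambda): lambda on the diagonal, 1 on the superdiagonal."""
    return lam * np.eye(p) + np.eye(p, k=1)

def C(m, a, b):
    """Block C_m(a, b): 2x2 blocks Lambda = [[a, b], [-b, a]] on the diagonal and
    2x2 identity blocks on the block superdiagonal (size 2m x 2m)."""
    Lam = np.array([[a, b], [-b, a]])
    return np.kron(np.eye(m), Lam) + np.kron(np.eye(m, k=1), np.eye(2))

print(J(3, -1.0))
print(C(2, 0.5, 2.0))
```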

3. Real Diagonal Matrix

In this section, we study the case where the system matrix is similar to a real diagonal matrix, and prove Theorem 1. By Propositions 4 and 7, we only need to focus on the case where $A$ is a real diagonal matrix, and prove Proposition 9.
In what follows, we define a subset of $\mathbb{R}^n$ by
$$S=\Big\{r(0)\ \Big|\ r(0)=\big(r_1(0),r_2(0),\dots,r_n(0)\big)^T\in\mathbb{R}^n,\ \text{s.t.}\ \prod_{i=1}^{n}r_i(0)\neq 0\Big\}.$$
Proposition 9.
Suppose that $\dot r(t)=Ar(t)$ is a linear time-invariant system, where $A$ is an $n\times n$ real diagonal matrix, and $r(t)\in\mathbb{R}^n$. Denote by $\kappa_i(t)$ ($i=1,2,\dots$) the $i$th curvature of the trajectory of a solution $r(t)$. Then for any given initial value $r(0)\in S$, we have
$(1)$ if $\lim_{t\to+\infty}\kappa_i(t)\neq 0$ or $\lim_{t\to+\infty}\kappa_i(t)$ does not exist, then the zero solution of the system is stable;
$(2)$ if $A$ is invertible, and $\lim_{t\to+\infty}\kappa_i(t)\neq 0$ or $\lim_{t\to+\infty}\kappa_i(t)$ does not exist, then the zero solution of the system is asymptotically stable.
Wang et al. [3] proved the case $i=1$. Here we give a complete proof of this proposition.
Proof. 
( 1 ) Suppose that A is an n × n real diagonal matrix, namely,
$$A=\mathrm{diag}\{\lambda_1,\lambda_2,\dots,\lambda_n\}.$$
Then
$$A^k=\mathrm{diag}\{\lambda_1^k,\lambda_2^k,\dots,\lambda_n^k\},\qquad e^{tA}=\mathrm{diag}\{e^{\lambda_1t},e^{\lambda_2t},\dots,e^{\lambda_nt}\},$$
where $k=1,2,\dots$. Hence we have
$$r(t)=e^{tA}r(0)=\big(e^{\lambda_1t}r_1(0),\dots,e^{\lambda_nt}r_n(0)\big)^T,\quad \dot r(t)=Ar(t)=\big(\lambda_1e^{\lambda_1t}r_1(0),\dots,\lambda_ne^{\lambda_nt}r_n(0)\big)^T,\quad\dots,\quad r^{(k)}(t)=A^kr(t)=\big(\lambda_1^ke^{\lambda_1t}r_1(0),\dots,\lambda_n^ke^{\lambda_nt}r_n(0)\big)^T,$$
namely, the coordinates of the derivatives of $r(t)$ are
$$\dot r_i(t)=\lambda_ie^{\lambda_it}r_i(0),\quad\dots,\quad r_i^{(k)}(t)=\lambda_i^ke^{\lambda_it}r_i(0)\qquad(i=1,2,\dots,n).$$
Then by Proposition 6, we obtain
$$V_k^2(t)=\sum_{1\le i_1<\cdots<i_k\le n}\begin{vmatrix}\dot r_{i_1}(t)&\cdots&r_{i_1}^{(k)}(t)\\ \vdots&&\vdots\\ \dot r_{i_k}(t)&\cdots&r_{i_k}^{(k)}(t)\end{vmatrix}^2 =\sum_{1\le i_1<\cdots<i_k\le n}e^{2\sum_{p=1}^{k}\lambda_{i_p}t}\Big(\prod_{q=1}^{k}\lambda_{i_q}r_{i_q}(0)\Big)^2\begin{vmatrix}1&\lambda_{i_1}&\cdots&\lambda_{i_1}^{k-1}\\ \vdots&\vdots&&\vdots\\ 1&\lambda_{i_k}&\cdots&\lambda_{i_k}^{k-1}\end{vmatrix}^2 =\sum_{1\le i_1<\cdots<i_k\le n}e^{2\sum_{p=1}^{k}\lambda_{i_p}t}\Big(\prod_{q=1}^{k}\lambda_{i_q}r_{i_q}(0)\Big)^2\prod_{1\le\alpha<\beta\le k}\big(\lambda_{i_\beta}-\lambda_{i_\alpha}\big)^2.$$
We see that if the eigenvalues $\lambda_{i_1},\lambda_{i_2},\dots,\lambda_{i_k}$ of $A$ are nonzero and distinct, then a term of the form $Ce^{2\sum_{p=1}^{k}\lambda_{i_p}t}$ appears in the expression of $V_k^2(t)$, where $C>0$ is a constant depending on the eigenvalues and the initial value.
By Proposition 5, the square of the ith curvature is
$$\kappa_i^2(t)=\frac{V_{i-1}^2(t)\,V_{i+1}^2(t)}{V_1^2(t)\,V_i^4(t)}\qquad(i=1,2,\dots,m-1).$$
Now, we consider the limit of $\kappa_i(t)$ as $t\to+\infty$ by comparing the exponents of $e$ in the numerator and denominator of $\kappa_i^2(t)$. Let $\Delta_1$ and $\Delta_2$ denote the maximum values of $\alpha$ over the terms of the form $e^{\alpha t}$ in $V_{i-1}^2(t)V_{i+1}^2(t)$ and $V_1^2(t)V_i^4(t)$, respectively. We define
$$\lambda^{(1)}=\max\big(\sigma(A)\setminus\{0\}\big),\quad \lambda^{(2)}=\max\big(\sigma(A)\setminus\{0,\lambda^{(1)}\}\big),\quad\dots,\quad \lambda^{(i)}=\max\big(\sigma(A)\setminus\{0,\lambda^{(1)},\lambda^{(2)},\dots,\lambda^{(i-1)}\}\big),\quad\dots$$
Then by (7) and (8), we have
$$\Delta_1=2\sum_{a=1}^{i-1}\lambda^{(a)}+2\sum_{b=1}^{i+1}\lambda^{(b)},\qquad \Delta_2=2\lambda^{(1)}+4\sum_{c=1}^{i}\lambda^{(c)}.$$
Thus,
$$\Delta_1-\Delta_2=2\big(\lambda^{(i+1)}-\lambda^{(1)}-\lambda^{(i)}\big).$$
It follows that
$$\lim_{t\to+\infty}\kappa_i(t)=0\iff\Delta_1<\Delta_2\iff\lambda^{(1)}+\lambda^{(i)}>\lambda^{(i+1)},\qquad \lim_{t\to+\infty}\kappa_i(t)=C\iff\Delta_1=\Delta_2\iff\lambda^{(1)}+\lambda^{(i)}=\lambda^{(i+1)},\qquad \lim_{t\to+\infty}\kappa_i(t)=+\infty\iff\Delta_1>\Delta_2\iff\lambda^{(1)}+\lambda^{(i)}<\lambda^{(i+1)},$$
where $C$ is a positive constant depending on the initial value $r(0)=r_0$ ($r_j(0)\neq 0$ for $j=1,2,\dots,n$). Here we notice that, for any given real diagonal matrix $A$, if for one initial value $r(0)\in\mathbb{R}^n$ satisfying $\prod_{j=1}^{n}r_j(0)\neq 0$ we have $\lim_{t\to+\infty}\kappa_i(t)=0$ (or $+\infty$, or a constant $C>0$, respectively), then for an arbitrary $r(0)\in\mathbb{R}^n$ satisfying $\prod_{j=1}^{n}r_j(0)\neq 0$ we still have $\lim_{t\to+\infty}\kappa_i(t)=0$ (or $+\infty$, or a constant $\tilde C>0$, respectively).
Noting that $A$ is a real diagonal matrix, by Proposition 2, the zero solution of system (1) is stable if and only if $\lambda_i(A)\leq 0$ ($i=1,2,\dots,n$). If the zero solution of the system is unstable, then we have $\lambda^{(1)}>0$, and thus $\lambda^{(1)}+\lambda^{(i)}>\lambda^{(i+1)}$. By (9), we then have $\lim_{t\to+\infty}\kappa_i(t)=0$. In other words, if $\lim_{t\to+\infty}\kappa_i(t)\neq 0$ or $\lim_{t\to+\infty}\kappa_i(t)$ does not exist, then the zero solution of the system is stable.
$(2)$ Suppose that $A$ is invertible, and $\lim_{t\to+\infty}\kappa_i(t)\neq 0$ or $\lim_{t\to+\infty}\kappa_i(t)$ does not exist. Then 0 is not an eigenvalue of $A$, and by $(1)$ the zero solution of the system is stable, so $\lambda_i(A)\leq 0$ and $\lambda_i(A)\neq 0$, i.e., all eigenvalues of $A$ are negative. Hence, by Proposition 2, the zero solution of the system is asymptotically stable. □
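Before proceeding, the three-case criterion obtained in the proof above can be turned into a small prediction routine. The sketch below is an illustration added to this copy; it assumes a real diagonal matrix with sufficiently many distinct nonzero eigenvalues and an initial value in $S$.

```python
import numpy as np

def kappa_limit_behavior(eigs, i):
    """Predicted behavior of lim_{t->+inf} kappa_i(t) for a real diagonal system:
    compare lambda^(1) + lambda^(i) with lambda^(i+1), as in the criterion above."""
    lam = sorted({e for e in eigs if e != 0}, reverse=True)  # lam[0] = lambda^(1), ...
    if len(lam) < i + 1:
        raise ValueError("need at least i+1 distinct nonzero eigenvalues")
    lhs, rhs = lam[0] + lam[i - 1], lam[i]
    if lhs > rhs:
        return "zero"
    return "positive constant" if lhs == rhs else "infinity"

# All eigenvalues negative (asymptotically stable); here kappa_1 tends to +infinity.
print(kappa_limit_behavior([-2.0, -3.0, -7.0], 1))
# One positive eigenvalue (unstable); consistent with the proof, kappa_1 tends to 0.
print(kappa_limit_behavior([1.0, -2.0, -3.0], 1))
```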
Now, we proceed to the proof of Theorem 1.
Proof of Theorem 1.
Suppose that the linear time-invariant system $\dot r(t)=Ar(t)$ is equivalent to a system $\dot v(t)=Bv(t)$, where $B$ is a real diagonal matrix, $A=P^{-1}BP$, and $v(t)=Pr(t)$ is the equivalence transformation. Then by Proposition 7, we have
$$\lim_{t\to+\infty}\kappa_{r,i}(t)=0\iff\lim_{t\to+\infty}\kappa_{v,i}(t)=0.$$
We define
$$\tilde S=\Big\{P^{-1}v(0)\ \Big|\ v(0)=\big(v_1(0),v_2(0),\dots,v_n(0)\big)^T\in\mathbb{R}^n,\ \text{s.t.}\ \prod_{i=1}^{n}v_i(0)\neq 0\Big\}.$$
Note that any given $n\times n$ invertible matrix $P$ can be regarded as an invertible linear transformation $P:\mathbb{R}^n\to\mathbb{R}^n$, and the Lebesgue measure of $\mathbb{R}^n\setminus\tilde S$ satisfies
$$m\big(\mathbb{R}^n\setminus\tilde S\big)=0.$$
If there exists a measurable set $E\subset\mathbb{R}^n$ whose Lebesgue measure is greater than 0, such that $r(0)\in E$ implies that $\lim_{t\to+\infty}\kappa_{r,i}(t)\neq 0$ or $\lim_{t\to+\infty}\kappa_{r,i}(t)$ does not exist, then by (10) and (11) there exists an $r(0)\in E\cap\tilde S$ such that the trajectory $v(t)$ with initial value $v(0)=Pr(0)$ satisfies $\lim_{t\to+\infty}\kappa_{v,i}(t)\neq 0$ or $\lim_{t\to+\infty}\kappa_{v,i}(t)$ does not exist. Notice that when $r(0)\in\tilde S$, the vector $v(0)$ satisfies $\prod_{i=1}^{n}v_i(0)\neq 0$; thus by Proposition 9 the zero solution of the system $\dot v(t)=Bv(t)$ is stable, and then by Proposition 4 the zero solution of the system $\dot r(t)=Ar(t)$ is also stable, which proves Theorem 1 (1).
Since A is similar to B, the matrix A is invertible if and only if B is invertible. The method of the proof of (1) works for (2), which completes the proof of Theorem 1. □

4. Relationship between Torsion and Stability

In this section, we give the proof of Theorem 2, which establishes a relationship between the torsion of the trajectory and the stability of the zero solution of the system. By Propositions 4, 7, and 8, we only need to focus on the case where $A$ is an invertible matrix in real Jordan canonical form (6), and prove the following result.
Proposition 10.
Suppose that $\dot r(t)=Ar(t)$ is a linear time-invariant system, where $A$ is an $n\times n$ invertible matrix in real Jordan canonical form, and $r(t)\in\mathbb{R}^n$. Denote by $\tau(t)$ the torsion of the trajectory of a solution $r(t)$. Then for any given initial value $r(0)\in S$, we have
$(1)$ if $\lim_{t\to+\infty}\tau(t)\neq 0$ or $\lim_{t\to+\infty}\tau(t)$ does not exist, then the zero solution of the system is stable;
$(2)$ if $\lim_{t\to+\infty}\tau(t)=+\infty$, then the zero solution of the system is asymptotically stable.

4.1. Blocks J p ( λ ) and C m ( a , b )

In order to study the matrices in real Jordan canonical form (6), we first consider the blocks of the forms
$$J_p(\lambda)=\begin{pmatrix}\lambda&1&&\\&\lambda&\ddots&\\&&\ddots&1\\&&&\lambda\end{pmatrix}_{p\times p}\qquad\text{and}\qquad C_m(a,b)=\begin{pmatrix}\Lambda&I_2&&\\&\Lambda&\ddots&\\&&\ddots&I_2\\&&&\Lambda\end{pmatrix}_{2m\times 2m},$$
where $\lambda,a,b\in\mathbb{R}$, $b>0$, and $\Lambda=\begin{pmatrix}a&b\\-b&a\end{pmatrix}$. Part of this subsection builds on the earlier work in [3].
( 1 ) For a J p ( λ ) block, by direct calculation, we obtain
$$J_p^2(\lambda)=\begin{pmatrix}\lambda^2&2\lambda&1&&\\&\lambda^2&2\lambda&\ddots&\\&&\ddots&\ddots&1\\&&&\lambda^2&2\lambda\\&&&&\lambda^2\end{pmatrix}_{p\times p},\qquad J_p^3(\lambda)=\begin{pmatrix}\lambda^3&3\lambda^2&3\lambda&1&&\\&\lambda^3&3\lambda^2&3\lambda&\ddots&\\&&\ddots&\ddots&\ddots&1\\&&&\ddots&\ddots&3\lambda\\&&&&\lambda^3&3\lambda^2\\&&&&&\lambda^3\end{pmatrix}_{p\times p},$$
and we have the exponential function
$$e^{tJ_p(\lambda)}=e^{\lambda t}\begin{pmatrix}1&t&\frac{t^2}{2!}&\frac{t^3}{3!}&\cdots&\frac{t^{p-1}}{(p-1)!}\\&1&t&\frac{t^2}{2!}&\cdots&\frac{t^{p-2}}{(p-2)!}\\&&1&t&\cdots&\frac{t^{p-3}}{(p-3)!}\\&&&\ddots&\ddots&\vdots\\&&&&1&t\\&&&&&1\end{pmatrix}.$$
For the system $\dot r(t)=J_p(\lambda)r(t)$, by substituting (14) into $r(t)=e^{tJ_p(\lambda)}r(0)$, we obtain the expressions of the coordinates of $r(t)$:
$$r_k(t)=e^{\lambda t}P_{p;k}(t)\qquad(k=1,2,\dots,p),$$
where the polynomial
$$P_{p;k}(t)=\sum_{l=0}^{p-k}\frac{r_{k+l}(0)}{l!}\,t^l.$$
Substituting (12) and (13) into r ( s ) ( t ) = J p s ( λ ) r ( t ) for s = 1 , 2 , 3 , combined with (15), we see that the coordinates of the derivatives of r ( t ) are
$$\begin{aligned}\dot r_k(t)&=\lambda r_k(t)+r_{k+1}(t)=e^{\lambda t}\big(\lambda P_{p;k}(t)+P_{p;k+1}(t)\big),\\ \ddot r_k(t)&=\lambda^2r_k(t)+2\lambda r_{k+1}(t)+r_{k+2}(t)=e^{\lambda t}\big(\lambda^2P_{p;k}(t)+2\lambda P_{p;k+1}(t)+P_{p;k+2}(t)\big),\\ \dddot r_k(t)&=\lambda^3r_k(t)+3\lambda^2r_{k+1}(t)+3\lambda r_{k+2}(t)+r_{k+3}(t)=e^{\lambda t}\big(\lambda^3P_{p;k}(t)+3\lambda^2P_{p;k+1}(t)+3\lambda P_{p;k+2}(t)+P_{p;k+3}(t)\big),\end{aligned}$$
where we have a convention that r k ( t ) = 0 for k > p .
We see that if $k\in\{1,2,\dots,p\}$, then $\deg(P_{p;k}(t))=p-k$; if $k>p$, then $P_{p;k}(t)=0$.
( 2 ) For a C m ( a , b ) block, a direct calculation gives
$$C_m^2(a,b)=\begin{pmatrix}\Lambda^2&2\Lambda&I_2&&\\&\Lambda^2&2\Lambda&\ddots&\\&&\ddots&\ddots&I_2\\&&&\Lambda^2&2\Lambda\\&&&&\Lambda^2\end{pmatrix}_{2m\times 2m},\qquad C_m^3(a,b)=\begin{pmatrix}\Lambda^3&3\Lambda^2&3\Lambda&I_2&&\\&\Lambda^3&3\Lambda^2&3\Lambda&\ddots&\\&&\ddots&\ddots&\ddots&I_2\\&&&\ddots&\ddots&3\Lambda\\&&&&\Lambda^3&3\Lambda^2\\&&&&&\Lambda^3\end{pmatrix}_{2m\times 2m},$$
where $\Lambda^2=\begin{pmatrix}a^2-b^2&2ab\\-2ab&a^2-b^2\end{pmatrix}$ and $\Lambda^3=\begin{pmatrix}a(a^2-3b^2)&b(3a^2-b^2)\\-b(3a^2-b^2)&a(a^2-3b^2)\end{pmatrix}$; and we have the exponential function
$$e^{tC_m(a,b)}=e^{at}\begin{pmatrix}R&tR&\frac{t^2}{2!}R&\frac{t^3}{3!}R&\cdots&\frac{t^{m-1}}{(m-1)!}R\\&R&tR&\frac{t^2}{2!}R&\cdots&\frac{t^{m-2}}{(m-2)!}R\\&&R&tR&\cdots&\frac{t^{m-3}}{(m-3)!}R\\&&&\ddots&\ddots&\vdots\\&&&&R&tR\\&&&&&R\end{pmatrix},$$
where $R=\begin{pmatrix}\cos bt&\sin bt\\-\sin bt&\cos bt\end{pmatrix}$.
For the system r ˙ ( t ) = C m ( a , b ) r ( t ) , write
$$r(t)=\big(r_1(t),r_2(t),\dots,r_{2m-1}(t),r_{2m}(t)\big)^T=\big(r_{1,1}(t),r_{1,2}(t),r_{2,1}(t),r_{2,2}(t),\dots,r_{m,1}(t),r_{m,2}(t)\big)^T.$$
Substituting (19) into r ( t ) = e t C m ( a , b ) r ( 0 ) , we obtain the expressions of the coordinates of r ( t )
$$r_{i,1}(t)=e^{at}T_{m;i,1}(t),\qquad r_{i,2}(t)=e^{at}T_{m;i,2}(t)\qquad(i=1,2,\dots,m),$$
where
$$T_{m;i,1}(t)=\sum_{k=0}^{m-i}\frac{t^k}{k!}\big(r_{2i+2k-1}(0)\cos bt+r_{2i+2k}(0)\sin bt\big),\qquad T_{m;i,2}(t)=\sum_{k=0}^{m-i}\frac{t^k}{k!}\big({-r_{2i+2k-1}}(0)\sin bt+r_{2i+2k}(0)\cos bt\big).$$
By (21), we have
$$T_{m;1,1}^2(t)+T_{m;1,2}^2(t)=\frac{r_{m,1}^2(0)+r_{m,2}^2(0)}{\big((m-1)!\big)^2}\,t^{2m-2}+\sum_{\varphi=0}^{2m-3}t^{\varphi}B_{\varphi}(t),$$
where each B φ ( t ) is a bounded function.
Substituting (12) and (18) into r ( s ) ( t ) = C m s ( a , b ) r ( t ) for s = 1 , 2 , 3 , combined with (20), we see that the coordinates of the derivatives of r ( t ) are
$$\begin{aligned}
\dot r_{i,1}(t)&=ar_{i,1}(t)+br_{i,2}(t)+r_{i+1,1}(t)=e^{at}\big(aT_{m;i,1}(t)+bT_{m;i,2}(t)+T_{m;i+1,1}(t)\big),\\
\dot r_{i,2}(t)&=-br_{i,1}(t)+ar_{i,2}(t)+r_{i+1,2}(t)=e^{at}\big({-b}T_{m;i,1}(t)+aT_{m;i,2}(t)+T_{m;i+1,2}(t)\big),\\
\ddot r_{i,1}(t)&=(a^2-b^2)r_{i,1}(t)+2abr_{i,2}(t)+2ar_{i+1,1}(t)+2br_{i+1,2}(t)+r_{i+2,1}(t)\\
&=e^{at}\big((a^2-b^2)T_{m;i,1}(t)+2abT_{m;i,2}(t)+2aT_{m;i+1,1}(t)+2bT_{m;i+1,2}(t)+T_{m;i+2,1}(t)\big),\\
\ddot r_{i,2}(t)&=-2abr_{i,1}(t)+(a^2-b^2)r_{i,2}(t)-2br_{i+1,1}(t)+2ar_{i+1,2}(t)+r_{i+2,2}(t)\\
&=e^{at}\big({-2ab}\,T_{m;i,1}(t)+(a^2-b^2)T_{m;i,2}(t)-2bT_{m;i+1,1}(t)+2aT_{m;i+1,2}(t)+T_{m;i+2,2}(t)\big),\\
\dddot r_{i,1}(t)&=a(a^2-3b^2)r_{i,1}(t)+b(3a^2-b^2)r_{i,2}(t)+3(a^2-b^2)r_{i+1,1}(t)+6abr_{i+1,2}(t)+3ar_{i+2,1}(t)+3br_{i+2,2}(t)+r_{i+3,1}(t)\\
&=e^{at}\big(a(a^2-3b^2)T_{m;i,1}(t)+b(3a^2-b^2)T_{m;i,2}(t)+3(a^2-b^2)T_{m;i+1,1}(t)+6abT_{m;i+1,2}(t)+3aT_{m;i+2,1}(t)+3bT_{m;i+2,2}(t)+T_{m;i+3,1}(t)\big),\\
\dddot r_{i,2}(t)&=-b(3a^2-b^2)r_{i,1}(t)+a(a^2-3b^2)r_{i,2}(t)-6abr_{i+1,1}(t)+3(a^2-b^2)r_{i+1,2}(t)-3br_{i+2,1}(t)+3ar_{i+2,2}(t)+r_{i+3,2}(t)\\
&=e^{at}\big({-b}(3a^2-b^2)T_{m;i,1}(t)+a(a^2-3b^2)T_{m;i,2}(t)-6abT_{m;i+1,1}(t)+3(a^2-b^2)T_{m;i+1,2}(t)-3bT_{m;i+2,1}(t)+3aT_{m;i+2,2}(t)+T_{m;i+3,2}(t)\big),
\end{aligned}$$
where we have a convention that if i > m , then r i , j ( t ) = 0 ( j = 1 , 2 ) .
It should be noted that in the following subsections we will consider the case where A has more than one block of the form J p ( λ ) or C m ( a , b ) , so when P p ; k ( t ) , T m ; i , 1 ( t ) and T m ; i , 2 ( t ) appear in the following, the r k + l ( 0 ) in (16) should be understood as the coordinate of r ( t ) which corresponds to the ( k + l ) th row of the diagonal block corresponding to the P p ; k ( t ) , and the r 2 i + 2 k 1 ( 0 ) and r 2 i + 2 k ( 0 ) in (21) should be understood as the coordinates of r ( t ) which correspond to the ( 2 i + 2 k 1 ) th and ( 2 i + 2 k ) th row of the diagonal block corresponding to the T m ; i , 1 ( t ) and T m ; i , 2 ( t ) , respectively.
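The closed-form exponentials of the two blocks can be cross-checked against a general-purpose matrix exponential. The sketch below is an illustration added to this copy; it rebuilds $e^{tJ_p(\lambda)}$ and $e^{tC_m(a,b)}$ from the formulas above and compares them with scipy.linalg.expm.

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

def expm_J(p, lam, t):
    """Closed-form exp(t*J_p(lambda)): e^{lam t} times t^k/k! on the k-th superdiagonal."""
    return np.exp(lam * t) * sum((t**k / factorial(k)) * np.eye(p, k=k) for k in range(p))

def expm_C(m, a, b, t):
    """Closed-form exp(t*C_m(a,b)): e^{a t} times blocks (t^k/k!) R on the k-th block
    superdiagonal, with R = [[cos bt, sin bt], [-sin bt, cos bt]]."""
    R = np.array([[np.cos(b*t), np.sin(b*t)], [-np.sin(b*t), np.cos(b*t)]])
    return np.exp(a * t) * sum((t**k / factorial(k)) * np.kron(np.eye(m, k=k), R) for k in range(m))

p, lam, t = 4, -0.3, 1.7
Jp = lam * np.eye(p) + np.eye(p, k=1)
m, a, b = 3, -0.2, 1.5
Lam = np.array([[a, b], [-b, a]])
Cm = np.kron(np.eye(m), Lam) + np.kron(np.eye(m, k=1), np.eye(2))
print(np.allclose(expm_J(p, lam, t), expm(t * Jp)))   # expected: True
print(np.allclose(expm_C(m, a, b, t), expm(t * Cm)))  # expected: True
```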

4.2. Function V 2 ( t )

By Proposition 6, we have
$$V_2^2(t)=\sum_{1\le i<j\le n}\begin{vmatrix}\dot r_i(t)&\ddot r_i(t)\\ \dot r_j(t)&\ddot r_j(t)\end{vmatrix}^2.$$
Considering the form of the expression of torsion τ ( t ) , it is necessary to make a detailed analysis of the function V 2 ( t ) .
Lemma 1.
Suppose that $\dot r(t)=Ar(t)$ is a linear time-invariant system, where $A$ is an $n\times n$ matrix in real Jordan canonical form, and $r(t)\in\mathbb{R}^n$. The function $V_2(t)$ is given by (24). Then for any given $r(0)\in S$, we have
$(1)$ $V_2(t)\equiv 0$ if and only if
$$A=\begin{pmatrix}\lambda&&&\\&\ddots&&\\&&\lambda&\\&&&0_{z\times z}\end{pmatrix}\ (\lambda\in\mathbb{R})\qquad\text{or}\qquad A=\begin{pmatrix}J_2(0)&&&\\&\ddots&&\\&&J_2(0)&\\&&&0_{z\times z}\end{pmatrix},$$
where $z\in\{0,1,\dots,n\}$;
$(2)$ if $V_2(t)\not\equiv 0$, then there exists a $T>0$ such that $V_2(t)>0$ for all $t>T$.
Proof. 
Suppose A is an n × n matrix in real Jordan canonical form.
(a) If A has a diagonal block C m ( a , b ) (without loss of generality, we assume that this C m ( a , b ) block is the first diagonal block of A), then by (20), (22), (23), and the analysis of Section 4.4 of [3], we have
$$V_2^2(t)\geq\sum_{1\le i<j\le 2m}\begin{vmatrix}\dot r_i(t)&\ddot r_i(t)\\ \dot r_j(t)&\ddot r_j(t)\end{vmatrix}^2=e^{4at}\Big(Ct^{4m-4}+\sum_{\varphi=0}^{4m-5}t^{\varphi}B_{\varphi}(t)\Big),$$
where the constant $C=b^2\big(a^2+b^2\big)^2\dfrac{\big(r_{m,1}^2(0)+r_{m,2}^2(0)\big)^2}{\big((m-1)!\big)^4}>0$, and the $B_{\varphi}(t)$ ($\varphi=0,1,\dots,4m-5$) are bounded functions. It follows that there exists a $T>0$ such that $V_2^2(t)>0$ for all $t>T$.
(b) If $A$ has a diagonal block $J_p(\lambda)$, where $p\geq 3$, or $p=2$ and $\lambda\neq 0$, then by (15), (17), and the analysis of Section 4.2 of [3], we have
$$\sum_{1\le i<j\le p}\begin{vmatrix}\dot r_i(t)&\ddot r_i(t)\\ \dot r_j(t)&\ddot r_j(t)\end{vmatrix}^2=e^{4\lambda t}f(t),$$
where $f(t)$ is a polynomial, and
(b1) if $p\geq 3$, then $\deg(f(t))=4(p-2)$ for $\lambda\neq 0$, and $\deg(f(t))=4(p-3)$ for $\lambda=0$;
(b2) if $p=2$ and $\lambda\neq 0$, then $f(t)=\lambda^4r_2^4(0)>0$.
We see that in both (b1) and (b2) there exists a $T>0$ such that $f(t)>0$ for all $t>T$; thus
$$V_2^2(t)\geq\sum_{1\le i<j\le p}\begin{vmatrix}\dot r_i(t)&\ddot r_i(t)\\ \dot r_j(t)&\ddot r_j(t)\end{vmatrix}^2=e^{4\lambda t}f(t)>0$$
for all t > T .
(c) If $A$ has $J_1(\lambda_1)$ and $J_1(\lambda_2)$ as diagonal blocks, where $\lambda_1\neq\lambda_2$ and $\lambda_1\lambda_2\neq 0$, without loss of generality we can assume $A=\mathrm{diag}\{J_1(\lambda_1),J_1(\lambda_2),\dots\}$; then by (7), we have
$$V_2^2(t)\geq e^{2(\lambda_1+\lambda_2)t}\big(\lambda_1\lambda_2(\lambda_2-\lambda_1)r_1(0)r_2(0)\big)^2>0.$$
(d) If both $J_2(0)$ and $J_1(\lambda)$ ($\lambda\neq 0$) are diagonal blocks of $A$, without loss of generality we can assume $A=\mathrm{diag}\{J_2(0),J_1(\lambda),\dots\}$; then we have
$$V_2^2(t)\geq\begin{vmatrix}\dot r_1(t)&\ddot r_1(t)\\ \dot r_3(t)&\ddot r_3(t)\end{vmatrix}^2=\begin{vmatrix}r_2(t)&0\\ \lambda r_3(t)&\lambda^2r_3(t)\end{vmatrix}^2=\begin{vmatrix}r_2(0)&0\\ \lambda e^{\lambda t}r_3(0)&\lambda^2e^{\lambda t}r_3(0)\end{vmatrix}^2=\big(\lambda^2e^{\lambda t}r_2(0)r_3(0)\big)^2>0.$$
In cases (a), (b), (c), and (d), we have shown that there exists a $T>0$ such that $V_2(t)>0$ for all $t>T$. Note that (a)–(d) cover all cases in which $A$ is a matrix in real Jordan canonical form except for the two cases in (25). Nevertheless, by direct calculation, we have $V_2(t)\equiv 0$ in the two cases in (25), which completes the proof. □
From Lemma 1 we know that, except for the two trivial cases in (25), we have $V_2(t)>0$ when $t$ is sufficiently large; that is to say, there exists a $T>0$ such that the expression (5) of the torsion $\tau(t)$ is valid for all $t>T$, which avoids a great deal of potential trouble when we consider the limit of $\tau(t)$ as $t\to+\infty$ in the proof of Theorem 2.

4.3. Function V 3 ( t )

The function V 3 ( t ) is given by Proposition 6. In fact, we have
$$V_3^2(t)=\sum_{1\le i<j<k\le n}\begin{vmatrix}\dot r_i(t)&\ddot r_i(t)&\dddot r_i(t)\\ \dot r_j(t)&\ddot r_j(t)&\dddot r_j(t)\\ \dot r_k(t)&\ddot r_k(t)&\dddot r_k(t)\end{vmatrix}^2.$$
By (17) and (23), we see that all coordinates of $r^{(s)}(t)$ ($s=1,2,3$) can be expressed in the form
$$\dot r_{i;k}(t)=e^{\mathrm{Re}(\lambda_i)t}f_{i;k}(t),\qquad \ddot r_{i;k}(t)=e^{\mathrm{Re}(\lambda_i)t}g_{i;k}(t),\qquad \dddot r_{i;k}(t)=e^{\mathrm{Re}(\lambda_i)t}h_{i;k}(t),$$
where $r_{i;k}^{(s)}(t)$ denotes the coordinate of $r^{(s)}(t)$ corresponding to the $k$th row of the $i$th diagonal block of $A$. Hence
$$\begin{vmatrix}\dot r_{i_1;k_1}(t)&\ddot r_{i_1;k_1}(t)&\dddot r_{i_1;k_1}(t)\\ \dot r_{i_2;k_2}(t)&\ddot r_{i_2;k_2}(t)&\dddot r_{i_2;k_2}(t)\\ \dot r_{i_3;k_3}(t)&\ddot r_{i_3;k_3}(t)&\dddot r_{i_3;k_3}(t)\end{vmatrix}^2=e^{2\big(\mathrm{Re}\,\lambda_{i_1}+\mathrm{Re}\,\lambda_{i_2}+\mathrm{Re}\,\lambda_{i_3}\big)t}\,G(t),$$
where G ( t ) is a linear combination of terms in the form of t β B γ ( t ) , where B γ ( t ) is a bounded function.
In the remainder of this paper, set
$$M=\max\{\mathrm{Re}(\lambda)\mid\lambda\in\sigma(A)\}.$$
Then by (28) and (29), we obtain
$$\Delta_1\leq 6M,$$
where $\Delta_1$ denotes the maximum value of $\alpha$ over the terms of the form $e^{\alpha t}t^{\beta}B_{\gamma}(t)$ in $V_3^2(t)$.

4.4. Proof of Theorem 2 (1)

In order to give a proof of Theorem 2 (1), we only need to prove Proposition 10 (1). In this subsection, we will discuss the two cases in which the zero solution of the system is unstable, and obtain lim t + τ ( t ) = 0 . In fact, we will prove Lemma 2 and Lemma 3.
Lemma 2.
Under the assumptions of Proposition 10, if $M>0$, then for any given $r(0)\in S$, we have $\lim_{t\to+\infty}\tau(t)=0$.
Proof. 
Suppose M > 0 . Note that
$$\tau^2(t)=\frac{V_3^2(t)}{V_2^4(t)}=\frac{\displaystyle\sum_{1\le i<j<k\le n}\begin{vmatrix}\dot r_i(t)&\ddot r_i(t)&\dddot r_i(t)\\ \dot r_j(t)&\ddot r_j(t)&\dddot r_j(t)\\ \dot r_k(t)&\ddot r_k(t)&\dddot r_k(t)\end{vmatrix}^2}{\Bigg(\displaystyle\sum_{1\le p<q\le n}\begin{vmatrix}\dot r_p(t)&\ddot r_p(t)\\ \dot r_q(t)&\ddot r_q(t)\end{vmatrix}^2\Bigg)^2},$$
where the functions $V_3^2(t)$ and $V_2^4(t)$ are both linear combinations of terms of the form $e^{\alpha t}t^{\beta}B_{\gamma}(t)$, where each $B_{\gamma}(t)$ is a bounded function. We will prove $\lim_{t\to+\infty}\tau(t)=0$ in each of the following cases. For simplicity, let $t>0$.
(a) If A has a diagonal block C m ( M , b ) , then by (26), (29), and (30), we have
$$0\leq\tau^2(t)=\frac{V_3^2(t)}{V_2^4(t)}\leq\frac{e^{6Mt}F(t)+R(t)}{\Big(e^{4Mt}\big(Ct^{4m-4}+\sum_{\varphi=0}^{4m-5}t^{\varphi}B_{\varphi}(t)\big)\Big)^2}=\frac{e^{6Mt}F(t)+R(t)}{e^{8Mt}\Big(C^2t^{8m-8}+\sum_{\psi=0}^{8m-9}t^{\psi}B_{\psi}(t)\Big)}\longrightarrow 0\qquad(t\to+\infty),$$
where the constant $C>0$, all $B_{\varphi}(t)$ and $B_{\psi}(t)$ are bounded functions, the function $F(t)$ is a linear combination of terms of the form $t^{\beta}B_{\gamma}(t)$, and $R(t)$ is a linear combination of terms of the form $e^{\alpha t}t^{\beta}B_{\gamma}(t)$ with $\alpha<6M$, where each $B_{\gamma}(t)$ is a bounded function. Hence we obtain $\lim_{t\to+\infty}\tau(t)=0$.
(b) If $A$ has a diagonal block $J_p(M)$ ($p\geq 2$), then by (27), (29), and (30), we have
$$0\leq\tau^2(t)=\frac{V_3^2(t)}{V_2^4(t)}\leq\frac{e^{6Mt}F(t)+R(t)}{e^{8Mt}f^2(t)}\longrightarrow 0\qquad(t\to+\infty),$$
where $f(t)$ is a polynomial satisfying $f(t)>0$ for $t$ sufficiently large and $\deg(f(t))=4(p-2)$, the function $F(t)$ is a linear combination of terms of the form $t^{\beta}B_{\gamma}(t)$, and $R(t)$ is a linear combination of terms of the form $e^{\alpha t}t^{\beta}B_{\gamma}(t)$ with $\alpha<6M$, where each $B_{\gamma}(t)$ is a bounded function. Hence we obtain $\lim_{t\to+\infty}\tau(t)=0$.
(c) If the only diagonal blocks of $A$ whose eigenvalues satisfy $\mathrm{Re}(\lambda)=M$ are $J_1(M)$ blocks, then we should also consider the eigenvalues whose real parts are less than $M$. In fact, suppose two $J_1(M)$ diagonal blocks occupy the $i$th and $j$th rows of $A$, respectively. Then
$$\begin{vmatrix}\dot r_i(t)&\ddot r_i(t)\\ \dot r_j(t)&\ddot r_j(t)\end{vmatrix}^2=\begin{vmatrix}e^{Mt}Mr_i(0)&e^{Mt}M^2r_i(0)\\ e^{Mt}Mr_j(0)&e^{Mt}M^2r_j(0)\end{vmatrix}^2=0,$$
which means that such terms contribute nothing to the value of $V_2^2(t)$. In addition, note that the $J_1(0)$ diagonal blocks of $A$ do not affect the value of $\tau(t)$. We define
$$N=\max\{\mathrm{Re}(\lambda)\mid\lambda\in\tilde\sigma(A)\setminus\{M\}\},$$
where $\tilde\sigma(A)$ denotes the set of eigenvalues of $A$ excluding the zero eigenvalues of the $J_1(0)$ blocks.
(c1) Suppose that A has a diagonal block C m ( N , b ) . Let r M ( t ) denote the coordinate of r ( t ) corresponding to the row of a diagonal block J 1 ( M ) of A, and r N , 1 ( t ) , r N , 2 ( t ) denote the coordinate of r ( t ) corresponding to the first and second row of the diagonal block C m ( N , b ) of A, respectively. Then by (22) and (23), we have
$$\begin{aligned}
&\begin{vmatrix}\dot r_M(t)&\ddot r_M(t)\\ \dot r_{N,1}(t)&\ddot r_{N,1}(t)\end{vmatrix}^2+\begin{vmatrix}\dot r_M(t)&\ddot r_M(t)\\ \dot r_{N,2}(t)&\ddot r_{N,2}(t)\end{vmatrix}^2\\
&=\begin{vmatrix}e^{Mt}Mr_M(0)&e^{Mt}M^2r_M(0)\\ e^{Nt}\big(NT_{m;1,1}(t)+bT_{m;1,2}(t)+T_{m;2,1}(t)\big)&e^{Nt}\big((N^2-b^2)T_{m;1,1}(t)+2NbT_{m;1,2}(t)+\cdots\big)\end{vmatrix}^2\\
&\quad+\begin{vmatrix}e^{Mt}Mr_M(0)&e^{Mt}M^2r_M(0)\\ e^{Nt}\big({-b}T_{m;1,1}(t)+NT_{m;1,2}(t)+T_{m;2,2}(t)\big)&e^{Nt}\big({-2Nb}T_{m;1,1}(t)+(N^2-b^2)T_{m;1,2}(t)+\cdots\big)\end{vmatrix}^2\\
&=e^{2(M+N)t}\bigg(M^2r_M^2(0)\big(N^2+b^2\big)\big((M-N)^2+b^2\big)\big(T_{m;1,1}^2(t)+T_{m;1,2}^2(t)\big)+\sum_{\chi=0}^{2m-3}t^{\chi}B_{\chi}(t)\bigg)\\
&=e^{2(M+N)t}\bigg(M^2r_M^2(0)\big(N^2+b^2\big)\big((M-N)^2+b^2\big)\Big(\frac{r_{m,1}^2(0)+r_{m,2}^2(0)}{\big((m-1)!\big)^2}t^{2m-2}+\sum_{\varphi=0}^{2m-3}t^{\varphi}B_{\varphi}(t)\Big)+\sum_{\chi=0}^{2m-3}t^{\chi}B_{\chi}(t)\bigg)\\
&=e^{2(M+N)t}\bigg(Ct^{2m-2}+\sum_{\psi=0}^{2m-3}t^{\psi}B_{\psi}(t)\bigg),
\end{aligned}$$
where the constant $C=M^2\big(N^2+b^2\big)\big((M-N)^2+b^2\big)r_M^2(0)\,\dfrac{r_{m,1}^2(0)+r_{m,2}^2(0)}{\big((m-1)!\big)^2}>0$, and all $B_{\chi}(t)$, $B_{\varphi}(t)$, and $B_{\psi}(t)$ are bounded functions.
(c2) Suppose that A has a diagonal block J p ( N ) . Let r M ( t ) denote the coordinate of r ( t ) corresponding to the row of a diagonal block J 1 ( M ) of A, and r N ( t ) the coordinate of r ( t ) corresponding to the first row of the diagonal block J p ( N ) of A. Then by (16) and (17), we have
$$\begin{vmatrix}\dot r_M(t)&\ddot r_M(t)\\ \dot r_N(t)&\ddot r_N(t)\end{vmatrix}^2=\begin{vmatrix}e^{Mt}Mr_M(0)&e^{Mt}M^2r_M(0)\\ e^{Nt}\big(NP_{p;1}(t)+P_{p;2}(t)\big)&e^{Nt}\big(N^2P_{p;1}(t)+2NP_{p;2}(t)+P_{p;3}(t)\big)\end{vmatrix}^2=\begin{cases}e^{2(M+N)t}\Big(Ct^{2p-2}+\displaystyle\sum_{\varphi=0}^{2p-3}t^{\varphi}B_{\varphi}(t)\Big),&N\neq 0,\\[3mm] e^{2(M+N)t}\Big(\hat Ct^{2p-4}+\displaystyle\sum_{\psi=0}^{2p-5}t^{\psi}B_{\psi}(t)\Big),&N=0\ \text{and}\ p\geq 2,\end{cases}$$
where the constants $C,\hat C>0$, and all $B_{\varphi}(t)$ and $B_{\psi}(t)$ are bounded functions.
By (c1) and (c2), we can give the expression of V 2 2 ( t ) in case (c). In fact, we suppose
$$C_{m_1}(N,b_1),\ C_{m_2}(N,b_2),\ \dots,\ C_{m_k}(N,b_k),\ J_{p_1}(N),\ J_{p_2}(N),\ \dots,\ J_{p_l}(N)\qquad(m_1\geq m_2\geq\cdots\geq m_k,\ \text{and}\ p_1\geq p_2\geq\cdots\geq p_l)$$
are all the diagonal blocks whose eigenvalues satisfy $\mathrm{Re}(\lambda)=N$. Then by (24), (32), and (33), we obtain
$$V_2^2(t)\geq e^{2(M+N)t}\Big(Ct^{\nu}+\sum_{\varphi=0}^{\nu-1}t^{\varphi}B_{\varphi}(t)\Big),$$
where the constant $C>0$,
$$\nu=\begin{cases}\max\{2m_1-2,\,2p_1-2\},&N\neq 0,\\ \max\{2m_1-2,\,2p_1-4\},&N=0,\end{cases}$$
and each $B_{\varphi}(t)$ is a bounded function.
In what follows, $\Delta_1$ and $\Delta_2$ denote the maximum values of $\alpha$ over the terms of the form $e^{\alpha t}t^{\beta}B_{\gamma}(t)$ in $V_3^2(t)$ and $V_2^4(t)$, respectively. Then by (34), we have
$$\Delta_2=4(M+N).$$
In the determinant of (29), at most one row can correspond to a diagonal block with eigenvalue $M$, and the real parts of the eigenvalues of the diagonal blocks corresponding to the other two rows are not greater than $N$; otherwise the determinant vanishes in $V_3^2(t)$. Hence we have
$$\Delta_1\leq 2(M+2N).$$
Thus, we have $\Delta_1-\Delta_2\leq-2M<0$. It follows that
$$0\leq\tau^2(t)=\frac{V_3^2(t)}{V_2^4(t)}\leq\frac{e^{\Delta_1t}F(t)+R(t)}{e^{\Delta_2t}\Big(\tilde Ct^{2\nu}+\sum_{\psi=0}^{2\nu-1}t^{\psi}B_{\psi}(t)\Big)}\longrightarrow 0\qquad(t\to+\infty),$$
where the constant $\tilde C>0$, each $B_{\psi}(t)$ is a bounded function, the function $F(t)$ is a linear combination of terms of the form $t^{\beta}B_{\gamma}(t)$, and $R(t)$ is a linear combination of terms of the form $e^{\alpha t}t^{\beta}B_{\gamma}(t)$ with $\alpha<\Delta_1$, where each $B_{\gamma}(t)$ is a bounded function. Hence we obtain $\lim_{t\to+\infty}\tau(t)=0$.
Note that (a) (b) (c) cover all cases that satisfy M > 0 , which completes the proof. □
Now we give Lemma 3.
Lemma 3.
Under the assumptions of Proposition 10, if $M=0$ and $A$ has a diagonal block $C_m(0,b)$ ($m\geq 2$), then for any given $r(0)\in S$, we have $\lim_{t\to+\infty}\tau(t)=0$.
Proof. 
Suppose $M=0$ and $A$ has a diagonal block $C_m(0,b)$ ($m\geq 2$). Then from (30), we have $\Delta_1\leq 0$. From (24) and (26), we have $\Delta_2=0$.
If $\Delta_1<\Delta_2=0$, then we have $\lim_{t\to+\infty}\tau(t)=0$.
If $\Delta_1=\Delta_2=0$, then in order to obtain the limit of $\tau(t)$ as $t\to+\infty$, we need to compare the highest powers of $t$ in the terms of the form $e^{0t}t^{\beta}B_{\gamma}(t)$ in the numerator and denominator of $\tau^2(t)$. Let $\Gamma_1$ and $\Gamma_2$ denote the maximum values of $\beta$ over the terms of the form $e^{0t}t^{\beta}B_{\gamma}(t)$ in $V_3^2(t)$ and $V_2^4(t)$, respectively. Then we have
$$\Gamma_1\leq 6(m-1).$$
In fact, by (21) and (23), for a diagonal block $C_m(0,b)$ ($m\geq 2$), the functions $T_{m;1,1}(t)$ and $T_{m;1,2}(t)$ can reach the highest power $m-1$ of $t$, namely $t^{m-1}$; thus $r_{1,1}^{(s)}(t)$ and $r_{1,2}^{(s)}(t)$ ($s=1,2,3$), corresponding to the first two rows of $C_m(0,b)$ ($m\geq 2$), can reach the highest power $m-1$ of $t$. Hence by (28) and (29), we obtain (36). In addition, by (24) and (26), we have
$$\Gamma_2=2(4m-4)=8(m-1).$$
Therefore $\Gamma_1\leq 6(m-1)<8(m-1)=\Gamma_2$. It follows that
$$0\leq\tau^2(t)=\frac{V_3^2(t)}{V_2^4(t)}\leq\frac{\sum_{\varphi=0}^{\Gamma_1}t^{\varphi}B_{\varphi}(t)+R(t)}{Ct^{\Gamma_2}+\sum_{\psi=0}^{\Gamma_2-1}t^{\psi}B_{\psi}(t)}\longrightarrow 0\qquad(t\to+\infty),$$
where the constant $C>0$, all $B_{\varphi}(t)$ and $B_{\psi}(t)$ are bounded functions, and $R(t)$ is a linear combination of terms of the form $e^{\alpha t}t^{\beta}B_{\gamma}(t)$ with $\alpha<0$, where each $B_{\gamma}(t)$ is a bounded function. Hence we obtain $\lim_{t\to+\infty}\tau(t)=0$. □
Lemma 2 and Lemma 3 show that, under the assumptions of Proposition 10, if the zero solution of the system is unstable, then $\lim_{t\to+\infty}\tau(t)=0$. That is to say, Proposition 10 (1) is proved, and thus Theorem 2 (1) is proved.

4.5. Proof of Theorem 2 (2)

We have proved Proposition 10 (1), and in order to prove Proposition 10 (2), we only need to prove the following lemma.
Lemma 4.
Under the assumptions of Proposition 10, if $M=0$ and the only diagonal blocks of $A$ whose eigenvalues satisfy $\mathrm{Re}(\lambda)=0$ are $C_1(0,b)$ blocks, then for any given $r(0)\in S$, we have $\lim_{t\to+\infty}\tau(t)=0$ or $\lim_{t\to+\infty}\tau(t)=C$, where the constant $C>0$.
Proof. 
Set
$$A=\begin{pmatrix}C_1(0,b_1)&&&&\\&C_1(0,b_2)&&&\\&&\ddots&&\\&&&C_1(0,b_s)&\\&&&&\tilde A\end{pmatrix},$$
where all eigenvalues of A ˜ have negative real parts.
(1) If s = 1 , then by (24) and (26), we have Δ 2 = 0 . In the determinant of (29), we can see that at most two rows correspond to the diagonal block C 1 ( 0 , b 1 ) , and the real part of eigenvalue of the diagonal block corresponding to the other row is negative. Hence Δ 1 < 0 . It follows that lim t + τ ( t ) = 0 .
(2) If $s>1$, then $\Delta_1\leq 0=\Delta_2$. By direct calculation, we have
$$\lim_{t\to+\infty}\tau^2(t)=\frac{\displaystyle\sum_{1\le i<j\le s}b_i^2b_j^2\big(b_i^2-b_j^2\big)^2\big(r_{i;1}^2(0)+r_{i;2}^2(0)\big)\big(r_{j;1}^2(0)+r_{j;2}^2(0)\big)}{\Big(\displaystyle\sum_{k=1}^{s}b_k^2\big(r_{k;1}^2(0)+r_{k;2}^2(0)\big)\Big)^2\displaystyle\sum_{l=1}^{s}b_l^4\big(r_{l;1}^2(0)+r_{l;2}^2(0)\big)}=\begin{cases}0,&b_1=b_2=\cdots=b_s,\\[1mm] C>0,&\text{otherwise}.\end{cases}$$
Hence we have $\lim_{t\to+\infty}\tau(t)=0$ or $\lim_{t\to+\infty}\tau(t)=C>0$. □
By Proposition 10 (1) and Lemma 4, Proposition 10 (2) follows: if $\lim_{t\to+\infty}\tau(t)=+\infty$, then by Lemmas 2–4 the matrix $A$ (which is invertible) can have no eigenvalue with nonnegative real part, so by Proposition 2 the zero solution is asymptotically stable. This completes the proof of Theorem 2.
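The limit formula in Lemma 4 is also easy to check numerically: for a block-diagonal matrix built only from $C_1(0,b_k)$ blocks the torsion of the trajectory is constant in $t$ and equals the expression above. The sketch below is an illustration added to this copy, with arbitrarily chosen data.

```python
import numpy as np
from scipy.linalg import expm

def tau_sq(A, r0, t):
    """tau^2(t) = V_3^2 / V_2^4 from Gram determinants of r'(t), r''(t), r'''(t)."""
    r = expm(t * A) @ r0
    D = np.column_stack([np.linalg.matrix_power(A, i) @ r for i in range(1, 4)])
    return np.linalg.det(D.T @ D) / np.linalg.det(D[:, :2].T @ D[:, :2]) ** 2

def lemma4_limit(b, rho_sq):
    """Right-hand side of the limit formula in Lemma 4; rho_sq[k] = r_{k;1}^2(0) + r_{k;2}^2(0)."""
    b, rho_sq = np.asarray(b, float), np.asarray(rho_sq, float)
    num = sum(b[i]**2 * b[j]**2 * (b[i]**2 - b[j]**2)**2 * rho_sq[i] * rho_sq[j]
              for i in range(len(b)) for j in range(i + 1, len(b)))
    return num / (np.sum(b**2 * rho_sq)**2 * np.sum(b**4 * rho_sq))

b1, b2 = 1.0, 2.0   # two C_1(0, b) blocks with different b
A = np.array([[0, b1, 0, 0], [-b1, 0, 0, 0], [0, 0, 0, b2], [0, 0, -b2, 0]], dtype=float)
r0 = np.array([1.0, 0.5, -0.7, 1.2])
print(tau_sq(A, r0, 3.0), lemma4_limit([b1, b2], [r0[0]**2 + r0[1]**2, r0[2]**2 + r0[3]**2]))
```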

4.6. Remark

In Theorem 2 and Proposition 10, the condition that A is invertible cannot be removed. In fact, we have the following two examples.
(1) Let
$$A=\begin{pmatrix}0&1&0&0\\0&0&0&0\\0&0&-1&1\\0&0&-1&-1\end{pmatrix}.$$
Then by (31), we have
$$\tau^2(t)=\frac{e^{4t}r_2^2(0)}{\big(e^{2t}r_2^2(0)+r_3^2(0)+r_4^2(0)\big)^2}$$
for any given $r(0)\in S$. It follows that
$$\lim_{t\to+\infty}\tau(t)=\frac{1}{|r_2(0)|}>0.$$
Nevertheless, since $\det A=0$, we cannot conclude stability from $\lim_{t\to+\infty}\tau(t)\neq 0$. In fact, noting that $A$ is a matrix in real Jordan canonical form with a diagonal block $J_2(0)$, we see that the zero solution of the system is unstable.
(2) Let
$$A=\begin{pmatrix}-1&1&0&0\\0&-1&1&0\\0&0&-1&0\\0&0&0&0\end{pmatrix}.$$
Then by a direct calculation, we have
$$\lim_{t\to+\infty}\tau(t)=+\infty$$
for any given $r(0)\in S$. Nevertheless, since $\det A=0$, the zero solution of the system is not asymptotically stable.
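Both counterexamples can be reproduced numerically with the two matrices above (as reconstructed in this copy); the sketch below is an added illustration. The torsion of the first system approaches $1/|r_2(0)|$, while the torsion of the second keeps growing, even though neither zero solution is asymptotically stable.

```python
import numpy as np
from scipy.linalg import expm

def tau(A, r0, t):
    """Torsion tau(t) = V_3 / V_2^2 of the trajectory r(t) = exp(tA) r0."""
    r = expm(t * A) @ r0
    D = np.column_stack([np.linalg.matrix_power(A, i) @ r for i in range(1, 4)])
    V3 = np.sqrt(max(np.linalg.det(D.T @ D), 0.0))
    return V3 / np.linalg.det(D[:, :2].T @ D[:, :2])

# Counterexample (1): J_2(0) block plus C_1(-1, 1) block; det A = 0.
A1 = np.array([[0, 1, 0, 0], [0, 0, 0, 0], [0, 0, -1, 1], [0, 0, -1, -1]], dtype=float)
# Counterexample (2): J_3(-1) block plus J_1(0) block; det A = 0.
A2 = np.array([[-1, 1, 0, 0], [0, -1, 1, 0], [0, 0, -1, 0], [0, 0, 0, 0]], dtype=float)
r0 = np.array([1.0, 2.0, 1.0, 1.0])

for t in [5.0, 10.0, 20.0]:
    # tau for A1 should approach 1/|r_2(0)| = 0.5; tau for A2 should keep growing.
    print(t, tau(A1, r0, t), tau(A2, r0, t))
```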

5. Examples

In this section, we give two examples, which correspond to Theorem 1 and Theorem 2, respectively.

5.1. Example 1

Let $r(t)=\big(r_1(t),r_2(t),r_3(t),r_4(t)\big)^T\in\mathbb{R}^4$, and
A = 25 8 39 19 14 10 26 14 9 0 7 9 5 8 21 1 .
Then $\dot r(t)=Ar(t)$ is a four-dimensional linear time-invariant system, and $\det A=\frac{1}{320}\neq 0$. Set
$$E=\Big\{r(0)\in\mathbb{R}^4\ \Big|\ \prod_{i=1}^{4}v_{i,0}\neq 0\Big\},$$
where
v 1 , 0 = r 1 ( 0 ) 2 r 3 ( 0 ) + r 4 ( 0 ) , v 2 , 0 = r 1 ( 0 ) + 2 r 2 ( 0 ) + r 3 ( 0 ) , v 3 , 0 = r 1 ( 0 ) + 2 r 2 ( 0 ) + 2 r 3 ( 0 ) + r 4 ( 0 ) , v 4 , 0 = r 1 ( 0 ) + r 3 ( 0 ) r 4 ( 0 ) .
Then the Lebesgue measure of $E$ satisfies $m(E)=+\infty$. By direct calculation, the limits of the first curvature and the torsion of the trajectory $r(t)$ as $t\to+\infty$ are $\lim_{t\to+\infty}\kappa(t)=0$ and $\lim_{t\to+\infty}\tau(t)=0$ for $r(0)\in E$, respectively. Nevertheless, the third curvature $\kappa_3(t)$ of the trajectory $r(t)$ satisfies
$$\lim_{t\to+\infty}\kappa_3(t)=+\infty$$
for any $r(0)\in E$. Consequently, by Theorem 1, the zero solution of the system is asymptotically stable.
The graph of the function $\kappa_3(t)$ is shown in Figure 1, where $r(0)=(1,1,1,1)^T$.

5.2. Example 2

We consider a popular model from classical mechanics, the coupled oscillator (cf. [10]). Two masses $P$ and $Q$ are each attached to a wall by a spring with constant $k$ and are connected to each other by a spring with constant $k'$. Assume that the masses are identical, i.e., $m_P=m_Q=m$, but the two spring constants $k$ and $k'$ are different, as shown in Figure 2.
Let $x_P$ be the displacement of $P$ from its equilibrium and $x_Q$ the displacement of $Q$ from its equilibrium. Holding $Q$ fixed and moving $P$, the force on $P$ is
$$F_{1P}=-kx_P-k'x_P.$$
Holding $P$ fixed and moving $Q$, the force on $P$ is
$$F_{2P}=k'x_Q.$$
Thus by Newton's second law we have
$$m\ddot x_P=F_{1P}+F_{2P}=-(k+k')x_P+k'x_Q.$$
Similarly, for $Q$ we have
$$m\ddot x_Q=-(k+k')x_Q+k'x_P.$$
Introducing the two variables $v_P=\dot x_P$ and $v_Q=\dot x_Q$, the above equations are equivalent to the following linear system
$$\begin{pmatrix}\dot x_P\\ \dot x_Q\\ \dot v_P\\ \dot v_Q\end{pmatrix}=\begin{pmatrix}0&0&1&0\\ 0&0&0&1\\ -\frac{k+k'}{m}&\frac{k'}{m}&0&0\\ \frac{k'}{m}&-\frac{k+k'}{m}&0&0\end{pmatrix}\begin{pmatrix}x_P\\ x_Q\\ v_P\\ v_Q\end{pmatrix}.$$
For simplicity we denote the system by $\dot r(t)=Ar(t)$, where
$$r(t)=\begin{pmatrix}x_P(t)\\ x_Q(t)\\ v_P(t)\\ v_Q(t)\end{pmatrix},\qquad A=\begin{pmatrix}0&0&1&0\\ 0&0&0&1\\ -\frac{k+k'}{m}&\frac{k'}{m}&0&0\\ \frac{k'}{m}&-\frac{k+k'}{m}&0&0\end{pmatrix}.$$
Set
$$E=\Big\{r(0)\in\mathbb{R}^4\ \Big|\ \big(r_1^2(0)-r_2^2(0)\big)\big(r_3^2(0)-r_4^2(0)\big)\neq 0\Big\}.$$
Then the Lebesgue measure of $E$ satisfies $m(E)=+\infty$. By direct calculation, the torsion $\tau(t)$ of the trajectory $r(t)$ is a periodic function and $\lim_{t\to+\infty}\tau(t)$ does not exist for any $r(0)\in E$. Hence by Theorem 2, the zero solution of the system (37) is stable.
As an example, we suppose that $k/m=1$ and $k'/m=2$, and take the initial value $r(0)=(1,2,1,2)^T$. Then we have
$$\tau^2(t)=\frac{\sqrt{5}\sin\big(2\sqrt{5}\,t\big)+2\cos\big(2\sqrt{5}\,t\big)+17}{2\Big(\sqrt{5}\sin\big(2\sqrt{5}\,t\big)+2\cos\big(2\sqrt{5}\,t\big)-11\Big)^2}.$$
The graph of the function $\tau^2(t)$ is shown in Figure 3.
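The closed-form expression for $\tau^2(t)$ can be cross-checked numerically. The sketch below is an added illustration (assuming SciPy is available); it compares the Gram-determinant computation of the torsion with the formula above for $k/m=1$, $k'/m=2$, and $r(0)=(1,2,1,2)^T$.

```python
import numpy as np
from scipy.linalg import expm

# Coupled-oscillator system matrix with k/m = 1 and k'/m = 2, so (k + k')/m = 3.
A = np.array([[0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [-3.0, 2.0, 0.0, 0.0],
              [2.0, -3.0, 0.0, 0.0]])
r0 = np.array([1.0, 2.0, 1.0, 2.0])

def torsion_sq(A, r0, t):
    """tau^2(t) = V_3^2 / V_2^4 from the Gram determinants of r', r'', r'''."""
    r = expm(t * A) @ r0
    D = np.column_stack([np.linalg.matrix_power(A, i) @ r for i in range(1, 4)])
    return np.linalg.det(D.T @ D) / np.linalg.det(D[:, :2].T @ D[:, :2]) ** 2

def torsion_sq_closed_form(t):
    """Closed-form expression for tau^2(t) stated in Example 2."""
    u = np.sqrt(5) * np.sin(2 * np.sqrt(5) * t) + 2 * np.cos(2 * np.sqrt(5) * t)
    return (u + 17) / (2 * (u - 11) ** 2)

for t in [0.0, 0.7, 1.3, 2.9]:
    print(t, torsion_sq(A, r0, t), torsion_sq_closed_form(t))  # the two columns should agree
```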

6. Conclusions and Future Work

The main contribution of this paper is to further develop the geometric description of the stability of linear time-invariant systems in arbitrary dimension. Unlike traditional methods based on linear algebra, we focus on the curvatures of the state trajectories. Specifically, we prove the two main results of this paper, Theorem 1 and Theorem 2. For the case where $A$ is similar to a real diagonal matrix, Theorem 1 gives a relationship between the $i$th curvature ($i=1,2,\dots$) of the trajectory and the stability of the zero solution of the system $\dot r(t)=Ar(t)$. Further, Theorem 2 establishes a torsion discriminance for the stability of the system in the case where $A$ is invertible.
For each theorem, we give an example to illustrate the result. In particular, we use the coupled oscillator as an example of the torsion discriminance.
In the future, we will continue to use geometric methods to describe the properties of other kinds of control systems.

Author Contributions

Y.W. participated in raising questions and completing the calculation process; H.S. participated in raising questions and confirming the results; Y.C. and S.Z. participated in part of the calculation process and the calculation of examples. All authors have read and agree to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. 61179031).

Acknowledgments

The research is supported partially by science and technology innovation project of Beijing Science and Technology Commission (Z161100005016043).

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Lyapunov, A.M. The General Problem of the Stability of Motion. Ph.D. Thesis, Univ. Kharkov, Kharkiv Oblast, Ukraine, 1892. (In Russian). [Google Scholar]
  2. Wang, Y.; Sun, H.; Song, Y.; Cao, Y.; Zhang, S. Description of Stability for Two and Three-Dimensional Linear Time-Invariant Systems Based on Curvature and Torsion. arXiv 2018, arXiv:1808.00290. [Google Scholar]
  3. Wang, Y.; Sun, H.; Huang, S.; Song, Y. Description of Stability for Linear Time-Invariant Systems Based on the First Curvature. Math. Methods Appl. Sci. 2020, 43. [Google Scholar] [CrossRef]
  4. Gluck, H. Higher Curvatures of Curves in Euclidean Space. Am. Math. Mon. 1966, 73, 699–704. [Google Scholar] [CrossRef]
  5. Perko, L. Differential Equations and Dynamical Systems; Springer: Berlin, Germany, 1991. [Google Scholar]
  6. Chen, C.-T. Linear System Theory and Design, 3rd ed.; Oxford University Press: Oxford, UK, 1999. [Google Scholar]
  7. Marsden, J.E.; Ratiu, T.; Abraham, R. Manifolds, Tensor Analysis, and Applications, 3rd ed.; Springer: Berlin, Germany, 2001. [Google Scholar]
  8. do Carmo, M.P. Differential Geometry of Curves and Surfaces; Prentice-Hall: New York, NY, USA, 1976. [Google Scholar]
  9. Horn, R.A.; Johnson, C.R. Matrix Analysis, 2nd ed.; Cambridge University Press: Cambridge, UK, 2013. [Google Scholar]
  10. Schwartz, M.D. Lecture 3: Coupled Oscillators; Department of Physics, Harvard University: Cambridge, MA, USA, 2016; Available online: http://users.physics.harvard.edu/~schwartz/15cFiles/Lecture3-Coupled-Oscillators.pdf (accessed on 17 December 2019).
Figure 1. Function $\kappa_3(t)$.
Figure 2. 1D coupled oscillators.
Figure 3. Function $\tau^2(t)$.
