Time-varying vector norm and lower and upper bounds on the solutions of uniformly asymptotically stable linear systems

Based on the eigenvalue idea and a time-varying weighted vector norm on the state space, we construct lower and upper bounds on the solutions of uniformly asymptotically stable linear systems. We generalize the known results for linear time-invariant systems to the linear time-varying case.


Introduction
In addition to the Lyapunov stability criteria for the linear system of differential equations $\dot{x} = A(t)x$, $\dot{x} = dx/dt$, $t \ge t_0$, $x \in \mathbb{R}^n$, other types of conditions guaranteeing stability are often useful. Typically these are sufficient conditions proved by application of the Lyapunov stability theorems [10] or of the Gronwall-Bellman inequality [2], though sometimes either technique can be used, and sometimes both are used in the same proof of a stability criterion. One such theorem, providing conditions for eventual stability of linear systems, is the following.
Theorem 1 ([13]). For the linear system $\dot{x} = A(t)x$, $t \ge t_0$, denote the largest and smallest pointwise eigenvalues of $A^T(t) + A(t)$ by $\lambda_{\max}(t)$ and $\lambda_{\min}(t)$. Then for any $t_0$ and $x(t_0)$ the solution $x(t)$ satisfies
$$\|x(t_0)\|_I\, e^{\frac{1}{2}\int_{t_0}^{t} \lambda_{\min}(s)\,ds} \le \|x(t)\|_I \le \|x(t_0)\|_I\, e^{\frac{1}{2}\int_{t_0}^{t} \lambda_{\max}(s)\,ds}, \quad t \ge t_0.$$

Throughout the whole paper it is assumed that the matrix function $A(t): [t_0, \infty) \to \mathbb{R}^{n \times n}$ is continuous. This theorem belongs to a wider family of sufficient conditions for stability of linear systems based on the "logarithmic measure" of the system matrices [4, p. 58, Theorem 3].
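The two-sided estimate of Theorem 1 is easy to probe numerically. The following sketch (the constant matrix $A$ and the integrator are illustrative choices, not from the paper) compares a computed solution with the bounds:

```python
import numpy as np

def rk4(f, x0, t0, t1, n=2000):
    # classical 4th-order Runge-Kutta integration of x' = f(t, x)
    x, t, h = np.array(x0, dtype=float), t0, (t1 - t0) / n
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h/2, x + h/2 * k1)
        k3 = f(t + h/2, x + h/2 * k2)
        k4 = f(t + h, x + h * k3)
        x = x + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        t += h
    return x

A = np.array([[-1.0, 2.0], [0.0, -3.0]])   # hypothetical constant matrix
lam = np.linalg.eigvalsh(A.T + A)          # eigenvalues of A^T + A, ascending
lam_min, lam_max = lam[0], lam[-1]

x0 = np.array([1.0, 1.0])
t = 2.0
xt = rk4(lambda s, x: A @ x, x0, 0.0, t)

# Theorem 1: ||x0|| e^{lam_min t/2} <= ||x(t)|| <= ||x0|| e^{lam_max t/2}
lower = np.linalg.norm(x0) * np.exp(0.5 * lam_min * t)
upper = np.linalg.norm(x0) * np.exp(0.5 * lam_max * t)
assert lower <= np.linalg.norm(xt) <= upper
```

Here $\lambda_{\max}(t) < 0$ for all $t$, so the criterion also certifies exponential decay.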
Our aim in this paper is to prove a more widely applicable theorem, based on the eigenvalue idea, for estimating the asymptotics of the solutions of uniformly asymptotically stable linear systems. The theory is illustrated by two examples.

Notations, definitions and preliminary results
Let $\mathbb{R}^n$ denote the $n$-dimensional vector space over the real numbers, $x = (x_1, \ldots, x_n)^T \in \mathbb{R}^n$ a column vector, and let the symbol $\|\cdot\|$ refer to any (real) vector norm on $\mathbb{R}^n$. Specifically, for a symmetric, positive definite real matrix $H$, we define the weighted vector norm $\|x\|_H = (x^T H x)^{1/2}$. Obviously, for $H = I$ ($I$ = identity on $\mathbb{R}^n$) we obtain the Euclidean norm $\|x\|_I$. As an operator norm on the matrices $M \in \mathbb{R}^{n \times n}$ we will use an induced norm. In particular, for the weighted vector norm $\|\cdot\|_H$ on $\mathbb{R}^n$, the induced norm is $\|M\|_H = \left[\lambda_{\max}(\tilde{M}^T \tilde{M})\right]^{1/2}$, where $\tilde{M} = H^{1/2} M H^{-1/2}$, as was proved in [9]. Further, $\lambda_i(M)$, $i = 1, \ldots, n$, denote the eigenvalues of the matrix $M$, and $\lambda_{\min}(M) = \min\{\lambda_i(M): i = 1, \ldots, n\}$; analogously for $\lambda_{\max}(M)$.
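The weighted vector norm and its induced matrix norm can be sketched as follows (a minimal numpy sketch; the matrices $H$ and $M$ are illustrative choices, not from the paper):

```python
import numpy as np

def sqrtm_spd(H):
    # symmetric positive definite square root H^{1/2} via eigendecomposition
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.sqrt(w)) @ V.T

def norm_H(x, H):
    # weighted vector norm ||x||_H = (x^T H x)^{1/2}
    return float(np.sqrt(x @ H @ x))

def opnorm_H(M, H):
    # induced norm ||M||_H = [lambda_max(Mt^T Mt)]^{1/2}, Mt = H^{1/2} M H^{-1/2}
    R = sqrtm_spd(H)
    Mt = R @ M @ np.linalg.inv(R)
    return float(np.sqrt(np.linalg.eigvalsh(Mt.T @ Mt)[-1]))

H = np.array([[2.0, 0.5], [0.5, 1.0]])   # illustrative SPD weight matrix
M = np.array([[0.0, 1.0], [-2.0, -3.0]])
x = np.array([1.0, -1.0])

# the induced norm is compatible with the weighted vector norm
assert norm_H(M @ x, H) <= opnorm_H(M, H) * norm_H(x, H) + 1e-12
```

For $H = I$ the function `opnorm_H` reduces to the spectral (largest singular value) norm.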
In this paper we will deal solely with uniformly asymptotically (equivalently, uniformly exponentially) stable linear systems [10, Theorem 4.11], [13, Theorem 6.13]; for the different types of stability and their relations see, e.g., [14].

Definition 2 ([10, 13]). The linear system $\dot{x} = A(t)x$ is uniformly asymptotically stable (UAS) if there exist finite positive constants $\gamma, \lambda$ such that for any $t_0$ and $x(t_0)$ the corresponding solution satisfies
$$\|x(t)\|_I \le \gamma\, e^{-\lambda(t - t_0)}\, \|x(t_0)\|_I, \quad t \ge t_0.$$

Theorem 3 ([10, 13]). The linear system $\dot{x} = A(t)x$ is uniformly asymptotically stable if and only if there exist finite positive constants $\gamma, \lambda$ such that the transition matrix satisfies
$$\|\Phi(t, \tau)\|_I \le \gamma\, e^{-\lambda(t - \tau)}, \quad t \ge \tau \ge t_0. \qquad (1)$$

Theorem 1 leads to a proof of a simple criterion based on the eigenvalues of $A^T(t) + A(t)$; for a wider context in connection with the so-called "logarithmic measure" of matrices see also, e.g., [1], [5], [6].
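The characterization in Theorem 3 can be checked numerically by integrating the matrix ODE $\dot{\Phi} = A(t)\Phi$. The sketch below uses a hypothetical UAS system $A(t) = \begin{pmatrix} -1 & \sin t \\ 0 & -1 \end{pmatrix}$, for which a short hand computation gives the closed form $\Phi(t, 0) = e^{-t}\begin{pmatrix} 1 & 1 - \cos t \\ 0 & 1 \end{pmatrix}$ and hence $\|\Phi(t, \tau)\|_I \le 3\, e^{-(t-\tau)/2}$, i.e. $\gamma = 3$, $\lambda = 1/2$:

```python
import numpy as np

def transition_matrix(Afun, t0, t1, n=4000):
    # integrate Phi' = A(t) Phi, Phi(t0, t0) = I, by classical RK4
    dim = Afun(t0).shape[0]
    Phi, t, h = np.eye(dim), t0, (t1 - t0) / n
    f = lambda s, P: Afun(s) @ P
    for _ in range(n):
        k1 = f(t, Phi)
        k2 = f(t + h/2, Phi + h/2 * k1)
        k3 = f(t + h/2, Phi + h/2 * k2)
        k4 = f(t + h, Phi + h * k3)
        Phi = Phi + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        t += h
    return Phi

# hypothetical UAS example (upper triangular, bounded coupling)
A = lambda t: np.array([[-1.0, np.sin(t)], [0.0, -1.0]])

# check the exponential bound of Theorem 3 with gamma = 3, lam = 0.5
for t in (1.0, 2.0, 5.0, 10.0):
    Phi = transition_matrix(A, 0.0, t)
    assert np.linalg.norm(Phi, 2) <= 3.0 * np.exp(-0.5 * t)
```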
This criterion is quite conservative in the sense that many UAS linear systems do not satisfy the above condition, as a straightforward computation with Theorem 1 shows already for simple examples. Despite such examples, the eigenvalue idea is not to be completely rejected.
In Theorem 4 below we prove, for the UAS linear systems $\dot{x} = A(t)x$, a result stronger than the inequality in Theorem 1.

Main results
The main results of this paper are summarized in the following theorem, which generalizes [9, Theorem 3.1] to linear time-varying systems. Although its claims are mainly of theoretical relevance, providing necessary conditions for exponential stability, within its framework (without giving details and an exact mathematical explanation) the important results regarding convergent systems were derived in [11]; for the definitions and a comparison with the notion of incremental stability see also [12]. Moreover, this theorem also provides a lower bound on the solutions, generally classified as difficult to obtain.

Theorem 4. Let the linear system $\dot{x} = A(t)x$ be uniformly asymptotically stable with $L = \sup_{t \ge t_0} \|A(t)\|_I < \infty$. Then for any $t_0$ and $x(t_0)$ the corresponding solution satisfies
$$\frac{1}{\gamma}\sqrt{\frac{\lambda}{L}}\; e^{-L(t - t_0)}\, \|x(t_0)\|_I \le \|x(t)\|_I \le \gamma\sqrt{\frac{L}{\lambda}}\; e^{-\frac{\lambda}{\gamma^2}(t - t_0)}\, \|x(t_0)\|_I, \quad t \ge t_0, \qquad (2)$$
where
$$\frac{1}{2L} \le \lambda_{\min}(H(t)) \le \lambda_{\max}(H(t)) \le \frac{\gamma^2}{2\lambda} \qquad (3)$$
for the matrix function $H(t) = \int_t^{\infty} \Phi^T(\tau, t)\,\Phi(\tau, t)\, d\tau$. The positive constants $\gamma, \lambda$ and the transition matrix $\Phi(t, \tau)$ are defined in Theorem 3.
Proof. We begin with an analysis of the properties of the matrix function
$$H(t) = \int_t^{\infty} \Phi^T(\tau, t)\,\Phi(\tau, t)\, d\tau.$$
The use of
• the Rayleigh-Ritz ratio [8],
• the fact that $\|\Phi(\tau, t)\|_I = \|\Phi^T(\tau, t)\|_I$, because every matrix and its transpose have the same characteristic polynomial [7, Lemma 21.1.2],
• the fact that the spectral radius of the matrix $\Phi^T(\tau, t)\Phi(\tau, t)$ is less than or equal to any induced matrix norm $\|\Phi^T(\tau, t)\Phi(\tau, t)\|$, and
• Theorem 3
yields for every fixed $t \ge t_0$ and $x \in \mathbb{R}^n$ that
$$x^T H(t)\, x = \int_t^{\infty} \|\Phi(\tau, t)\, x\|_I^2\, d\tau \le \int_t^{\infty} \|\Phi(\tau, t)\|_I^2\, d\tau\; \|x\|_I^2 \le \gamma^2 \int_t^{\infty} e^{-2\lambda(\tau - t)}\, d\tau\; \|x\|_I^2 = \frac{\gamma^2}{2\lambda}\, \|x\|_I^2.$$
As a consequence, $\lambda_{\max}(H(t)) \le \frac{\gamma^2}{2\lambda}$, because the equality $x^T H(t)\, x = \lambda_{\max}(H(t))\, \|x\|_I^2$ holds for $x$ equal to the eigenvector corresponding to $\lambda_{\max}(H(t))$. To prove the left inequality in (3) we will need the following lemma.

Lemma 5. Let $L = \sup_{t \ge t_0} \|A(t)\|_I < \infty$. Then for all $\tau \ge t \ge t_0$ and $x \in \mathbb{R}^n$,
$$e^{-2L(\tau - t)}\, \|x\|_I^2 \le \|\Phi(\tau, t)\, x\|_I^2 \le e^{2L(\tau - t)}\, \|x\|_I^2. \qquad (4)$$
Observe that the right-hand inequality is uninteresting for UAS systems: every resulting estimate of $\|x(t)\|_I$ would grow exponentially as $t \to \infty$.
Proof. The claim of the lemma follows immediately from the chain of inequalities
$$-2L\, \|\Phi(\tau, t)\, x\|_I^2 \le \frac{d}{d\tau} \|\Phi(\tau, t)\, x\|_I^2 = x^T \Phi^T(\tau, t)\left(A^T(\tau) + A(\tau)\right)\Phi(\tau, t)\, x \le 2L\, \|\Phi(\tau, t)\, x\|_I^2$$
and integration from $t$ to $\tau$.
Returning to the proof of Theorem 4: by (4),
$$x^T H(t)\, x = \int_t^{\infty} \|\Phi(\tau, t)\, x\|_I^2\, d\tau \ge \int_t^{\infty} e^{-2L(\tau - t)}\, d\tau\; \|x\|_I^2 = \frac{1}{2L}\, \|x\|_I^2.$$
Arguing analogously as above, $\lambda_{\min}(H(t)) \ge \frac{1}{2L}$, and the inequality (3) is proved.

Now we are ready to prove the remaining part of the theorem, namely the inequality (2). Suppose $x(t)$ is a solution of $\dot{x} = A(t)x$ corresponding to a given $t_0$ and a nonzero $x(t_0)$. Let us formally consider the time-varying weighted vector norm of the solutions, $\|x(t)\|_{H(t)}$. Then
$$\frac{d}{dt}\left(x^T(t)\, H(t)\, x(t)\right) = x^T(t)\left(\dot{H}(t) + A^T(t) H(t) + H(t) A(t)\right) x(t).$$
Now we show that the function $H(t)$ satisfies
$$\dot{H}(t) = -I - A^T(t) H(t) - H(t) A(t). \qquad (5)$$
Indeed, differentiating $H(t)$ under the integral sign and using $\frac{\partial}{\partial t}\Phi(\tau, t) = -\Phi(\tau, t) A(t)$ together with $\Phi(t, t) = I$ gives $\dot{H}(t) = -I - A^T(t) H(t) - H(t) A(t)$, and consequently
$$\frac{d}{dt}\left(x^T(t)\, H(t)\, x(t)\right) = -\|x(t)\|_I^2.$$
For the quadratic form $x^T(t) H(t)\, x(t)$, which is positive at each $t \ge t_0$, the Rayleigh-Ritz ratio yields
$$-\frac{x^T(t) H(t)\, x(t)}{\lambda_{\min}(H(t))} \le \frac{d}{dt}\left(x^T(t) H(t)\, x(t)\right) \le -\frac{x^T(t) H(t)\, x(t)}{\lambda_{\max}(H(t))}.$$
Integrating from $t_0$ to any $t \ge t_0$ one gets
$$-\int_{t_0}^{t} \frac{ds}{\lambda_{\min}(H(s))} \le \ln\frac{x^T(t) H(t)\, x(t)}{x^T(t_0) H(t_0)\, x(t_0)} \le -\int_{t_0}^{t} \frac{ds}{\lambda_{\max}(H(s))}.$$
Exponentiation followed by taking the nonnegative square root gives for all $t \ge t_0$ the inequality
$$\|x(t_0)\|_{H(t_0)}\, e^{-\frac{1}{2}\int_{t_0}^{t} \frac{ds}{\lambda_{\min}(H(s))}} \le \|x(t)\|_{H(t)} \le \|x(t_0)\|_{H(t_0)}\, e^{-\frac{1}{2}\int_{t_0}^{t} \frac{ds}{\lambda_{\max}(H(s))}}.$$
Finally, using the "norm conversion rule" $\sqrt{\lambda_{\min}(H)}\, \|x\|_I \le \|x\|_H \le \sqrt{\lambda_{\max}(H)}\, \|x\|_I$ between the norms with different weights $H_1$ and $H_2$ (recall that $H_1, H_2$ are symmetric and positive definite matrices) and the bounds (3), we obtain the inequality (2), which is a special case of
$$\sqrt{\frac{\lambda_{\min}(H(\tau))}{\lambda_{\max}(H(t))}}\; e^{-\frac{1}{2}\int_{\tau}^{t} \frac{ds}{\lambda_{\min}(H(s))}}\, \|x(\tau)\|_I \le \|x(t)\|_I \le \sqrt{\frac{\lambda_{\max}(H(\tau))}{\lambda_{\min}(H(t))}}\; e^{-\frac{1}{2}\int_{\tau}^{t} \frac{ds}{\lambda_{\max}(H(s))}}\, \|x(\tau)\|_I \qquad (6)$$
for $t \ge \tau \ge t_0$. The general idea of the proof follows, e.g., the proof of [13, Theorem 6.4, p. 100], and so the details are omitted here. The last inequality generalizes [9, Theorem 3.1] to the linear time-varying systems; moreover, we also get the lower bound on the solutions. For the time-invariant example, $H(t) \equiv H$ with the eigenvalues $\lambda_{\min}(H) = 11/20 - \sqrt{11}/20$ and $\lambda_{\max}(H) = 11/20 + \sqrt{11}/20$, and the inequality (6) becomes
$$\sqrt{\frac{\lambda_{\min}(H)}{\lambda_{\max}(H)}}\; e^{-\frac{t - t_0}{2\lambda_{\min}(H)}}\, \|x(t_0)\|_I \le \|x(t)\|_I \le \sqrt{\frac{\lambda_{\max}(H)}{\lambda_{\min}(H)}}\; e^{-\frac{t - t_0}{2\lambda_{\max}(H)}}\, \|x(t_0)\|_I.$$
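For the time-invariant case the matrix $H$ can be computed directly: with constant $A$, the relation $\dot{H} = -I - A^T H - H A$ from the proof reduces to the Lyapunov equation $A^T H + H A = -I$. A minimal numpy sketch (the Hurwitz matrix $A$ below is a hypothetical choice, not the paper's example) solves it by vectorization and checks the two-sided bound (6) on an exact solution:

```python
import numpy as np

def lyap_H(A):
    # solve A^T H + H A = -I by vectorization:
    # (kron(A^T, I) + kron(I, A^T)) vec(H) = -vec(I), with row-major vec
    n = A.shape[0]
    I = np.eye(n)
    K = np.kron(A.T, I) + np.kron(I, A.T)
    H = np.linalg.solve(K, -I.flatten()).reshape(n, n)
    return (H + H.T) / 2          # symmetrize against round-off

A = np.array([[0.0, 1.0], [-1.0, -1.0]])   # hypothetical Hurwitz matrix
H = lyap_H(A)
lmin, lmax = np.linalg.eigvalsh(H)[[0, -1]]

x0 = np.array([1.0, 0.0])
t = 3.0
w, V = np.linalg.eig(A)                    # x(t) = V e^{wt} V^{-1} x0
xt = (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V) @ x0).real

# the two-sided bound (6) for the time-invariant case
lower = np.sqrt(lmin / lmax) * np.exp(-t / (2 * lmin)) * np.linalg.norm(x0)
upper = np.sqrt(lmax / lmin) * np.exp(-t / (2 * lmax)) * np.linalg.norm(x0)
assert lower <= np.linalg.norm(xt) <= upper
```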
The results of the simulation, the solution of the system together with the lower and upper bounds, are depicted in Fig. 4.
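The bound (2) can also be checked by direct simulation. The following sketch assumes the hypothetical system $A(t) = \begin{pmatrix} -1 & \sin t \\ 0 & -1 \end{pmatrix}$, for which Theorem 3 holds with $\gamma = 3$, $\lambda = 1/2$ (a short hand computation with the closed-form transition matrix), and $L = \sup_t \|A(t)\|_I = \left((3 + \sqrt{5})/2\right)^{1/2}$:

```python
import numpy as np

def rk4_step(f, t, x, h):
    # one classical Runge-Kutta step for x' = f(t, x)
    k1 = f(t, x)
    k2 = f(t + h/2, x + h/2 * k1)
    k3 = f(t + h/2, x + h/2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h/6 * (k1 + 2*k2 + 2*k3 + k4)

A = lambda t: np.array([[-1.0, np.sin(t)], [0.0, -1.0]])
gamma, lam = 3.0, 0.5                       # constants of Theorem 3 for this A(t)
L = np.sqrt((3.0 + np.sqrt(5.0)) / 2.0)     # sup_t ||A(t)||_I

x = np.array([1.0, 1.0])
x0_norm = np.linalg.norm(x)
t, h, T = 0.0, 1e-3, 3.0
while t < T - 1e-12:
    x = rk4_step(lambda s, y: A(s) @ y, t, x, h)
    t += h

# the two-sided bound (2) of Theorem 4
lower = (1 / gamma) * np.sqrt(lam / L) * np.exp(-L * t) * x0_norm
upper = gamma * np.sqrt(L / lam) * np.exp(-(lam / gamma**2) * t) * x0_norm
assert lower <= np.linalg.norm(x) <= upper
```

As expected, the lower bound is conservative but nontrivial, while the upper bound certifies the exponential decay.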

Conclusion
In this paper we established lower and upper bounds on all solutions of uniformly asymptotically stable linear time-varying systems from the knowledge of one fundamental matrix solution. Our approach is based on the eigenvalue idea and a time-varying metric on the state space $\mathbb{R}^n$. The simulation experiments demonstrate the effectiveness of the proposed method for estimating solutions, generally classified as "difficult to obtain", especially in the case of the lower bounds.