Asymptotic Stability of Non-Autonomous Systems and a Generalization of Levinson’s Theorem

We study the asymptotic stability of non-autonomous linear systems with time-dependent coefficient matrices $\{A(t)\}_{t\in\mathbb{R}}$. The classical theorem of Levinson has been an indispensable tool for the study of the asymptotic stability of non-autonomous linear systems. Contrary to the constant-coefficient case, having all eigenvalues in the left half of the complex plane does not imply asymptotic stability of the zero solution. Levinson's theorem assumes that the coefficient matrix is a suitable perturbation of a diagonal matrix. Our objective is to prove a theorem similar to Levinson's when the family of matrices merely admits an upper triangular factorization; indeed, in the presence of defective eigenvalues, Levinson's theorem does not apply. In this paper, we first investigate the asymptotic behavior of upper triangular systems and use fixed point theory to draw a few conclusions. Since we aim to understand asymptotic behavior dimension by dimension, working with upper triangular systems with internal blocks adds flexibility to the analysis.


Introduction
We study the asymptotic stability of non-autonomous linear systems of ordinary differential equations
$$x'(t) = A(t)\,x(t), \qquad (1)$$
where $x(t) \in \mathbb{R}^N$ and $A(t) \in \mathbb{R}^{N\times N}$ for each $t \in \mathbb{R}$. Non-autonomous linear systems typically arise when linearizing a non-linear dynamical system along a particular solution of interest that is not necessarily a fixed point. In this case, the asymptotic stability of the linearized non-autonomous system is the linear stability of the solution of interest. In the case of a fixed point, one ends up with a constant coefficient linear system. Our objective is to generalize the theorem of Levinson [1]. The theorem can be found in many other places (e.g., [2]) in slightly different forms. For the reader's convenience, we include the theorem in the Appendix. This classical theorem has been an indispensable tool in many science and engineering problems, yielding the correct asymptotic stability of a given system. It has also been generalized in many different ways by the authors of [3][4][5][6][7]. The exposition in [8] conveys well the historical context as well as the recent development of the theory. We first describe the rudiments of the problem, introduce Levinson's Theorem, and state which aspects of the theorem we still want to generalize.
Surprisingly, the asymptotic stability of a non-autonomous system is quite different from that of a constant coefficient system. For a constant coefficient system, the Jordan factorization of the coefficient matrix, which any matrix admits, tells the complete story of the asymptotic behavior. Each of the eigenvalues and the corresponding invariant subspaces are revealed by the factorization. Any solution, in its modulus, exhibits one of three asymptotic behaviors: the orbit can grow asymptotically at an exponential rate; it can decay to $0$ at an exponential rate; or it can be maintained at a polynomial rate. The sign of the real part of the eigenvalue decides the behavior. One could expect that, for a non-autonomous system, the eigenvalues $\lambda_i(t)$, $i = 1, \cdots, N$, of $A(t)$ tell the story as well, or at least part of the story, but it is well known that this is not the case.
Markus and Yamabe [9] presented an example of a non-autonomous system whose coefficient matrix has eigenvalues with negative real parts for every $t$ while some solution grows exponentially:
$$\frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} -1+\frac{3}{2}\cos^2 t & 1-\frac{3}{2}\sin t\cos t \\[2pt] -1-\frac{3}{2}\sin t\cos t & -1+\frac{3}{2}\sin^2 t \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}.$$
The eigenvalues are $(-1 \pm i\sqrt{7})/4$ for every $t$, yet $e^{t/2}(\cos t, -\sin t)^{T}$ is a solution.
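This failure mode is easy to reproduce numerically. The sketch below is our own illustration, not part of the original text; it assumes NumPy and uses a hand-rolled fourth-order Runge-Kutta integrator. It checks that the eigenvalues of the Markus-Yamabe matrix have real part $-1/4$ for all $t$, while the solution through $(1,0)$ grows like $e^{t/2}$.

```python
import numpy as np

def A(t):
    """Markus-Yamabe coefficient matrix: eigenvalues (-1 +/- i*sqrt(7))/4 for every t."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[-1 + 1.5*c*c,  1 - 1.5*s*c],
                     [-1 - 1.5*s*c, -1 + 1.5*s*s]])

def rk4(f, x, t, t_end, n):
    """Classical fourth-order Runge-Kutta for x' = f(t, x)."""
    h = (t_end - t) / n
    x = np.asarray(x, float)
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h/2, x + h/2*k1)
        k3 = f(t + h/2, x + h/2*k2)
        k4 = f(t + h, x + h*k3)
        x = x + h/6*(k1 + 2*k2 + 2*k3 + k4)
        t += h
    return x

# The eigenvalues have negative real part at every sampled time t ...
assert all(ev.real < 0 for t in np.linspace(0.0, 10.0, 11)
           for ev in np.linalg.eigvals(A(t)))

# ... yet the solution through (1, 0) is e^{t/2} (cos t, -sin t), which explodes.
x_end = rk4(lambda t, x: A(t) @ x, [1.0, 0.0], 0.0, 20.0, 20000)
print(np.linalg.norm(x_end))  # ~ e^10 ~ 2.2e4
```

The exact solution gives $|x(20)| = e^{10} \approx 22026$, which the integrator reproduces to high accuracy.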
In light of the above example, the question arises: for what sort of family $\{A(t)\}_{t\in\mathbb{R}}$ can we draw a conclusion? There are cases where the eigenvalues do reveal the asymptotic stability. Consider the simplest case of a family of diagonal coefficient matrices, $A(t) = \operatorname{diag}(\lambda_1(t), \cdots, \lambda_N(t))$. In this case, the system is decoupled and the behavior can be read off from the eigenvalues: the $i$th component of the solution is $y_i(t) = y_i(t_0)\, e^{\int_{t_0}^{t} \lambda_i(\eta)\,d\eta}$.

Now, we introduce the classical theorem of Levinson. With the assumption that $A(t)$ admits a differentiable factorization $A(t) = S(t)U(t)S(t)^{-1}$, the change of variable $x(t) = S(t)y(t)$ transforms the system into
$$y'(t) = \big(U(t) + E(t)\big)\,y(t), \qquad E(t) := -S(t)^{-1}S'(t).$$
The point of view is that $E(t)$ is a perturbation of $U(t)$. The asymptotic behavior under this perturbation is still in question, for we have the counterexample of Markus-Yamabe.
First, consider the case $U(t) = \operatorname{diag}(\lambda_1(t), \cdots, \lambda_N(t))$. From the statement of Levinson's Theorem (Theorem A1), we note that the theorem assumes two main conditions: one is the smallness of $E(t)$, expressed by requiring that the integral $\int_{t_0}^{\infty} \|E(t)\|\,dt$ be finite; the other is a set of spectral gap conditions. Under these assumptions, Levinson's Theorem states that the asymptotic behavior of the diagonal system persists under such small perturbations.
The objective of this paper is to generalize Levinson's Theorem by allowing $U(t)$ to be upper triangular, as considered above. In general, neither the diagonalization nor the Jordan factorization is continuous in $t$, even if $A(t)$ is smooth. We believe that this generalization is theoretically interesting as well as practically useful.
The generalization is theoretically interesting because applications of Levinson's Theorem become critical when two eigenvalues become identical in the limit $t \to \infty$. (Of course, the limit must not be defective for the theorem to be applicable.) To illustrate the problem, take the example of the $2 \times 2$ diagonal matrix $\Lambda(t) = \operatorname{diag}(\lambda_1(t), \lambda_2(t))$ with $\lambda_1(t) = 0$ and $\lambda_2(t) = 1/t$. We see that the spectral gap vanishes in the limit $t \to \infty$, but the gap conditions of Levinson's Theorem demand so little that this example fulfills them. Integrating the equations, we see that the two asymptotic rates are, respectively, $1$ and $t$, and Levinson's Theorem states that these finely separated rates persist even under perturbations. Readers who are familiar with the invariant manifold theory of dynamical systems may find this phenomenon atypical. We find this aspect powerful and suggestive at the same time, because it manifests exactly when the eigenvalues become identical in the limit, that is, when there is a chance of a defective eigenvalue appearing, in which case the theorem may no longer be applicable.
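The two rates can be observed numerically. The sketch below is our own illustration, not from the original text: we add a perturbation $E(t) = t^{-2}\begin{psmallmatrix}0&1\\1&0\end{psmallmatrix}$, an arbitrary choice satisfying $\int \|E(t)\|\,dt < \infty$, and integrate with a hand-rolled RK4 scheme. The persistence of the fast rate $t$ is visible in the ratio $x_2(t)/t$ stabilizing; identifying the subspace carrying the slow rate $1$ is the harder part that the theorem addresses.

```python
import numpy as np

def rhs(t, x):
    """Diagonal part diag(0, 1/t) plus an integrable perturbation t^-2 * [[0,1],[1,0]]."""
    lam = np.array([[0.0, 0.0], [0.0, 1.0/t]])
    pert = (1.0/t**2) * np.array([[0.0, 1.0], [1.0, 0.0]])
    return (lam + pert) @ x

def rk4(f, x, t, t_end, n):
    """Classical fourth-order Runge-Kutta for x' = f(t, x)."""
    h = (t_end - t) / n
    x = np.asarray(x, float)
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h/2, x + h/2*k1)
        k3 = f(t + h/2, x + h/2*k2)
        k4 = f(t + h, x + h*k3)
        x = x + h/6*(k1 + 2*k2 + 2*k3 + k4)
        t += h
    return x

# Unperturbed: y1' = 0 gives rate 1; y2' = y2/t gives y2 = C t, rate t.
# Under the integrable perturbation the fast rate t persists: x2(t)/t stabilizes.
x100 = rk4(rhs, [1.0, 1.0], 1.0, 100.0, 20000)
x200 = rk4(rhs, [1.0, 1.0], 1.0, 200.0, 40000)
print(x100[1]/100.0, x200[1]/200.0)  # the two ratios nearly agree
```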
The generalization is practically useful because many engineering problems are high dimensional. When there are several dimensions, even a single defective eigenvalue violates the prerequisite of Levinson's Theorem, as diagonalization fails, and no conclusion can be drawn; we consider this unsatisfactorily coarse. Our results can be useful if one is only interested in the partial asymptotic behavior corresponding to a certain block.
Generalizations where the matrix under perturbation is block diagonal can also be found in the literature. For instance, a persistence theorem for a matrix with a single Jordan block can be found in [5]. Devinatz and Kaplan [10] presented a generalization for the case where eigenvalues become defective in the limit. Remarkably, they also proved a lemma showing that, under suitable conditions, there is an upper triangular factorization that converges to a Jordan factorization in the limit. Theorem 6.6 in [8] and Theorem 4 in [11] (revised in the form of Theorem 6.8 in [8]) are other persistence theorems for matrices with multiple Jordan blocks. The results in [8,10] are restricted to matrices in Jordan form, which makes them very explicit, but their arguments seem not to depend heavily on having the exact Jordan form. Our result can be easier to apply, since we do not need to check whether the matrix is in a specific form such as Jordan form, yet it is still enough to characterize the asymptotic rate of the corresponding block. Our result is also useful when the continuity of the Jordan factorization is not assured.
We lastly comment on the subject of simultaneous factorization. If $\{A(t)\}_{t\in\mathbb{R}}$ is a smooth one-parameter family of matrices, the possibility of a simultaneous factorization is a substantial subject in matrix theory. As stated above, neither the diagonalization nor the Jordan factorization is continuous in general. The continuity of the Schur factorization under suitable conditions is shown in [12]. It is our working hypothesis that $\{A(t)\}_{t\in\mathbb{R}}$ admits a factorization $A(t) = S(t)U(t)S(t)^{-1}$ with $U(t)$ upper triangular.

The paper is organized in the following way. In Section 2, we present the preliminaries necessary for stating the problem. In Section 3, we present the main results: first, we present the stability of the system with an upper triangular coefficient matrix, which is the unperturbed system; then, we provide a persistence theorem.

Preliminaries
The purpose of this section is to establish notation and to introduce a few notions from the theory of ordinary differential equations that are necessary for stating our problem. Readers who are familiar with these basic notions may want to move on to the next section.
For a vector-valued function $x$ that is an orbit, our primary norm is the sup norm $\|x\|_{L^\infty}$. To compensate appropriately for the growth, it is convenient to use a weighted norm. For a given real-valued function $\theta$, we use the notation
$$\|x\|_\theta := \sup_{t \ge a} |x(t)|\, e^{-\int_a^t \theta(\eta)\,d\eta}.$$
In that case, we also use the pointwise notation $|x(t)|_\theta := |x(t)|\, e^{-\int_a^t \theta(\eta)\,d\eta}$.
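The weighted norm is straightforward to compute on a grid. The following sketch (our own illustration, assuming NumPy; the orbit $x(t) = e^{2t}$ and weight $\theta \equiv 2$ are arbitrary choices) shows that a weight matching the growth exactly flattens the orbit:

```python
import numpy as np

# Hypothetical sample orbit and weight on [a, T]: x(t) = e^{2t}, theta(t) = 2.
a, T, n = 0.0, 5.0, 1001
t = np.linspace(a, T, n)
x = np.exp(2.0 * t)
theta = np.full_like(t, 2.0)

# Cumulative integral of theta from a to t (trapezoid rule).
h = t[1] - t[0]
cum = np.concatenate(([0.0], np.cumsum((theta[1:] + theta[:-1]) / 2 * h)))

weighted = np.abs(x) * np.exp(-cum)   # |x(t)|_theta pointwise
norm_theta = weighted.max()           # ||x||_theta = sup over the grid
print(norm_theta)  # = 1.0: the weight exactly cancels the growth
```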

Fundamental Matrices
By the Picard-Lindelöf theorem, the system $x'(t) = f(t, x(t))$, $x(t_0) = x_0$, has a unique solution for $|t - t_0| \le \varepsilon$ with $\varepsilon = \min(a, b/M)$, where $a$, $b$, and $M$ are such that, in the domain $|t - t_0| \le a$ and $|x - x_0| \le b$, $f(t, x)$ is continuous in $t$, uniformly Lipschitz in $x$, and $|f(t, x)|$ is bounded by $M$.
We call this Assumption (A0). One might consider a smooth cut-off if necessary. By the Picard-Lindelöf theorem, the solution then extends uniquely to the whole of $\mathbb{R}$.
Hence, for any two numbers $t$ and $\tau$, it makes sense to consider the solution matrix $\Phi(t, \tau)$ that maps $x(\tau)$ to $x(t)$, and the whole family $\{\Phi(t,\tau)\}_{t,\tau\in\mathbb{R}}$ of them. In particular, the $i$th column of $\Phi(t,\tau)$ is the solution whose value at time $\tau$ is the $i$th coordinate basis vector. Matrices whose columns are independent solutions are called fundamental matrices, and thus $\Phi(t,\tau)$ is a particular type of fundamental matrix. For an autonomous system, the solution matrices may be written in the form $\Phi(t-\tau)$, but, for a non-autonomous system, they depend on $t$ and $\tau$ separately.
We have $\Phi(t,t) = I$ for all $t$, and we have the following relation among solution matrices: $\Phi(a,b)\Phi(b,c) = \Phi(a,c)$ for all $a, b, c \in \mathbb{R}$. In particular, $\Phi(a,b)$ is always invertible and its inverse is $\Phi(b,a)$.
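These relations are easy to check numerically. The sketch below is our own illustration (assuming NumPy; the coefficient family and the RK4 matrix integrator are arbitrary choices): it verifies the composition rule and the inverse relation for a $2 \times 2$ non-autonomous system.

```python
import numpy as np

def A(t):
    """An arbitrary smooth non-autonomous coefficient matrix, for illustration."""
    return np.array([[0.0, 1.0], [-1.0 - 0.1*np.sin(t), -0.2]])

def Phi(t, tau, n=4000):
    """Solution matrix Phi(t, tau): integrate P' = A(s) P from tau to t, P(tau) = I."""
    s, P, h = tau, np.eye(2), (t - tau)/n
    for _ in range(n):
        k1 = A(s) @ P
        k2 = A(s + h/2) @ (P + h/2*k1)
        k3 = A(s + h/2) @ (P + h/2*k2)
        k4 = A(s + h) @ (P + h*k3)
        P = P + h/6*(k1 + 2*k2 + 2*k3 + k4)
        s += h
    return P

a, b, c = 2.0, 1.0, 0.0
assert np.allclose(Phi(a, b) @ Phi(b, c), Phi(a, c), atol=1e-8)     # composition
assert np.allclose(np.linalg.inv(Phi(a, b)), Phi(b, a), atol=1e-8)  # inverse
```

Note that the integrator also runs backwards in time (negative step), which is how $\Phi(b,a)$ with $b < a$ is produced.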

Results
We first consider the asymptotic stability when the coefficient matrix is upper triangular. This result is then used as a building block for the asymptotic stability of a larger system with upper triangular blocks.
For $\{U(t)\}_{t\in\mathbb{R}}$, a family of $N \times N$ upper triangular matrices, we denote by $\lambda_i(t)$, $i = 1, \cdots, N$, the diagonal entries of $U(t)$, which are the eigenvalues of $U(t)$. We also define
$$\bar\lambda(t) := \max_{1\le i\le N} \operatorname{Re}\lambda_i(t), \qquad \underline\lambda(t) := \min_{1\le i\le N} \operatorname{Re}\lambda_i(t).$$
The maximum and minimum growth forward in time (backward in time then follows, too) are written in terms of $\bar\lambda(t)$ and $\underline\lambda(t)$. Throughout, $K$ denotes a uniform bound on the entries of $U(t)$.

Proposition 1 (stability forward in time). Assume (A0) for $U(t)$ and let $\{\Phi(t,\tau)\}_{t,\tau\in\mathbb{R}}$ be the solution matrices of the system in Equation (1) with $A(t) = U(t)$. Then, there is a constant $C_{N,K} > 0$ such that, for any $a \le b$ and any vector $V \in \mathbb{R}^N$,
$$C_{N,K}^{-1}\,(1+(b-a))^{-(N-1)}\, e^{\int_a^b \underline\lambda(\eta)\,d\eta}\,|V| \;\le\; |\Phi(b,a)V| \;\le\; C_{N,K}\,(1+(b-a))^{N-1}\, e^{\int_a^b \bar\lambda(\eta)\,d\eta}\,|V|. \qquad (2)$$
The constant $C_{N,K}$ depends only on $N$ and $K$.
Proof. We first show the upper bound. Let $y(a) = V$ and $y(b) = \Phi(b,a)V$. We show by induction that, componentwise,
$$|y_j(t)| \le C_j\,(1+(t-a))^{N-j}\, e^{\int_a^t \bar\lambda(\eta)\,d\eta}\,|V|, \qquad a \le t \le b,$$
for some constant $C_j > 0$ that depends only on the dimension $N$ and the bound $K$. For $j = N$, the last equation reads $y_N' = \lambda_N(t)\,y_N$, whose explicit solution is $y_N(t) = e^{\int_a^t \lambda_N(\eta)\,d\eta}\, y_N(a)$. Thus, the statement is true for $j = N$ with $C_N = 1$. Now, suppose the statement is true for $j+1, j+2, \cdots, N$. Since
$$y_j'(t) = \lambda_j(t)\,y_j(t) + \sum_{k=j+1}^{N} u_{jk}(t)\,y_k(t),$$
its explicit solution is
$$y_j(t) = e^{\int_a^t \lambda_j(\eta)\,d\eta}\, y_j(a) + \int_a^t e^{\int_\tau^t \lambda_j(\eta)\,d\eta} \sum_{k=j+1}^{N} u_{jk}(\tau)\, y_k(\tau)\, d\tau.$$
By the induction hypothesis, $|y_k(\tau)| \le C_k\,(1+(\tau-a))^{N-k}\, e^{\int_a^\tau \bar\lambda(\eta)\,d\eta}\,|V|$ for $k \ge j+1$, where each $C_k$ is only dependent on $N$ and the bound $K$. Since $\operatorname{Re}\lambda_j \le \bar\lambda$ and $|u_{jk}| \le K$, we obtain
$$|y_j(t)| \le e^{\int_a^t \bar\lambda(\eta)\,d\eta}\,|V| + NK \max_{k>j} C_k\; e^{\int_a^t \bar\lambda(\eta)\,d\eta} \int_a^t (1+(\tau-a))^{N-j-1}\, d\tau\;|V| \le C_j\,(1+(t-a))^{N-j}\, e^{\int_a^t \bar\lambda(\eta)\,d\eta}\,|V|.$$
Summing the components gives the upper bound in Equation (2).

Now, we show the lower bound. To this end, we consider the inverse $\Phi(a,b)$. We assert that, for any real numbers $a \le b$ and for any $W \in \mathbb{R}^N$,
$$|\Phi(a,b)W| \le C_{N,K}\,(1+(b-a))^{N-1}\, e^{-\int_a^b \underline\lambda(\eta)\,d\eta}\,|W|,$$
with the same constant. This holds because of the previous assertion. Let $y(b) = W$ and $y(a) = \Phi(a,b)W$; we are to solve the equation backwards in time from $b$ to $a$. In the new independent variable $\tau = a+b-t$, the function $\hat y(\tau) := y(a+b-\tau)$ satisfies, since $\frac{d}{dt}y(t) = U(t)\,y(t)$,
$$\frac{d}{d\tau}\hat y(\tau) = -U(a+b-\tau)\,\hat y(\tau).$$
The coefficient matrix $-U(a+b-\tau)$ is again upper triangular, with entries bounded by $K$ and with maximal diagonal real part $-\underline\lambda(a+b-\tau)$, so the previous assertion applies with the same $N$ and $K$. However, $\hat y(b) = y(a)$ and $\hat y(a) = y(b)$, and the change of variables formula gives $\int_a^b -\underline\lambda(a+b-\tau)\,d\tau = -\int_a^b \underline\lambda(\eta)\,d\eta$. We have shown that $|\Phi(a,b)W| \le C_{N,K}(1+(b-a))^{N-1}\, e^{-\int_a^b \underline\lambda(\eta)\,d\eta}\,|W|$. The estimate in Equation (2) follows from the two assertions by taking $W = \Phi(b,a)V$.
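The two-sided estimate can be sanity-checked numerically. The sketch below is our own illustration, not from the paper: the particular family (with $\bar\lambda = -1$, $\underline\lambda = -2$, off-diagonal bounded by $K = 1$) and the crude constant $C = 10$ are arbitrary choices that comfortably absorb the polynomial factors over this window.

```python
import numpy as np

def U(t):
    """Upper triangular; lambda_bar = -1, lambda_underbar = -2, |u_12| <= 1."""
    return np.array([[-1.0, np.sin(t)], [0.0, -2.0]])

def propagate(V, a, b, n=5000):
    """RK4 integration of y' = U(t) y from a to b."""
    t, y, h = a, np.asarray(V, float), (b - a)/n
    for _ in range(n):
        k1 = U(t) @ y
        k2 = U(t + h/2) @ (y + h/2*k1)
        k3 = U(t + h/2) @ (y + h/2*k2)
        k4 = U(t + h) @ (y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4)
        t += h
    return y

a, b, C = 0.0, 5.0, 10.0
rng = np.random.default_rng(0)
for _ in range(5):
    V = rng.standard_normal(2)
    r = np.linalg.norm(propagate(V, a, b)) / np.linalg.norm(V)
    # The growth ratio sits between e^{-2(b-a)}/C and C e^{-(b-a)}.
    assert np.exp(-2*(b - a))/C <= r <= C*np.exp(-(b - a))
print("two-sided eigenvalue bound holds on this sample")
```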
The proposition states that knowledge of the eigenvalues is not as revealing as in the case of a diagonal coefficient matrix, but it is sufficient to provide the upper and lower bounds. Now, we state the persistence theorem. We need to clarify what we aim to do in parallel to what is stated in Levinson's Theorem. Suppose we have a block diagonal matrix with a few blocks, where the asymptotic stability of each block is known. For instance, if a block is upper triangular, its behavior is controlled by the maximum and minimum eigenvalues of that block. In general, the ranges of growth rates of the blocks overlap. If they do not, characterizing a set of orbits by rates in such a non-overlapping range identifies the block. Our result is the first half of what would be the parallel statement of Levinson's Theorem. The assumption here is that the blocks are divided into two groups, one group having rates slower than those in the other group. In essence, we consider a problem with two blocks with separated growth rates and demonstrate persistence under suitable perturbations.
For $\{U(t)\}_{t\in\mathbb{R}}$, a given family of upper triangular matrices, $y$ solves the unperturbed system
$$y'(t) = U(t)\,y(t), \qquad (3)$$
and $x$ solves the perturbed system
$$x'(t) = \big(U(t) + E(t)\big)\,x(t).$$
All of the families of matrices described above are assumed to satisfy (A0). The family of solution matrices for the unperturbed problem is denoted by $\{\Phi(t,\tau)\}_{t,\tau\in\mathbb{R}}$.

Theorem 1. Suppose that $U(t) = \operatorname{diag}(U_0(t), U_1(t))$ with $U_0(t)$ and $U_1(t)$ upper triangular and of dimensions $N_0 \times N_0$ and $N_1 \times N_1$, respectively. We assume the following.

1. $\int_{a_0}^{\infty} \|E(t)\|\,dt < \infty$ for some $a_0$.
2. There is a real-valued function $\theta$ and constants $\delta > 0$ and $A \in \mathbb{R}$ such that
   (a) for any $t_2 \ge t_1$ and for any eigenvalue $\lambda_{0,i}(t)$, $i = 1, \cdots, N_0$, of $U_0(t)$,
   $$\int_{t_1}^{t_2} \big(\operatorname{Re}\lambda_{0,i}(\eta) - \theta(\eta)\big)\,d\eta \le A - \delta(t_2 - t_1);$$
   (b) for any $t_2 \ge t_1$ and for any eigenvalue $\lambda_{1,i}(t)$, $i = 1, \cdots, N_1$, of $U_1(t)$,
   $$\int_{t_1}^{t_2} \big(\operatorname{Re}\lambda_{1,i}(\eta) - \theta(\eta)\big)\,d\eta \ge -A + \delta(t_2 - t_1).$$

Then, there is a constant $a$ and an $N_0$-dimensional subspace $E$ of $\mathbb{R}^N$ such that $x(a) \in E$ implies $\lim_{t\to\infty} |x(t)|_\theta = 0$.
Proof. We use the notation developed in Section 2. Let $\Phi_0(t,\tau)$ and $\Phi_1(t,\tau)$ denote the solution matrices of the blocks $U_0$ and $U_1$, respectively, and let $P_0$ and $P_1$ be the projections onto the corresponding components. We look for a solution of the following integral equation, in block notation:
$$x(t) = y(t) + \int_a^t \Phi_0(t,\tau)\, P_0 E(\tau)\, x(\tau)\, d\tau - \int_t^\infty \Phi_1(t,\tau)\, P_1 E(\tau)\, x(\tau)\, d\tau, \qquad (4)$$
where $y(t) = \Phi(t,a)\,y(a)$ and $y(a) \in E_0 := \mathbb{R}^{N_0} \times \{0\}$.

An $x(t)$ that solves Equation (4) solves Equation (3) with $U(t) + E(t)$ in place of $U(t)$; this follows by differentiating, since each column of $\Phi(t,\tau)$ solves Equation (3). In particular, we have $P_0 x(a) = P_0 y(a) = y(a)$. Suppose such an $x(t)$ exists. Multiplying $e^{-\int_a^t \theta(\eta)\,d\eta}$ on both sides and using block matrix calculus, we have
$$x(t)\,e^{-\int_a^t \theta} = y(t)\,e^{-\int_a^t \theta} + \int_a^t \Phi_0(t,\tau)\,e^{-\int_\tau^t \theta}\, P_0 E(\tau)\, x(\tau)\,e^{-\int_a^\tau \theta}\, d\tau - \int_t^\infty \Phi_1(t,\tau)\,e^{\int_t^\tau \theta}\, P_1 E(\tau)\, x(\tau)\,e^{-\int_a^\tau \theta}\, d\tau. \qquad (5)$$
From Proposition 1 and assumption 2(a), the matrix $\Phi_0(t,\tau)\,e^{-\int_\tau^t \theta(\eta)\,d\eta}$ is bounded for $t \ge \tau$. More precisely, its norm is at most $C(1+(t-\tau))^{N_0-1}\,e^{A-\delta(t-\tau)}$, which is uniformly bounded and tends to $0$ as $t - \tau \to \infty$. Similarly, by Proposition 1 and assumption 2(b), the matrix $\Phi_1(t,\tau)\,e^{\int_t^\tau \theta(\eta)\,d\eta}$ is bounded for $t \le \tau$. More precisely, its norm is at most $C(1+(\tau-t))^{N_1-1}\,e^{A-\delta(\tau-t)}$. Denote by $M$ a common bound and choose $a \ge a_0$ so large that $M \int_a^\infty \|E(\tau)\|\,d\tau \le \frac{1}{2}$.

Let $y$ be fixed and let $S_y$ be the operator on $L^\infty_\theta([a,\infty))$ that maps $x \in L^\infty_\theta([a,\infty))$ to the function defined by the right-hand side of Equation (4). The previous estimates show that $\|S_y x\|_\theta \le \|y\|_\theta + \frac{1}{2}\|x\|_\theta < \infty$ and thus $S_y x \in L^\infty_\theta([a,\infty))$. The solution of the integral equation is then a fixed point of the operator. The same estimate shows that $\|S_y x - S_y \tilde x\|_\theta \le \frac{1}{2}\|x - \tilde x\|_\theta$. By the contraction mapping principle, there is a unique fixed point. It is clear that, for the fixed point, $\|x\|_\theta \le 2\|y\|_\theta$. The map $y(a) \mapsto x(a)$, where the function $x$ is the unique fixed point of $S_y$ and $y(t) = \Phi(t,a)\,y(a)$, is a linear map from $E_0$ to its image. As $P_0 x(a) = y(a)$, the linear map has rank $N_0$; $E$ is then the range space.

It remains to show that $\lim_{t\to\infty}|x(t)|_\theta = 0$ for the fixed point. In Equation (5), the first term tends to $0$ as $t \to \infty$ by assumption 2(a), since $y(t)\,e^{-\int_a^t \theta} = \Phi_0(t,a)\,e^{-\int_a^t \theta}\,y(a)$. Denote the first and second integrals by $I_1$ and $I_2$, respectively. For $I_2$, given $\varepsilon > 0$, we have $|I_2| \le M\,\|x\|_\theta \int_t^\infty \|E(\tau)\|\,d\tau$, which is smaller than $\varepsilon$ for $t$ large.
For $I_1$, we divide the integral into $\int_a^{t_1} + \int_{t_1}^{t}$ for some $t_1 \le t$. For any given $\varepsilon > 0$, we show that we can choose $t$ and $t_1$ so large, keeping the order $t \ge t_1$, that $|I_1| \le \varepsilon$. With the same reasoning used for $I_2$, we can choose $t_1$ so large that the integral over the interval $[t_1, t]$ is smaller than $\frac{\varepsilon}{2}$. On the interval $[a, t_1]$, we write the integral in the form
$$\Phi_0(t, t_1)\,e^{-\int_{t_1}^{t}\theta(\eta)\,d\eta} \int_a^{t_1} \Phi_0(t_1, \tau)\,e^{-\int_\tau^{t_1}\theta(\eta)\,d\eta}\, P_0 E(\tau)\, x(\tau)\,e^{-\int_a^\tau \theta(\eta)\,d\eta}\, d\tau.$$
By the decay estimate for $\Phi_0$, the factor $\Phi_0(t, t_1)\,e^{-\int_{t_1}^{t}\theta(\eta)\,d\eta}$ tends to $0$ as $t \to \infty$, and the integral over $[a, t_1]$ above is finite. Therefore, we can choose $t$ so large that this term is smaller than $\frac{\varepsilon}{2}$.
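The fixed-point construction in the proof can be imitated numerically. The sketch below is a toy example of our own, not from the paper: $U = \operatorname{diag}(-1, 1)$ so that $\Phi_0(t,\tau) = e^{-(t-\tau)}$ and $\Phi_1(t,\tau) = e^{t-\tau}$, $\theta \equiv 0$, and $E(t) = 0.1\,e^{-t}\begin{psmallmatrix}0&1\\1&0\end{psmallmatrix}$; the tail integral $\int_t^\infty$ is truncated at a large $T$. Iterating the operator $S_y$ on a grid converges geometrically and produces a decaying solution with $P_0 x(a) = y(a)$.

```python
import numpy as np

# Toy data: U = diag(-1, 1), theta = 0, E(t) = eps * e^{-t} * [[0,1],[1,0]].
a, T, n, eps = 0.0, 15.0, 3001, 0.1
t = np.linspace(a, T, n)
h = t[1] - t[0]
y = np.stack([np.exp(-(t - a)), np.zeros(n)], axis=1)  # y(t) = Phi(t,a) (1,0)^T

def cumtrap(f):
    """Cumulative trapezoid integral of the samples f from a to t."""
    return np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) / 2 * h)))

def S(x):
    """One application of the operator S_y (the tail integral truncated at T)."""
    f0 = eps * np.exp(-t) * x[:, 1]   # block-0 component of E(t) x(t)
    f1 = eps * np.exp(-t) * x[:, 0]   # block-1 component of E(t) x(t)
    # I0(t) = int_a^t e^{-(t-tau)} f0(tau) dtau
    I0 = np.exp(-t) * cumtrap(np.exp(t) * f0)
    # I1(t) = int_t^T e^{t-tau} f1(tau) dtau
    g = cumtrap(np.exp(-t) * f1)
    I1 = np.exp(t) * (g[-1] - g)
    return y + np.stack([I0, -I1], axis=1)

x = y.copy()
for _ in range(100):
    x_new = S(x)
    delta = np.abs(x_new - x).max()
    x = x_new
    if delta < 1e-13:
        break

print(x[0, 0], np.linalg.norm(x[-1]))  # P0 x(a) = 1, and x(T) is nearly 0
```

The iteration contracts because $\int \|E\| \, dt$ is small; a handful of sweeps already reach machine precision, mirroring the contraction constant $\frac{1}{2}$ of the proof.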

Discussion
For an illustration of the usefulness of the generalization, we consider the following simple example:   