Stability of Solutions to Evolution Problems

Large time behavior of solutions to abstract differential equations is studied. The results give sufficient conditions for the global existence of a solution to an abstract dynamical system (evolution problem), for this solution to be bounded, and for this solution to have a finite limit as t → ∞; in particular, sufficient conditions for this limit to be zero. The evolution problem is:

u̇ = A(t)u + F(t, u) + b(t), t ≥ 0; u(0) = u₀. (*)

Here u̇ := du/dt, u = u(t) ∈ H, H is a Hilbert space, t ∈ R₊ := [0, ∞), A(t) is a linear dissipative operator: Re(A(t)u, u) ≤ −γ(t)(u, u); F(t, u) is a nonlinear operator, ‖F(t, u)‖ ≤ c₀‖u‖^p, p > 1, where c₀ and p are positive constants; ‖b(t)‖ ≤ β(t), where β(t) ≥ 0 is a continuous function. The basic technical tool in this work is a nonlinear differential inequality. The non-classical case γ(t) ≤ 0 is also treated.


Introduction
A classical area of study is stability of solutions to evolution problems. We identify an evolution problem with an abstract dynamical system. An evolution problem is described by the equation

u̇ = F₁(t, u), t ≥ 0; u(0) = u₀. (1)

Here F₁ : X → X is a nonlinear operator in a Banach space X, u = u(t), and u̇ := du/dt. Quite often it is convenient to assume X to be a Hilbert space H, because the energy is often interpreted as a quantity (u, u) in a suitable Hilbert space. Suppose that F₁(t, 0) = 0 and u₀ = 0. Then u = 0 is a solution to Equation (1). A. M. Lyapunov in 1892 published a classical work on stability of motion, where he studied Equation (1) in the case X = Rⁿ with F₁ an analytic function of u. If F₁(t, 0) = 0 and F₁ is twice Fréchet differentiable, then one can write F₁(t, u) = A(t)u + F(t, u), where A(t) is a linear operator in X and F(t, u) = O(‖u‖²) as u → 0. This representation is a linearization of F₁ around the point u = 0. Lyapunov defined the notion of stability (Lyapunov stability) of the equilibrium solution u = 0 with respect to small perturbations of the data u₀. He calls this solution stable (Lyapunov stable) if for any ε > 0 there is a δ = δ(ε) > 0 such that if the inequality ‖u₀‖ < δ holds, then sup_{t≥0} ‖u(t)‖ < ε. Note that this definition implies the global existence of the solution to problem (1) for all u₀ in the ball ‖u₀‖ < δ.
The equilibrium solution u = 0 is called unstable if it is not Lyapunov stable. This means that there is an ε > 0 such that for any δ > 0 there are a u₀ with ‖u₀‖ < δ and a t_δ > 0 such that ‖u(t_δ)‖ ≥ ε.
One can give similar definitions for stability and instability of a solution to problem (1) with u₀ ≠ 0. In this case one calls the solution u = u(t; u₀) stable if all the solutions u(t; w₀) to problem (1), with w₀ in place of u₀, exist for all t ≥ 0 and satisfy the inequality sup_{t≥0} ‖u(t; u₀) − u(t; w₀)‖ < ε provided that ‖u₀ − w₀‖ < δ.
A solution u(t; u₀) is called asymptotically stable if it is stable and there is a δ > 0 such that all the solutions u(t; w₀) with ‖u₀ − w₀‖ < δ satisfy the relation lim_{t→∞} ‖u(t; u₀) − u(t; w₀)‖ = 0.
The equilibrium solution u = 0 is asymptotically stable if it is stable and there is a δ > 0 such that all the solutions u(t; u₀) with ‖u₀‖ < δ satisfy the relation lim_{t→∞} ‖u(t; u₀)‖ = 0.
Stability of the solutions and their behavior as t → ∞ are of interest in the study of dynamical systems. For example, if the equilibrium solution is asymptotically stable, then it does not exhibit chaotic behavior.
If A(t) = A is independent of time and X = Rⁿ, then Lyapunov obtained classical results on the stability of the equilibrium solution to problem (1). He assumed that F is analytic with respect to u ∈ Rⁿ and that |F(t, u)| ≤ c|u|² in a neighborhood of the origin, where c > 0 is a constant. Lyapunov proved that if the spectrum σ(A) of A lies in the half-plane Re z < 0, then the equilibrium solution u = 0 is asymptotically stable, and if at least one eigenvalue of A lies in the half-plane Re z > 0, then the equilibrium solution is unstable.
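Lyapunov's theorem is easy to illustrate numerically. In the sketch below the matrix A = diag(−1, −2), whose spectrum lies in the half-plane Re z < 0, and the quadratic nonlinearity F are hypothetical sample choices, not data from the text; the computed trajectory decays to the equilibrium u = 0:

```python
# Numerical illustration of Lyapunov's theorem (a sketch; the matrix
# A = diag(-1, -2) and the quadratic nonlinearity F are hypothetical choices).
# The spectrum of A lies in Re z < 0 and |F(u)| <= c|u|^2 near the origin,
# so the equilibrium u = 0 is asymptotically stable.

def simulate(u0, c=0.1, h=1e-3, T=10.0):
    """Forward-Euler integration of u' = Au + F(u), A = diag(-1, -2)."""
    x, y = u0
    for _ in range(int(T / h)):
        r2 = x * x + y * y                      # |u|^2
        x, y = x + h * (-x + c * r2), y + h * (-2.0 * y + c * r2)
    return (x * x + y * y) ** 0.5               # |u(T)|

print(simulate((0.1, 0.1)))                     # the trajectory decays
```

For a small initial vector the final norm is orders of magnitude smaller than the initial one, as the theorem predicts.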
If some of the eigenvalues of A lie on the imaginary axis and F = 0, so that problem (1) is linear, and if all the Jordan cells of the Jordan canonical form of the matrix corresponding to the operator A in Rⁿ consist of just one element, then the equilibrium solution is stable. Otherwise it is unstable.
Thus, a necessary and sufficient condition for Lyapunov stability of the equilibrium solution of the linear equation u̇ = Au in Rⁿ is known: the spectrum of A has to lie in the left complex half-plane, σ(A) ⊂ {z : Re z ≤ 0}, and the Jordan cells corresponding to purely imaginary eigenvalues of A have to consist of just one element.
If F ≢ 0, then, in general, when the spectrum of A lies in the left half of the complex plane and some eigenvalues of A lie on the imaginary axis, the stability cannot be decided by the linearized part A of F₁ alone. One can give examples of A such that the nonlinear part F can be chosen so that the equilibrium solution u = 0 is stable, and F can also be chosen so that this solution is unstable. For instance, consider u̇ = cu³, where c = const. This equation can be solved analytically by separation of variables. The result is u(t) = [u⁻²(0) − 2ct]^{−1/2}. Therefore, if c < 0 and |u(0)| ≤ δ, δ > 0, then the solution exists for all t ≥ 0 and the zero solution is asymptotically stable. But if c > 0, then the solution blows up at a finite time t_b, the blow-up time, and t_b = [2cu²(0)]^{−1}. In this case the zero solution is unstable.
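The explicit solution and the blow-up time in this example can be verified numerically (a sketch; the values u(0) = 1 and c = 1 are sample choices):

```python
# Verification of the explicit solution u(t) = [u(0)^{-2} - 2ct]^{-1/2} of
# u' = c u^3 and of the blow-up time t_b = [2 c u(0)^2]^{-1} (sample data
# u(0) = 1, c = 1, so t_b = 0.5).

def u_exact(t, u0=1.0, c=1.0):
    return (u0 ** -2 - 2.0 * c * t) ** -0.5

def t_blowup(u0=1.0, c=1.0):
    return 1.0 / (2.0 * c * u0 ** 2)

# central finite difference: u'(t) should equal c * u(t)**3 along the solution
t, s = 0.3, 1e-6
du = (u_exact(t + s) - u_exact(t - s)) / (2.0 * s)
residual = abs(du - u_exact(t) ** 3)            # c = 1 here
print(t_blowup(), residual)
```

The residual of the differential equation along the explicit solution is at the level of the finite-difference error, and u(t) grows without bound as t approaches t_b = 0.5.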
If A = A(t) depends on time, the stability theory is more complicated. The case of periodic A(t) has been studied extensively due to its importance in many applications (see [1,2]).
The stability theory in infinite-dimensional spaces, for example, in Hilbert and Banach spaces, was developed in the second half of the 20th century, see [3] and references therein. Again, the location of the spectrum of A(t) plays an important role in this theory.
The basic novel points of the theory presented below include sufficient conditions for the stability and asymptotic stability of the equilibrium solution to the abstract evolution problem (1) in a Hilbert space when σ(A(t)) may lie in the right half-plane for some or all moments of time t > 0, but sup σ(Re A(t)) → 0 as t → ∞. Therefore, our results are new even in finite-dimensional spaces.
The technical tool on which our study is based is a new nonlinear differential inequality. The results are stated in several theorems and illustrated by several examples. These results are taken from the cited papers by the author (see [4-11]), and especially from paper [4]. In the joint papers by the author's student N. S. Hoang and the author one can find various additional results on nonlinear inequalities (see [12-17]). Some versions of this inequality have been used in the monographs [18,19], where the Dynamical Systems Method (DSM) for solving operator equations was developed.
The literature on stability of solutions to evolution problems and their behavior at large times is enormous, and we refer the reader mainly to the papers and books directly related to the novel points mentioned above.
Consider the abstract nonlinear evolution problem

u̇ = A(t)u + F(t, u) + b(t), t ≥ 0, (2)
u(0) = u₀, (3)

where u(t) is a function with values in a Hilbert space H and A(t) is a linear bounded dissipative operator in H, which satisfies the inequality

Re(A(t)u, u) ≤ −γ(t)(u, u). (4)

The nonlinearity F(t, u) satisfies

‖F(t, u)‖ ≤ c₀‖u‖^p, p > 1, (5)

and ‖b(t)‖ ≤ β(t), where γ(t) and β(t) ≥ 0 are continuous real-valued functions defined on all of R₊ := [0, ∞), and c₀ > 0 and p > 1 are constants.
Recall that a linear operator A in a Hilbert space is called dissipative if Re(Au, u) ≤ 0 for all u ∈ D(A), where D(A) is the domain of definition of A. Dissipative operators are important because they describe systems in which energy is dissipating, for example, due to friction or other physical reasons. Passive nonlinear networks can be described by Equation (2) with a dissipative linear operator A(t); see [5,20], Chapter 3, and [21].
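Dissipativity is a condition on the symmetric part of A, not on its individual entries. For instance, for the sample matrix A = [[−1, 5], [−5, −1]] (a hypothetical illustration, not data from the text) one has Re(Au, u) = −(u, u) ≤ 0 for every real u, since the skew-symmetric part contributes nothing to the quadratic form:

```python
# Re(Au, u) for the sample real matrix A = [[-1, 5], [-5, -1]]: its symmetric
# part is -I and its skew-symmetric part [[0, 5], [-5, 0]] drops out of the
# quadratic form, so (Au, u) = -(u, u) <= 0 and A is dissipative, although
# its off-diagonal entries are large.

def re_quadratic_form(ax, u):
    """(Au, u) for a real 2x2 matrix ax and a real vector u."""
    (a, b), (c, d) = ax
    x, y = u
    return (a * x + b * y) * x + (c * x + d * y) * y

A = [[-1.0, 5.0], [-5.0, -1.0]]
vals = [re_quadratic_form(A, u) + (u[0] ** 2 + u[1] ** 2)
        for u in [(1.0, 0.0), (0.3, -2.0), (1.5, 1.5)]]
print(vals)   # each entry is (Au, u) + (u, u); it vanishes up to rounding
```

Note that the eigenvalues of this A are −1 ± 5i, so dissipativity does not require a real negative spectrum, only a non-positive real quadratic form.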
Our goal is to give sufficient conditions for the existence and uniqueness of the solution to problem (2) and (3) for all t ≥ 0, that is, for the global existence of u(t), for its boundedness, sup_{t≥0} ‖u(t)‖ < ∞, or for the relation lim_{t→∞} ‖u(t)‖ = 0 to hold.
If b(t) ≢ 0, then one says that problem (2) and (3) is a problem with persistently acting perturbations. The zero solution is called Lyapunov stable for problem (2) and (3) with persistently acting perturbations if for any ε > 0, however small, one can find a δ = δ(ε) > 0 such that if ‖u₀‖ ≤ δ and sup_{t≥0} ‖b(t)‖ ≤ δ, then the solution to the Cauchy problem (2) and (3) satisfies the estimate sup_{t≥0} ‖u(t)‖ ≤ ε.
We do not discuss here the method of Lyapunov functions for the study of stability (see, for example, [22,23]).
The approach developed in this work consists of reducing the stability problems to a nonlinear differential inequality and estimating the solutions to this inequality.
In Section 2 the formulation and proofs of two theorems, containing the results concerning this inequality and its discrete analog, are given. In Section 3 some results concerning the Lyapunov stability of the zero solution to Equation (2) are obtained. In Section 4 we derive stability results in the case when γ(t) < 0 in formula (4). This means that the linear operator A(t) in Equation (2) may have spectrum in the half-plane Re z > 0.
In the theory of chaos, one of the reasons for the chaotic behavior of a solution to an evolution problem is the lack of stability of solutions to this problem ([25,26]). The results presented in Section 3 can be considered as sufficient conditions for chaotic behavior not to appear in the evolution system described by problem (2) and (3).

A Differential Inequality
In this Section an essentially self-contained proof is given of an estimate for non-negative solutions of the nonlinear inequality

ġ(t) ≤ −γ(t)g(t) + α(t, g(t)) + β(t), t ≥ 0. (8)

In Section 3 some of the many possible applications of this estimate (see estimate (12) below) are demonstrated.
It is not assumed a priori that solutions g(t) ≥ 0 to inequality (8) are defined on all of R₊, that is, that these solutions exist globally. In Theorem 1 we give sufficient conditions for the global existence of g(t). Moreover, under these conditions a bound on g(t) is given, see estimate (12) in Theorem 1. This bound yields the relation lim_{t→∞} g(t) = 0 if lim_{t→∞} μ(t) = ∞ in Equation (12).
Let us formulate our assumptions. We assume that g(t) ≥ 0. We do not assume that the functions γ, α and β are non-negative. However, in many applications the functions α and β are bounds on some norms, and then these functions are non-negative. The function γ(t) is often (but not always) non-negative. For example, this happens if γ(t) comes from an estimate of the type (Au, u) ≥ γ(u, u). If the functions α and β are bounds from above on some norms, then one may assume without loss of generality that these functions are smooth, because one can approximate a non-smooth function with arbitrary accuracy by an infinitely smooth function, and choose this smooth function to be greater than the function it approximates.

Assumption A₁. The function g(t) ≥ 0 is defined on some interval [0, T), has a bounded derivative from the right, ġ(t) := lim_{s→+0} [g(t + s) − g(t)]/s, at any point of this interval, and satisfies inequality (8) at all t at which g(t) is defined. The functions γ(t) and β(t) are real-valued, defined on all of R₊ and continuous there. The function α(t, g) is continuous on R₊ × R₊ and locally Lipschitz with respect to g. This means that |α(t, g) − α(t, h)| ≤ L|g − h| for all g and h in any bounded set, where the constant L may depend on this set.

Assumption A₂. There exists a function μ(t) > 0, μ ∈ C¹(R₊), such that

α(t, 1/μ(t)) + β(t) ≤ (1/μ(t))[γ(t) − μ̇(t)/μ(t)], t ≥ 0, (10)

and

μ(0)g(0) ≤ 1. (11)

Theorem 1. If Assumptions A₁ and A₂ hold, then g(t) exists on all of R₊ and satisfies the estimate

g(t) ≤ 1/μ(t), t ≥ 0. (12)

One can replace the initial point t = 0 by some point t₀ ∈ R, assume that the interval of time is [t₀, t₀ + T), and require that the corresponding inequalities hold for t ≥ t₀ rather than for t ≥ 0. The proofs and the conclusions remain unchanged.
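A quick numerical sanity check of the estimate g(t) ≤ 1/μ(t) from Theorem 1 is possible. In the sketch below all data are illustrative assumptions: we integrate the equality case of inequality (8) with γ(t) = 1, α(t, g) = c₀g^p, c₀ = 0.1, p = 2, β(t) = 0, and take μ(t) = e^{0.5t}; one verifies directly that inequality (10) holds (0.1e^{−t} ≤ 0.5e^{−0.5t}) and that inequality (11) holds for g(0) = 0.9.

```python
import math

# Sanity check of the bound g(t) <= 1/mu(t) for sample data (all numbers are
# illustrative assumptions): gamma(t) = 1, alpha(t, g) = 0.1 * g**2,
# beta(t) = 0, mu(t) = exp(0.5 * t), g(0) = 0.9.  Inequality (10) reads
# 0.1 * exp(-t) <= 0.5 * exp(-0.5 * t) and (11) reads 0.9 <= 1; both hold.

def g_bound_check(g0=0.9, gamma=1.0, c0=0.1, p=2.0, a=0.5, h=1e-3, T=10.0):
    g, ok = g0, True
    for n in range(1, int(T / h) + 1):
        g += h * (-gamma * g + c0 * g ** p)            # equality case of (8)
        ok = ok and g <= math.exp(-a * n * h) + 1e-9   # g <= 1/mu = e^{-at}
    return ok

print(g_bound_check())
```

The check passes with a wide margin, since the actual solution decays roughly like e^{−0.9t} while the bound decays like e^{−0.5t}.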
Comparison Lemma. Consider the Cauchy problems

φ̇ = f(t, φ), φ(0) = φ₀, (*)
ψ̇ = g(t, ψ), ψ(0) = ψ₀. (**)

Assume ψ₀ ≥ φ₀, and f(t, x) ≤ g(t, x) for any t and x for which both f and g are defined. Assume that f and g are continuous functions in a set [0, s) × (a, b), φ₀ ∈ (a, b), ψ is the maximal solution to (**) and φ is any solution to (*). Then φ(t) ≤ ψ(t) on the maximal interval [0, T) of the existence of both φ and ψ.
Proof of the Comparison Lemma. First, let us assume for simplicity that problems (*) and (**) have unique solutions. Later we will discard this simplifying assumption. If f and g satisfy a local Lipschitz condition with respect to φ, respectively ψ, then our simplifying assumption holds. Assume secondly, also for simplicity, that g(t, x) > f(t, x). Under this simplifying assumption it is easy to prove the conclusion of the Lemma, because the graph of ψ must lie above the graph of φ for t > 0. Indeed, in a small neighborhood [0, δ), where δ > 0 is sufficiently small, the graph of ψ lies above the graph of φ. This is obviously true if φ₀ < ψ₀, because of the continuity of φ and ψ. If φ₀ = ψ₀, then the graph of ψ lies above the graph of φ because φ̇(0) < ψ̇(0) due to the assumption f(0, φ₀) < g(0, φ₀) = g(0, ψ₀). To check the last claim, assume that there is a point t₁ ∈ (0, T) such that φ(t₁) = ψ(t₁), and φ(t) < ψ(t) for t ∈ (0, t₁). Then φ(t) − φ(t₁) < ψ(t) − ψ(t₁) for t ∈ (0, t₁). Divide this inequality by t − t₁ < 0 and get

[φ(t) − φ(t₁)]/(t − t₁) > [ψ(t) − ψ(t₁)]/(t − t₁).

Pass to the limit t → t₁, t < t₁, in the above inequality, use the differential equations for φ and ψ and the equality φ(t₁) = ψ(t₁), and obtain the following relation:

f(t₁, φ(t₁)) ≥ g(t₁, φ(t₁)).

This relation contradicts the assumption f(t, x) < g(t, x). This contradiction proves the conclusion of the Comparison Lemma under the additional assumption f(t, x) < g(t, x).
To prove the Comparison Lemma under the original assumption f(t, x) ≤ g(t, x), let us consider problem (*) with f replaced by f_n := f − 1/n < f. Let φ_n solve problem (*) with f replaced by f_n and with the same initial condition as in (*). Since f_n(t, x) < g(t, x), then, by what we have just proved, it follows that φ_n(t) ≤ ψ(t) on the common interval [0, T_n) of the existence of φ_n and ψ. By the standard result about the continuous dependence of the solution to (*) on a parameter, one concludes that lim_{n→∞} T_n = T and lim_{n→∞} φ_n(t) = φ(t) for any t ∈ [0, T). Therefore, passing to the limit n → ∞ in the inequality φ_n(t) ≤ ψ(t), one gets the conclusion of the Comparison Lemma under the original assumption f(t, x) ≤ g(t, x).
If the simplifying assumption concerning the uniqueness of the solutions to (*) and (**) is dropped, then (*) and (**) may have many solutions. The limit of the solutions φ_n is the minimal solution to (*). If one considers problem (**) with g replaced by g_n := g + 1/n > g, and denotes by ψ_n the corresponding solution, then the limit lim_{n→∞} ψ_n(t) = ψ(t) is the maximal solution to (**). In this case the above argument yields the conclusion of the Lemma with ψ(t) being the maximal solution to (**) and φ(t) being any solution to (*). The Comparison Lemma is proved. □

Remark 2. If φ(t) is bounded from below for all t ≥ 0, so that c ≤ φ(t) for all t ≥ 0, and ψ(t) exists globally, that is, for all t ≥ 0, then the inequality c ≤ φ(t) ≤ ψ(t) and the continuity of f(t, x) on the set [0, ∞) × R imply that any solution φ to (*) exists globally. Indeed, if it existed only on a finite interval [0, T), then it would have to tend to infinity as t → T; but this is impossible, because the bound c ≤ φ(t) ≤ ψ(t) and the global existence and continuity of ψ do not allow φ(t) to grow to infinity as t → T.
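The Comparison Lemma can be illustrated numerically. In the sketch below the pair f(t, x) = −x and g(t, x) = −x + 0.1 (so that f ≤ g everywhere) is a hypothetical example, and both problems are integrated by Euler's method from the same initial value:

```python
# Numerical illustration of the Comparison Lemma with the hypothetical pair
# f(t, x) = -x and g(t, x) = -x + 0.1 (so f <= g everywhere) and equal
# initial values: the solution psi of (**) dominates the solution phi of (*).

def euler(rhs, x0, h=1e-3, T=5.0):
    """Forward-Euler trajectory of x' = rhs(t, x), x(0) = x0."""
    xs, x = [x0], x0
    for n in range(int(T / h)):
        x += h * rhs(n * h, x)
        xs.append(x)
    return xs

phi = euler(lambda t, x: -x, 1.0)
psi = euler(lambda t, x: -x + 0.1, 1.0)
dominated = all(p <= q + 1e-12 for p, q in zip(phi, psi))
print(dominated)
```

The domination φ(t) ≤ ψ(t) holds at every step, as the Lemma asserts; for this pair it also follows by a one-line induction on the Euler recursion.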
Let us formulate and prove a discrete version of Theorem 1.
Theorem 2. Assume that g_n ≥ 0, α(n, g_n) ≥ 0, α(n, g_n) ≥ α(n, p_n) if g_n ≥ p_n, and

g_{n+1} ≤ g_n(1 − h_nγ_n) + h_nα(n, g_n) + h_nβ_n, h_n > 0, 0 < h_nγ_n < 1.

If there exists a sequence μ_n > 0 such that

α(n, 1/μ_n) + β_n ≤ (1/μ_n)[γ_n − (μ_{n+1} − μ_n)/(h_nμ_n)] (19)

and

μ₀g₀ ≤ 1, (20)

then

g_n ≤ 1/μ_n for all n ≥ 0. (21)

Proof. For n = 0 inequality (21) holds because of inequality (20). Assume that it holds for all n ≤ m and let us check that it then holds for n = m + 1. If this is done, Theorem 2 is proved. Using the inductive assumption and the monotonicity of α, one gets:

g_{m+1} ≤ (1/μ_m)(1 − h_mγ_m) + h_mα(m, 1/μ_m) + h_mβ_m.

This and inequality (19) imply:

g_{m+1} ≤ (1/μ_m)(1 − h_mγ_m) + (h_m/μ_m)[γ_m − (μ_{m+1} − μ_m)/(h_mμ_m)] = 1/μ_m − (μ_{m+1} − μ_m)/μ_m² ≤ 1/μ_{m+1}.

The last inequality is obvious since it can be written as

0 ≤ (μ_{m+1} − μ_m)².

Theorem 2 is proved. □
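As a numerical sanity check of the discrete bound (a sketch; the recursion g_{n+1} = g_n(1 − hγ) + hc₀g_n^p used below, as well as the numbers γ = 1, c₀ = 0.1, p = 2, a = 0.5, h = 0.01, are illustrative assumptions), one can iterate the recursion and verify that g_n ≤ 1/μ_n with μ_n = e^{anh}:

```python
import math

# Discrete sanity check (a sketch; the recursion and all numbers below are
# illustrative assumptions): iterate
#   g_{n+1} = g_n * (1 - h * gamma) + h * c0 * g_n ** p,
# a discrete analog of inequality (8), and verify that g_n <= 1/mu_n for
# mu_n = exp(a * n * h), with gamma = 1, c0 = 0.1, p = 2, a = 0.5, h = 0.01.

def discrete_check(g0=0.9, gamma=1.0, c0=0.1, p=2.0, a=0.5, h=0.01, N=1000):
    g, ok = g0, True
    for n in range(1, N + 1):
        g = g * (1.0 - h * gamma) + h * c0 * g ** p
        ok = ok and g <= math.exp(-a * n * h) + 1e-12    # g_n <= 1/mu_n
    return ok

print(discrete_check())
```

For these data g_n contracts by a factor of at most 0.9909 per step while the bound contracts by about 0.995, so the estimate holds with a margin.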
Theorem 2 was formulated in [14] and proved in [15]. For completeness we have included a proof which is shorter than the one in [15].
Let us give a few simple examples of applications of Theorem 1.
Example 1. Consider inequality (8) and assume g ≥ 0. Choose μ(t) = t + 1. Then inequality (11) reduces to g(0) ≤ 1, and inequality (10) reduces to a condition on the coefficients, inequality (22), which holds obviously. Therefore, any g ≥ 0 that satisfies inequality (22) and g(0) ≤ 1 exists for all t ≥ 0 and satisfies the estimate

g(t) ≤ (1 + t)^{−1}.

In this example the solution to the linearized problem tends to infinity as t → ∞.

Example 2. Consider the classical problem

u̇ = A(t)u + F(t, u), t ≥ 0; u(0) = u₀, (23)

where A(t) is a linear operator in Rⁿ and F is a nonlinear operator. Assume that Re(A(t)u, u) ≤ −γ(u, u), where γ = const > 0, that ‖F(t, u)‖ ≤ c‖u‖^p, p = const > 1, c = const > 0, and that ‖·‖ is the norm of a vector in Rⁿ. We also assume that Equation (23) has the following property:

Property P: If a solution to Equation (23) is defined on the maximal interval of its existence [0, T) and T < ∞, then lim_{t→T} ‖u(t)‖ = ∞.

It is known (see, for example, [27]) that Property P holds under mild assumptions on the right-hand side; in particular, it holds if the right-hand side satisfies a local Lipschitz condition with respect to u, as is shown below. By Peano's theorem the Cauchy problem

u̇ = f(t, u), u(0) = u₀, (24)

where u ∈ Rⁿ, has a local solution on an interval [0, a), provided that f is a continuous function in a neighborhood of (0, u₀). It is known that in every infinite-dimensional Banach space the Peano theorem fails. Therefore, in an infinite-dimensional Banach space we assume that problems (23) and (24) have a solution, and that if [0, T) is the maximal interval of the existence of the solution, then Property P holds. This happens, for example, if f(t, u) satisfies a local Lipschitz condition with respect to u and is continuous with respect to t ∈ [0, T]. Indeed, if a local Lipschitz condition holds, then the local interval of the existence of the solution to the Cauchy problem (24) is of length b = min(RM⁻¹, L), provided that f is continuous with respect to t, satisfies the bound ‖f(t, u)‖ ≤ M, and is Lipschitz in the ball B(u₀, R) := {u : ‖u − u₀‖ ≤ R}. Under these assumptions the solution to problem (24) is unique and stays in the ball B(u₀, R) for t ∈ [0, b].
To see that Property P holds for problem (24) if f satisfies a local Lipschitz condition with respect to u, assume that the solution to Equation (24) does not exist for t > T. Under our assumptions, if the solution u of problem (24) satisfies the inequality sup_{0≤t<T} ‖u(t)‖ < ∞, then the constants M, L and R are finite. Therefore b > 0. Take the initial point t₀ = T − 0.5b. By the local existence theorem the solution u(t) exists on the interval [T − 0.5b, T + 0.5b]. This is a contradiction, since we have assumed that this solution does not exist for t > T. This contradiction proves that Property P holds for problem (24) if f satisfies a local Lipschitz condition.
Let us use Theorem 1 to prove the asymptotic stability of the zero solution to Equation (23) and to illustrate the application of our general method for the study of stability of solutions to abstract evolution problems, the method that we develop below.
Let g(t) := ‖u(t)‖, where the norm is taken in Rⁿ. Take the dot product of Equation (23) with u, then take the real part of both sides of the resulting equation and get

Re(u̇, u) = g ġ = Re(Au, u) + Re(F(t, u), u) ≤ −γg² + cg^{p+1}.

Since g ≥ 0, one obtains from the above inequality an inequality of the type (8), namely,

ġ ≤ −γg + cg^p,

where γ and c are positive constants. Choose

μ(t) = λe^{at},

where λ = const > 0 and a = const ∈ (0, γ). Note that a can be chosen arbitrarily close to γ. We choose λ later. Denote b := γ − a > 0. Then inequality (11) holds provided that λg(0) ≤ 1. Inequality (10) holds if cλ^{−p}e^{−apt} ≤ bλ^{−1}e^{−at}, that is, if cλ^{1−p}e^{−a(p−1)t} ≤ b. In turn, the last inequality holds for an arbitrary fixed c > 0 and an arbitrarily small fixed b > 0 provided that λ > 0 is sufficiently large. One concludes that for any initial data u₀ the solution to Equation (23) exists globally and admits the estimate ‖u(t)‖ ≤ λ⁻¹e^{−at}, where the positive constant a < γ can be chosen arbitrarily close to γ if the positive constant c is sufficiently small.
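The decay estimate just derived can be checked numerically. In the sketch below all data are illustrative assumptions: n = 2, A = −γI (so that Re(Au, u) = −γ(u, u)), F(t, u) = c‖u‖u (so that ‖F(t, u)‖ = c‖u‖^p with p = 2), γ = 1, c = 0.1, a = 0.5, λ = 2, for which inequalities (10) and (11) hold when ‖u₀‖ = 0.5:

```python
import math

# Example 2 in R^2 with assumed sample data: A = -gamma * I, so that
# Re(Au, u) = -gamma * (u, u), and F(t, u) = c * |u| * u, so that
# |F(t, u)| = c * |u|**p with p = 2.  With gamma = 1, c = 0.1, lambda = 2,
# a = 0.5 the choice mu(t) = lambda * exp(a * t) satisfies (10) and (11)
# for |u(0)| = 0.5, and the predicted bound is |u(t)| <= exp(-a * t) / lambda.

def example2_check(u0=(0.3, 0.4), gamma=1.0, c=0.1, lam=2.0, a=0.5,
                   h=1e-3, T=10.0):
    x, y, ok = u0[0], u0[1], True
    for n in range(1, int(T / h) + 1):
        r = (x * x + y * y) ** 0.5               # |u|
        x += h * (-gamma * x + c * r * x)        # u' = Au + F(t, u)
        y += h * (-gamma * y + c * r * y)
        ok = ok and (x * x + y * y) ** 0.5 <= math.exp(-a * n * h) / lam + 1e-9
    return ok

print(example2_check())
```

The bound holds with a wide margin, since the actual trajectory decays at a rate close to γ = 1 while the bound decays at the rate a = 0.5.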
The above argument remains valid also for unbounded, closed, densely defined linear operators A(t), provided that Property P holds.
If A is a generator of a C₀ semigroup T(t) and F satisfies a local Lipschitz condition, then problem (23) is equivalent to the integral equation u(t) = T(t)u₀ + ∫₀ᵗ T(t − s)F(s, u(s)) ds, and this equation may be useful for the study of the global existence of the solution to problem (23) (see [28]).
Example 3. Consider an example in which the solution blows up in a finite time, so that it does not exist globally. Consider problem (26). Here D is a bounded domain with a smooth boundary S, N is the outer unit normal to S, and u₀ > 0 is a smooth function. Let g(t) := ‖u(t)‖, where the norm is taken in H. Take the inner product in H of Equation (26) and u, then take the real part of both sides of the resulting equation and get

g ġ ≤ −γg² + αg³ + βg.

Since g ≥ 0, one obtains an inequality of the type (8), namely

ġ ≤ −γg + αg² + β.

Now it is possible to use Theorem 1.
Choose μ(t) = λe^{kt}, where λ and k are positive constants, k < γ. Assume that λg₀ ≤ 1, where g₀ := ‖u₀(x)‖. Then inequality (11) holds for any initial data u₀, that is, for any g₀, if λ is sufficiently small. Inequality (10) holds if

α(t)e^{−kt}/λ + λe^{kt}β(t) ≤ γ − k.

One can easily impose various conditions on α and β so that the above inequality holds. For example, assume that α decays monotonically as t grows, α(0)/λ < (γ − k)/2, and β(t) ≤ νe^{−k′t}, where k′ > k, k′ = const, ν > 0 is a constant, and λν ≤ (γ − k)/2. Then inequality (10) holds, and Theorem 1 yields the estimate g(t) ≤ λ⁻¹e^{−kt}.

In Sections 3 and 4 some stability results for abstract evolution problems are presented in detail. These results are formulated in four theorems. The basic ideas are similar to the ones discussed in the examples in this Section, but new assumptions and new technical tools are used.

Stability Results
In this Section we develop a method for the study of stability of solutions to the evolution problems described by the Cauchy problem (2) and (3) for abstract differential equations with a dissipative bounded linear operator A(t) and a nonlinearity F(t, u) satisfying inequality (5). Condition (5) means that for sufficiently small ‖u(t)‖ the nonlinearity is of higher order of smallness than ‖u(t)‖. We also study the large time behavior of the solution to problem (2) and (3) with persistently acting perturbations b(t).
In this paper we assume that A(t) is a bounded linear dissipative operator, but our methods are valid also for unbounded linear dissipative operators A(t), for which one can prove global existence of the solution to problem (2) and (3).We do not go into further detail in this paper.
Let us formulate the first stability result.

Theorem 3. Assume that Re(Au, u) ≤ −k‖u‖² for all u ∈ H, k = const > 0, that is, inequality (4) holds with γ(t) = k. Then the solution to problem (2) and (3) with b(t) = 0 satisfies the estimate ‖u(t)‖ = O(e^{−(k−ε)t}) as t → ∞. Here 0 < ε < k can be chosen arbitrarily small if ‖u₀‖ is sufficiently small.

This theorem implies the asymptotic stability in the sense of Lyapunov of the zero solution to Equation (2) with b(t) = 0. Our proof of Theorem 3 is new and very short.
Proof of Theorem 3. Multiply Equation (2) (in which b(t) = 0 is assumed) by u, denote g = g(t) := ‖u(t)‖, take the real part, and use assumption (4) with γ(t) = k > 0 to get

g ġ ≤ −kg² + c₀g^{p+1}. (27)

If g(t) > 0, then the derivative ġ does exist, and

ġ ≤ −kg + c₀g^p,

as one can check. If g(t) = 0 on an open subset of R₊, then the derivative ġ does exist on this subset and ġ(t) = 0 there. If g(t) = 0 but in any neighborhood (t − δ, t + δ) there are points at which g does not vanish, then by ġ we understand the derivative from the right, that is,

ġ(t) = lim_{s→+0} [g(t + s) − g(t)]/s.

This limit does exist and is equal to ‖u̇(t)‖. Indeed, the function u(t) is continuously differentiable, so at a point t at which u(t) = 0 one has u(t + s) = su̇(t) + o(s), and therefore ‖u(t + s)‖/s → ‖u̇(t)‖ as s → +0.

The assumption about the existence of the bounded derivative ġ(t) from the right in Theorem 3 was made because the function ‖u(t)‖ does not, in general, have a derivative in the usual sense at the points t at which u(t) = 0, no matter how smooth the function u(t) is at such a point. Indeed, the analogous limit from the left equals lim_{s→−0} ‖u(t + s)‖/s = −‖u̇(t)‖. Consequently, the right and left derivatives of ‖u(t)‖ at the point t at which u(t) = 0 do exist, but are different. Therefore, the derivative of ‖u(t)‖ at the point t at which u(t) = 0 does not exist in the usual sense, unless ‖u̇(t)‖ = 0.
However, as we have proved above, the derivative ġ(t) from the right always exists, provided that u(t) is continuously differentiable at the point t.
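The difference between the one-sided derivatives of ‖u(t)‖ at a zero of u is easy to see numerically. In the sketch below u(t) = t − 1/2 is a sample scalar function vanishing at t = 1/2 with u̇(1/2) = 1; the right difference quotient of |u(t)| tends to 1 and the left one to −1:

```python
# One-sided derivatives of |u(t)| at a zero of u: for the sample smooth
# function u(t) = t - 0.5, which vanishes at t = 0.5 with u'(0.5) = 1, the
# right difference quotient of |u| tends to +1 and the left one to -1.

def u(t):
    return t - 0.5

def one_sided(t0, s):
    """(right, left) difference quotients of |u| at t0 with step s > 0."""
    g = lambda t: abs(u(t))
    return (g(t0 + s) - g(t0)) / s, (g(t0) - g(t0 - s)) / s

right, left = one_sided(0.5, 1e-6)
print(right, left)
```

The two quotients converge to +‖u̇‖ and −‖u̇‖ respectively, so the two-sided derivative of ‖u(t)‖ fails to exist at the zero, exactly as in the discussion above.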
Since g ≥ 0, inequality (27) yields inequality (8) with γ(t) = k > 0, β(t) = 0, and α(t, g) = c₀g^p, p > 1. With μ(t) = λe^{bt}, inequality (10) takes the form

c₀λ^{1−p}e^{−b(p−1)t} ≤ k − b, (32)

and inequality (11) takes the form

λ‖u₀‖ ≤ 1. (33)

We choose λ and b so that inequalities (32) and (33) hold. This is always possible if b < k and ‖u₀‖ is sufficiently small.

Let us now formulate a stability result in which we assume that b(t) ≢ 0. The function b(t) has the physical meaning of persistently acting perturbations.