
# Stability of Solutions to Evolution Problems

Mathematics Department, Kansas State University, Manhattan, KS 66506-2602, USA
Mathematics 2013, 1(2), 46-64; https://doi.org/10.3390/math1020046
Received: 26 February 2013 / Revised: 25 April 2013 / Accepted: 25 April 2013 / Published: 13 May 2013

## Abstract

Large time behavior of solutions to abstract differential equations is studied. The results give sufficient conditions for the global existence of a solution to an abstract dynamical system (evolution problem), for this solution to be bounded, and for this solution to have a finite limit as $t \to \infty$; in particular, sufficient conditions for this limit to be zero. The evolution problem is: $\dot{u} = A(t)u + F(t, u) + b(t)$, $t \ge 0$; $u(0) = u_0$. (*) Here $\dot{u} := \frac{du}{dt}$, $u = u(t) \in H$, $H$ is a Hilbert space, $t \in \mathbb{R}_+ := [0, \infty)$, and $A(t)$ is a linear dissipative operator: $\mathrm{Re}(A(t)u, u) \le -\gamma(t)(u, u)$. Furthermore, $F(t, u)$ is a nonlinear operator, $\|F(t, u)\| \le c_0\|u\|^p$, $p > 1$, where $c_0$ and $p$ are positive constants; $\|b(t)\| \le \beta(t)$, where $\beta(t) \ge 0$ is a continuous function. The basic technical tool in this work is a nonlinear differential inequality. The non-classical case $\gamma(t) \le 0$ is also treated.

## 1. Introduction

A classical area of study is stability of solutions to evolution problems. We identify an evolution problem with an abstract dynamical system. An evolution problem is described by an equation
$\dot{u}(t) = F_1(t, u), \quad u(0) = u_0$ (1)
Here $F_1 : X \to X$ is a nonlinear operator in a Banach space $X$, and $\dot{u} = \dot{u}(t) = \frac{du}{dt}$. Quite often it is convenient to take $X$ to be a Hilbert space $H$, because the energy is often interpreted as the quantity $(u, u)$ in a suitable Hilbert space. Suppose that $F_1(t, 0) = 0$ and $u_0 = 0$. Then $u = 0$ is a solution to Equation (1). A. M. Lyapunov in 1892 published a classical work on stability of motion, in which he studied Equation (1) in the case $X = \mathbb{R}^n$ with $F_1$ an analytic function of $u$. If $F_1(t, 0) = 0$ and $F_1$ is twice Fréchet differentiable, then one can write $F_1(t, u) = A(t)u + F(t, u)$, where $A(t)$ is a linear operator in $X$ and $\|F(t, u)\| = O(\|u\|^2)$ as $\|u\| \to 0$. This representation is a linearization of $F_1$ around the point $u = 0$. Lyapunov defined the notion of stability (Lyapunov stability) of the equilibrium solution $u = 0$ with respect to small perturbations of the data $u_0$. He calls this solution stable (Lyapunov stable) if for any $\epsilon > 0$ there is a $\delta = \delta(\epsilon) > 0$ such that if $\|u_0\| < \delta$, then $\sup_{t \ge 0} \|u(t)\| < \epsilon$. Note that this definition implies the global existence of the solution to problem (1) for all $u_0$ in the ball $\|u_0\| < \delta$.
The equilibrium solution $u = 0$ is called unstable if it is not Lyapunov stable. This means that there is an $\epsilon > 0$ such that for any $\delta > 0$ there are a $u_0$, $\|u_0\| < \delta$, and a $t_\delta > 0$ such that $\|u(t_\delta)\| \ge \epsilon$.
One can give similar definitions of stability and instability for a solution to problem (1) with $u_0 \ne 0$. In this case one calls the solution $u = u(t; u_0)$ stable if all the solutions $u(t; w_0)$ to problem (1), with $w_0$ in place of $u_0$, exist for all $t \ge 0$ and satisfy the inequality $\sup_{t \ge 0} \|u(t; u_0) - u(t; w_0)\| < \epsilon$ provided that $\|u_0 - w_0\| < \delta$.
A solution $u(t; u_0)$ is called asymptotically stable if it is stable and there is a $\delta > 0$ such that all the solutions $u(t; w_0)$ with $\|u_0 - w_0\| < \delta$ satisfy the relation $\lim_{t\to\infty} \|u(t; u_0) - u(t; w_0)\| = 0$.
The equilibrium solution $u = 0$ is asymptotically stable if it is stable and there is a $\delta > 0$ such that all the solutions $u(t; u_0)$ with $\|u_0\| < \delta$ satisfy the relation $\lim_{t\to\infty} \|u(t; u_0)\| = 0$.
Consider problem (1) with $F_1(t, u) + \phi(t, u)$ in place of $F_1(t, u)$. The term $\phi(t, u)$ is called a persistently acting perturbation. The equilibrium solution $u = 0$ is called stable with respect to persistently acting perturbations if for any $\epsilon > 0$ there exists a $\delta = \delta(\epsilon) > 0$ such that if $\|\phi(t, u)\| < \delta$ and $\|u_0\| < \delta$, then $\sup_{t \ge 0} \|u(t; u_0)\| < \epsilon$.
Stability of the solutions and their behavior as $t \to \infty$ are of interest in the study of dynamical systems. For example, if the equilibrium solution is asymptotically stable, then it does not exhibit chaotic behavior.
If $A(t) = A$ is independent of time and $X = \mathbb{R}^n$, then Lyapunov obtained classical results on the stability of the equilibrium solution to problem (1). He assumed that $F$ is analytic with respect to $u \in \mathbb{R}^n$ and that $|F(t, u)| \le c|u|^2$ in a neighborhood of the origin, where $c > 0$ is a constant. Lyapunov proved that if the spectrum $\sigma(A)$ of $A$ lies in the half-plane $\mathrm{Re}\, z < 0$, then the equilibrium solution $u = 0$ is asymptotically stable, and if at least one eigenvalue of $A$ lies in the half-plane $\mathrm{Re}\, z > 0$, then the equilibrium solution is unstable.
If some of the eigenvalues of $A$ lie on the imaginary axis and $F = 0$, so that problem (1) is linear, and if all the Jordan cells of the Jordan canonical form of the matrix corresponding to the operator $A$ in $\mathbb{R}^n$ consist of just one element, then the equilibrium solution is stable. Otherwise it is unstable.
Thus, a necessary and sufficient condition for Lyapunov stability of the equilibrium solution of the linear equation $\dot{u} = Au$ in $\mathbb{R}^n$ is known: the spectrum of $A$ has to lie in the left complex half-plane, $\sigma \subset \{z : \mathrm{Re}\, z \le 0\}$, and the Jordan cells corresponding to purely imaginary eigenvalues of $A$ have to consist of just one element.
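The role of the Jordan cell size can be checked directly in a small illustration (our own example, not from the paper): for the nilpotent Jordan block $A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$, whose only eigenvalue $0$ lies on the imaginary axis but whose Jordan cell has two elements, one has $e^{At} = I + At$, so solutions of $\dot{u} = Au$ grow linearly in $t$ and the equilibrium is unstable.

```python
# Illustration (not from the paper): for A = [[0, 1], [0, 0]] the matrix
# exponential is exp(At) = I + At, so the solution of u' = Au grows
# linearly, even though the spectrum of A is {0} (on the imaginary axis).

def evolve_jordan_block(u0, t):
    """Solution of u' = Au with A = [[0, 1], [0, 0]]: u(t) = (I + At)u0."""
    x0, y0 = u0
    return (x0 + t * y0, y0)

def norm(v):
    return (v[0] ** 2 + v[1] ** 2) ** 0.5

u0 = (0.0, 1.0)
print(norm(evolve_jordan_block(u0, 0.0)))    # 1.0
print(norm(evolve_jordan_block(u0, 100.0)))  # about 100.005: linear growth
```

By contrast, if the Jordan cells of size greater than one are absent (e.g., $A = 0$), the norm of the solution stays constant, in agreement with the stability criterion above.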
If $F \not\equiv 0$, then, in general, when the spectrum of $A$ lies in the left half of the complex plane and some eigenvalues of $A$ lie on the imaginary axis, stability cannot be decided from the linearized part $A$ of $F_1$ alone. One can give examples of $A$ such that the nonlinear part $F$ can be chosen so that the equilibrium solution $u = 0$ is stable, and $F$ can also be chosen so that this solution is unstable. For instance, consider $\dot{u} = cu^3$, where $c = \mathrm{const}$. This equation can be solved analytically by separation of variables. The result is $u(t) = [u^{-2}(0) - 2ct]^{-0.5}$. Therefore, if $c < 0$ and $|u(0)| \le \delta$, $\delta > 0$, then the solution exists for all $t \ge 0$ and is asymptotically stable. But if $c > 0$, then the solution blows up at a finite time $t_b$, the blow-up time, and $t_b = [2cu^2(0)]^{-1}$. In this case the zero solution is unstable.
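The explicit formula for $\dot{u} = cu^3$ can be verified numerically; the following sketch (with sample values $u(0) = 0.5$, $c = -1$ chosen here for illustration) compares a forward Euler integration against $u(t) = [u^{-2}(0) - 2ct]^{-0.5}$ in the stable case $c < 0$.

```python
# Illustrative check of u' = c*u**3: for c < 0 the separated-variables
# solution u(t) = (u(0)**-2 - 2*c*t)**-0.5 decays, and forward Euler
# agrees with it (sample values chosen here, not from the paper).

def analytic(u0, c, t):
    return (u0 ** -2 - 2.0 * c * t) ** -0.5

def euler(u0, c, t, n=200000):
    h = t / n
    u = u0
    for _ in range(n):
        u += h * c * u ** 3
    return u

u0, c, t = 0.5, -1.0, 10.0
print(analytic(u0, c, t))  # (4 + 20)**-0.5, about 0.2041
print(euler(u0, c, t))     # close to the analytic value
```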
If $A = A(t)$, the stability theory is more complicated. The case of periodic $A(t)$ has been studied extensively because of its importance in many applications (see [1,2]).
The stability theory in infinite-dimensional spaces, for example, in Hilbert and Banach spaces, was developed in the second half of the 20th century, see [3] and references therein. Again, the location of the spectrum of $A(t)$ plays an important role in this theory.
The basic novel points of the theory presented below include sufficient conditions for the stability and asymptotic stability of the equilibrium solution to the abstract evolution problem (1) in a Hilbert space when $\sigma(A(t))$ may lie in the right half-plane for some or all moments of time $t > 0$, but $\sup \sigma(\mathrm{Re}\, A(t)) \to 0$ as $t \to \infty$. Therefore, our results are new even in finite-dimensional spaces.
The technical tool on which our study is based is a new nonlinear differential inequality. The results are stated in several theorems and illustrated by several examples. These results are taken from the cited papers by the author (see [4,5,6,7,8,9,10,11]), and, especially, from paper [4]. In the joint papers by the author's student N. S. Hoang and the author one can find various additional results on nonlinear inequalities (see [12,13,14,15,16,17]). Some versions of this inequality have been used in the monographs [18,19], where the Dynamical Systems Method (DSM) for solving operator equations was developed.
The literature on stability of solutions to evolution problems and their behavior at large times is enormous, and we refer the reader mainly to the papers and books directly related to the novel points mentioned above.
Consider an abstract nonlinear evolution problem
$\dot{u} = A(t)u + F(t, u) + b(t), \quad \dot{u} := \frac{du}{dt}$ (2)
$u(0) = u_0$ (3)
where $u(t)$ is a function with values in a Hilbert space $H$ and $A(t)$ is a linear bounded dissipative operator in $H$, which satisfies the inequality
$\mathrm{Re}(A(t)u, u) \le -\gamma(t)\|u\|^2, \quad t \ge 0; \quad \forall u \in H$ (4)
where $F(t, u)$ is a nonlinear map in $H$,
$\|F(t, u)\| \le c_0\|u(t)\|^p, \quad p > 1$ (5)
$\|b(t)\| \le \beta(t)$ (6)
$\gamma(t)$ and $\beta(t) \ge 0$ are continuous real-valued functions defined on all of $\mathbb{R}_+ := [0, \infty)$, and $c_0 > 0$ and $p > 1$ are constants.
Recall that a linear operator $A$ in a Hilbert space is called dissipative if $\mathrm{Re}(Au, u) \le 0$ for all $u \in D(A)$, where $D(A)$ is the domain of definition of $A$. Dissipative operators are important because they describe systems in which energy is dissipated, for example, due to friction or other physical reasons. Passive nonlinear networks can be described by Equation (2) with a dissipative linear operator $A(t)$, see [5,20], Chapter 3, and [21].
Let $\sigma := \sigma(A(t))$ denote the spectrum of the linear operator $A(t)$, $\Pi := \{z : \mathrm{Re}\, z < 0\}$, $\ell := \{z : \mathrm{Re}\, z = 0\}$, and let $\rho(\sigma, \ell)$ denote the distance between the sets $\sigma$ and $\ell$. We assume that
$\sigma \subset \Pi$ (7)
but we allow $\lim_{t\to\infty} \rho(\sigma, \ell) = 0$. This is the basic novel point of our theory. The usual assumption in stability theory (see, e.g., [3]) is $\sup_{z \in \sigma} \mathrm{Re}\, z \le -\gamma_0$, where $\gamma_0 = \mathrm{const} > 0$. For example, if $A(t) = A^*(t)$, where $A^*$ is the adjoint operator, and if the spectrum of $A(t)$ consists of eigenvalues $\lambda_j(t)$, $0 \ge \lambda_j(t) \ge \lambda_{j+1}(t)$, then we allow $\lim_{t\to\infty} \lambda_1(t) = 0$. This is in contrast with the usual theory, where the assumption $\lambda_1(t) \le -\gamma_0$, with $\gamma_0 > 0$ a constant, is used.
Moreover, our results cover the case, apparently not considered earlier in the literature, when $\mathrm{Re}(A(t)u, u) \le \gamma(t)\|u\|^2$ with $\gamma(t) > 0$, $\lim_{t\to\infty} \gamma(t) = 0$. This means that the spectrum of $A(t)$ may be located in the half-plane $\mathrm{Re}\, z \le \gamma(t)$, where $\gamma(t) > 0$ but $\lim_{t\to\infty} \gamma(t) = 0$.
Our goal is to give sufficient conditions for the existence and uniqueness of the solution to problem (2) and (3) for all $t \ge 0$, that is, for global existence of $u(t)$; for its boundedness, $\sup_{t \ge 0} \|u(t)\| < \infty$; or for the relation $\lim_{t\to\infty} \|u(t)\| = 0$.
If $b(t) = 0$ in Equation (2), then $u(t) = 0$ solves Equation (2) with $u(0) = 0$. This solution is called the zero solution to Equation (2) with $b(t) = 0$.
If $b(t) \not\equiv 0$, then one says that problem (2) and (3) is a problem with persistently acting perturbations. The zero solution is called Lyapunov stable for problem (2) and (3) with persistently acting perturbations if for any $\epsilon > 0$, however small, one can find a $\delta = \delta(\epsilon) > 0$ such that if $\|u_0\| \le \delta$ and $\sup_{t \ge 0} \|b(t)\| \le \delta$, then the solution to the Cauchy problem (2) and (3) satisfies the estimate $\sup_{t \ge 0} \|u(t)\| \le \epsilon$.
We do not discuss here the method of Lyapunov functions for a study of stability (see, for example [22,23]).
The approach, developed in this work, consists of reducing the stability problems to some nonlinear differential inequality and estimating the solutions to this inequality.
In Section 2 two theorems are formulated and proved, containing the result concerning this inequality and its discrete analog. In Section 3 some results concerning Lyapunov stability of the zero solution to Equation (2) are obtained. In Section 4 we derive stability results in the case when $\gamma(t) < 0$ in formula (4). This means that the linear operator $A(t)$ in Equation (2) may have spectrum in the half-plane $\mathrm{Re}\, z > 0$.
Our results are closely related to the Dynamical Systems Method (DSM), see [6,7,12,13,18,19]. Recently these results were applied to biological problems ([24]) and to evolution equations with delay ([8]).
In the theory of chaos one of the reasons for the chaotic behavior of a solution to an evolution problem to appear is the lack of stability of solutions to this problem ([25,26]). The results presented in Section 3 can be considered as sufficient conditions for chaotic behavior not to appear in the evolution system described by problem (2) and (3).

## 2. A Differential Inequality

In this Section an essentially self-contained proof is given of an estimate for non-negative solutions of the nonlinear inequality
$\dot{g}(t) \le -\gamma(t)g(t) + \alpha(t, g(t)) + \beta(t), \quad t \ge 0; \quad g(0) = g_0; \quad \dot{g} := \frac{dg}{dt}$ (8)
In Section 3 some of the many possible applications of this estimate (see estimate (12) below) are demonstrated.
It is not assumed a priori that solutions $g(t) \ge 0$ to inequality (8) are defined on all of $\mathbb{R}_+$, that is, that these solutions exist globally. In Theorem 1 we give sufficient conditions for the global existence of $g(t)$. Moreover, under these conditions a bound on $g(t)$ is given, see estimate (12) in Theorem 1. This bound yields the relation $\lim_{t\to\infty} g(t) = 0$ if $\lim_{t\to\infty} \mu(t) = \infty$ in Equation (12).
Let us formulate our assumptions. We assume that $g(t) \ge 0$. We do not assume that the functions $\gamma, \alpha$ and $\beta$ are non-negative. However, in many applications the functions $\alpha$ and $\beta$ are bounds on some norms, and then these functions are non-negative. The function $\gamma(t)$ is often (but not always) non-negative. For example, this happens if $\gamma(t)$ comes from an estimate of the type $(Au, u) \ge \gamma(u, u)$. If the functions $\alpha$ and $\beta$ are bounds from above on some norms, then one may assume without loss of generality that these functions are smooth, because one can approximate a non-smooth function with arbitrary accuracy by an infinitely smooth function, and choose this smooth function to be greater than the function it approximates.
Assumption A1. We assume that the function $g(t) \ge 0$ is defined on some interval $[0, T)$, has a bounded derivative $\dot{g}(t) := \lim_{s \to +0} \frac{g(t+s) - g(t)}{s}$ from the right at any point of this interval, and satisfies inequality (8) at all $t$ at which $g(t)$ is defined. The functions $\gamma(t)$ and $\beta(t)$ are real-valued, defined on all of $\mathbb{R}_+$ and continuous there. The function $\alpha(t, g)$ is continuous on $\mathbb{R}_+ \times \mathbb{R}_+$ and locally Lipschitz with respect to $g$. This means that
$|\alpha(t, g) - \alpha(t, h)| \le L(T, M)|g - h|$ (9)
if $t \in [0, T]$, $|g| \le M$ and $|h| \le M$. Here $M = \mathrm{const} > 0$ and $L(T, M) > 0$ is a constant independent of $g$, $h$, and $t$.
Assumption A2. There exists a $C^1(\mathbb{R}_+)$ function $\mu(t) > 0$ such that
$\alpha\left(t, \frac{1}{\mu(t)}\right) + \beta(t) \le \frac{1}{\mu(t)}\left(\gamma(t) - \frac{\dot{\mu}(t)}{\mu(t)}\right), \quad \forall t \ge 0$ (10)
and
$\mu(0)g(0) \le 1$ (11)
One can replace the initial point $t = 0$ by some point $t_0 \in \mathbb{R}$ and assume that the interval of time is $[t_0, t_0 + T)$ and that the inequalities hold for $t \ge t_0$, rather than for $t \ge 0$. The proofs and the conclusions remain unchanged.
Theorem 1.
If Assumptions A1 and A2 hold, then any solution $g(t) \ge 0$ to inequality (8) exists on all of $\mathbb{R}_+$, i.e., $T = \infty$, and satisfies the following estimate:
$0 \le g(t) \le \frac{1}{\mu(t)} \quad \forall t \in \mathbb{R}_+$ (12)
If $\mu(0)g(0) < 1$, then $0 \le g(t) < \frac{1}{\mu(t)}$ $\forall t \in \mathbb{R}_+$.
Remark 1.
If $\lim_{t\to\infty} \mu(t) = \infty$, then $\lim_{t\to\infty} g(t) = 0$.
Proof of Theorem 1.
Let us rewrite the inequality for $\mu$ as follows:
$-\gamma(t)\mu^{-1}(t) + \alpha(t, \mu^{-1}(t)) + \beta(t) \le \frac{d\mu^{-1}(t)}{dt}$ (13)
Let $\phi(t)$ solve the following Cauchy problem:
$\dot{\phi}(t) = -\gamma(t)\phi(t) + \alpha(t, \phi(t)) + \beta(t), \quad t \ge 0; \quad \phi(0) = \phi_0$ (14)
The assumption that $\alpha(t, g)$ is locally Lipschitz with respect to $g$ guarantees local existence and uniqueness of the solution $\phi(t)$ to problem (14). From the comparison result (see the Comparison Lemma proved below) it follows that
$\phi(t) \le \mu^{-1}(t) \quad \forall t \ge 0$ (15)
provided that $\phi(0) \le \mu^{-1}(0)$, where $\phi(t)$ is the unique solution to problem (14). Let us take $\phi(0) = g(0)$. Then $\phi(0) \le \mu^{-1}(0)$ by the assumption, and a comparison argument, similar to the one that yields Equation (15), implies that
$g(t) \le \phi(t), \quad t \in [0, T)$ (16)
The inequality $\phi(0) \le \mu^{-1}(0)$ and Equations (15) and (16) imply
$g(t) \le \phi(t) \le \mu^{-1}(t), \quad t \in [0, T)$ (17)
By the assumption, the function $\mu(t)$ is defined for all $t \ge 0$ and is bounded on any compact subinterval of the set $[0, \infty)$. Consequently, the functions $\phi(t)$ and $g(t) \ge 0$ are defined for all $t \ge 0$, and estimate (12) is established.
If $g(0) < \mu^{-1}(0)$, then one obtains by a similar argument the strict inequality $g(t) < \mu^{-1}(t)$, $t \ge 0$.
Theorem 1 is proved. □
Let us now prove the comparison result that was used above, see, for example [27], Theorem III.4.1.
A Comparison Lemma.
Let
$\dot{\phi}(t) = f(t, \phi), \quad \phi(0) = \phi_0 \qquad (*)$
and
$\dot{\psi}(t) = g(t, \psi), \quad \psi(0) = \psi_0 \qquad (**)$
Assume $\psi_0 \ge \phi_0$ and
$g(t, x) \ge f(t, x) \qquad (***)$
for any $t$ and $x$ for which both $f$ and $g$ are defined. Assume that $f$ and $g$ are continuous functions in a set $[0, s) \times (a, b)$, $\phi_0 \in (a, b)$, $\psi$ is the maximal solution to (**), and $\phi$ is any solution to (*). Then $\phi(t) \le \psi(t)$ on the maximal interval $[0, T)$ of existence of both $\phi$ and $\psi$.
Proof of the Comparison Lemma.
First, let us assume for simplicity that problems (*) and (**) have unique solutions. Later we will discard this simplifying assumption. If $f$ and $g$ satisfy a local Lipschitz condition with respect to $\phi$ and $\psi$, respectively, then our simplifying assumption holds. Assume second, also for simplicity, that $g(t, x) > f(t, x)$. Under this simplifying assumption it is easy to prove the conclusion of the Lemma, because the graph of $\psi$ must lie above the graph of $\phi$ for $t > 0$. Indeed, in a small neighborhood $[0, \delta)$, where $\delta > 0$ is sufficiently small, the graph of $\psi$ lies above the graph of $\phi$. This is obviously true if $\phi_0 < \psi_0$, because of the continuity of $\phi$ and $\psi$. If $\phi_0 = \psi_0$, then the graph of $\psi$ lies above the graph of $\phi$ because $\dot{\phi}(0) < \dot{\psi}(0)$, due to the assumption $f(0, \phi_0) < g(0, \phi_0) = g(0, \psi_0)$. To check the claim that the graphs cannot cross, assume that there is a point $t_1 \in [0, T)$ such that $\phi(t_1) = \psi(t_1)$ and $\phi(t) < \psi(t)$ for $t \in (0, t_1)$. Then $\phi(t) - \phi(t_1) < \psi(t) - \psi(t_1)$ for $t \in (0, t_1)$. Divide this inequality by $t - t_1 < 0$ and get
$\frac{\phi(t) - \phi(t_1)}{t - t_1} > \frac{\psi(t) - \psi(t_1)}{t - t_1}$
Pass to the limit $t \to t_1$, $t < t_1$, in the above inequality, use the differential equations for $\phi$ and $\psi$ and the equality $\phi(t_1) = \psi(t_1)$, and obtain the following relation:
$f(t_1, \phi(t_1)) = \dot{\phi}(t_1) \ge \dot{\psi}(t_1) = g(t_1, \psi(t_1)) = g(t_1, \phi(t_1))$
This relation contradicts the assumption $f(t, x) < g(t, x)$. This contradiction proves the conclusion of the Comparison Lemma under the additional assumption $f(t, x) < g(t, x)$.
To prove the Comparison Lemma under the original assumption $f(t, x) \le g(t, x)$, let us consider problem (*) with $f$ replaced by $f_n := f - \frac{1}{n} < f$. Let $\phi_n$ solve problem (*) with $f$ replaced by $f_n$, and with the same initial condition as in (*). Since $f_n(t, x) < g(t, x)$, then, by what we have just proved, it follows that $\phi_n(t) \le \psi(t)$ on the common interval $[0, T_n)$ of existence of $\phi_n$ and $\psi$. By the standard result about continuous dependence of the solution to (*) on a parameter, one concludes that $\lim_{n\to\infty} T_n = T$ and $\lim_{n\to\infty} \phi_n(t) = \phi(t)$ for any $t \in [0, T)$. Therefore, passing to the limit $n \to \infty$ in the inequality $\phi_n(t) \le \psi(t)$, one gets the conclusion of the Comparison Lemma under the original assumption $f(t, x) \le g(t, x)$.
If the simplifying assumption concerning uniqueness of the solutions to (*) and (**) is dropped, then (*) and (**) may have many solutions. The limit of the solutions $\phi_n$ is the minimal solution to (*). If one considers problem (**) with $g$ replaced by $g_n := g + \frac{1}{n} > g$, and denotes by $\psi_n$ the corresponding solution, then the limit $\lim_{n\to\infty} \psi_n(t) = \psi(t)$ is the maximal solution to (**). In this case the above argument yields the conclusion of the Lemma with $\psi(t)$ being the maximal solution to (**) and $\phi(t)$ being any solution to (*). The Comparison Lemma is proved. □
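The Comparison Lemma can be observed numerically in a toy case (our own illustration, not from the paper): take $f(t, x) = -x$ and $g(t, x) = -x + 0.5$, so that $f \le g$, with equal initial data; the Euler approximations of $\phi$ and $\psi$ then stay ordered on the whole grid.

```python
# Illustrative check of the Comparison Lemma with f(t, x) = -x and
# g(t, x) = -x + 0.5 (so f <= g): with equal initial data, the forward
# Euler approximations satisfy phi(t) <= psi(t) at every grid point.

def euler_path(rhs, x0, t_end=5.0, n=5000):
    h = t_end / n
    xs = [x0]
    t = 0.0
    for _ in range(n):
        xs.append(xs[-1] + h * rhs(t, xs[-1]))
        t += h
    return xs

f = lambda t, x: -x
g = lambda t, x: -x + 0.5
phi = euler_path(f, 1.0)   # decays toward 0
psi = euler_path(g, 1.0)   # decays toward the equilibrium 0.5
print(all(p <= q + 1e-12 for p, q in zip(phi, psi)))  # True
```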
Remark 2.
If $\phi(t)$ is bounded from below for all $t \ge 0$, so that $c \le \phi(t)$ for all $t \ge 0$, and $\psi(t)$ exists globally, that is, for all $t \ge 0$, then the inequality $c \le \phi(t) \le \psi(t)$ and the continuity of $f(t, x)$ on the set $[0, \infty) \times \mathbb{R}$ imply that any solution $\phi$ to (*) exists globally. Indeed, if it existed only on a finite interval $[0, T)$, then it would have to tend to infinity as $t \to T$; this is impossible, because the bound $c \le \phi(t) \le \psi(t)$ and the global existence and continuity of $\psi$ do not allow $\phi(t)$ to grow to infinity as $t \to T$.
Let us formulate and prove a discrete version of Theorem 1.
Theorem 2.
Assume that $g_n \ge 0$, $\alpha(n, g_n) \ge 0$,
$g_{n+1} \le (1 - h_n\gamma_n)g_n + h_n\alpha(n, g_n) + h_n\beta_n; \quad h_n > 0, \quad 0 < h_n\gamma_n < 1$ (18)
and $\alpha(n, g_n) \ge \alpha(n, p_n)$ if $g_n \ge p_n$. If there exists a sequence $\mu_n > 0$ such that
$\alpha\left(n, \frac{1}{\mu_n}\right) + \beta_n \le \frac{1}{\mu_n}\left(\gamma_n - \frac{\mu_{n+1} - \mu_n}{h_n\mu_n}\right)$ (19)
and
$g_0 \le \frac{1}{\mu_0}$ (20)
then
$0 \le g_n \le \frac{1}{\mu_n}, \quad \forall n \ge 0$ (21)
Proof.
For $n = 0$ inequality (21) holds because of Equation (20). Assume that it holds for all $n \le m$, and let us check that it then holds for $n = m + 1$. If this is done, Theorem 2 is proved.
Using the inductive assumption, one gets:
$g_{m+1} \le (1 - h_m\gamma_m)\frac{1}{\mu_m} + h_m\alpha\left(m, \frac{1}{\mu_m}\right) + h_m\beta_m$
This and inequality (19) imply:
$g_{m+1} \le (1 - h_m\gamma_m)\frac{1}{\mu_m} + h_m\frac{1}{\mu_m}\left(\gamma_m - \frac{\mu_{m+1} - \mu_m}{h_m\mu_m}\right) = \mu_m^{-1} - \frac{\mu_{m+1} - \mu_m}{\mu_m^2} \le \mu_{m+1}^{-1}$
The last inequality is obvious, since it can be written as
$-(\mu_m - \mu_{m+1})^2 \le 0$
Theorem 2 is proved. □
Theorem 2 was formulated in [14] and proved in [15]. For completeness we include a proof, which is shorter than the one in [15].
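The discrete estimate (21) can be checked numerically on a concrete instance (all values below are our own choices, not from the paper): take $h_n = 0.01$, $\gamma_n = 1$, $\alpha(n, g) = 0.1g^2$, $\beta_n = 0$, and $\mu_n = 1.005^n$. Then $\frac{\mu_{n+1} - \mu_n}{h_n\mu_n} = 0.5$, and condition (19) reduces to $0.1/\mu_n \le 0.5$, which holds since $\mu_n \ge 1$; with $g_0 = 1 = 1/\mu_0$, Theorem 2 predicts $g_n \le 1.005^{-n}$.

```python
# Illustrative discrete check of Theorem 2 with gamma_n = 1, h_n = 0.01,
# alpha(n, g) = 0.1*g**2, beta_n = 0, mu_n = 1.005**n (values chosen here).
# Condition (19) reduces to 0.1/mu_n <= 0.5, which holds since mu_n >= 1.

h, gamma, c = 0.01, 1.0, 0.1
g = 1.0            # g_0 = 1 = 1/mu_0, so condition (20) holds
ok = True
for n in range(2000):
    mu = 1.005 ** n
    ok = ok and (0.0 <= g <= 1.0 / mu + 1e-12)   # the bound (21)
    g = (1.0 - h * gamma) * g + h * c * g ** 2   # equality case of (18)
print(ok)  # True
```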
Let us give a few simple examples of applications of Theorem 1.
Example 1.
Consider the inequality
$\dot{g}(t) \le tg - (t+1)^2g^2 - 2(t+1)^{-2}$ (22)
Assume $g \ge 0$. Choose $\mu(t) = t + 1$. Then inequality (10) holds if
$(t+1)\left[-(t+1)^2(t+1)^{-2} - 2(t+1)^{-2}\right] \le -t - (t+1)^{-1}$
and $g(0) \le 1$. Thus, inequality (10) holds if
$-t - 1 - 2(t+1)^{-1} \le -t - (t+1)^{-1}$
This inequality obviously holds. Therefore, any $g \ge 0$ that satisfies inequality (22) and $g(0) \le 1$ exists for all $t \ge 0$ and satisfies the estimate
$0 \le g(t) \le \frac{1}{t+1}$
In this example the linearized problem
$\dot{g}(t) = tg - 2(t+1)^{-2}, \quad g(0) = g_0$
has a unique solution
$g(t) = e^{t^2/2}\left[g(0) - 2\int_0^t e^{-s^2/2}(s+1)^{-2}\,ds\right]$
This solution tends to infinity as $t \to \infty$.
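A numerical sanity check of Example 1 (an illustration of ours, with the step size chosen here): integrate the equality case of (22) by forward Euler from $g(0) = 1$ and verify the Theorem 1 bound $g(t) \le 1/(t+1)$ for as long as the trajectory stays non-negative. (The equality-case trajectory may eventually become negative, after which the hypothesis $g \ge 0$ of Theorem 1 no longer applies, so the loop stops there.)

```python
# Illustrative check of Example 1: forward Euler for the equality case of
# (22), verifying g(t) <= 1/(t+1) while the trajectory is non-negative.

def rhs(t, g):
    return t * g - (t + 1) ** 2 * g ** 2 - 2.0 / (t + 1) ** 2

h, t, g = 1e-4, 0.0, 1.0   # g(0) = 1 satisfies mu(0)*g(0) <= 1
ok = True
while g >= 0.0 and t < 5.0:
    ok = ok and (g <= 1.0 / (t + 1) + 1e-9)   # the Theorem 1 bound
    g += h * rhs(t, g)
    t += h
print(ok)  # True
```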
Example 2.
Consider a classical problem
$\dot{u}(t) = A(t)u + F(t, u), \quad u(0) = u_0$ (23)
where $A(t)$ is a linear operator in $\mathbb{R}^n$ and $F$ is a nonlinear operator. Assume that $\mathrm{Re}(A(t)u, u) \le -\gamma(u, u)$, where $\gamma = \mathrm{const} > 0$, $\|F(t, u)\| \le c\|u\|^p$, $p = \mathrm{const} > 1$, $c = \mathrm{const} > 0$, and $\|\cdot\|$ is the norm of a vector in $\mathbb{R}^n$. We also assume that Equation (23) has the following property:
Property P:
If a solution to Equation (23) is defined on the maximal interval of its existence $[0, T)$ and $T < \infty$, then $\lim_{t \to T-0} \|u(t)\| = \infty$.
It is known (see, for example, [27]) that Property P holds if $F(t, u)$ is a continuous function on $[0, T] \times \mathbb{R}^n$.
By Peano’s theorem the Cauchy problem
$\dot{u}(t) = f(t, u), \quad u(0) = u_0$ (24)
where $u \in \mathbb{R}^n$, has a local solution on an interval $[0, a)$, provided that $f$ is a continuous function on $[0, T] \times D(u_0)$, where $a \in (0, T)$ and $D(u_0)$ is a neighborhood of $u_0$. This solution is non-unique, in general. One can give an explicit estimate of the length $a$ of the interval on which the solution does exist. Namely, $a = \min(T, b/M)$, where $M := \max_{|u - u_0| \le b,\, t \in [0, T]} |f(t, u)|$, and the neighborhood $D(u_0)$ is taken to be the set $\{u : |u - u_0| \le b\}$.
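The estimate $a = \min(T, b/M)$ is easy to evaluate on a concrete scalar problem (our own example): for $\dot{u} = u^2$, $u(0) = 1$, with $T = 1$ and $b = 1$, the maximum of $|f|$ over the box $|u - 1| \le 1$ is $M = 4$, so $a = 0.25$; the true solution $u(t) = 1/(1 - t)$ in fact exists on $[0, 1)$, so the Peano estimate is conservative, as expected.

```python
# Illustrative computation of the Peano existence-interval estimate
# a = min(T, b/M) for u' = u**2, u(0) = 1 (example values chosen here).

def peano_interval(M, T, b):
    """Length estimate of the local existence interval: min(T, b/M)."""
    return min(T, b / M)

T, b = 1.0, 1.0
# |u - 1| <= 1 means u in [0, 2]; u**2 is monotone there, so the max
# of |f| over the box is attained at the endpoint u = 2.
M = max(abs(u) ** 2 for u in (0.0, 2.0))
a = peano_interval(M, T, b)
print(a)  # 0.25 (the true solution 1/(1 - t) blows up only at t = 1)
```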
It is known that in every infinite-dimensional Banach space the Peano theorem fails. Therefore, in an infinite-dimensional Banach space we assume that problems (23) and (24) have a solution, and that if $[0, T)$ is the maximal interval of existence of the solution, then Property P holds. This happens, for example, if $f(t, u)$ satisfies a local Lipschitz condition with respect to $u$ and is continuous with respect to $t \in [0, T]$. Indeed, if a local Lipschitz condition holds, then the local interval of existence of the solution to the Cauchy problem (24) is of length $b = \min(RM^{-1}, L)$, provided that $f$ is continuous with respect to $t$ and satisfies the estimates $\|f(t, u)\| \le M$, $\|f(t, u) - f(t, v)\| \le L\|u - v\|$, in the region $[0, T] \times B(u_0, R)$, $B(u_0, R) := \{u : \|u - u_0\| \le R\}$. Under these assumptions the solution to problem (24) is unique and stays in the ball $B(u_0, R)$ for $t \in [0, b]$.
To see that Property P holds for problem (24) if $f$ satisfies a local Lipschitz condition with respect to $u$, assume that the solution to Equation (24) does not exist for $t > T$. Under our assumptions, if the solution $u$ of problem (24) satisfies the inequality $\sup_{0 \le t < T} \|u(t)\| < \infty$, then the constants $M$, $L$ and $R$ are finite. Therefore $b > 0$. Take the initial point $t_0 = T - 0.5b$. By the local existence theorem the solution $u(t)$ exists on the interval $[T - 0.5b, T + 0.5b]$. This is a contradiction, since we have assumed that this solution does not exist for $t > T$. This contradiction proves that Property P holds for problem (24) if $f$ satisfies a local Lipschitz condition.
Let us use Theorem 1 to prove asymptotic stability of the zero solution to Equation (23) and to illustrate the application of our general method for a study of stability of solutions to abstract evolution problems, the method that we develop below.
Let $g(t) := \|u(t)\|$, where the norm is taken in $\mathbb{R}^n$. Take the dot product of Equation (23) with $u$, then take the real part of both sides of the resulting equation, and get
$\mathrm{Re}(\dot{u}, u) = g\dot{g} = \mathrm{Re}(Au, u) + \mathrm{Re}(F(t, u), u) \le -\gamma g^2 + cg^{p+1}$
Since $g \ge 0$, one obtains from the above inequality an inequality of the type of Equation (8), namely,
$\dot{g}(t) \le -\gamma g(t) + cg^p(t), \quad p = \mathrm{const} > 1$
where $\gamma$ and $c$ are positive constants. Choose
$\mu(t) = \lambda e^{at}$
where $\lambda = \mathrm{const} > 0$ and $a = \mathrm{const} \in (0, \gamma)$. Note that $a$ can be chosen arbitrarily close to $\gamma$. We choose $\lambda$ later. Denote $b := \gamma - a > 0$. Then inequality (11), $\lambda g(0) \le 1$, holds for any $g(0)$ if $c > 0$ is sufficiently small, because $\lambda$ may then be chosen small. Inequality (10) holds if
$c\lambda^{-(p-1)}e^{-(p-1)at} \le \gamma - a = b$
Since $p > 1$, this inequality holds if $c\lambda^{-(p-1)} \le b$. In turn, the last inequality holds for an arbitrary fixed $c > 0$ and an arbitrarily small fixed $b > 0$, provided that $\lambda > 0$ is sufficiently large.
One concludes that for any initial data $u_0$ the solution to Equation (23) exists globally and admits the estimate $\|u(t)\| \le \lambda^{-1}e^{-at}$, where the positive constant $a < \gamma$ can be chosen arbitrarily close to $\gamma$ if the positive constant $c$ is sufficiently small.
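A scalar instance of this estimate can be checked numerically (all values below are our own choices): for $\dot{g} = -\gamma g + cg^p$ with $\gamma = 1$, $c = 1$, $p = 2$ and $\mu(t) = \lambda e^{at}$ with $a = 0.5$, condition (10) requires $\lambda \ge c/(\gamma - a) = 2$ and condition (11) requires $\lambda g(0) \le 1$, so we take $\lambda = 2$, $g(0) = 0.5$; Theorem 1 then predicts $g(t) \le \lambda^{-1}e^{-at}$.

```python
import math

# Illustrative scalar version of Example 2: g' = -gamma*g + c*g**p with
# gamma = 1, c = 1, p = 2, a = 0.5, lam = 2, g(0) = 0.5 (values chosen
# here to satisfy (10) and (11)). Theorem 1 predicts g <= e**(-a*t)/lam.

gamma, c, p, a, lam = 1.0, 1.0, 2.0, 0.5, 2.0
h, t, g = 1e-4, 0.0, 0.5
ok = True
while t < 5.0:
    ok = ok and (g <= math.exp(-a * t) / lam + 1e-6)
    g += h * (-gamma * g + c * g ** p)
    t += h
print(ok)  # True
```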
The above argument remains valid also for unbounded, closed, densely defined linear operators $A(t)$, provided that Property P holds.
If $A(t) = A$ is the generator of a $C_0$ semigroup $T(t)$ and $F$ satisfies a local Lipschitz condition, then problem (23) is equivalent to the integral equation $u(t) = T(t)u_0 + \int_0^t T(t-s)F(s, u(s))\,ds$, and this equation may be useful for a study of the global existence of the solution to problem (23) (see [28]).
Example 3.
Consider an example in which the solution blows up in finite time, so it does not exist globally. Consider the problem
$\dot{u} - \Delta u = u^2 \quad \text{in} \ [0, \infty) \times D \subset \mathbb{R}^n; \quad u_N = 0; \quad u(0, x) = u_0(x)$ (25)
Here $D$ is a bounded domain with a smooth boundary $S$, $N$ is the outer unit normal to $S$, $u_N$ is the normal derivative of $u$ on $S$, and $u_0 > 0$ is a smooth function. Let
$g_0 := \int_D u_0(x)\,dx, \quad g(t) := \int_D u(t, x)\,dx$
Integrate Equation (25) over $D$ and get $\dot{g}(t) = \int_D u^2\,dx$. Use the inequality
$\left(\int_D u\,dx\right)^2 \le c^{-1}\int_D u^2\,dx$
where $c = c(D) = \mathrm{const} > 0$, and get $\dot{g} \ge cg^2$. Integrating this inequality, one obtains $g(t) \ge \left[\frac{1}{g_0} - ct\right]^{-1}$. Since $c > 0$ and $g_0 > 0$, it follows that
$\lim_{t \to t_b} g(t) = \infty$
where $t_b := \frac{1}{cg_0}$ is the blow-up time, and $t < t_b$. Consequently, for any initial data with $g_0 > 0$ the solution to Equation (25) does not exist globally.
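The blow-up mechanism is already visible in the equality case of the scalar inequality (an illustration of ours, with $c = 1$, $g_0 = 1$ chosen here): $\dot{g} = cg^2$ has the solution $g(t) = \left[\frac{1}{g_0} - ct\right]^{-1}$, which blows up at $t_b = \frac{1}{cg_0}$.

```python
# Illustrative blow-up check for Example 3: the equality case g' = c*g**2
# with c = 1, g(0) = 1 has the solution g(t) = (1/g0 - c*t)**-1, which
# blows up at t_b = 1/(c*g0) = 1.

c, g0 = 1.0, 1.0
t_b = 1.0 / (c * g0)
g = lambda t: 1.0 / (1.0 / g0 - c * t)
print(t_b)        # 1.0
print(g(0.999))   # about 1000: g grows without bound as t -> t_b
```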
Example 4.
Consider the following equation:
$\dot{u} + A(t)u + \phi(u) - \psi(t, u) = f(t, u), \quad u(0, x) = u_0(x)$ (26)
where $u = u(t, x)$, and $\phi$ and $\psi(t, u)$ are smooth functions growing to infinity as $|u| \to \infty$. Let us assume that
$u\phi(u) \ge 0, \quad u\psi(t, u) \ge 0 \quad \forall t \ge 0$
and
$u\psi(t, u) \le \alpha(t)|u|^3, \quad |uf(t, u)| \le \beta(t)|u|$
where $\alpha(t) > 0$ and $\beta(t) > 0$ are continuous functions, $x \in D \subset \mathbb{R}^n$, and $D$ is a bounded domain,
$\mathrm{Re}(Au, u) \ge \gamma(u, u) \quad \forall u \in D(A), \quad \gamma = \mathrm{const} > 0$
$A$ is an operator in the Hilbert space $H = L^2(D)$; the domain of definition of $A$, $D(A)$, is a linear set dense in $H$; $(u, v)$ is the inner product in $H$; and $\|u\|^2 = (u, u)$. An example of $A$ is $A = -\Delta$, the Laplacian with the Dirichlet boundary condition on $S$, the boundary of $D$. Denote $g(t) := \|u(t)\|$. We want to estimate the large time behavior of the solution $u$ to Equation (26).
Take the inner product in $H$ of Equation (26) with $u$, then take the real part of both sides of the resulting equation, and get
$g\dot{g} \le -\gamma g^2 + \alpha g^3 + \beta g$
Since $g \ge 0$, one obtains an inequality of the type of Equation (8), namely
$\dot{g} \le -\gamma g + \alpha(t)g^2 + \beta$
Now it is possible to use Theorem 1.
Choose $\mu(t) = \lambda e^{kt}$, where $\lambda$ and $k$ are positive constants, $k < \gamma$. Assume that $\lambda g_0 \le 1$, where $g_0 := \|u_0(x)\|$. Then inequality (11) holds for any initial data $u_0$, that is, for any $g_0$, if $\lambda$ is sufficiently small. Inequality (10) holds if
$\frac{\alpha(t)e^{-kt}}{\lambda} + \lambda e^{kt}\beta(t) \le \gamma - k$
One can easily impose various conditions on $\alpha$ and $\beta$ so that the above inequality holds. For example, assume that $\alpha$ decays monotonically as $t$ grows, $\frac{\alpha(0)}{\lambda} < (\gamma - k)/2$, and $\beta(t) \le \nu e^{-k't}$, where $k' > k$, $k' = \mathrm{const}$, $\nu > 0$ is a constant, and $\lambda\nu \le (\gamma - k)/2$. Then inequality (10) holds, and it implies that
$\|u(t)\| \le \frac{e^{-kt}}{\lambda}$
so that the exponential decay of $\|u(t)\|$ as $t \to \infty$ is established.
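A scalar analogue of this estimate can also be verified numerically (all parameter values below are our own choices, made so that the sufficient conditions of the preceding paragraph hold): with $\gamma = 1$, $k = 0.5$, $\lambda = 1$, $\alpha(t) = 0.2e^{-t}$ and $\beta(t) = 0.25e^{-t}$ one has $\frac{\alpha(0)}{\lambda} = 0.2 < 0.25$ and $\lambda\nu = 0.25 \le 0.25$, so Theorem 1 predicts $g(t) \le e^{-kt}/\lambda$.

```python
import math

# Illustrative scalar analogue of Example 4: g' = -gamma*g + alpha(t)*g**2
# + beta(t), with gamma = 1, k = 0.5, lam = 1, alpha(t) = 0.2*e**-t,
# beta(t) = 0.25*e**-t (values chosen here to satisfy (10) and (11)).

gamma, k, lam = 1.0, 0.5, 1.0
alpha = lambda t: 0.2 * math.exp(-t)
beta = lambda t: 0.25 * math.exp(-t)
h, t, g = 1e-4, 0.0, 1.0          # lam*g(0) = 1 satisfies (11)
ok = True
while t < 5.0:
    ok = ok and (g <= math.exp(-k * t) / lam + 1e-6)   # predicted bound
    g += h * (-gamma * g + alpha(t) * g ** 2 + beta(t))
    t += h
print(ok)  # True
```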
In Section 3 and Section 4 some stability results for abstract evolution problems are presented in detail. These results are formulated in four theorems. The basic ideas are similar to the ones discussed in examples in this Section, but new assumptions and new technical tools are used.

## 3. Stability Results

In this Section we develop a method for the study of stability of solutions to the evolution problems described by the Cauchy problem (2) and (3) for abstract differential equations with a dissipative bounded linear operator $A(t)$ and a nonlinearity $F(t,u)$ satisfying inequality (5). Condition (5) means that for sufficiently small $\|u(t)\|$ the nonlinearity is of higher order of smallness than $\|u(t)\|$. We also study the large time behavior of the solution to problem (2) and (3) with persistently acting perturbations $b(t)$.
In this paper we assume that $A ( t )$ is a bounded linear dissipative operator, but our methods are valid also for unbounded linear dissipative operators $A ( t )$, for which one can prove global existence of the solution to problem (2) and (3). We do not go into further detail in this paper.
Let us formulate the first stability result.
Theorem 3.
Assume that Re$(Au,u) \le -k\|u\|^2$ $\forall u \in H$, $k = \mathrm{const} > 0$, and inequality (4) holds with $\gamma(t) = k$. Then the solution to problem (2) and (3) with $b(t) = 0$ satisfies the estimate $\|u(t)\| = O(e^{-(k-\epsilon)t})$ as $t \to \infty$. Here $0 < \epsilon < k$ can be chosen arbitrarily small if $\|u_0\|$ is sufficiently small.
This theorem implies asymptotic stability in the sense of Lyapunov of the zero solution to Equation (2) with $b ( t ) = 0$. Our proof of Theorem 3 is new and very short.
Proof of Theorem 3.
Multiply Equation (2) (in which $b(t) = 0$ is assumed) by u, denote $g = g(t) := \|u(t)\|$, take the real part, and use assumption (4) with $\gamma(t) = k > 0$, to get
$g \dot g \le -k g^2 + c_0 g^{p+1}, \quad p > 1$
If $g(t) > 0$, then the derivative $\dot g$ does exist, and
$\dot g(t) = \frac{\mathrm{Re}\,(\dot u(t), u(t))}{\|u(t)\|}$
as one can check. If $g(t) = 0$ on an open subset of $\mathbb{R}_+$, then the derivative $\dot g$ does exist on this subset and $\dot g(t) = 0$ there. If $g(t) = 0$ but in any neighborhood $(t-\delta, t+\delta)$ there are points at which g does not vanish, then by $\dot g$ we understand the derivative from the right, that is,
$\dot g(t) := \lim_{s \to +0} \frac{g(t+s) - g(t)}{s} = \lim_{s \to +0} \frac{g(t+s)}{s}$
This limit does exist and is equal to $\|\dot u(t)\|$. Indeed, the function $u(t)$ is continuously differentiable, so
$\lim_{s \to +0} \frac{\|u(t+s)\|}{s} = \lim_{s \to +0} \frac{\|s\dot u(t) + o(s)\|}{s} = \|\dot u(t)\|$
The assumption about the existence of the bounded derivative $\dot g(t)$ from the right in Theorem 3 was made because the function $\|u(t)\|$ does not, in general, have a derivative in the usual sense at the points t at which $\|u(t)\| = 0$, no matter how smooth the function $u(t)$ is at such a point. Indeed,
$\lim_{s \to -0} \frac{\|u(t+s)\|}{s} = \lim_{s \to -0} \frac{\|s\dot u(t) + o(s)\|}{s} = -\|\dot u(t)\|$
because $\lim_{s \to -0} \frac{|s|}{s} = -1$. Consequently, the right and left derivatives of $\|u(t)\|$ at a point t at which $\|u(t)\| = 0$ do exist, but are different. Therefore, the derivative of $\|u(t)\|$ in the usual sense does not exist at such a point.
However, as we have proved above, the derivative $g ˙ ( t )$ from the right does exist always, provided that $u ( t )$ is continuously differentiable at the point t.
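The simplest illustration of these one-sided derivatives is the scalar case $H = \mathbb{R}$ with $u(t) = t - 1$, so that $g(t) = |t-1|$ vanishes at $t = 1$; the two one-sided difference quotients there give $+\|\dot u(1)\|$ and $-\|\dot u(1)\|$:

```python
def g(t):
    # g(t) = ||u(t)|| for the scalar example u(t) = t - 1, which vanishes at t = 1
    return abs(t - 1.0)

s = 1e-6
right_derivative = (g(1.0 + s) - g(1.0)) / s      # limit from the right: +1 = ||u'(1)||
left_derivative = (g(1.0 - s) - g(1.0)) / (-s)    # limit from the left:  -1 = -||u'(1)||
```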
Since $g \ge 0$, inequality (27) yields inequality (8) with $\gamma(t) = k > 0$, $\beta(t) = 0$, and $\alpha(t,g) = c_0 g^p$, $p > 1$. Inequality (10) takes the form
$\frac{c_0}{\mu^p(t)} \le \frac{1}{\mu(t)}\left(k - \frac{\dot\mu(t)}{\mu(t)}\right), \quad \forall t \ge 0$
Let
$\mu(t) = \lambda e^{bt}, \quad \lambda, b = \mathrm{const} > 0$
We choose the constants λ and b later. Inequality (10), with μ defined in Equation (29), takes the form
$\frac{c_0}{\lambda^{p-1} e^{(p-1)bt}} + b \le k, \quad \forall t \ge 0$
This inequality holds if it holds at $t = 0$, that is, if
$\frac{c_0}{\lambda^{p-1}} + b \le k$
Let $\epsilon > 0$ be an arbitrarily small number. Choose $b = k - \epsilon > 0$. Then Equation (31) holds if
$\lambda \ge \left(\frac{c_0}{\epsilon}\right)^{\frac{1}{p-1}}$
Condition (11) holds if
$\|u_0\| = g(0) \le \frac{1}{\lambda}$
We choose λ and b so that inequalities (32) and (33) hold. This is always possible if $b < k$ and $∥ u 0 ∥$ is sufficiently small.
By Theorem 1, if inequalities (31)–(33) hold, then one gets estimate (12):
$0 \le g(t) = \|u(t)\| \le \frac{e^{-(k-\epsilon)t}}{\lambda}, \quad \forall t \ge 0$
Theorem 3 is proved. □
Remark 3.
One can formulate the result differently. Namely, choose $\lambda = \|u_0\|^{-1}$. Then inequality (33) holds, and becomes an equality. Substitute this λ into Equation (31) and get
$c_0 \|u_0\|^{p-1} + b \le k$
Since the choice of the constant $b > 0$ is at our disposal, this inequality can always be satisfied if $c_0\|u_0\|^{p-1} < k$. Therefore, the condition
$c_0\|u_0\|^{p-1} < k$
is a sufficient condition for the estimate
$\|u(t)\| \le \|u_0\| e^{-(k - c_0\|u_0\|^{p-1})t}$
to hold.
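For $p = 2$ the extremal case of inequality (27) is the Bernoulli equation $\dot g = -kg + c_0 g^2$, which has a closed-form solution, so the estimate of Remark 3 can be verified directly. A minimal check with illustrative values ($k = c_0 = 1$, $\|u_0\| = 0.5$, so $c_0\|u_0\|^{p-1} < k$):

```python
import math

# Illustrative parameters (our choice) with c0*g0^{p-1} < k, here p = 2.
k, c0, g0 = 1.0, 1.0, 0.5

def g_exact(t):
    # Closed-form solution of g' = -k*g + c0*g^2, g(0) = g0 (p = 2),
    # obtained from the substitution v = 1/g (linear equation for v).
    e = math.exp(-k * t)
    return g0 * e / (1.0 - (c0 / k) * g0 * (1.0 - e))

def bound(t):
    # The estimate of Remark 3: ||u(t)|| <= ||u0|| * exp(-(k - c0*||u0||^{p-1}) t)
    return g0 * math.exp(-(k - c0 * g0) * t)

ok = all(g_exact(0.01 * i) <= bound(0.01 * i) + 1e-12 for i in range(1001))
```

For these values the comparison reduces to $(e^{t/2} - 1)^2 \ge 0$, so the bound holds for all $t \ge 0$, with equality only at $t = 0$.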
Let us formulate the second stability result.
Theorem 4.
Assume that inequalities (4)–(6) hold and
$\gamma(t) = \frac{c_1}{(1+t)^{q_1}}, \quad q_1 \le 1; \quad c_1, q_1 = \mathrm{const} > 0$
Suppose that $\epsilon \in (0, c_1)$ is an arbitrarily small fixed number,
$\lambda \ge \left(\frac{c_0}{\epsilon}\right)^{1/(p-1)} \quad \text{and} \quad \|u(0)\| \le \frac{1}{\lambda}$
Then the unique solution to (2) and (3) with $b(t) = 0$ exists on all of $\mathbb{R}_+$ and
$0 \le \|u(t)\| \le \frac{1}{\lambda (1+t)^{c_1-\epsilon}}, \quad \forall t \ge 0$
Theorem 4 gives the size of the initial data, namely, $\|u(0)\| \le \frac{1}{\lambda}$, for which estimate (36) holds. For a fixed nonlinearity $F(t,u)$, that is, for a fixed constant $c_0$ from assumption (5), the maximal size of $\|u(0)\|$ is determined by the minimal size of λ.
The minimal size of λ is determined by the inequality $\lambda \ge \left(\frac{c_0}{\epsilon}\right)^{1/(p-1)}$, that is, by the maximal size of $\epsilon \in (0, c_1)$. If $\epsilon < c_1$ and $c_1 - \epsilon$ is very small, then $\lambda > \lambda_{min} := \left(\frac{c_0}{c_1}\right)^{1/(p-1)}$, and λ can be chosen very close to $\lambda_{min}$.
Proof of Theorem 4.
Let
$\mu(t) = \lambda (1+t)^{\nu}, \quad \lambda, \nu = \mathrm{const} > 0$
We will choose the constants λ and ν later. Inequality (10) (with $β ( t ) = 0$) holds if
$\frac{c_0}{\lambda^{p-1}(1+t)^{(p-1)\nu}} + \frac{\nu}{1+t} \le \frac{c_1}{(1+t)^{q_1}}, \quad \forall t \ge 0$
If
$q_1 \le 1 \quad \text{and} \quad (p-1)\nu \ge q_1$
then inequality (38) holds if
$\frac{c_0}{\lambda^{p-1}} + \nu \le c_1$
Let $\epsilon > 0$ be an arbitrarily small number. Choose
$\nu = c_1 - \epsilon$
Then inequality (40) holds if inequality (32) holds. Inequality (11) holds because we have assumed in Theorem 4 that $\|u(0)\| \le \frac{1}{\lambda}$. Combining inequalities (32), (33) and (12), one obtains the desired estimate:
$0 \le \|u(t)\| = g(t) \le \frac{1}{\lambda (1+t)^{c_1-\epsilon}}, \quad \forall t \ge 0$
Condition (32) holds for any fixed small $\epsilon > 0$ if λ is sufficiently large. Condition (33) holds for any fixed large λ if $\|u_0\|$ is sufficiently small.
Theorem 4 is proved. □
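The power decay of Theorem 4 can be sanity-checked numerically for $p = 2$. The values below are our illustrative choices: $c_1 = 1.5$, $q_1 = 1$, $c_0 = 0.5$, $\epsilon = 0.5$, so $\nu = c_1 - \epsilon = 1$ satisfies $(p-1)\nu \ge q_1$, and $\lambda = 1 \ge (c_0/\epsilon)^{1/(p-1)}$:

```python
# Illustrative parameters (our choice) so that conditions (32), (35), (39), (40) hold.
c1, q1, c0, p = 1.5, 1.0, 0.5, 2.0
eps, lam = 0.5, 1.0         # lam >= (c0/eps)^{1/(p-1)} = 1
nu = c1 - eps               # nu = 1, so (p-1)*nu = 1 >= q1

# Euler integration of the extremal ODE g' = -c1/(1+t)^{q1} g + c0 g^p.
dt, T = 1e-3, 50.0
g, t = 0.8, 0.0             # g0 = 0.8 <= 1/lam
ok = True
while t < T:
    g += dt * (-c1 / (1.0 + t) ** q1 * g + c0 * g ** p)
    t += dt
    # Estimate (36): g(t) <= 1/(lam*(1+t)^{c1-eps})
    ok = ok and g <= 1.0 / (lam * (1.0 + t) ** nu) + 1e-6
```

For these values the substitution $v = 1/g$ gives the exact solution $v = 0.25(1+t)^{3/2} + (1+t)$, so $g(t) < (1+t)^{-1}$ with a growing margin, consistent with the numerical check.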
Let us formulate a stability result in which we assume that $b(t) \not\equiv 0$. The function $b(t)$ has the physical meaning of persistently acting perturbations.
Theorem 5.
Let $b(t) \not\equiv 0$, conditions (4)–(6) and (35) hold, and
$\beta(t) \le \frac{c_2}{(1+t)^{q_2}}$
where $c_2 > 0$ and $q_2 > 0$ are constants. Assume that
$q_1 \le \min\{1, \; q_2 - \nu, \; \nu(p-1)\}, \quad \|u(0)\| \le \lambda_0^{-1}$
where $\lambda_0 > 0$ is a constant defined in Equation (51), and
$c_2^{1-\frac{1}{p}} c_0^{\frac{1}{p}} (p-1)^{\frac{1}{p}} \frac{p}{p-1} + \nu \le c_1$
Then problem (2) and (3) has a unique global solution $u ( t )$, and the following estimate holds:
$\|u(t)\| \le \frac{1}{\lambda_0 (1+t)^{\nu}}, \quad \forall t \ge 0$
Proof of Theorem 5.
Let $g(t) := \|u(t)\|$. As in the proof of Theorem 4, multiply (2) by u, take the real part, use the assumptions of Theorem 5, and get the inequality:
$\dot g \le -\frac{c_1}{(1+t)^{q_1}} g + c_0 g^p + \frac{c_2}{(1+t)^{q_2}}$
Choose $\mu(t)$ by formula (37). Apply Theorem 1 to inequality (47). Condition (10) now takes the form
$\frac{c_0}{\lambda^{p-1}(1+t)^{(p-1)\nu}} + \frac{\lambda c_2}{(1+t)^{q_2-\nu}} + \frac{\nu}{1+t} \le \frac{c_1}{(1+t)^{q_1}} \quad \forall t \ge 0$
If assumption (44) holds, then inequality (48) holds provided that it holds for $t = 0$, that is, provided that
$\frac{c_0}{\lambda^{p-1}} + \lambda c_2 + \nu \le c_1$
Condition (11) holds if
$g(0) \le \frac{1}{\lambda}$
The function $h(\lambda) := \frac{c_0}{\lambda^{p-1}} + \lambda c_2$ attains its global minimum on the interval $(0, \infty)$ at the value
$\lambda = \lambda_0 := \left(\frac{(p-1)c_0}{c_2}\right)^{1/p}$
and this minimum is equal to
$h_{min} = c_0^{\frac{1}{p}} c_2^{1-\frac{1}{p}} (p-1)^{\frac{1}{p}} \frac{p}{p-1}$
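This minimization is elementary calculus, and the closed form for $h_{min}$ can be confirmed numerically for sample values of $p$, $c_0$, $c_2$ (illustrative choices):

```python
# Numerical check (illustrative values) that h(lam) = c0/lam^{p-1} + lam*c2
# is minimized at lam0 = ((p-1)*c0/c2)^{1/p} with the stated minimum value.
p, c0, c2 = 3.0, 2.0, 5.0

h = lambda lam: c0 / lam ** (p - 1) + lam * c2
lam0 = ((p - 1) * c0 / c2) ** (1.0 / p)
h_min = c0 ** (1 / p) * c2 ** (1 - 1 / p) * (p - 1) ** (1 / p) * p / (p - 1)

closed_form_matches = abs(h(lam0) - h_min) < 1e-9
# lam0 beats nearby values of lambda on a small grid:
is_minimum = all(h(lam0) <= h(lam0 * (1 + s))
                 for s in (-0.5, -0.1, -0.01, 0.01, 0.1, 0.5))
```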
Thus, substituting $λ = λ 0$ in formula (49), one concludes that inequality (49) holds if the following inequality holds:
$c_0^{\frac{1}{p}} c_2^{1-\frac{1}{p}} (p-1)^{\frac{1}{p}} \frac{p}{p-1} + \nu \le c_1$
while inequality (50) holds if
$\|u(0)\| \le \frac{1}{\lambda_0}$
Therefore, by Theorem 1, if conditions (52)–(53) hold, then estimate (12) yields
$\|u(t)\| \le \frac{1}{\lambda_0 (1+t)^{\nu}}, \quad \forall t \ge 0$
where $\lambda_0$ is defined in Equation (51).
Theorem 5 is proved. □
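Theorem 5 can be sanity-checked numerically as well. The sketch below uses illustrative parameters satisfying (44), (52) and (53) ($p = 2$, $c_0 = 0.5$, $c_2 = 0.125$, $c_1 = 1$, $q_1 = \nu = 0.5$, $q_2 = 1.5$, so $\lambda_0 = 2$) and integrates the extremal case of inequality (47):

```python
# Illustrative parameters (our choice) satisfying (44), (52), (53):
c1, q1 = 1.0, 0.5
c0, p = 0.5, 2.0
c2, q2 = 0.125, 1.5
nu = 0.5                                  # q1 <= min{1, q2-nu, nu*(p-1)} = 0.5
lam0 = ((p - 1) * c0 / c2) ** (1.0 / p)   # lam0 = 2, so ||u(0)|| <= 0.5 is allowed

# Euler integration of g' = -c1/(1+t)^{q1} g + c0 g^p + c2/(1+t)^{q2}.
dt, T = 1e-3, 20.0
g, t = 0.4, 0.0                            # g0 = 0.4 <= 1/lam0
ok = True
while t < T:
    g += dt * (-c1 / (1.0 + t) ** q1 * g + c0 * g ** p + c2 / (1.0 + t) ** q2)
    t += dt
    # Estimate (46): g(t) <= 1/(lam0*(1+t)^{nu})
    ok = ok and g <= 1.0 / (lam0 * (1.0 + t) ** nu) + 1e-6
```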

## 4. Stability Results under Non-classical Assumptions

Let us assume that Re$(A(t)u,u) \le \gamma(t)\|u\|^2$, where $\gamma(t) > 0$. This corresponds to the case when the linear operator $A(t)$ may have spectrum in the right half-plane Re$z > 0$. Our goal is to derive, under this assumption, sufficient conditions on $\gamma(t)$, $\alpha(t,g)$, and $\beta(t)$ under which the solution to problem (2) is bounded as $t \to \infty$, and stable. We want to demonstrate a new methodology, based on Theorem 1. For this reason we restrict ourselves to a derivation of the simplest results under simplifying assumptions. However, our derivation illustrates a method applicable to many other problems.
Our assumptions in this Section are:
$\beta(t) = 0, \quad \gamma(t) = c_1(1+t)^{-m_1}, \quad \alpha(t,g) = c_2(1+t)^{-m_2} g^p, \quad p > 1$
Let us choose
$\mu(t) = d + \lambda(1+t)^{-n}$
The constants $c_j, m_j, \lambda, d, n$ are assumed positive.
We want to show that a suitable choice of these parameters allows one to check that the basic inequality (10) for μ is satisfied and, therefore, to obtain inequality (12) for $g(t)$. This inequality allows one to derive global boundedness of the solution to Equation (2), and the Lyapunov stability of the zero solution to Equation (2) (with $u_0 = 0$). Note that under our assumptions $\dot\mu < 0$ and $\lim_{t\to\infty}\mu(t) = d$. We choose $\lambda = d$. Then $(2d)^{-1} \le \mu^{-1}(t) \le d^{-1}$ for all $t \ge 0$. The basic inequality (10) takes the form
$c_1(1+t)^{-m_1} + c_2(1+t)^{-m_2}\left[d + \lambda(1+t)^{-n}\right]^{-p+1} \le n\lambda(1+t)^{-n-1}\left[d + \lambda(1+t)^{-n}\right]^{-1}$
and
$g_0 (d + \lambda) \le 1$
Since we have chosen $λ = d$, condition (56) is satisfied if
$d = (2g_0)^{-1}$
Choose n so that
$n + 1 \le \min\{m_1, m_2\}$
Then (55) holds if
$c_1 + c_2 d^{-p+1} \le n\lambda d^{-1}$
Inequality (59) is satisfied if $c_1$ and $c_2$ are sufficiently small. Let us formulate our result, which follows from Theorem 1.
Theorem 6.
If inequalities (59) and (58) hold, then
$0 \le g(t) \le \left[d + \lambda(1+t)^{-n}\right]^{-1} \le d^{-1}, \quad \forall t \ge 0$
Estimate (60) proves global boundedness of the solution $u ( t )$, and implies Lyapunov stability of the zero solution to problem (2) with $b ( t ) = 0$ and $u 0 = 0$.
Indeed, by the definition of Lyapunov stability of the zero solution, one should check that for an arbitrarily small fixed $\epsilon > 0$ the estimate $\sup_{t \ge 0} \|u(t)\| \le \epsilon$ holds provided that $\|u(0)\|$ is sufficiently small. Let $\|u(0)\| = g_0 = \delta$. Then estimate (60) yields $\sup_{t \ge 0} \|u(t)\| \le d^{-1}$, and (57) implies $\sup_{t \ge 0} \|u(t)\| \le 2\delta$. So, $\epsilon = 2\delta$, and the Lyapunov stability is proved. □
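A numerical sanity check of Theorem 6 (with illustrative parameters, chosen by us so that (57)–(59) hold): integrate $\dot g = \gamma(t) g + \alpha(t,g)$ with the non-dissipative $\gamma(t) > 0$ and verify estimate (60):

```python
# Illustrative parameters (our choice) satisfying (57)-(59):
c1, m1 = 0.4, 2.0
c2, m2, p = 0.8, 2.0, 2.0
g0 = 0.2
d = lam = 2.0    # g0*(d + lam) = 0.8 <= 1, consistent with (56)
n = 1.0          # n + 1 <= min(m1, m2); c1 + c2*d^{-p+1} = 0.8 <= n*lam/d = 1

# Euler integration of g' = c1 (1+t)^{-m1} g + c2 (1+t)^{-m2} g^p.
# Note the PLUS sign on the linear term: gamma(t) > 0, the non-classical case.
dt, T = 1e-3, 50.0
g, t = g0, 0.0
ok = True
while t < T:
    g += dt * (c1 / (1.0 + t) ** m1 * g + c2 / (1.0 + t) ** m2 * g ** p)
    t += dt
    # Estimate (60): g(t) <= [d + lam*(1+t)^{-n}]^{-1}
    ok = ok and g <= 1.0 / (d + lam / (1.0 + t) ** n) + 1e-6

bounded = g <= 1.0 / d    # global bound d^{-1} from estimate (60)
```

The solution grows (the linear part is destabilizing), but the integrable decay of $\gamma$ and α keeps it under the global bound $d^{-1}$.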
