Article

Existence and Properties of the Solution of Nonlinear Differential Equations with Impulses at Variable Times

1 School of Mathematics and Statistics, Guizhou University, Guiyang 550025, China
2 Department of Mathematics, Zunyi Normal University, Zunyi 563006, China
* Author to whom correspondence should be addressed.
Axioms 2024, 13(2), 126; https://doi.org/10.3390/axioms13020126
Submission received: 31 December 2023 / Revised: 1 February 2024 / Accepted: 13 February 2024 / Published: 18 February 2024
(This article belongs to the Special Issue Advances in Difference Equations)

Abstract: In this paper, a class of nonlinear ordinary differential equations with impulses at variable times is considered. The existence and uniqueness of the solution are established. At the same time, modifying the classical definitions of continuous dependence and Gâteaux differentiability, some results on the continuous dependence and Gâteaux differentiability of the solution with respect to the initial value are also presented in a new topological sense. For the autonomous impulsive system, the periodicity of the solution is given. As an application, the properties of the solution of a type of controlled nonlinear ordinary differential equation with impulses at variable times are obtained. These results are a foundation for studying optimal control problems of systems governed by differential equations with impulses at variable times.

1. Introduction

We begin by introducing the problem studied. Let $\mathbb{R}^+\triangleq[0,+\infty)$, $Y(t)=\{\,y_i(t)\mid i\in\Lambda\triangleq\{1,2,\dots,p\}\,\}$, and let $f:\mathbb{R}^+\times\mathbb{R}^n\to\mathbb{R}^n$, $y_i:\mathbb{R}^+\to\mathbb{R}^n$ and $J_i:\mathbb{R}^n\to\mathbb{R}^n$ ($i=1,2,\dots,p$) be given maps. Consider the following differential equation with impulses at variable times
$$
\begin{cases}
\dot{x}(t)=f(t,x(t)), & \{x(t)\}\cap Y(t)=\emptyset,\ t\ge 0,\\
x(t^+)=J_i(x(t))+x(t), & \{x(t)\}\cap Y(t)=\{y_i(t)\},\ t\ge 0,\\
x(0)=x_0.
\end{cases}
\tag{1}
$$
The main purpose of this study is (i) to provide a sufficient condition for the existence and uniqueness of the solution $x$ of impulsive system (1); (ii) to give a necessary and sufficient condition for the exact times at which the solution $x$ meets the set $Y(t)$; and (iii) to present the properties of the solution with respect to the initial value.
There are some interesting phenomena for impulsive system (1). First, it is clear that the system $\dot{x}(t)=x(t)+u(t)$ is controllable (see [1]), but the following impulsive system
$$\dot{x}(t)=x(t)+u(t),\quad x(t)\neq 1,\qquad x(t^+)=0,\quad x(t)=1$$
is not controllable. Similarly, the system $\dot{x}(t)=-x(t)$ is stable, but the impulsive system
$$\dot{x}(t)=-x(t),\quad x(t)\neq 1,\qquad x(t^+)=2,\quad x(t)=1$$
is not stable when the initial value $x(0)\ge 1$. Let us look at a third example. Denote by $x(\cdot\,;0,x_0)$ the solution of the following impulsive differential system
$$\dot{x}(t)=2t,\quad x(t)\neq 1,\ t>0,\qquad x(t^+)=0,\quad x(t)=1,\ t\ge 0$$
with the initial value $(0,x_0)$. Then, we have
$$x\Big(t;0,1+\tfrac{1}{n}\Big)=t^2+1+\tfrac{1}{n},\quad t\ge 0,\qquad
x(t;0,1)=\begin{cases}1, & t=0,\\[0.3em] t^2-m, & t\in\big(\sqrt{m},\sqrt{m+1}\,\big],\ m\in\mathbb{N}.\end{cases}$$
This shows that the solution of impulsive system (1) need not depend continuously on the initial value, even in the $L^1$ sense. In addition, simple examples show that the impulsive system (1) may fail to have a global solution.
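To make the third example concrete, the following short numerical sketch (our illustration, not part of the original analysis; it assumes a simple forward-Euler discretization with threshold-crossing detection) simulates $\dot{x}(t)=2t$ with the reset $x(t^+)=0$ at $x(t)=1$ and compares $x(\cdot\,;0,1)$ with $x(\cdot\,;0,1+\tfrac{1}{n})$. The gap between the two trajectories does not shrink as $n$ grows, which is exactly the failure of classical continuous dependence noted above.

```python
import numpy as np

def simulate(x0, t_end=3.0, dt=1e-4):
    """Euler integration of x' = 2t with the reset x(t+) = 0 when x reaches 1."""
    ts = np.arange(0.0, t_end, dt)
    xs = np.empty_like(ts)
    x = x0
    for k, t in enumerate(ts):
        xs[k] = x
        x_new = x + 2.0 * t * dt              # Euler step for x' = 2t
        if x == 1.0 or (x < 1.0 <= x_new):    # trajectory meets the obstacle y = 1
            x_new = 0.0                       # impulse: x(t+) = 0
        x = x_new
    return ts, xs

ts, x_limit = simulate(1.0)                   # solution x(. ; 0, 1)
for n in (10, 100, 1000):
    _, x_n = simulate(1.0 + 1.0 / n)          # solution x(. ; 0, 1 + 1/n), never hits 1
    print(n, np.max(np.abs(x_n - x_limit)))   # the gap does not shrink as n grows
```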
The motivation for studying this problem is as follows. First of all, many physical phenomena and application models are characterized by (1). For example, integrate-and-fire models derived from physical oscillation circuits [2,3] are widely used in neuroscience; they describe current–voltage relations in which the state is reset once the voltage reaches a threshold level [4,5]. Moreover, in applications it is crucial to choose appropriate threshold levels for deciding when to trigger or suspend an impulsive intervention: ref. [6] used glucose threshold levels to guide injections of insulin, and ref. [7] took the time at which the number of pests reaches an economic threshold as the time of impulsive intervention. Second, the theory of impulsive differential equations has attracted increasing interest because of its wide applicability in biology, medicine and many other fields (see [8] and its references). The significant interest in differential equations with impulse effects is also explained by the development of equipment in which complex systems play a significant role [9,10,11]. In particular, the qualitative theory of impulsive system (1) has not been systematically established, and it is natural to investigate it. We discuss the existence and uniqueness of a global solution and its properties for the nonlinear ordinary differential equation with impulses at variable times (1) under weaker conditions. It is worth pointing out that the solutions of differential systems with impulses may experience pulse phenomena; namely, a solution may hit a given surface a finite or infinite number of times, causing a rhythmical beating. This situation presents difficulties in the investigation of the properties of solutions of such systems. In addition, stronger conditions are not suitable for control problems. Consequently, it is desirable to find weaker conditions that guarantee the absence or presence of pulse phenomena. More generally, it is significant to find conditions under which the solution meets a given surface exactly $k\in\mathbb{N}$ times ($\mathbb{N}$ denotes the set of natural numbers).
Before concluding this section, we review the previous literature on the qualitative analysis of impulsive differential equations. In fact, the qualitative analysis of impulsive differential equations can be traced back at least to the work of N.M. Krylov and N.N. Bogolyubov [12] in 1937 in their classical monograph Introduction to Nonlinear Mechanics. A mathematical formulation of differential equations with impulses at fixed times was first presented by A.M. Samoilenko and N.A. Perestyuk [13] in 1974. Since then, the qualitative theory of differential equations with impulses at fixed times in finite (or infinite) dimensional spaces has been extensively studied (see [14,15,16,17] and the references therein). For differential equations with impulses at variable times, A.M. Samoilenko and N.A. Perestyuk [18] gave in 1981 the mathematical model
$$
\begin{cases}
\dot{x}(t)=f(t,x(t)), & t\neq\tau_i(x(t)),\\
x(t^+)=x(t)+J_i(x(t)), & t=\tau_i(x(t)).
\end{cases}
\tag{2}
$$
Later, relevant works were published by D.D. Bainov and A.B. Dishliev [19] in 1984, S. Hu [20] in 1989, etc. For more details, one can see the monographs of V. Lakshmikantham [21] in 1989, A.M. Samoilenko [22] in 1995, D.D. Bainov [23] in 1995 and M. Benchohra [24] in 2006, among others. In a word, these works established the qualitative theory of (2) under stronger conditions. However, such conditions are not suitable for control problems or for impulsive differential equations in infinite dimensional spaces. At the same time, when $y_i$ ($i\in\Lambda$) is a one-to-one mapping, $x(t)=y_i(t)$ is equivalent to $t=y_i^{-1}(x(t))$. Hence, (2) can be treated as a special case of (1). For the linear case of (1), Peng et al. [25] obtained the existence and uniqueness of the solution and its properties.
The rest of the paper is organized as follows. Section 2 presents the main results. In Section 3, Section 4 and Section 5, the proofs of the three main theorems are given in turn. The periodicity of an autonomous impulsive system is presented in Section 6. As an application, the variation in the solution relative to the control is presented in Section 7, which is a foundation for studying optimal control problems of systems governed by differential equations with impulses at variable times. Finally, some new phenomena of impulsive differential systems are summarized.

2. Main Results

We present our main results in this section. To state the first one, some preliminaries are introduced. Throughout this paper, we fix $T>0$ (with $T=+\infty$ allowed, in which case $[0,T)=\mathbb{R}^+=[0,+\infty)$) and set $L^1_{loc}\big(\mathbb{R}^+;\mathbb{R}^{n\times n}\big)=\{\,x:(0,+\infty)\to\mathbb{R}^{n\times n}\mid x(\cdot)\in L^1\big(0,T;\mathbb{R}^{n\times n}\big)\ \text{for every}\ T>0\,\}$. We first introduce several definitions. We define the function set $PC_Y\big([0,T),\mathbb{R}^n\big)=\{\,x:[0,T)\to\mathbb{R}^n\mid x$ is continuous at $t$ when $x(t)\notin Y(t)$; $x$ is left continuous at $t$ and the right limit $x(t^+)$ exists when $x(t)\in Y(t)\,\}$. For $x\in PC_Y\big([0,T),\mathbb{R}^n\big)$, a point $t\in[0,T)$ is called an irregular point if $x(t)\in Y(t)$; otherwise, $t$ is called a regular point. One can directly verify that the function set $PC_Y\big([0,T),\mathbb{R}^n\big)$ is not a linear space. Denote by $B(z,\theta^2)$ the closed ball in $\mathbb{R}^n$ centered at $z$ with radius $\theta^2>0$.
Definition 1.
A piecewise continuous function $x_\theta$ is said to be an approximate $PC$-solution of (1) if $x_\theta(\cdot)\equiv x_\theta(\cdot\,;0,x_0)$ satisfies the following integral equation with impulses
$$x_\theta(t)=x_0+\int_0^t f\big(\tau,x_\theta(\tau)\big)\,d\tau+\sum_{0\le t_j<t,\ x_\theta(t_j)\in B(y_i(t_j),\theta^2)} J_i\big(x_\theta(t_j)\big).\tag{3}$$
In particular, when $\theta=0$, we call $x(\cdot)\equiv x_0(\cdot)\in PC_Y\big([0,T),\mathbb{R}^n\big)$ a $PC$-solution of (1).
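The following sketch (our illustration under assumed data; the right-hand side `f`, obstacle `y` and jump `J` below are hypothetical and not from the paper) shows one way to compute an approximate $PC$-solution in the sense of Definition 1: integrate $\dot{x}=f(t,x)$ by forward Euler and add the jump $J_i(x(t_j))$ whenever the state enters the ball $B(y_i(t_j),\theta^2)$.

```python
import numpy as np

def approximate_pc_solution(f, ys, Js, x0, theta, t_end=5.0, dt=1e-3):
    """Forward-Euler sketch of the approximate PC-solution in Definition 1."""
    ts = np.arange(0.0, t_end + dt, dt)
    xs = np.empty_like(ts)
    x = x0
    armed = [True] * len(ys)            # avoid firing the same impulse repeatedly
    for k, t in enumerate(ts):
        for i, (y, J) in enumerate(zip(ys, Js)):
            near = abs(x - y(t)) <= theta ** 2   # state inside B(y_i(t), theta^2)
            if near and armed[i]:
                x = x + J(x)            # impulse: x(t+) = x(t) + J_i(x(t))
                armed[i] = False
            elif not near:
                armed[i] = True
        xs[k] = x
        x = x + f(t, x) * dt            # Euler step between impulses
    return ts, xs

# Hypothetical example data (not from the paper):
ts, xs = approximate_pc_solution(
    f=lambda t, x: -0.5 * x + np.sin(t),   # right-hand side f(t, x)
    ys=[lambda t: 0.8],                    # one obstacle curve y_1(t)
    Js=[lambda x: -x],                     # jump J_1(x) = -x, i.e. reset to 0
    x0=0.0, theta=0.1)
print(xs[-1])
```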
Meanwhile, we introduce the following basic assumptions.
[F](1) $f:\mathbb{R}^+\times\mathbb{R}^n\to\mathbb{R}^n$ is measurable in $t$ on $\mathbb{R}^+$ and locally Lipschitz continuous in $x$, i.e., for any $\rho>0$, there exists $L(\rho)>0$ such that for all $x,y\in\mathbb{R}^n$ with $|x|,|y|\le\rho$, we have
$$|f(t,x)-f(t,y)|\le L(\rho)\,|x-y| \quad\text{for any } t\in\mathbb{R}^+.$$
(2) There exists a constant $\tilde{k}>0$ such that
$$|f(t,x)|\le \tilde{k}\,(1+|x|) \quad\text{for any } t\in\mathbb{R}^+ \text{ and } x\in\mathbb{R}^n.$$
(3) $f$ is continuous and partially differentiable in $x$, and $f_x(\cdot,x)\in L^1_{loc}\big(\mathbb{R}^+,\mathbb{R}^{n\times n}\big)$.
[Y](1) $y_i\in C\big(\mathbb{R}^+,\mathbb{R}^n\big)$, and $y_i(t)\neq y_j(t)$ for all $t\in\mathbb{R}^+$ and $i\neq j$ ($i,j\in\Lambda$).
(2) $y_i\in C^1\big([0,T],\mathbb{R}^n\big)$, and $f\big(t,y_i(t)\big)\neq\dot{y}_i(t)$ ($i\in\Lambda$).
[J](1) $J_i:\mathbb{R}^n\to\mathbb{R}^n$ is continuous, and
$$\Upsilon_i(t)\triangleq y_i(t)+J_i\big(y_i(t)\big)\neq y_j(t) \quad\text{for all } t\in\mathbb{R}^+ \text{ and } i,j\in\Lambda.\tag{4}$$
(2) $J_i:\mathbb{R}^n\to\mathbb{R}^n$ is continuous and partially differentiable.
It is clear that when assumptions [F](1)(2) hold, for any fixed $(s,z_s)\in\mathbb{R}^+\times\mathbb{R}^n$, the differential equation
$$\dot{z}(t)=f\big(t,z(t)\big),\quad t>s,\qquad z(s)=z_s$$
has a unique solution $z(\cdot\,;s,z_s)\in C\big([s,+\infty),\mathbb{R}^n\big)$ given by
$$z(t;s,z_s)=z_s+\int_s^t f\big(\tau,z(\tau;s,z_s)\big)\,d\tau.\tag{5}$$
We define several functions:
$$F_i(t;s,z_s)=\big\langle z(t;s,z_s)-y_i(t),\ z_s-y_i(s)\big\rangle \quad (i=1,2,\dots,p),\ t\ge s,\tag{6}$$
and
$$F_{ij}\big(t;s,\Upsilon_i(s)\big)=\big\langle z\big(t;s,\Upsilon_i(s)\big)-y_j(t),\ \Upsilon_i(s)-y_j(s)\big\rangle \quad (i,j=1,2,\dots,p),\ t\ge s,\tag{7}$$
where $\langle\cdot,\cdot\rangle$ denotes the inner product in $\mathbb{R}^n$.
The first main result is presented as follows.
Theorem 1.
Suppose assumptions [F](1)(2), [Y](1) and [J](1) hold.
(1) The system (1) admits a unique $PC$-solution $x\in PC_Y\big(\mathbb{R}^+,\mathbb{R}^n\big)$.
(2) $x$ has exactly the irregular point set $\{t_i\mid 0\le t_1<t_2<\dots<t_k<+\infty\}$ over $\mathbb{R}^+$ if and only if there exist $l_i\in\Lambda$ ($i=1,2,\dots,k$) such that
$$F_{l_1}(t_1;0,x_0)=0,\qquad F_{l_i l_{i+1}}\big(t_{i+1};t_i,\Upsilon_{l_i}(t_i)\big)=0 \quad\text{for } i=1,2,\dots,k-1,\tag{8}$$
and
$$F_{l_k j}\big(t;t_k,\Upsilon_{l_k}(t_k)\big)>0 \quad\text{for any } t\in[t_k,+\infty) \text{ and all } j\in\Lambda.\tag{9}$$
We point out that a necessary and sufficient condition for the pulse phenomenon is also given in Theorem 1. Moreover, for the existence of a solution of system (2), in order to ensure that $t_k=\tau_k(x)$ is monotone with respect to $k$, it is required in [21] that $\tau_k(x)$ be smooth and satisfy the corresponding inequality conditions. However, using Theorem 1, we immediately obtain the following result.
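Numerically, part (2) of Theorem 1 suggests a simple recipe for locating the first irregular point: integrate the unimpulsed flow $z(\cdot\,;0,x_0)$ and find the smallest zero of $t\mapsto F_i(t;0,x_0)$ from (6). The sketch below (our illustration with hypothetical scalar data, a crude Euler integrator and bisection; not the paper's method made rigorous) follows this recipe; the exact answer for the chosen data is $t_1=1$.

```python
import numpy as np

f  = lambda t, x: 1.0          # hypothetical right-hand side f(t, x)
y1 = lambda t: 2.0 - t         # hypothetical obstacle curve y_1(t)
x0 = 0.0

def z(t, n=2000):
    """Solution z(t; 0, x0) of z' = f(t, z) without impulses (forward Euler)."""
    ts = np.linspace(0.0, t, n)
    x = x0
    for k in range(n - 1):
        x += f(ts[k], x) * (ts[k + 1] - ts[k])
    return x

# F_1(t; 0, x0) = <z(t; 0, x0) - y_1(t), x0 - y_1(0)>, scalar inner product here
F1 = lambda t: (z(t) - y1(t)) * (x0 - y1(0.0))

grid = np.linspace(0.0, 5.0, 501)
vals = np.array([F1(t) for t in grid])
k = np.argmax(vals <= 0.0)          # first grid point where F_1 changes sign
a, b = grid[k - 1], grid[k]
for _ in range(60):                 # bisection for the first hitting time t1
    m = 0.5 * (a + b)
    a, b = (m, b) if F1(m) > 0.0 else (a, m)
print("first irregular point t1 ≈", 0.5 * (a + b))
```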
Corollary 1.
Suppose assumptions [F](1)(2), [Y](1) and [J](1) hold. If $y_i$ is invertible and $\tau_i=y_i^{-1}$ for any $i\in\Lambda$, then the system (2) admits a unique $PC$-solution $x\in PC_Y\big(\mathbb{R}^+,\mathbb{R}^n\big)$.
Now, we state our second and third main results. It follows from Theorem 1 that for any fixed, sufficiently small $\theta>0$, (1) has a unique approximate $PC$-solution $x_\theta$ provided that assumptions [F](1)(2), [Y](1) and [J](1) hold. Let $v\in\mathbb{R}^n$, and let $x_\theta(\cdot\,;\theta,x_0+\theta v)$ be the approximate $PC$-solution of Equation (1) corresponding to $(\theta,x_0+\theta v)$. We note that (1) is not well posed; thus, we can never expect continuity of the solution with respect to the initial value in the classical sense. We therefore have to modify the classical definitions of continuity and differentiability.
Definition 2.
Let  v R n  be fixed. The  P C -solution  x ( · ; 0 , x 0 )  of (1) is said to have a continuous dependence relative to the initial value  ( 0 , x 0 )  if the following facts hold:
(i) When $x(t;0,x_0)\neq y_i(t)$ ($i\in\Lambda$), $x_\theta(t;\theta,x_0+\theta v)\to x(t;0,x_0)$ as $\theta\to 0$;
(ii) For any sufficiently small $\varepsilon>0$, there exist $\delta>0$ and $I_\varepsilon\subset[0,T]$ with $\mu\big([0,T]\setminus I_\varepsilon\big)<\varepsilon$ such that
$$\big|x_\theta(t;\theta,x_0+\theta v)-x(t;0,x_0)\big|<\varepsilon \quad\text{for any } t\in I_\varepsilon \text{ and } 0<\theta<\delta,$$
where $\mu$ denotes the Lebesgue measure.
Definition 3.
Let $v\in\mathbb{R}^n$ be fixed. The $PC$-solution $x(\cdot\,;0,x_0)$ of (1) is said to be Gâteaux differentiable relative to the initial value $(0,x_0)$ if the Gâteaux derivative $\varphi(t)$ of $x(t;0,x_0)$ exists at $(0,x_0)$ for all $t\in[0,T]$ with $x(t;0,x_0)\neq y_i(t)$, and otherwise
$$\varphi(t)=\lim_{s\to t}\varphi(s),$$
where
$$\varphi(t)=\lim_{\varepsilon\to 0}\frac{x_\varepsilon(t;\varepsilon,x_0+\varepsilon v)-x(t;0,x_0)}{\varepsilon} \quad\text{when } x(t;0,x_0)\neq y_i(t).$$
Let us state the following main results.
Theorem 2.
Suppose assumptions [F](1)(2), [Y](1) and [J](1) hold. Then, the  P C -solution  x ( · ; 0 , x 0 )  of (1) has a continuous dependence relative to the initial value  ( 0 , x 0 )  in the sense of Definition 2.
Theorem 3.
Suppose assumptions [F], [Y] and [J] hold. Then, the  P C -solution  x · ; 0 , x 0  of (1) is Gâteaux differentiable relative to the initial value  ( 0 , x 0 )  in the sense of Definition 3. Moreover, its Gâteaux derivative φ is a  P C -solution of the following differential equation with impulses
$$
\begin{cases}
\dot{\varphi}(t)=f_x\big(t,x(t)\big)\varphi(t), & t\in(0,T],\ \{x(t)\}\cap Y(t)=\emptyset,\\
\varphi(t^+)=\varphi(t)+J_i'\big(y_i(t)\big)\big[\varphi(t)+\dot{h}_t(0)\,f\big(t,y_i(t)\big)\big], & \{x(t)\}\cap Y(t)=\{y_i(t)\},\\
\varphi(0)=v-f(0,x_0).
\end{cases}
$$
Here, $h_t$ denotes the solution of the equation $\{x_\varepsilon(t;\varepsilon,x_0+\varepsilon v)\}\cap B\big(y_i(t),\varepsilon^2\big)\neq\emptyset$ in $\varepsilon$ for some $i\in\Lambda$.
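The initial condition $\varphi(0)=v-f(0,x_0)$ in Theorem 3 can be checked numerically in the impulse-free case (Case (i) of the proof in Section 5): the difference quotient $\big(x_\varepsilon(t;\varepsilon,x_0+\varepsilon v)-x(t;0,x_0)\big)/\varepsilon$ should approach the solution of the variational equation $\dot{\varphi}=f_x(t,x(t))\varphi$. The following sketch (our illustration; the scalar $f$, the step sizes and $\varepsilon$ are assumed, and Euler integration introduces its own discretization error) compares the two quantities at $t=T$.

```python
import numpy as np

f  = lambda t, x: np.sin(x) + t          # assumed right-hand side f(t, x)
fx = lambda t, x: np.cos(x)              # its partial derivative with respect to x
x0, v, t_end = 0.3, 1.0, 1.0

def solve(t0, z0, n=20_000):
    """Euler solution of z' = f(t, z) on [t0, t_end], returned at t = t_end."""
    ts = np.linspace(t0, t_end, n + 1)
    z = z0
    for k in range(n):
        z += f(ts[k], z) * (ts[k + 1] - ts[k])
    return z

eps = 1e-3
quotient = (solve(eps, x0 + eps * v) - solve(0.0, x0)) / eps   # finite-difference quotient

# Variational equation phi' = f_x(t, x(t)) phi with phi(0) = v - f(0, x0),
# integrated along the unperturbed trajectory x(t).
n = 20_000
ts = np.linspace(0.0, t_end, n + 1)
phi, x = v - f(0.0, x0), x0
for k in range(n):
    dt = ts[k + 1] - ts[k]
    phi += fx(ts[k], x) * phi * dt
    x   += f(ts[k], x) * dt
print(quotient, phi)     # the two numbers should agree to a few decimal places
```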

3. Proof of Theorem 1

Throughout this section, we define the function $r:(0,+\infty)\to\mathbb{R}^+$ by
$$r(T)\triangleq\frac{1}{2}\inf_{s,t\in[0,T]}\Big\{\,|y_i(s)-y_j(t)|,\ |y_i(s)-\Upsilon_j(t)|,\ |y_i(s)-\Upsilon_i(t)|\ \Big|\ i,j\in\Lambda \text{ and } i\neq j\,\Big\},$$
where $\Upsilon_j$ is defined by (4). It follows easily from assumptions [J](1) and [Y](1) that $\Upsilon_i\in C\big([0,T],\mathbb{R}^n\big)$ for all $i\in\Lambda$. Hence, there exists a constant $M(T)$ such that
$$|\Upsilon_i(t)|\le M(T) \quad\text{for any } t\in[0,T] \text{ and } i\in\Lambda\tag{11}$$
and
$$r(T)>0 \quad\text{for all } T>0.\tag{12}$$
To establish the existence and uniqueness of the solution of (1), we need the following lemma.
Lemma 1.
If assumptions [F](1)(2), [Y](1) and [J](1) hold, then for any $(s,\xi)\in[0,T)\times\{\Upsilon_i(t)\mid t\in[0,T],\ i\in\Lambda\}$, there is a $\delta>0$, independent of $(s,\xi)$, such that the following differential equation
$$\dot{\phi}(t)=f\big(t,\phi(t)\big),\quad t>s,\qquad \phi(s)=\xi,\tag{13}$$
has a unique solution $\phi\in C\big([s,s+\delta],\mathbb{R}^n\big)$ and
$$\big|\phi(t)-y_i(t)\big|\ge\frac{r(T)}{2} \quad\text{for any } t\in[s,s+\delta] \text{ and } i\in\Lambda.\tag{14}$$
Proof. 
It follows from assumptions [F](1)(2) that (13) has a unique solution $\phi\in C\big([s,T],\mathbb{R}^n\big)$ and
$$|\phi(t)|\le|\xi|+\int_s^t \tilde{k}\big(1+|\phi(\tau)|\big)\,d\tau.$$
Using Gronwall’s inequality, we have
$$|\phi(t)|\le\big(|\xi|+\tilde{k}T\big)e^{\tilde{k}(t-s)}.$$
Together with (11), this means that
$$|\phi(t)|\le\big(M(T)+\tilde{k}T\big)e^{\tilde{k}T}\triangleq\widetilde{M}(T;\tilde{k}) \quad\text{for any } t\in[0,T].$$
Consequently, for any $t\in[0,T]$, we have
$$
\begin{aligned}
\big|\phi(t)-\xi\big| &\le \int_s^t\big|f(\tau,0)-f(\tau,0)+f\big(\tau,\phi(\tau)\big)\big|\,d\tau\\
&\le \int_s^t\big|f(\tau,0)\big|\,d\tau+\int_s^t\big|f\big(\tau,\phi(\tau)\big)-f(\tau,0)\big|\,d\tau\\
&\le \int_s^t\big|f(\tau,0)\big|\,d\tau+\int_s^t L\big(\widetilde{M}(T;\tilde{k})\big)\,|\phi(\tau)|\,d\tau\\
&\le \int_s^t\big|f(\tau,0)\big|\,d\tau+L\big(\widetilde{M}(T;\tilde{k})\big)\widetilde{M}(T;\tilde{k})\,|t-s|\\
&\le \Big[\tilde{k}+L\big(\widetilde{M}(T;\tilde{k})\big)\widetilde{M}(T;\tilde{k})\Big]\,|t-s|.
\end{aligned}
$$
Together with (12) and
$$\big|\phi(t)-y_i(t)\big|\ge\big|y_i(t)-\xi\big|-\big|\phi(t)-\xi\big|,$$
we have
$$
\begin{aligned}
\big|\phi(t)-y_i(t)\big| &\ge \big|y_i(t)-\xi\big|-\big|\phi(t)-\xi\big|\\
&\ge \big|y_i(t)-\xi\big|-\Big[\tilde{k}+L\big(\widetilde{M}(T;\tilde{k})\big)\widetilde{M}(T;\tilde{k})\Big]\,|t-s|\\
&\ge 2r(T)-\Big[\tilde{k}+L\big(\widetilde{M}(T;\tilde{k})\big)\widetilde{M}(T;\tilde{k})\Big]\,|t-s|,
\end{aligned}
$$
and hence, with the constant $\delta=\delta(T,\tilde{k})=\dfrac{3r(T)}{2\big[\tilde{k}+L(\widetilde{M}(T;\tilde{k}))\widetilde{M}(T;\tilde{k})\big]}>0$, (14) holds. □
Now, we prove conclusion (1) of Theorem 1. For any $T>0$, with respect to the number of irregular points of the solution of system (1), there are only two possibilities: Case (1), $x$ has no irregular point on $[0,T]$; and Case (2), $x$ has at least one irregular point on $[0,T]$. In Case (1), it follows from assumptions [F](1)(2) that (1) has a unique solution $x\in C([0,T],\mathbb{R}^n)$. In Case (2), there exist $i\in\Lambda$ and $t_1>0$ such that $x(t_1;0,x_0)=y_i(t_1)$, and $t_1$ is the time of the first impulse. In a similar way, if no further impulse occurs, it follows from assumptions [F](1)(2) that (1) has a unique solution $x\in C([t_1,T],\mathbb{R}^n)$. If another impulse occurs, there exist $j\in\Lambda$ and $t_2>t_1$ such that $x\big(t_2;t_1,y_i(t_1)+J_i(y_i(t_1))\big)=y_j(t_2)$, and $t_2$ is the time of the second impulse. At the same time, from Lemma 1, we have $|t_1-t_2|>\delta$. By mathematical induction, the system (1) has a unique $PC$-solution $x\in PC_Y\big([0,T],\mathbb{R}^n\big)$. Thus, letting $T\to\infty$, Equation (1) admits a unique $PC$-solution $x(\cdot\,;0,x_0)$ on $\mathbb{R}^+$.
Next, we discuss the number of irregular points for solution x of (1) over  R + .
Lemma 2.
If assumptions [F](1)(2), [Y](1) and [J](1) hold, then the solution $x$ of (1) has no irregular point over $\mathbb{R}^+$ if and only if, for all $i\in\Lambda$, the algebraic equation
$$F_i(t;0,x_0)=0 \quad\text{has no solution on } \mathbb{R}^+.$$
Proof. 
For the first step, we prove sufficiency. Assume the solution $x$ of (1) has an irregular point over $[0,+\infty)$; then there exist $i\in\Lambda$ and $t_1\in[0,+\infty)$ such that $x(t_1;0,x_0)=y_i(t_1)$, and together with (5) and (6), we have
$$
\begin{aligned}
F_i(t_1;0,x_0) &= \big\langle x(t_1;0,x_0)-y_i(t_1),\ x_0-y_i(0)\big\rangle\\
&= \Big\langle x_0+\int_0^{t_1} f\big(\tau,x(\tau;0,x_0)\big)\,d\tau-y_i(t_1),\ x_0-y_i(0)\Big\rangle\\
&= \Big\langle x_0-\Big(y_i(t_1)+\int_{t_1}^{0} f\big(\tau,x(\tau;0,x_0)\big)\,d\tau\Big),\ x_0-y_i(0)\Big\rangle\\
&= \big\langle x_0-x\big(0;t_1,y_i(t_1)\big),\ x_0-y_i(0)\big\rangle\\
&= \big\langle x_0-x(0),\ x_0-y_i(0)\big\rangle=0.
\end{aligned}
$$
This contradicts the assumption that $F_i(t;0,x_0)=0$ has no solution on $\mathbb{R}^+$ for all $i\in\Lambda$. The proof of sufficiency is complete.
For the second step, we prove necessity. In fact, we prove that under assumptions [F](1)(2), [Y](1) and [J](1), if the solution $x$ of (1) has no irregular point over $\mathbb{R}^+$, then $F_i(t;0,x_0)>0$ on $\mathbb{R}^+$ for all $i\in\Lambda$. First of all, if the solution $x$ of (1) has no irregular point over $\mathbb{R}^+$, then $F_i(\cdot\,;0,x_0)\in C\big([0,+\infty),\mathbb{R}\big)$ for all $i\in\Lambda$. In addition, for all $i\in\Lambda$, $F_i(0;0,x_0)=\big\langle x_0-y_i(0),\ x_0-y_i(0)\big\rangle>0$. Combining this with the proof of sufficiency, we conclude that $F_i(t;0,x_0)>0$ on $\mathbb{R}^+$ for all $i\in\Lambda$. The proof of necessity is complete. □
Now, we prove the necessity part of (2) in Theorem 1. For convenience, we let $x(\cdot)=x(\cdot\,;0,x_0)$, and let $\{t_i\mid 0\le t_1<t_2<\dots<t_k<+\infty\}$ stand for the irregular point set of $x$ over $\mathbb{R}^+$. Then, there exists $l_1\in\Lambda$ such that
$$x(t_1)=y_{l_1}(t_1).$$
Together with (6), we can affirm
$$F_{l_1}(t_1;0,x_0)=0.$$
For the second irregular point $t_2$ of $x$, there exists $l_2\in\Lambda$ such that
$$x(t_2)=x\big(t_2;t_1,\Upsilon_{l_1}(t_1)\big)=y_{l_2}(t_2).$$
Together with (7), it follows that
$$F_{l_1 l_2}\big(t_2;t_1,\Upsilon_{l_1}(t_1)\big)=0.$$
Similarly, for the irregular point $t_k$ of $x$, there is an $l_k\in\Lambda$ such that
$$F_{l_{k-1}l_k}\big(t_k;t_{k-1},\Upsilon_{l_{k-1}}(t_{k-1})\big)=0.$$
Moreover, we can see from Lemma 2 that $x$ has no irregular point on $[t_k,+\infty)$ if and only if
$$F_{l_k j}\big(t;t_k,\Upsilon_{l_k}(t_k)\big)=0 \quad\text{has no solution on } [t_k,+\infty) \text{ for all } j\in\Lambda.\tag{16}$$
Combined with (7), it follows easily from assumptions [J](1) and [Y](1) that $F_{l_k j}\big(\cdot\,;t_k,\Upsilon_{l_k}(t_k)\big)\in C\big([t_k,+\infty),\mathbb{R}\big)$ and
$$F_{l_k j}\big(t_k;t_k,\Upsilon_{l_k}(t_k)\big)=\big\langle z\big(t_k;t_k,\Upsilon_{l_k}(t_k)\big)-y_j(t_k),\ \Upsilon_{l_k}(t_k)-y_j(t_k)\big\rangle>0 \quad\text{for all } j\in\Lambda.$$
Therefore, together with (16), this means (8) and (9) hold.
For the sufficiency part of (2) in Theorem 1, suppose $\{t_i\mid 0\le t_1<t_2<\dots<t_k<+\infty\}$ satisfies (8) and (9). For $t_k$, take $F_{l_{k-1}l_k}\big(t_k;t_{k-1},\Upsilon_{l_{k-1}}(t_{k-1})\big)=0$ and $F_{l_k j}\big(t;t_k,\Upsilon_{l_k}(t_k)\big)>0$ for any $t\in[t_k,+\infty)$ and all $j\in\Lambda$; combined with Lemma 2, $t_k$ is an irregular point. For $t_{k-1}$, take $F_{l_{k-2}l_{k-1}}\big(t_{k-1};t_{k-2},\Upsilon_{l_{k-2}}(t_{k-2})\big)=0$ and $F_{l_{k-1}j}\big(t;t_{k-1},\Upsilon_{l_{k-1}}(t_{k-1})\big)>0$ for any $t\in[t_{k-1},t_k)$ and all $j\in\Lambda$; combined with Lemma 2, $t_{k-1}$ is an irregular point. Analogously, $\{t_i\mid 0\le t_1<t_2<\dots<t_k<+\infty\}$ is the irregular point set of $x$ over $\mathbb{R}^+$. This completes the proof.

4. Proof of Theorem 2

Throughout this section, we fix $T>0$ and a vector $v\in\mathbb{R}^n$. It follows from Theorem 1 that the irregular points of the $PC$-solution $x$ of (1) occur at most finitely many times on the interval $[0,T]$. There are only two possibilities: Case (1), $x$ has no irregular point on $[0,T]$; and Case (2), $x$ has at least one irregular point on $[0,T]$.
In Case (1), the $PC$-solution $x$ has a continuous dependence relative to the initial value in the sense of the classical definition, i.e.,
$$\big\|x_\theta(\cdot\,;\theta,x_0+\theta v)-x(\cdot\,;0,x_0)\big\|_{C([0,T],\mathbb{R}^n)}\to 0 \quad\text{as } \theta\to 0.$$
In Case (2), if $x_0=y_i(0)$ for some $i\in\Lambda$, we simply study the $PC$-solution $x\big(\cdot\,;0^+,\Upsilon_i(0)\big)$ instead. Consequently, we may assume that $x(\cdot\,;0,x_0)$ meets the moving obstacle set $Y(t)$ $k$ times in $[0,T]$, and let $\bar{t}_j^{\,i}$ be the moment at which $x(\cdot\,;0,x_0)$ hits the moving obstacle curve $y_i(\cdot)$ at its $j$th encounter with $Y(t)$ ($i\in\Lambda$, $j=1,2,\dots,k$). For convenience, let $\{\bar{t}_j^{\,i}\mid 0<\bar{t}_1^{\,i}<\dots<\bar{t}_k^{\,r}<T\}$ denote the irregular point set of $x(\cdot\,;0,x_0)$ on $[0,T]$. By Theorem 1, one can prove that the impulsive differential Equation (1) has a unique approximate $PC$-solution $x_\theta(\cdot\,;\theta,x_0+\theta v)$ corresponding to the initial value $(\theta,x_0+\theta v)$. Note that the approximate $PC$-solution (3) reduces to the $PC$-solution of (1) when $\theta=0$. According to the continuous dependence of the solution of an ODE on parameters, there exists $\bar{\bar{\delta}}>0$ such that, when $0\le\theta<\bar{\bar{\delta}}$, $x_\theta(\cdot\,;\theta,x_0+\theta v)$ and $x_0(\cdot\,;0,x_0)$ have the same number of irregular points on $[0,T]$. Let $t_j^i(\theta)$ be the irregular moments of $x_\theta(\cdot\,;\theta,x_0+\theta v)$. Using again the continuous dependence of the solution of an ODE on parameters, there exists $\bar{\delta}$ with $\bar{\bar{\delta}}>\bar{\delta}>0$ such that, when $0\le\theta<\bar{\delta}$, $\max\{\bar{t}_j^{\,i},t_j^i(\theta)\}<\min\{\bar{t}_{j+1}^{\,r},t_{j+1}^r(\theta)\}$.
For sufficiently small $\varepsilon>0$, the $PC$-solution $x_0(\cdot\,;0,x_0)$ of (1) does not meet the moving obstacle set $Y(t)$ on $[0,\bar{t}_1^{\,i}-\frac{\varepsilon}{4k}]$. Similarly, using the continuous dependence of the solution of an ODE on parameters (recall that the approximate $PC$-solution (3) reduces to the $PC$-solution of (1) when $\theta=0$), there is a $\delta_1$ with $\bar{\delta}>\delta_1>0$ such that for any $0<\theta<\min\{\delta_1,\frac{\varepsilon}{4k}\}$, the inequality $|x_\theta(\cdot\,;\theta,x_0+\theta v)-x_0(\cdot\,;0,x_0)|<\varepsilon$ holds on $[\theta,\bar{t}_1^{\,i}-\frac{\varepsilon}{4k}]$. Furthermore, together with $x(\bar{t}_1^{\,i})=y_i(\bar{t}_1^{\,i})$, we have $x_\theta\big(t_1^i(\theta)\big)=\tilde{y}_i\in B\big(y_i,\theta^2\big)$, which means
$$\lim_{\theta\to 0}t_1^i(\theta)=\bar{t}_1^{\,i}.$$
Together with the continuity of $J_i$, we have
$$\lim_{\theta\to 0}J_i\Big(x_\theta\big(t_1^i(\theta);\theta,x_0+\theta v\big)\Big)=J_i\Big(x\big(\bar{t}_1^{\,i};0,x_0\big)\Big).$$
It follows from (4) that
$$\lim_{\theta\to 0}\Upsilon_i\big(t_1^i(\theta)\big)=\Upsilon_i\big(\bar{t}_1^{\,i}\big),$$
where
$$\Upsilon_i\big(t_1^i(\theta)\big)=x_\theta\big(t_1^i(\theta);\theta,x_0+\theta v\big)+J_i\Big(x_\theta\big(t_1^i(\theta);\theta,x_0+\theta v\big)\Big).$$
For the time interval $\big(\bar{t}_1^{\,i}+\frac{\varepsilon}{4k},\ \bar{t}_2^{\,j}-\frac{\varepsilon}{4k}\big]$,
$$
\begin{aligned}
\big|x_\theta(t;\theta,x_0+\theta v)-x(t;0,x_0)\big|
&=\big|x_\theta\big(t;t_1^i(\theta),\Upsilon_i(t_1^i(\theta))\big)-x\big(t;\bar{t}_1^{\,i},\Upsilon_i(\bar{t}_1^{\,i})\big)\big|\\
&\le\big|\Upsilon_i\big(t_1^i(\theta)\big)-\Upsilon_i\big(\bar{t}_1^{\,i}\big)\big|
+\Big|\int_{t_1^i(\theta)}^{t} f\big(\tau,x_\theta(\tau)\big)\,d\tau-\int_{\bar{t}_1^{\,i}}^{t} f\big(\tau,x(\tau)\big)\,d\tau\Big|\\
&\le 2M(T)+\int_{\min\{t_1^i(\theta),\bar{t}_1^{\,i}\}}^{\max\{t_1^i(\theta),\bar{t}_1^{\,i}\}}\big|f\big(\tau,x_\theta(\tau)\big)\big|\,d\tau
+L\big(\widetilde{M}(T;\tilde{k})\big)\int_{\max\{t_1^i(\theta),\bar{t}_1^{\,i}\}}^{t}\big|x_\theta(\tau)-x(\tau)\big|\,d\tau\\
&\le 2M(T)+\tilde{k}\big(1+\widetilde{M}(T;\tilde{k})\big)\big|t_1^i(\theta)-\bar{t}_1^{\,i}\big|
+L\big(\widetilde{M}(T;\tilde{k})\big)\int_{\max\{t_1^i(\theta),\bar{t}_1^{\,i}\}}^{t}\big|x_\theta(\tau)-x(\tau)\big|\,d\tau.
\end{aligned}
$$
From Gronwall’s inequality, we obtain the estimate
$$\big|x_\theta(t;\theta,x_0+\theta v)-x(t;0,x_0)\big|\le\exp\Big(L\big(\widetilde{M}(T;\tilde{k})\big)\big[t-\max\{t_1^i(\theta),\bar{t}_1^{\,i}\}\big]\Big)\Big[2M(T)+\tilde{k}\big(1+\widetilde{M}(T;\tilde{k})\big)\big|t_1^i(\theta)-\bar{t}_1^{\,i}\big|\Big],$$
which implies that there is a $\delta_2>0$ with $\delta_2<\delta_1$ such that for any $\theta>0$ with $\theta<\delta_2$,
$$\big|x_\theta(t;\theta,x_0+\theta v)-x(t;0,x_0)\big|=\big|x_\theta\big(t;t_1^i(\theta),\Upsilon_i(t_1^i(\theta))\big)-x\big(t;\bar{t}_1^{\,i},\Upsilon_i(\bar{t}_1^{\,i})\big)\big|<\varepsilon \quad\text{for any } t\in\big(\bar{t}_1^{\,i}+\tfrac{\varepsilon}{4k},\ \bar{t}_2^{\,j}-\tfrac{\varepsilon}{4k}\big].$$
Let
$$\Upsilon_i\big(t_j^i(\theta)\big)=x_\theta\big(t_j^i(\theta);\theta,x_0+\theta v\big)+J_i\Big(x_\theta\big(t_j^i(\theta);\theta,x_0+\theta v\big)\Big),\quad j>1,\ i\in\Lambda.$$
In general, by repeating the above process, one can show that there is a $\delta_{j+1}>0$ with $\delta_{j+1}<\delta_j$ such that for any $\theta>0$ with $\theta<\delta_{j+1}$,
$$\big|x_\theta(t;\theta,x_0+\theta v)-x(t;0,x_0)\big|=\big|x_\theta\big(t;t_j^i(\theta),\Upsilon_i(t_j^i(\theta))\big)-x\big(t;\bar{t}_j^{\,i},\Upsilon_i(\bar{t}_j^{\,i})\big)\big|<\varepsilon \quad\text{for any } t\in\big(\bar{t}_j^{\,i}+\tfrac{\varepsilon}{4k},\ \bar{t}_{j+1}^{\,r}-\tfrac{\varepsilon}{4k}\big]$$
and
$$\lim_{\theta\to 0}t_{j+1}^r(\theta)=\bar{t}_{j+1}^{\,r},\qquad
\lim_{\theta\to 0}J_r\Big(x_\theta\big(t_{j+1}^r(\theta);\theta,x_0+\theta v\big)\Big)=J_r\Big(x\big(\bar{t}_{j+1}^{\,r};0,x_0\big)\Big),\qquad
\lim_{\theta\to 0}\Upsilon_r\big(t_{j+1}^r(\theta)\big)=\Upsilon_r\big(\bar{t}_{j+1}^{\,r}\big),$$
where
$$\Upsilon_r\big(t_{j+1}^r(\theta)\big)=x_\theta\big(t_{j+1}^r(\theta);\theta,x_0+\theta v\big)+J_r\Big(x_\theta\big(t_{j+1}^r(\theta);\theta,x_0+\theta v\big)\Big).$$
In short, for any sufficiently small $\varepsilon>0$, there exists a $\delta>0$ such that
$$\big|x_\theta(t;\theta,x_0+\theta v)-x(t;0,x_0)\big|<\varepsilon \quad\text{for any } t\in I_\varepsilon \text{ when } \theta<\delta,$$
and $\mu\big([0,T]\setminus I_\varepsilon\big)<\varepsilon$, where
$$I_\varepsilon=\Big[\theta,\ \bar{t}_1^{\,i}-\tfrac{\varepsilon}{4k}\Big]\cup\bigcup_{j=1}^{k-1}\Big(\bar{t}_j^{\,i}+\tfrac{\varepsilon}{4k},\ \bar{t}_{j+1}^{\,r}-\tfrac{\varepsilon}{4k}\Big]\cup\Big(\bar{t}_k^{\,r}+\tfrac{\varepsilon}{4k},\ T\Big].$$
This completes the proof.

5. Proof of Theorem 3

Throughout this section, we fix $T>0$. It follows from Theorem 2 that there are only two possibilities: Case (i), $x(\cdot\,;0,x_0)$ has no irregular point on $[0,T]$; and Case (ii), $x(\cdot\,;0,x_0)$ has at least one irregular point on $[0,T]$.
In Case (i), one can directly check that $x(\cdot\,;0,x_0)$ is Gâteaux differentiable, and its Gâteaux derivative $\varphi$ is a weak solution of the following differential equation
$$\dot{\varphi}(t)=f_x\big(t,x(t;0,x_0)\big)\varphi(t),\quad t\in(0,T],\qquad \varphi(0)=v-f(0,x_0).$$
To discuss Case (ii), we define the function $h_t$ by letting
$$h_t(\varepsilon) \ \text{denote the solution of the equation } H(\varepsilon,t)=0.$$
Here,
$$H(\varepsilon,t)=x_\varepsilon\big(t;\varepsilon,x_0+\varepsilon v\big)-\tilde{y}(t,\varepsilon),$$
where $\tilde{y}(t,\varepsilon)=\tilde{y}_i(t,\varepsilon)$ for some $i\in\Lambda$ with $\tilde{y}_i(t,\varepsilon)\in B\big(y_i(t),\varepsilon^2\big)$. By Theorem 2, when $x(t;0,x_0)=y_i(t)$, there is a $\delta>0$ such that this definition holds for all $\varepsilon\in[0,\delta]$; that is, $h_t:[0,\delta]\to O(t)$ is a function with $h_t(0)=t$, where $O(t)$ denotes some neighborhood of $t$. For convenience, let $\{t_j^i\mid 0<t_1^i<\dots<t_k^r<T\}$ denote the irregular point set of $x(\cdot\,;0,x_0)$ on $[0,T]$. If $y_i\in C^1\big([0,T],\mathbb{R}^n\big)$, it follows from Theorem 2 and the definition of $H$ that there is a $\delta>0$ such that
$$H\in C\big([0,\delta]\times[0,T]\big) \quad\text{and}\quad H\big(\varepsilon,h_{t_j^i}(\varepsilon)\big)=0 \quad\text{for any } \varepsilon\in[0,\delta],\ i\in\Lambda,\ j=1,2,\dots,k,$$
and
$$H_t(\varepsilon,t)=f\big(t,x_\varepsilon(t;\varepsilon,x_0+\varepsilon v)\big)-\tilde{y}_t(t,\varepsilon).$$
According to assumption [Y](2), $f\big(t_j^i,y_i(t_j^i)\big)\neq\dot{y}_i\big(t_j^i\big)$ ($j=1,2,\dots,k$, $i\in\Lambda$), and hence
$$H_t\big(\varepsilon,h_{t_j^i}(\varepsilon)\big)=f\Big(h_{t_j^i}(\varepsilon),x_\varepsilon\big(h_{t_j^i}(\varepsilon);\varepsilon,x_0+\varepsilon v\big)\Big)-\dot{y}_i\big(h_{t_j^i}(\varepsilon)\big)\neq 0 \quad\text{in } \mathbb{R}^n,\ \varepsilon\in[0,\delta],$$
where $j=1,2,\dots,k$. Let $f=\big(f^1,f^2,\dots,f^n\big)$ and $y_i=\big(y_i^1,y_i^2,\dots,y_i^n\big)$ ($i\in\Lambda$). Without loss of generality, we suppose
$$f^1\Big(h_{t_j^i}(\varepsilon),x_\varepsilon\big(h_{t_j^i}(\varepsilon);\varepsilon,x_0+\varepsilon v\big)\Big)-\dot{y}_i^1\big(h_{t_j^i}(\varepsilon)\big)\neq 0 \quad\text{in } \mathbb{R},\ \varepsilon\in[0,\delta],\ j=1,2,\dots,k.$$
We introduce the function
$$\Phi_\varepsilon(t,s)=\exp\Big(\int_s^t f_x\big(\tau,x_\varepsilon(\tau;\varepsilon,x_0+\varepsilon v)\big)\,d\tau\Big);$$
then,
$$\Phi(t,s)=\lim_{\varepsilon\to 0}\Phi_\varepsilon(t,s)=\exp\Big(\int_s^t f_x\big(\tau,x(\tau;0,x_0)\big)\,d\tau\Big).$$
We let $\Phi_\varepsilon^1(t,s)$ and $\Phi^1(t,s)$ denote the first row vectors of $\Phi_\varepsilon(t,s)$ and $\Phi(t,s)$, respectively.
We first claim the following lemma.
Lemma 3.
Suppose assumption [F](3) holds. Then, $h_t$ is differentiable over $[0,\delta]$ for some $\delta>0$, and its derivative is given by
$$
\dot{h}_{t_j^i}(0)=
\begin{cases}
\dfrac{\Phi^1\big(t_1^i,0\big)\big(f(0,x_0)-v\big)}{f^1\big(t_1^i,y_i(t_1^i)\big)-\dot{y}_i^1\big(t_1^i\big)}, & j=1,\\[2ex]
\dfrac{\dot{h}_{t_{j-1}^r}(0)\,\Phi^1\big(t_j^i,t_{j-1}^r\big)\Big[f\big(t_{j-1}^r,y_r(t_{j-1}^r)\big)-\big(I+J_r'(y_r(t_{j-1}^r))\big)\dot{y}_r\big(t_{j-1}^r\big)\Big]}{f^1\big(t_j^i,y_i(t_j^i)\big)-\dot{y}_i^1\big(t_j^i\big)}, & j>1.
\end{cases}
$$
Here, $I$ is the identity matrix.
Proof. 
When  t 0 , h t 1 i ( ε ) , it follows from assumption [F](3), (10) and (3) that
H ε ( ε , t ) = lim ξ 0 x ε + ξ ( t ; ε + ξ , x 0 + ( ε + ξ ) v ) x ε ( t ; ε , x 0 + ε v ) ξ + ε y ˜ i ( t , ε ) = lim ξ 0 ε + ξ t 0 1 f x ( s , x ε ( s ; ε , x 0 + ε v ) + θ ( x ε + ξ ( s ; ε + ξ , x 0 + ( ε + ξ ) v ) x ε ( s ; ε , x 0 + ε v ) ) ) x ε + ξ ( s ; ε + ξ , x 0 + ( ε + ξ ) v ) x ε ( s ; ε , x 0 + ε v ) ξ d θ d s v f ( ε , x 0 + ε v ) + ε y ˜ i ( t , ε ) .
One can see from (21) and the above equality that
H ε ( ε , t ) = Φ ε ( t , ε ) ( v f ( ε , x 0 + ε v ) ) + ε y ˜ i ( t , ε ) .
Combining (20), (21) and (22), we have
h ˙ t 1 i ( ε ) = Φ ε 1 h t j i ( ε ) , ε v f ( ε , x 0 + ε v ) + ε y ˜ i 1 ( t , ε ) f 1 h t 1 i ( ε ) , x ε h t 1 i ( ε ) ; ε , x 0 + ε v y ˙ i 1 h t 1 i ( ε )
and
h ˙ t 1 i ( 0 ) = Φ 1 t 1 i , 0 v f ( 0 , x 0 ) y ˙ i 1 t 1 i f 1 t 1 i , y i t 1 i ,
In general, when  t h t j 1 r ( ε ) , h t j i ( ε ) , it follows from assumption [F](3), (10), (3) and (4) that
H ε ( ε , t ) = lim ξ 0 x ε + ξ ( t ; ε + ξ , x 0 + ( ε + ξ ) v ) x ε ( t ; ε , x 0 + ε v ) ξ + ε y ˜ i ( t , ε ) = lim ξ 0 x ε + ξ t ; h t j 1 r ( ε + ξ ) , Υ r h t j 1 r ( ε + ξ ) x ε t ; h t j 1 r ( ε ) , Υ r h t j 1 r ( ε ) ξ + ε y ˜ i ( t , ε ) = lim ξ 0 h t j 1 r ( ε + ξ ) t 0 1 f x ( s , x ε ( s ; ε , x 0 + ε v ) + θ ( x ε + ξ ( s ; ε + ξ , x 0 + ( ε + ξ ) v ) x ε ( s ; ε , x 0 + ε v ) ) ) x ε + ξ ( s ; ε + ξ , x 0 + ( ε + ξ ) v ) x ε ( s ; ε , x 0 + ε v ) ξ d θ d s + lim ξ 0 Υ r h t j 1 r ( ε + ξ ) , ε + ξ Υ r h t j 1 r ( ε ) , ε ξ lim ξ 0 h t j 1 r ( ε ) h t j 1 r ( ε + ξ ) f s , x ( s ; ε , x 0 + ε v ) d s ξ + ε y ˜ r ( t , ε ) = lim ξ 0 h t j 1 r ( ε + ξ ) t 0 1 f x ( s , x ε ( s ; ε , x 0 + ε v ) + θ ( x ε + ξ ( s ; ε + ξ , x 0 + ( ε + ξ ) v ) x ε ( s ; ε , x 0 + ε v ) ) ) x ε + ξ ( s ; ε + ξ , x 0 + ( ε + ξ ) v ) x ε ( s ; ε , x 0 + ε v ) ξ d θ d s + I + J r y ˜ r h t j 1 r ( ε ) , ε [ h ˙ t j 1 r ( ε ) t y ˜ r h t j 1 r ( ε ) , ε + ε y ˜ r h t j 1 r ( ε ) , ε ] h ˙ t j 1 r ( ε ) f h t j 1 r ( ε ) , y ˜ r h t j 1 r ( ε ) , ε + ε y ˜ r ( t , ε ) .
We can also infer from (21) and the above equality that
H ε ( ε , t ) = ε y ˜ r ( t , ε ) + Φ ε t , h t j 1 r ( ε ) I + J r y ˜ r h t j 1 r ( ε ) , ε [ h ˙ t j 1 r ( ε ) t y ˜ r h t j 1 r ( ε ) , ε + ε y ˜ r h t j 1 r ( ε ) , ε ] h ˙ t j 1 r ( ε ) Φ ε t , h t j 1 r ( ε ) f h t j 1 r ( ε ) , y ˜ r h t j 1 r ( ε ) , ε .
Together with (20) and (22), by the implicit function theorem, we have
h ˙ t j i ( ε ) = Φ ε 1 h t j i ( ε ) , h t j 1 r ( ε ) I + J r y ˜ r h t j 1 r ( ε ) , ε f 1 h t j i ( ε ) , x ε h t j i ( ε ) ; ε , x 0 + ε v y ˙ i 1 h t 2 j ( ε ) · h ˙ t j 1 r ( ε ) t y ˜ r h t j 1 r ( ε ) , ε + ε y ˜ r h t j 1 r ( ε ) , ε ε y ˜ r 1 ( t , ε ) h ˙ t j 1 r ( ε ) Φ ε 1 h t j i ( ε ) , h t j 1 r ( ε ) f h t j 1 r ( ε ) , y ˜ r h t j 1 r ( ε ) , ε f 1 h t j i ( ε ) , x ε h t j i ( ε ) ; ε , x 0 + ε v y ˙ i 1 h t j i ( ε ) .
Further, this means that
h ˙ t j i ( 0 ) = h ˙ t j 1 r ( 0 ) Φ 1 t j i , t j 1 r f t j 1 r , y r t j 1 r I + J r y r t j 1 r y ˙ r t j 1 r f 1 t j i , y i t j i y ˙ i 1 t j i .
This completes the proof. □
Now, we treat Case (ii). For $t\in\big(0,t_1^i\big]$, similarly to Case (i), it is not difficult to check the following result:
$$\dot{\varphi}(t)=f_x\big(t,x(t;0,x_0)\big)\varphi(t),\quad t\in\big(0,t_1^i\big],\qquad \varphi(0)=v-f(0,x_0).$$
Combining with Lemma 3, we first note that
$$
\begin{aligned}
\lim_{\varepsilon\to 0}\frac{x_\varepsilon\big(h_{t_j^i}(\varepsilon);\varepsilon,x_0+\varepsilon v\big)-x\big(t_j^i;0,x_0\big)}{\varepsilon}
&=\lim_{\varepsilon\to 0}\frac{x_\varepsilon\big(h_{t_j^i}(\varepsilon);\varepsilon,x_0+\varepsilon v\big)-x_\varepsilon\big(t_j^i;\varepsilon,x_0+\varepsilon v\big)}{\varepsilon}
+\lim_{\varepsilon\to 0}\frac{x_\varepsilon\big(t_j^i;\varepsilon,x_0+\varepsilon v\big)-x\big(t_j^i;0,x_0\big)}{\varepsilon}\\
&=\varphi\big(t_j^i\big)+\dot{h}_{t_j^i}(0)\,f\big(t_j^i,y_i(t_j^i)\big).
\end{aligned}
$$
Together with assumption [J](2), When  h t j i ( ε ) > t j i , we have
φ t j i + = lim ε 0 x ε h t j i ( ε ) + ; ε , x 0 + ε v x h t j i ( ε ) ; 0 , x 0 ε = lim ε 0 1 ε [ x ε h t j i ( ε ) ; ε , x 0 + ε v + J i x ε h t j i ( ε ) ; ε , x 0 + ε v x h t j i ( ε ) ; t j i , x t j i ; 0 , x 0 + J i x t j i ; 0 , x 0 ] = lim ε 0 1 ε [ x ε h t j i ( ε ) ; ε , x 0 + ε v + J i x ε h t j i ( ε ) ; ε , x 0 + ε v x t j i ; 0 , x 0 J i x t j i ; 0 , x 0 t j i h t j i ( ε ) f s , x s ; 0 , x 0 d s ] = I + J i y i t j i φ t j i + h ˙ t j i ( 0 ) f t j i , y i t j i h ˙ t j i ( 0 ) f t j i , y i t j i = φ t j i + J i y i t j i φ t j i + h ˙ t j i ( 0 ) f t j i , y i t j i .
When  h t j i ( ε ) < t j i , we also have
φ t j i + = lim ε 0 x ε t j i ; ε , x 0 + ε v x t j i + ; 0 , x 0 ε = lim ε 0 1 ε [ x ε h t j i ( ε ) ; ε , x 0 + ε v + J i x ε h t j i ( ε ) ; ε , x 0 + ε v x t j i ; 0 , x 0 J i x t j i ; 0 , x 0 t j i h t j i ( ε ) f s , x ε s ; ε , x 0 + ε v d s ] = φ t j i + J i y i t j i φ t j i + h ˙ t j i ( 0 ) f t j i , y i t j i .
Consequently, we have
$$\varphi\big(t_j^{i+}\big)=\varphi\big(t_j^i\big)+J_i'\big(y_i(t_j^i)\big)\Big[\varphi\big(t_j^i\big)+\dot{h}_{t_j^i}(0)\,f\big(t_j^i,y_i(t_j^i)\big)\Big],\quad i\in\Lambda,\ j=1,2,\dots,k.$$
Therefore, when  t t j i , t j + 1 r  ( j = 1 , 2, ⋯,  k 1 ) or  t t k r , T , it follows from assumption [F](3) and (10), (3), (4), (17), (22) and (24) that
φ ( t ) = lim θ 0 x θ ( t ; θ , x 0 + θ v ) x ( t ; 0 , x 0 ) θ = lim θ 0 x θ ( t ; h t j i ( θ ) , Υ r ( h t j i ( θ ) ) ) x ( t ; t j i , Υ r ( t j i ) ) θ = lim θ 0 Υ i ( h t j i ( θ ) ) Υ i ( t j i ) ) θ + lim θ 0 h t j i ( θ ) t 0 1 f x ( s , x ( s ; 0 , x 0 ) + ξ ( x θ ( s ; θ , x 0 + θ v ) x ( s ; 0 , x 0 ) ) ) x θ ( s ; θ , x 0 + θ v ) x ( s ; 0 , x 0 ) θ d ξ d s lim θ 0 1 θ t j i h t j i ( θ ) f ( s , x ( s ; 0 , x 0 ) ) d s = h ˙ t j i ( 0 ) f t j i , y i t j i + lim θ 0 h t j i ( θ ) t 0 1 f x ( s , x ( s ; 0 , x 0 ) + ξ ( x θ ( s ; θ , x 0 + θ v ) x ( s ; 0 , x 0 ) ) ) x θ ( s ; θ , x 0 + θ v ) x ( s ; 0 , x 0 ) θ d ξ d s + I + J i y i t j i φ t j i + h ˙ t j i ( 0 ) f t j i , y i t j i .
Thus, combining (23) and (25) with the above equality, we obtain
$$
\begin{cases}
\dot{\varphi}(t)=f_x\big(t,x(t;0,x_0)\big)\varphi(t), & t\in(0,T]\ \text{and}\ t\neq t_j^i,\ i\in\Lambda,\ j=1,2,\dots,k,\\
\varphi(0)=v-f(0,x_0), &\\
\varphi\big(t_j^{i+}\big)=\varphi\big(t_j^i\big)+J_i'\big(y_i(t_j^i)\big)\Big[\varphi\big(t_j^i\big)+\dot{h}_{t_j^i}(0)\,f\big(t_j^i,y_i(t_j^i)\big)\Big], & j=1,2,\dots,k.
\end{cases}
$$
This completes the proof of Theorem 3.

6. Periodicity of an Autonomous Impulsive System

As an application, in this section, we discuss the periodicity of the solution of the following impulsive differential equation
$$
\begin{cases}
\dot{x}(t)=g\big(x(t)\big), & x(t)\neq y_1,\ t\ge 0,\\
x(t^+)=y_2, & x(t)=y_1,\ t\ge 0,\\
x(0)=x_0,
\end{cases}
\tag{26}
$$
where $y_1,y_2\in\mathbb{R}^n$ and $y_1\neq y_2$. We introduce the function
$$G(t;s,z_s)=\big\langle z(t;s,z_s)-y_1,\ z_s-y_1\big\rangle \quad\text{for any } t\ge s\ge 0.$$
Here,
$$z(t;s,z_s)=z_s+\int_s^t g\big(z(\tau;s,z_s)\big)\,d\tau \quad\text{for any } t\ge s\ge 0.$$
For the function $G(\cdot\,;0,x_0)$, it is clear that either
$$G(t;0,x_0)=0 \quad\text{has no solution on } \mathbb{R}^+\tag{27}$$
or
$$t_1 \ \text{is the minimum solution of } G(t;0,x_0)=0 \ \text{on } \mathbb{R}^+.\tag{28}$$
Similarly, it is obvious that either
$$G(t;t_1,y_2)=0 \quad\text{has no solution on } [t_1,+\infty)\tag{29}$$
or
$$t_2 \ \text{is the minimum solution of } G(t;t_1,y_2)=0 \ \text{on } [t_1,+\infty).\tag{30}$$
Let $PC_{y_1y_2}\big(\mathbb{R}^+,\mathbb{R}^n\big)=\{\,x:[0,+\infty)\to\mathbb{R}^n\mid x$ is continuous at $t$ when $x(t)\neq y_1$; $x$ is left-continuous at $t$ and the right limit $x(t^+)$ exists when $x(t)=y_1\,\}$. We now state the following main result for the autonomous impulsive system (26).
Theorem 4.
Suppose $g:\mathbb{R}^n\to\mathbb{R}^n$ is locally Lipschitz continuous and there exists a constant $\tilde{k}>0$ such that
$$|g(x)|\le\tilde{k}\,(1+|x|) \quad\text{for any } x\in\mathbb{R}^n.$$
(1) If (27) holds, then (26) has a unique solution $x\in C\big(\mathbb{R}^+,\mathbb{R}^n\big)$.
(2) If (28) and (29) hold, then the solution of (26) has a unique irregular point $t_1$.
(3) If (28) and (30) hold, then the solution of (26) is a periodic function on $[t_1,+\infty)$.
Proof. 
Using Theorem 1, one directly checks that the autonomous impulsive system (26) has a unique solution $x\in PC_{y_1y_2}\big(\mathbb{R}^+,\mathbb{R}^n\big)$. Further, there are only three possibilities for the solution: Case (i), $x$ has no irregular point on $\mathbb{R}^+$; Case (ii), $x$ has a unique irregular point on $\mathbb{R}^+$; and Case (iii), $x$ has at least two irregular points on $\mathbb{R}^+$.
In Case (i), it follows from part (2) of Theorem 1 that $x$ has no irregular point on $\mathbb{R}^+$ if and only if (27) holds. This means that (26) has a unique solution $x\in C\big(\mathbb{R}^+,\mathbb{R}^n\big)$. Similarly, in Case (ii), together with (28) and (29), we infer that $x$ has exactly one irregular point $t_1$.
In Case (iii), let $t_1$ and $t_2$ denote the two smallest irregular points of the solution $x$ on $\mathbb{R}^+$ and set $T=t_2-t_1$. We claim that
$$x(t+T)=x(t) \quad\text{for any } t\in[t_1,+\infty).\tag{31}$$
By the definitions of $t_1$ and $t_2$ (see (28) and (30)), the solution $x$ of (26) has no irregular point on $(t_1,t_2)$ and satisfies
$$x(t)=y_2+\int_{t_1}^t g\big(x(s)\big)\,ds \quad\text{for any } t\in(t_1,t_2], \qquad x(t_2)=x(t_1)=y_1.\tag{32}$$
When $t\in(t_1,t_2]$, we have $t+T\in(t_2,t_2+T]$ and
$$x(t+T)=y_2+\int_{t_1+T}^{t+T} g\big(x(s)\big)\,ds=y_2+\int_{t_1}^{t} g\big(x(s+T)\big)\,ds.\tag{33}$$
By the assumptions on $g$, there exists $\rho>0$ such that $|x(t)|,|x(T+t)|\le\rho$ for every $t\in(t_1,t_2]$. Furthermore, we infer from (32) and (33) that
$$\big|x(t+T)-x(t)\big|\le\int_{t_1}^t\big|g\big(x(s+T)\big)-g\big(x(s)\big)\big|\,ds\le L(\rho)\int_{t_1}^t\big|x(s+T)-x(s)\big|\,ds.$$
Together with Gronwall’s inequality, one can verify that
$$x(t+T)=x(t) \quad\text{for any } t\in(t_1,t_2].$$
Consequently, we infer that (31) holds. Thus, the solution $x$ of (26) is a periodic function on $[t_1,+\infty)$ with period $T$. The proof is completed. □
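The following sketch (our illustration with hypothetical scalar data $g$, $y_1$, $y_2$; forward Euler with crossing detection, so all quantities carry a discretization error) simulates system (26) in Case (iii) of Theorem 4 and checks that, after the first irregular point, the gaps between consecutive irregular points are constant and the trajectory repeats with period $T=t_2-t_1$.

```python
import numpy as np

g = lambda x: 1.0 + 0.2 * x          # assumed autonomous right-hand side g(x)
y1, y2, x0 = 1.0, 0.0, 0.5           # obstacle value, reset value, initial value
dt, t_end = 1e-4, 6.0

ts = np.arange(0.0, t_end, dt)
xs = np.empty_like(ts)
hits = []                            # irregular points t1 < t2 < ...
x = x0
for k, t in enumerate(ts):
    xs[k] = x
    x_new = x + g(x) * dt            # Euler step for x' = g(x)
    if x < y1 <= x_new:              # the trajectory reaches the obstacle y1
        hits.append(t)
        x_new = y2                   # impulse: x(t+) = y2
    x = x_new

T = hits[1] - hits[0]                # candidate period T = t2 - t1
print("gaps between consecutive irregular points:", np.round(np.diff(hits), 4))
i0, shift = int(hits[0] / dt), int(round(T / dt))
err = np.median(np.abs(xs[i0 + shift:] - xs[i0:-shift]))
print("period T ≈", round(T, 4), "; typical |x(t+T) - x(t)| ≈", err)
```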

7. Application

As an application, in this section, we discuss the variation in the solution relative to the control for the following controlled impulsive differential equation
$$
\begin{cases}
\dot{x}(t)=f\big(t,x(t)\big)+B(t)u(t), & \{x(t)\}\cap Y(t)=\emptyset,\ t\ge 0,\\
x(t^+)=J_i\big(x(t)\big)+x(t), & \{x(t)\}\cap Y(t)=\{y_i(t)\},\ t\ge 0,\\
x(0)=x_0,
\end{cases}
\tag{34}
$$
where the control function $u\in L^1_{loc}\big(\mathbb{R}^+,\mathbb{R}^m\big)$ and $B\in L^{\infty}_{loc}\big(\mathbb{R}^+,\mathbb{R}^{n\times m}\big)$.
Using the idea of Theorems 1 and 2, for any $T>0$ and $u\in L^1\big((0,T),\mathbb{R}^m\big)$, one can prove the following result.
Theorem 5.
Suppose assumptions [F](1)(2), [Y](1) and [J] hold. Then, system (34) has a unique $PC$-solution $x(\cdot\,;u)\equiv x(\cdot\,;u,0,x_0)\in PC_Y\big([0,T],\mathbb{R}^n\big)$ given by
$$x(t;u)=x_0+\int_0^t\Big[f\big(\tau,x(\tau;u)\big)+B(\tau)u(\tau)\Big]\,d\tau+\sum_{0\le t_j<t,\ x(t_j;u)=y_i(t_j)} J_i\big(x(t_j;u)\big).$$
Moreover, solution  x ( · ; u )  has a continuous dependence relative to the control u in the sense of Definition 2.
Moreover, for any fixed, sufficiently small $\theta>0$ and fixed $v\in L^1\big([0,T],\mathbb{R}^m\big)$, (34) has a unique approximate $PC$-solution $x_\theta(\cdot)\equiv x_\theta(\cdot\,;u+\theta v,0,x_0)$ which satisfies
$$x_\theta(t)=x_0+\int_0^t\Big[f\big(\tau,x_\theta(\tau)\big)+B(\tau)\big(u(\tau)+\theta v(\tau)\big)\Big]\,d\tau+\sum_{0\le t_j<t,\ x_\theta(t_j)\in B(y_i(t_j),\theta^2)} J_i\big(x_\theta(t_j)\big).$$
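For a concrete picture of Theorem 5, the sketch below (our illustration; the scalar data $f$, $B$, $u$, $y_1$ and $J_1$ are hypothetical, and the forward-Euler stepping with crossing detection is an assumed discretization) integrates the controlled system (34) and applies the jump whenever the state reaches the obstacle, mirroring the integral representation above.

```python
import numpy as np

f  = lambda t, x: -x                 # assumed drift f(t, x)
B  = lambda t: 1.0                   # assumed input matrix B(t) (scalar here)
u  = lambda t: np.cos(t)             # assumed control u(t)
y1 = lambda t: 0.5                   # assumed obstacle curve y_1(t)
J1 = lambda x: -x                    # assumed jump J_1(x), i.e. reset to 0
x0, dt, t_end = 0.0, 1e-3, 10.0

ts = np.arange(0.0, t_end, dt)
xs = np.empty_like(ts)
x, n_impulses = x0, 0
for k, t in enumerate(ts):
    xs[k] = x
    x_new = x + (f(t, x) + B(t) * u(t)) * dt     # Euler step for x' = f + B u
    if x < y1(t) <= x_new:                        # state reaches the obstacle
        x_new = x_new + J1(x_new)                 # impulse x(t+) = x(t) + J_1(x(t))
        n_impulses += 1
    x = x_new
print("number of impulses:", n_impulses, "; final state:", round(xs[-1], 4))
```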
To discuss the variation in the solution relative to the control, we introduce the following definitions.
Definition 4.
The $PC$-solution $x(\cdot\,;u,0,x_0)$ of (34) is said to be Gâteaux differentiable relative to the control $u$ if the Gâteaux derivative $\psi(\cdot)$ of $x(t;u)$ exists at $u$ for all $t\in[0,T]$ with $x(t;u,0,x_0)\neq y_i(t)$; otherwise,
$$\psi(t)=\lim_{s\to t}\psi(s),$$
where
$$\psi(t)=\lim_{\varepsilon\to 0}\frac{x_\varepsilon\big(t;u+\varepsilon v,0,x_0\big)-x\big(t;u,0,x_0\big)}{\varepsilon} \quad\text{when } x(t;u,0,x_0)\neq y_i(t).$$
Theorem 6.
Suppose assumptions [F], [Y] and [J] hold and $u\in C\big([0,T],\mathbb{R}^m\big)$, $B\in C\big([0,T],\mathbb{R}^{n\times m}\big)$. Then the $PC$-solution $x(\cdot)=x(\cdot\,;u,0,x_0)$ of (34) is Gâteaux differentiable relative to the control $u$ in the sense of Definition 4. Moreover, its Gâteaux derivative $\psi$ is a $PC$-solution of the following differential equation with impulses
$$
\begin{cases}
\dot{\psi}(t)=f_x\big(t,x(t)\big)\psi(t)+B(t)v(t), & t\in(0,T],\ x(t)\neq y_i(t),\ i\in\Lambda,\\
\psi(0)=0, &\\
\psi(t^+)=\psi(t)+J_i'\big(y_i(t)\big)\Big[\psi(t)+\dot{g}_t(0)\big(f\big(t,y_i(t)\big)+B(t)u(t)\big)\Big], & x(t)=y_i(t),\ i\in\Lambda.
\end{cases}
$$
Proof. 
There are only two possibilities: Case (I), $x(\cdot\,;u,0,x_0)$ has no irregular point on $[0,T]$; and Case (II), $x(\cdot\,;u,0,x_0)$ has at least one irregular point on $[0,T]$.
In Case (I), one can directly check that $x(\cdot\,;u,0,x_0)$ is Gâteaux differentiable, and its Gâteaux derivative $\psi$ is a weak solution of the following differential equation
$$\dot{\psi}(t)=f_x\big(t,x(t;u)\big)\psi(t)+B(t)v(t),\quad t\in(0,T],\qquad \psi(0)=0.$$
To discuss Case (II), we define the function $g_t$ by letting
$$g_t(\varepsilon) \ \text{denote the solution of the equation } G(\varepsilon,t)=0.$$
Here,
$$G(\varepsilon,t)=x_\varepsilon\big(t;u+\varepsilon v,0,x_0\big)-\tilde{y}(t,\varepsilon).$$
By Theorem 5, when $x(t;u,0,x_0)=y_i(t)$, there is a $\delta>0$ such that, for all $\varepsilon\in[0,\delta]$, $g_t:[0,\delta]\to O(t)$ is a function with $g_t(0)=t$, where $O(t)$ denotes some neighborhood of $t$. For convenience, let $\{t_j^i\mid 0<t_1^i<\dots<t_k^r<T\}$ denote the irregular point set of $x(\cdot\,;u,0,x_0)$ on $[0,T]$. If $y_i\in C^1\big([0,T],\mathbb{R}^n\big)$, it follows that there is a $\delta>0$ such that
$$G_t(\varepsilon,t)=f\big(t,x_\varepsilon(t;u+\varepsilon v,0,x_0)\big)+B(t)\big[u(t)+\varepsilon v(t)\big]-\tilde{y}_t(t,\varepsilon).$$
Further, when $f\big(t_j^i,y_i(t_j^i)\big)+B\big(t_j^i\big)u\big(t_j^i\big)\neq\dot{y}_i\big(t_j^i\big)$ ($j=1,2,\dots,k$, $i\in\Lambda$), without loss of generality, we assume
$$f^1\Big(g_{t_j^i}(\varepsilon),x_\varepsilon\big(g_{t_j^i}(\varepsilon);u+\varepsilon v,0,x_0\big)\Big)+B^1\big(g_{t_j^i}(\varepsilon)\big)u\big(g_{t_j^i}(\varepsilon)\big)-\dot{y}_i^1\big(g_{t_j^i}(\varepsilon)\big)\neq 0 \quad\text{in } \mathbb{R},\ i\in\Lambda,\ \varepsilon\in[0,\delta],\ j=1,2,\dots,k,$$
where $B^1$ denotes the first row vector of $B$. We introduce the function
$$\Psi_\varepsilon(t,s)=\exp\Big(\int_s^t f_x\big(\tau,x_\varepsilon(\tau;u+\varepsilon v,0,x_0)\big)\,d\tau\Big);$$
then
$$\Psi(t,s)=\lim_{\varepsilon\to 0}\Psi_\varepsilon(t,s)=\exp\Big(\int_s^t f_x\big(\tau,x(\tau;u,0,x_0)\big)\,d\tau\Big).$$
We let $\Psi_\varepsilon^1(t,s)$ and $\Psi^1(t,s)$ denote the first row vectors of $\Psi_\varepsilon(t,s)$ and $\Psi(t,s)$, respectively.
Now, we calculate the variation in the solution relative to the control in Case (II). For $t\in\big(0,t_1^i\big]$, similarly to Case (I), it is not difficult to check the following:
$$\dot{\psi}(t)=f_x\big(t,x(t;u,0,x_0)\big)\psi(t)+B(t)v(t),\quad t\in\big(0,t_1^i\big],\qquad \psi(0)=0.$$
When  t 0 , g t 1 i ( ε ) , it follows from assumption [F](3), (35) and (10) that
G ε ( ε , t ) = lim ξ 0 x ε + ξ ( t ; u + ( ε + ξ ) v , 0 , x 0 ) x ε ( t ; u + ε v , 0 , x 0 ) ξ + ε y ˜ i ( t , ε ) = lim ξ 0 0 t 0 1 f x ( s , x ε ( s ; u + ε v , 0 , x 0 ) + θ ( x ε + ξ ( s ; u + ( ε + ξ ) v , 0 , x 0 ) x ε ( s ; u + ε v , 0 , x 0 ) ) ) x ε + ξ ( s ; u + ( ε + ξ ) v , 0 , x 0 ) x ε ( s ; u + ε v , 0 , x 0 ) ξ d θ d s 0 t B ( s ) v ( s ) d s + ε y ˜ i ( t , ε ) .
It follows from (37) and the above that
G ε ( ε , t ) = 0 t Ψ ε ( t , s ) B ( s ) v ( s ) d s + ε y ˜ i ( t , ε ) .
Using the implicit function theorem, combined with (36), we have
g ˙ t 1 i ( ε ) = 0 g t 1 i ( ε ) Ψ ε 1 ( g t 1 i ( ε ) , s ) B ( s ) v ( s ) d s + ε y ˜ i 1 ( g t 1 i ( ε ) , ε ) f 1 g t 1 i ( ε ) , x ε g t 1 i ( ε ) ; u + ε v , 0 , x 0 + B 1 g t 1 i ( ε ) u g t 1 i ( ε ) y ˙ i 1 ( g t 1 i ( ε ) ) .
In the above equation, the vector product is the inner product operation. In the following operations, the vector product is also the inner product operation. Together with Theorem 5, we obtain
g ˙ t 1 i ( 0 ) = 0 t 1 i Ψ 1 t 1 i , s B ( s ) v ( s ) d s f 1 t 1 i , x t 1 i ; u , 0 , x 0 + B 1 t 1 i u t 1 i y ˙ i 1 ( t 1 i ) .
Further,
lim ε 0 x ε g t 1 i ( ε ) ; u + ε v , 0 , x 0 x t 1 i ; u , 0 , x 0 ε = lim ε 0 x ε g t 1 i ( ε ) ; u + ε v , 0 , x 0 x ε t 1 i ; u + ε v , 0 , x 0 ε + lim ε 0 x ε t 1 i ; u + ε v , 0 , x 0 x t 1 i ; u , 0 , x 0 ε = ψ t 1 i + g ˙ t 1 i ( 0 ) f t 1 i , y i t 1 i + B t 1 i u t 1 i .
Together with assumption [J](2), it follows from (40) and (41) that when  g t 1 i ( ε ) > t 1 i ,
ψ t 1 i = lim ε 0 x ε g t 1 i ( ε ) + ; u + ε v , 0 , x 0 x g t 1 i ( ε ) ; u , 0 , x 0 ε = lim ε 0 1 ε [ x ε g t 1 i ( ε ) ; u + ε v , 0 , x 0 + J i x ε g t 1 i ( ε ) ; u + ε v , 0 , x 0 x g t 1 i ( ε ) ; u , t 1 i , x t 1 i ; u , 0 , x 0 + J i x t 1 i ; u , 0 , x 0 ] = lim ε 0 1 ε [ x ε g t 1 i ( ε ) ; u + ε v , 0 , x 0 + J i x ε g t 1 i ( ε ) ; u + ε v , 0 , x 0 x t 1 i ; u , 0 , x 0 J i x t 1 i ; u , 0 , x 0 t 1 i g t 1 i ( ε ) f s , x s ; u , 0 , x 0 + B ( s ) u ( s ) d s ] = I + J i y i t 1 i ψ t 1 i + g ˙ t 1 i ( 0 ) f t 1 i , y i t 1 i + B t 1 i u t 1 i g ˙ t 1 i ( 0 ) f t 1 i , y i t 1 i + B t 1 i u t 1 i = ψ t 1 i + J i y i t j i ψ t j i + g ˙ t 1 i ( 0 ) f t 1 i , y i t 1 i + B t 1 i u t 1 i ,
and when  g t 1 i ( ε ) < t 1 i , we also have
ψ t 1 i + = lim ε 0 x ε t 1 i ; u + ε v , 0 , x 0 x t 1 i + ; u , 0 , x 0 ε = lim ε 0 1 ε [ x ε g t 1 i ( ε ) ; u + ε v , 0 , x 0 + J i x ε g t 1 i ( ε ) ; u + ε v , 0 , x 0 x t 1 i ; u , 0 , x 0 J i x t 1 i ; u , 0 , x 0 ] lim ε 0 1 ε t 1 i g t 1 i ( ε ) f s , x ε s ; u + ε v , 0 , x 0 + B ( s ) ( u ( s ) + ε v ( s ) ) d s = ψ t 1 i + J i y i t 1 i ψ t 1 i + g ˙ t 1 i ( 0 ) f t 1 i , y i t 1 i + B t 1 i u t 1 i .
Consequently, we have
ψ t 1 i + = ψ t 1 i + J i y i t 1 i ψ t 1 i + g ˙ t 1 i ( 0 ) f f t 1 i , y i t 1 i + B t 1 i u t 1 i , i Λ .
Generally speaking, we first note that
lim ε 0 x ε g t j 1 r ( ε ) ; u + ε v , 0 , x 0 x t j 1 r ; u , 0 , x 0 ε = lim ε 0 x ε g t j 1 r ( ε ) ; u + ε v , 0 , x 0 x ε t j 1 r ; u + ε v , 0 , x 0 ε + lim ε 0 x ε t j 1 r ; u + ε v , 0 , x 0 x t j 1 r ; u , 0 , x 0 ε = φ t j 1 r + g ˙ t j 1 r ( 0 ) f t j 1 r , y r t j 1 r + B t j 1 r u t j 1 r .
Further, when  t g t j 1 r ( ε ) , g t j i ( ε ) , one can infer from assumption [F](3), (35), (10) and (43) that
G ε ( ε , t ) = lim ξ 0 x ε + ξ ( t ; u + ( ε + ξ ) v , 0 , x 0 ) x ε ( t ; u + ε v , 0 , x 0 ) ξ + ε y ˜ i ( t , ε ) = lim ξ 0 1 ξ [ x ε + ξ t ; u + ( ε + ξ ) v , g t j 1 r ( ε + ξ ) , Υ r g t j 1 r ( ε + ξ ) x ε t ; u + ε v , g t j 1 r ( ε ) , Υ r g t j 1 r ( ε ) ] + ε y ˜ i ( t , ε ) = lim ξ 0 g t j 1 r ( ε + ξ ) t 0 1 f x ( s , x ε ( s ; u + ε v , 0 , x 0 ) + θ ( x ε + ξ ( s ; u + ( ε + ξ ) v , 0 , x 0 ) x ε ( s ; u + ε v , 0 , x 0 ) ) ) x ε + ξ ( s ; u + ( ε + ξ ) v , 0 , x 0 ) x ε ( s ; u + ε v , 0 , x 0 ) ξ d θ d s + lim ξ 0 Υ r g t j 1 r ( ε + ξ ) , ε + ξ Υ r g t j 1 r ( ε ) , ε ξ + lim ξ 0 g t j 1 r ( ε + ξ ) t B ( s ) v ( s ) d s lim ξ 0 g t j 1 r ( ε ) g t j 1 r ( ε + ξ ) f s , x ( s ; u + ε v , 0 , x 0 ) + B ( s ) ( u ( s ) + ε v ( s ) ) d s ξ + ε y ˜ r ( t , ε ) = lim ξ 0 g t j 1 r ( ε + ξ ) t 0 1 f x ( s , x ε ( s ; u + ε v , 0 , x 0 ) + θ ( x ε + ξ ( s ; u + ( ε + ξ ) v , 0 , x 0 ) x ε ( s ; u + ε v , 0 , x 0 ) ) ) x ε + ξ ( s ; u + ( ε + ξ ) v , 0 , x 0 ) x ε ( s ; u + ε v , 0 , x 0 ) ξ d θ d s + g t j 1 r ( ε ) t B ( s ) v ( s ) d s + ψ g t j 1 r ( ε ) + ε y ˜ r ( t , ε ) + J r y ˜ r g t j 1 r ( ε ) , ε [ ψ g t j 1 r ( ε ) + g ˙ t j 1 r ( ε ) f g t j 1 r ( ε ) , y ˜ r g t j 1 r ( ε ) , ε + B g t j 1 r ( ε ) u g t j 1 r ( ε ) ] .
Moreover, one can see from (37) and the above equality that
G ε ( ε , t ) = ε y ˜ r ( t , ε ) + Ψ ε t , g t j 1 r ( ε ) J r y ˜ r g t j 1 r ( ε ) , ε [ ψ g t j 1 r ( ε ) + g ˙ t j 1 r ( ε ) f g t j 1 r ( ε ) , y ˜ r g t j 1 r ( ε ) , ε + B g t j 1 r ( ε ) u g t j 1 r ( ε ) ] + Ψ ε t , g t j 1 r ( ε ) ψ g t j 1 r ( ε ) + g t j 1 r ( ε ) t Ψ ε t , s B ( s ) v ( s ) d s .
Together with (36), by the implicit function theorem, we have
g ˙ t j i ( ε ) = Ψ ε 1 g t j i ( ε ) , g t j 1 r ( ε ) J r y ˜ r g t j 1 r ( ε ) , ε f 1 g t j i ( ε ) , x ε g t j i ( ε ) ; u + ε v , 0 , x 0 + B 1 g t j i ( ε ) u g t j i ( ε ) y ˙ i 1 ( g t j i ( ε ) ) · ψ g t j 1 r ( ε ) + g ˙ t j 1 r ( ε ) f g t j 1 r ( ε ) , y ˜ r g t j 1 r ( ε ) , ε + B g t j 1 r ( ε ) u g t j 1 r ( ε ) ε y ˜ r 1 g t j i ( ε ) , ε + Ψ ε 1 g t j i ( ε ) , g t j 1 r ( ε ) ψ g t j 1 r ( ε ) + g t j 1 r ( ε ) g t j i ( ε ) Ψ ε 1 t , s B ( s ) v ( s ) d s f 1 g t j i ( ε ) , x ε g t j i ( ε ) ; u + ε v , 0 , x 0 + B 1 g t j i ( ε ) u g t j i ( ε ) y ˙ i 1 ( g t j i ( ε ) ) .
Further, it follows from the above expression, (38) and Theorem 5 that
g ˙ t j i ( 0 ) = Ψ 1 t j i , t j 1 r J r y r t j 1 r f 1 t j i , x t j i ; u , 0 , x 0 + B 1 t j i u t j i y ˙ i 1 t j i · ψ t j 1 r + g ˙ t j 1 r ( 0 ) f t j 1 r , y r t j 1 r + B t j 1 r u t j 1 r Ψ 1 t j i , t j 1 r ψ t j 1 r + t j 1 r t j i Ψ 1 t j i , s B ( s ) v ( s ) d s f 1 t j i , x t j i ; u , 0 , x 0 + B 1 t j i u t j i y ˙ i 1 t j i , i Λ , j = 1 , 2 , , k .
Similar to (43), we can obtain
lim ε 0 x ε g t j i ( ε ) ; u + ε v , 0 , x 0 x t j i ; u , 0 , x 0 ε = ψ t j i + g ˙ t j i ( 0 ) f t j i , y i t j i + B t j i u t j i , i Λ , j = 1 , 2 , , k .
Together with assumption [J](2), (45) and (44), it follows that when  g t j i ( ε ) > t j i ,
ψ t j i + = lim ε 0 x ε g t j i ( ε ) + ; u + ε v , , x 0 x g t j i ( ε ) ; u , 0 , x 0 ε = lim ε 0 1 ε [ x ε g t j i ( ε ) ; u + ε v , 0 , x 0 + J i x ε g t j i ( ε ) ; u + ε v , 0 , x 0 x g t j i ( ε ) ; u , t j i , x t j i ; u , 0 , x 0 + J i x t j i ; u , 0 , x 0 ] = lim ε 0 1 ε [ x ε g t j i ( ε ) ; u + ε v , 0 , x 0 + J i x ε g t j i ( ε ) ; u + ε v , 0 , x 0 x t j i ; u , 0 , x 0 J i x t j i ; u , 0 , x 0 t j i g t j i ( ε ) [ f s , x s ; u , 0 , x 0 + B ( s ) u ( s ) ] d s ] = I + J i y i t j i ψ t j i + g ˙ t j i ( 0 ) f t j i , y i t j i + B t j i u t j i g ˙ t j i ( 0 ) f t j i , y i t j i + B t j i u t j i = ψ t j i + J i y i t j i ψ t j i + g ˙ t j i ( 0 ) f t j i , y i t j i + B t j i u t j i ,
and when  g t j i ( ε ) < t j i ,
ψ t j i + = lim ε 0 x ε t j i ; u + ε v , 0 , x 0 x t j i + ; u , 0 , x 0 ε = lim ε 0 1 ε [ x ε g t j i ( ε ) ; u + ε v , 0 , x 0 + J i x ε g t j i ( ε ) ; u + ε v , 0 , x 0 x t j i ; u , 0 , x 0 J i x t j i ; u , 0 , x 0 t j i g t j i ( ε ) f s , x ε s ; u + ε v , 0 , x 0 + B ( s ) ( u ( s ) + ε v ( s ) ) d s ] = ψ t j i + J i y i t j i ψ t j i + g ˙ t j i ( 0 ) f t j i , y i t j i + B t j i u t j i .
Consequently, we have
$$\psi\big(t_j^{i+}\big)=\psi\big(t_j^i\big)+J_i'\big(y_i(t_j^i)\big)\Big[\psi\big(t_j^i\big)+\dot{g}_{t_j^i}(0)\Big(f\big(t_j^i,y_i(t_j^i)\big)+B\big(t_j^i\big)u\big(t_j^i\big)\Big)\Big]$$
for  i Λ j = 1 , 2 , , k . Therefore, when  t t j i , t j + 1 r  ( j = 1 , 2, ⋯,  k 1 ) or  t t k r , T , it follows from assumption [F](3), (10), (35), (3), (17), (44) and (45) that
ψ ( t ) = lim θ 0 x θ ( t ; u + θ v , 0 , x 0 ) x ( t ; u , 0 , x 0 ) θ = lim θ 0 x θ ( t ; u + θ v , g t j i ( θ ) , Υ i ( g t j i ( θ ) ) ) x ( t ; u , t j i , Υ i ( t j i ) ) θ = lim θ 0 Υ i ( g t j i ( θ ) ) Υ i ( t j i ) ) θ + lim θ 0 g t j i ( θ ) t 0 1 f x ( s , x ( s ; u , 0 , x 0 ) + ξ ( x θ ( s ; u + θ v , 0 , x 0 ) x ( s ; u , 0 , x 0 ) ) ) x θ ( s ; u + θ v , 0 , x 0 ) x ( s ; u , 0 , x 0 ) θ d ξ d s + lim θ 0 g t j i ( θ ) t B ( s ) v ( s ) d s lim θ 0 1 θ t j i g t j i ( θ ) [ f ( s , x ( s ; u , 0 , x 0 ) ) + B ( s ) u ( s ) ] d s = ψ t j i + J i y i t j i ψ t j i + g ˙ t j i ( 0 ) f t j i , y i t j i + B t j i u t j i + t j i t B ( s ) v ( s ) d s + lim θ 0 g t j i ( θ ) t 0 1 f x ( s , x ( s ; u , 0 , x 0 ) + ξ ( x θ ( s ; u + θ v , 0 , x 0 ) x ( s ; u , 0 , x 0 ) ) ) x θ ( s ; u + θ v , 0 , x 0 ) x ( s ; u , 0 , x 0 ) θ d ξ d s .
Thus, it follows from (39), (42) and (46) that
$$
\begin{cases}
\dot{\psi}(t)=f_x\big(t,x(t;u,0,x_0)\big)\psi(t)+B(t)v(t), & t\in(0,T]\ \text{and}\ t\neq t_j^i,\ i\in\Lambda,\ j=1,2,\dots,k,\\
\psi(0)=0, &\\
\psi\big(t_j^{i+}\big)=\psi\big(t_j^i\big)+J_i'\big(y_i(t_j^i)\big)\Big[\psi\big(t_j^i\big)+\dot{g}_{t_j^i}(0)\Big(f\big(t_j^i,y_i(t_j^i)\big)+B\big(t_j^i\big)u\big(t_j^i\big)\Big)\Big], & j=1,2,\dots,k.
\end{cases}
$$
This completes the proof of Theorem 6. □

8. Conclusions

In this paper, we studied a class of widely applicable impulsive differential systems and established its qualitative theory under weaker conditions, including the existence, uniqueness, and periodicity of the solution, as well as the continuous dependence and differentiability of the solution on the initial value. For the pulse phenomena of the solution, sufficient and necessary conditions are given. It is very interesting that the pulses may destroy intrinsic properties of the system, such as the existence, the continuous dependence, and the differentiability of the solution. Moreover, these results also lay a theoretical foundation for optimal control problems governed by differential systems with impulses at variable times and for the applications of such systems.

Author Contributions

Conceptualization, H.X., Y.P. and P.Z.; Methodology, Y.P. and P.Z.; Writing—original draft, H.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 12061021 and No. 11161009).

Data Availability Statement

No new data were created or analyzed in this study.

Acknowledgments

The authors would like to thank the anonymous reviewers and the editor of this journal for their valuable time and their careful comments and suggestions because of which the quality of this paper has been improved.

Conflicts of Interest

The authors declare there are no conflicts of interest.

References

  1. Bensoussan, A.; Giuseppe, D.P.; Michel, C.D.; Sanjoy, K.M. Representation and Control of Infinite Dimensional Systems, 2nd ed.; Birkhäuser: Boston, MA, USA, 2007; p. 15. [Google Scholar]
  2. Bainov, D.; Simeonov, P. Impulsive Differential Equations: Periodic Solutions and Applications, 1st ed.; Longman Scientific and Technical: New York, NY, USA, 1993; pp. 39–58. [Google Scholar]
  3. Kobayashi, Y.; Nakano, H.; Saito, T. A simple chaotic circuit with impulsive switch depending on time and state. Nonlinear Dyn. 2006, 44, 73–79. [Google Scholar] [CrossRef]
  4. Touboul, J.; Brette, R. Spiking dynamics of Bidimensional integrate-and-fire neurons. SIAM J. Appl. Dyn. Syst. 2009, 8, 1462–1506. [Google Scholar] [CrossRef]
  5. Izhikevich, E.M. Dynamical Systems in Neuroscience, 1st ed.; MIT Press: Cambridge, MA, USA, 2007. [Google Scholar]
  6. Huang, M.; Li, J.; Song, X.; Guo, H. Modeling impulsive injections of insulin: Towards artificial pancreas. SIAM J. Appl. Math. 2012, 72, 1524–1548. [Google Scholar] [CrossRef]
  7. Zhang, Q.; Tang, B.; Cheng, T.; Tang, S. Bifurcation analysis of a generalized impulsive Kolmogorov model with applications to pest and disease control. SIAM J. Appl. Math. 2020, 80, 1796–1819. [Google Scholar] [CrossRef]
  8. Catllá, A.J.; Schaeffer, D.G.; Witelski, T.P.; Monson, E.E. On spiking models for synaptic activity and impulsive differential equations. SIAM Rev. 2008, 50, 553–569. [Google Scholar]
  9. Akhmet, M.; Yilmaz, E. impulsive differential equations. In Neural Networks with Discontinuous/Impact Activations; Akhmet, M., Yilmaz, E., Eds.; Springer: New York, NY, USA, 2014; pp. 67–83. [Google Scholar]
  10. Lions, P.L.; Perthame, B. Quasi-variational inequalities and ergodic impulse control. SIAM J. Control Optim. 1986, 24, 604–615. [Google Scholar] [CrossRef]
  11. Wang, Y.; Lu, J. Some recent results of analysis and control for impulsive systems. Commun. Nonlinear Sci. Numer. Simulat. 2019, 80, 104862. [Google Scholar] [CrossRef]
  12. Krylov, N.M.; Bogolyubov, N.N. Introduction to Nonlinear Mechanics; Academiya Nauk Ukrin: Kiev, Ukraine, 1937. (In Russian) [Google Scholar]
  13. Samoilenko, A.M.; Perestyuk, N.A. The method of averaging in systems with an impulsive action. Ukr. Math. J. 1974, 26, 342–347. [Google Scholar] [CrossRef]
  14. Ahmed, N.U. Existence of optimal controls for a general class of impulsive systems on Banach spaces. SIAM J. Control Optim. 2003, 42, 669–685. [Google Scholar] [CrossRef]
  15. Peng, Y.; Xiang, X. A class of nonlinear impulsive differential equation and optimal controls on time scales. Discrete Cont. Dyn.-B 2011, 16, 1137–1155. [Google Scholar] [CrossRef]
  16. Nain, A.K.; Vats, R.K.; Kumar, A. Caputo-Hadamard fractional differential equation with impulsive boundary conditions. J. Math. Model. 2021, 9, 93–106. [Google Scholar]
  17. Malik, M.; Kumar, A. Existence and controllability results to second order neutral differential equation with non-instantaneous impulses. J. Control Decis. 2020, 7, 286–308. [Google Scholar] [CrossRef]
  18. Samoilenko, A.M.; Perestyuk, N.A. On stability of solutions of impulsive systems. Differ. Uravn. 1981, 17, 1995–2002. [Google Scholar]
  19. Bainov, D.D.; Dishliev, A.B. Sufficient conditions for absence of ”beating” in systems of differential equations with impulses. Appl. Anal. 1984, 18, 67–73. [Google Scholar]
  20. Hu, S.; Lakshmikantham, V. Periodic boundary value problem for second order impulsive differential systems. Nonlinear Anal. TMA 1989, 13, 75–85. [Google Scholar] [CrossRef]
  21. Lakshmikantham, V.; Bainov, D.D.; Simeonov, P.S. Theory of Impulsive Differential Equations, 1st ed.; World Scientific: Hong Kong, China, 1989. [Google Scholar]
  22. Samoilenko, A.M. Application of the method of averaging for the investigation of vibrations excited by instantaneous impulses in self-vibrating systems of the second order with a small parameter. Ukr. Math. J. 1961, 13, 103–108. [Google Scholar]
  23. Bainov, D.; Simeonov, P. Impulsive Differential Equations, 1st ed.; World Scientific: Singapore, 1995. [Google Scholar]
  24. Benchohra, M.; Henderson, J.; Ntouyas, S. Impulsive Differential Equations and Inclusions, 1st ed.; Hindawi Publishing Corporation: New York, NY, USA, 2006. [Google Scholar]
  25. Peng, Y.; Wu, K.; Qin, S.; Kang, Y. Properties of solution of linear controlled systems with impulses at variable times. In Proceedings of the 36th Chinese Control Conference, Dalian, China, 26–28 July 2017. [Google Scholar]