
Conditions to Guarantee the Existence of the Solution to Stochastic Differential Equations of Neutral Type

Department of Mathematics, Changwon National University, Changwon 641-773, Korea
* Author to whom correspondence should be addressed.
Symmetry 2020, 12(10), 1613; https://doi.org/10.3390/sym12101613
Submission received: 28 August 2020 / Revised: 23 September 2020 / Accepted: 27 September 2020 / Published: 29 September 2020
(This article belongs to the Special Issue Ordinary and Partial Differential Equations: Theory and Applications)

Abstract

The main purpose of this study is to establish the existence and uniqueness of the solution of neutral stochastic differential equations under sufficient conditions. In place of the usual assumptions of the stochastic analysis theory of neutral stochastic differential equations, we impose a weakened Hölder condition and a weakened linear growth condition. Under these conditions we first show that they guarantee the existence and uniqueness of the solution; we then establish some exponential estimates for the solutions.

1. Introduction

In the study of natural science systems, we assume that the system under investigation is governed by the principle of cause and effect. A more realistic model, however, involves some of the past as well as the present values of the state, and may contain delayed derivatives of the state function as well. Such equations have historically been referred to as neutral stochastic functional differential equations, or neutral stochastic differential delay equations [1,2,3,4,5,6].
For this kind of stochastic differential equation it is not easy to obtain the solution, but such equations often arise from the study of two or more simple electrodynamic or oscillating systems with some interconnection. We cannot ignore the effect of time delay in such science systems. For example, when studying the collision problem in electrodynamics, Driver [7] considered the system of neutral type:
$$\dot z(t)=f_1\big(z(t),z(\delta(t))\big)+f_2\big(z(t),z(\delta(t))\big)\,\dot z(\delta(t)),$$
where $\delta(t)\le t$. Generally, a neutral functional differential equation has the form
$$\frac{d}{dt}\big[z(t)-D(z_t)\big]=f(z_t,t).$$
Taking into account stochastic perturbations, we are led to a neutral stochastic functional differential equation
$$d\big[z(t)-D(z_t)\big]=f(z_t,t)\,dt+g(z_t,t)\,dB(t).$$
Neutral stochastic functional differential equations (NSDEs) have been used to model problems in several areas of science and engineering. For instance, in 2007, Mao [5] published a monograph on stochastic differential equations and their applications. Subsequently, the study of existence and uniqueness theorems for stochastic differential equations (SDEs) and NSDEs produced several new uniqueness theorems for SDEs and NSDEs under special conditions; see [1,2,6,8,9,10,11,12,13,14,15] and the references therein for details.
Particular examples of such studies include the following. In 2010, Li and Fu [4] studied the stability of solutions of stochastic functional differential equations and applied the results to recurrent neural networks. In 2019, Bae et al. [8] proved an existence and uniqueness theorem for the solution of stochastic differential equations. Kim [1,2] considered, under different conditions, the solution of the following neutral stochastic functional differential equation:
$$d\big[z(t)-G(z_t,t)\big]=f(z_t,t)\,dt+g(z_t,t)\,dB(t),$$
where $z_t=\{z(t+\theta):-\infty<\theta\le0\}$.
Motivated by [1,5,8,11,12], in this paper we investigate conditions that guarantee the existence and uniqueness of the solution of NSDEs in a phase space $M^2([t_0,T];\mathbb{R}^d)$. Throughout, we take $t_0\in\mathbb{R}$ as the initial time, and we prove our main results as follows: first, under the weakened Hölder condition and the weakened linear growth condition, we estimate bounds on the solution of NSDEs; next, we prove the existence and uniqueness theorem for the solution of NSDEs; finally, we derive an estimate of the error between the Picard iterations $X^n(t)$ and the unique solution $X(t)$ of the NSDEs.

2. Preliminary and Basic Lemmas

The symbol $|\cdot|$ denotes the Euclidean norm in $\mathbb{R}^n$. If $X$ is a random variable integrable with respect to the measure $P$, then the integral $EX$ is called the expectation of $X$. The transpose of a vector or matrix $A$ is denoted by $A^T$; if $A$ is a matrix, its trace norm is denoted by $|A|=\sqrt{\operatorname{trace}(A^TA)}$. $BC((-\infty,0];\mathbb{R}^d)$ denotes the family of bounded continuous $\mathbb{R}^d$-valued functions $\varphi$ defined on $(-\infty,0]$ with norm $\|\varphi\|=\sup_{-\infty<\theta\le0}|\varphi(\theta)|$. $M^2((-\infty,T];\mathbb{R}^d)$ denotes the family of all $\mathbb{R}^d$-valued measurable $\mathcal{F}_t$-adapted processes $\psi(t)=\psi(t,w)$, $t\in(-\infty,T]$, such that $E\int_{-\infty}^T|\psi(t)|^2\,dt<\infty$.
Throughout this paper, unless otherwise specified, let $t_0$ be a positive constant and $(\Omega,\mathcal{F},P)$ a complete probability space with a filtration $\{\mathcal{F}_t\}_{t\ge t_0}$ satisfying the usual conditions (i.e., it is right continuous and $\mathcal{F}_{t_0}$ contains all $P$-null sets).
An $m$-dimensional Brownian motion defined on the complete probability space is denoted by $B(t)$, that is, $B(t)=(B_1(t),B_2(t),\ldots,B_m(t))^T$.
For $0\le t_0\le T<\infty$, we define two Borel measurable mappings $f:BC((-\infty,0];\mathbb{R}^d)\times[t_0,T]\to\mathbb{R}^d$ and $g:BC((-\infty,0];\mathbb{R}^d)\times[t_0,T]\to\mathbb{R}^{d\times m}$, and a continuous mapping $D:BC((-\infty,0];\mathbb{R}^d)\to\mathbb{R}^d$.
With the above preparations, consider the following d-dimensional neutral SFDEs:
$$d\big[X(t)-D(X_t,t)\big]=f(X_t,t)\,dt+g(X_t,t)\,dB(t),\quad t_0\le t\le T,\tag{1}$$
where $X_t=\{X(t+\theta):-\infty<\theta\le0\}$ can be considered a $BC((-\infty,0];\mathbb{R}^d)$-valued stochastic process. The initial value of the system (1),
$$X_{t_0}=\xi=\{\xi(\theta):-\infty<\theta\le0\},\tag{2}$$
is an $\mathcal{F}_{t_0}$-measurable, $BC((-\infty,0];\mathbb{R}^d)$-valued random variable such that $\xi\in M^2((-\infty,0];\mathbb{R}^d)$.
To be more precise, we give the definition of the solution to Equation (1) with initial data Equation (2).
Definition 1
([6]). The $\mathbb{R}^d$-valued stochastic process $X(t)$, defined on $-\infty<t\le T$, is called the solution of (1) with initial data (2) if $X(t)$ has the following properties:
(i) 
$X(t)$ is continuous and $\{X(t)\}_{t_0\le t\le T}$ is $\mathcal{F}_t$-adapted;
(ii) 
$\{f(X_t,t)\}\in L^1([t_0,T];\mathbb{R}^d)$ and $\{g(X_t,t)\}\in L^2([t_0,T];\mathbb{R}^{d\times m})$;
(iii) 
$X_{t_0}=\xi$ and, for each $t_0\le t\le T$,
$$X(t)=\xi(0)+D(X_t,t)-D(\xi,t_0)+\int_{t_0}^t f(X_s,s)\,ds+\int_{t_0}^t g(X_s,s)\,dB(s)\quad a.s.$$
$X(t)$ is called a unique solution if any other solution $\bar X(t)$ is indistinguishable from it, that is,
$$P\{X(t)=\bar X(t)\ \text{for any}\ -\infty<t\le T\}=1.$$
The following lemmas, known by special names, concern the integrals that appeared in [1,5,16] and play an important role in the next section.
Lemma 1
((Stachurska’s inequality) ([16])). Let $u(t)$ and $k(t)$ be nonnegative continuous functions for $t\ge\alpha$, and let
$$u(t)\le a(t)+b(t)\int_\alpha^t k(s)\,u^p(s)\,ds,\quad t\in J=[\alpha,\beta),$$
where $a/b$ is a nondecreasing function and $0<p<1$. Then,
$$u(t)\le a(t)\Big[1-(p-1)\big(a(t)\,b(t)\big)^{p-1}\int_\alpha^t k(s)\,b^p(s)\,ds\Big]^{\frac{1}{p-1}}.$$
Lemma 2
([1]). Let $u(t)$ and $a(t)$ be continuous functions on $[0,T]$, and let $c\ge1$ and $0<p\le1$ be constants. If
$$u(t)\le c+\int_{t_0}^t a(s)\,u^p(s)\,ds$$
for $t\in I$, then
$$u(t)\le c\exp\Big(\int_{t_0}^t a(s)\,ds\Big)$$
for $t\in I$.
Lemma 3
((Hölder’s inequality) ([5,16])). If $\frac1p+\frac1q=1$ for some $p,q>1$, $f\in L^p$, and $g\in L^q$, then $fg\in L^1$ and
$$\int_a^b|fg|\,dx\le\Big(\int_a^b|f|^p\,dx\Big)^{1/p}\Big(\int_a^b|g|^q\,dx\Big)^{1/q}.$$
Lemma 4
((Bihari’s inequality) ([5,16])). Let $x(t)$ and $y(t)$ be non-negative continuous functions defined on $\mathbb{R}_+$, and let $z(u)$ be a non-decreasing continuous function on $\mathbb{R}_+$ with $z(u)>0$ on $(0,\infty)$. If
$$x(t)\le a+\int_0^t y(s)\,z(x(s))\,ds$$
for $t\in\mathbb{R}_+$, where $a\ge0$ is a constant, then for $0\le t\le t_1$,
$$x(t)\le L^{-1}\Big(L(a)+\int_0^t y(s)\,ds\Big),$$
where $L(r)=\int_{r_0}^r\frac{ds}{z(s)}$ for $r>0$ and some $r_0>0$, $L^{-1}$ is the inverse function of $L$, and $t_1\in\mathbb{R}_+$ is chosen so that $L(a)+\int_0^t y(s)\,ds\in\operatorname{Dom}(L^{-1})$ for all $t\in[0,t_1]$.
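As an illustration (not part of the paper), the Gronwall special case of Bihari’s inequality, $z(u)=u$, can be checked numerically: there $L(r)=\log(r/r_0)$, so $L^{-1}(v)=r_0e^v$ and the bound reduces to $x(t)\le a\exp\big(\int_0^t y(s)\,ds\big)$. The sketch below builds the worst case, where the integral relation holds with equality, for an arbitrarily chosen weight $y(s)$, and verifies the bound on a grid:

```python
import math

a, T, N = 1.0, 2.0, 2000
dt = T / N

def y(t: float) -> float:
    """An arbitrary nonnegative weight function (chosen for illustration)."""
    return 0.5 + 0.25 * math.sin(t)

# Build x satisfying x(t) = a + int_0^t y(s) x(s) ds (equality, the worst case)
# by an explicit Euler recursion; Y accumulates int_0^t y(s) ds.
x, Y = [a], [0.0]
for i in range(N):
    t = i * dt
    x.append(x[-1] + y(t) * x[-1] * dt)
    Y.append(Y[-1] + y(t) * dt)

# Gronwall/Bihari bound: x(t) <= a * exp(int_0^t y ds), up to rounding
bound_ok = all(xi <= a * math.exp(Yi) * (1.0 + 1e-9) for xi, Yi in zip(x, Y))
print(bound_ok)
```

Since $1+u\le e^u$, the discrete recursion never exceeds the exponential bound, mirroring the continuous statement.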
The following lemmas, known by special names, concern the stochastic integrals that appeared in [5] and play an important role in the next section.
Lemma 5
((Moment inequality) ([5])). Let $p\ge2$ and let $f\in M^2([0,T];\mathbb{R}^{d\times m})$ be such that
$$E\int_0^T|f(s)|^p\,ds<\infty.$$
Then,
$$E\Big|\int_0^T f(s)\,dB(s)\Big|^p\le\Big(\frac{p(p-1)}{2}\Big)^{\frac{p}{2}}T^{\frac{p-2}{2}}E\int_0^T|f(s)|^p\,ds.$$
Lemma 6
((Moment inequality) ([5])). If $p\ge2$ and $g\in M^2([0,T];\mathbb{R}^{d\times m})$ is such that $E\int_0^T|g(s)|^p\,ds<\infty$, then
$$E\Big(\sup_{0\le t\le T}\Big|\int_0^t g(s)\,dB(s)\Big|^p\Big)\le\Big(\frac{p^3}{2(p-1)}\Big)^{\frac{p}{2}}T^{\frac{p-2}{2}}E\int_0^T|g(s)|^p\,ds.$$

3. Results

To obtain the main results for the solution of Equation (1), we impose the following assumptions:
Hypothesis 1.
For any $\varphi,\psi\in BC((-\infty,0];\mathbb{R}^d)$ and $t\in[t_0,T]$, we assume that
$$|f(\varphi,t)-f(\psi,t)|^2\vee|g(\varphi,t)-g(\psi,t)|^2\le\kappa\big(\|\varphi-\psi\|^{2\alpha}\big),\tag{4}$$
where $0<\alpha\le1$ and $\kappa(\cdot)$ is a concave non-decreasing function from $\mathbb{R}_+$ to $\mathbb{R}_+$ such that $\kappa(0)=0$, $\kappa(u)>0$ for $u>0$, and $\int_{0^+}\frac{1}{\kappa(u)}\,du=\infty$.
Hypothesis 2.
For any $t\in[t_0,T]$, it follows that $f(0,t),g(0,t)\in L^2$ with
$$|f(0,t)|^2\vee|g(0,t)|^2\le K_1,\tag{5}$$
where $K_1$ is a positive constant.
Hypothesis 3.
Assume there exists a constant $K_2$ with $0<K_2<1$ such that, for any $\varphi,\psi\in BC((-\infty,0];\mathbb{R}^d)$,
$$|D(\varphi)-D(\psi)|\le K_2\|\varphi-\psi\|.\tag{6}$$
To demonstrate the generality of our results, we provide an illustration using a concave function $\kappa(\cdot)$. Let $K>0$ and let $\delta\in(0,1)$ be sufficiently small. Define
$$\begin{aligned}\kappa_1(u)&=Ku,\quad u\ge0;\\ \kappa_2(u)&=\begin{cases}u\log(u^{-1}),&0\le u<\delta,\\ \delta\log(\delta^{-1})+\dot\kappa_2(\delta)(u-\delta),&u\ge\delta;\end{cases}\\ \kappa_3(u)&=\begin{cases}u\log(u^{-1})\log\log(u^{-1}),&0\le u<\delta,\\ \delta\log(\delta^{-1})\log\log(\delta^{-1})+\dot\kappa_3(\delta)(u-\delta),&u\ge\delta.\end{cases}\end{aligned}$$
They are all concave nondecreasing functions satisfying $\kappa_i(u)>0$ for $u>0$ and $\int_{0^+}\frac{1}{\kappa_i(u)}\,du=\infty$. In particular, the condition in Bae et al. [8] is a special case of our proposed condition (4).
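The stated properties of, e.g., $\kappa_2$ can be spot-checked numerically. The sketch below (with an assumed $\delta=0.1$; the slope of the linear extension is $\dot\kappa_2(\delta)=\log(\delta^{-1})-1$) verifies monotonicity and midpoint concavity on a grid:

```python
import math

DELTA = 0.1  # assumed threshold delta in (0, 1/e), so the extension is increasing

def kappa2(u: float) -> float:
    """kappa_2 from the text: u*log(1/u) near zero, linear extension past DELTA."""
    if u <= 0.0:
        return 0.0
    if u < DELTA:
        return u * math.log(1.0 / u)
    slope = math.log(1.0 / DELTA) - 1.0  # derivative of u*log(1/u) at DELTA
    return DELTA * math.log(1.0 / DELTA) + slope * (u - DELTA)

grid = [i / 1000.0 for i in range(1, 500)]
nondecreasing = all(kappa2(u) <= kappa2(v) + 1e-12 for u, v in zip(grid, grid[1:]))
concave = all(
    kappa2((u + v) / 2.0) >= (kappa2(u) + kappa2(v)) / 2.0 - 1e-12
    for u, v in zip(grid, grid[2:])
)
print(nondecreasing, concave)
```

Both checks pass because each piece of $\kappa_2$ is concave and the two pieces meet with matching slope at $\delta$.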
Since our goal is to demonstrate the existence and uniqueness theorem for the solution of the neutral stochastic differential Equation (1) under sufficient conditions, we start with the following useful lemmas:
Lemma 7
([5]). For any $x,y\ge0$ and $0<\alpha<1$, we have $(x+y)^2\le\frac{x^2}{\alpha}+\frac{y^2}{1-\alpha}$.
Lemma 8
([5]). Let $p\ge2$ and let (6) hold, with $\lambda=K_2$ denoting the contraction constant. Then,
$$\sup_{t_0\le s\le t}|X(s)|^p\le\frac{\lambda}{1-\lambda}\|\xi\|^p+\frac{1}{(1-\lambda)^p}\sup_{t_0\le s\le t}\big|X(s)-D(X_s)\big|^p.$$
Lemma 9.
Assume that (4)–(6) hold and let $M_2\ge1$. If $X(t)$ is a solution of Equation (1) with initial data (2), then
$$E\sup_{-\infty<t\le T}|X(t)|^2\le E\|\xi\|^2+M_2\exp\Big(\frac{6b(T-t_0)(T-t_0+4)}{(1-K_2)^2}\Big),$$
where $M_1=3E\|\xi\|^2+6(T-t_0+4)(T-t_0)(a+K_1)$ and $M_2=\big[K_2E\|\xi\|^2+M_1+6b(T-t_0+4)E\|\xi\|^{2\alpha}\big]/(1-K_2)^2$, with $a$ and $b$ the constants of the linear bound $\kappa(u)\le a+bu$ used in the proof. In particular, $X(t)$ belongs to $M^2((-\infty,T];\mathbb{R}^d)$.
Proof. 
For each integer $n\ge1$, define the stopping time
$$\tau_n=T\wedge\inf\{t\in[t_0,T]:\|X_t\|\ge n\}.$$
As $n\to\infty$, $\tau_n\to T$ a.s. Let $X^n(t)=X(t\wedge\tau_n)$, $t\in[t_0,T]$. Then $X^n(t)$ satisfies the following equation:
$$X^n(t)=D(X^n_t)-D(\xi)+J^n(t),$$
where:
$$J^n(t)=\xi(0)+\int_{t_0}^t f(X^n_s,s)\,I_{[t_0,\tau_n]}(s)\,ds+\int_{t_0}^t g(X^n_s,s)\,I_{[t_0,\tau_n]}(s)\,dB(s).$$
Applying Lemma 7 and condition (6) yields:
$$|X^n(t)|^2\le K_2\|X^n_t\|^2+\frac{K_2}{1-K_2}\|\xi\|^2+\frac{1}{1-K_2}|J^n(t)|^2.$$
Taking expectations, we get:
$$E|X^n(t)|^2\le K_2E\|X^n_t\|^2+\frac{K_2}{1-K_2}E\|\xi\|^2+\frac{1}{1-K_2}E|J^n(t)|^2.$$
Noting that $E\sup_{-\infty<s\le t}|X^n(s)|^2\le E\|\xi\|^2+E\sup_{t_0\le s\le t}|X^n(s)|^2$, we see that:
$$E\sup_{t_0\le s\le t}|X^n(s)|^2\le K_2E\sup_{t_0\le s\le t}|X^n(s)|^2+\frac{K_2}{1-K_2}E\|\xi\|^2+\frac{1}{1-K_2}E\sup_{t_0\le s\le t}|J^n(s)|^2.$$
Consequently,
$$E\sup_{t_0\le s\le t}|X^n(s)|^2\le\frac{K_2}{(1-K_2)^2}E\|\xi\|^2+\frac{1}{(1-K_2)^2}E\sup_{t_0\le s\le t}|J^n(s)|^2.$$
Using the elementary inequality $(y+z+w)^2\le3(y^2+z^2+w^2)$, Hölder’s inequality, and the moment inequality, we have:
$$E\sup_{t_0\le s\le t}|J^n(s)|^2\le3E\|\xi\|^2+3(T-t_0)E\int_{t_0}^t|f(X^n_s,s)-f(0,s)+f(0,s)|^2\,ds+12E\int_{t_0}^t|g(X^n_s,s)-g(0,s)+g(0,s)|^2\,ds.$$
Using the elementary inequality $(y+z)^2\le2y^2+2z^2$ together with (4) and (5), we have:
$$E\sup_{t_0\le s\le t}|J^n(s)|^2\le3E\|\xi\|^2+6(t-t_0+4)E\int_{t_0}^t\big(\kappa(\|X^n_s\|^{2\alpha})+K_1\big)\,ds.$$
Since $\kappa(\cdot)$ is concave and $\kappa(0)=0$, we can find positive constants $a$ and $b$ such that $\kappa(u)\le a+bu$ for all $u\ge0$. So, we obtain:
$$E\sup_{t_0\le s\le t}|J^n(s)|^2\le M_1+6b(T-t_0+4)\int_{t_0}^t E\|X^n_s\|^{2\alpha}\,ds,$$
where $M_1=3E\|\xi\|^2+6(T-t_0+4)(T-t_0)(a+K_1)$. Substituting this into (7) yields:
$$E\sup_{t_0\le s\le t}|X^n(s)|^2\le M_2+\frac{6b(T-t_0+4)}{(1-K_2)^2}\int_{t_0}^t E\sup_{t_0\le r\le s}|X^n(r)|^{2\alpha}\,ds,$$
where $M_2=\big[K_2E\|\xi\|^2+M_1+6b(T-t_0+4)E\|\xi\|^{2\alpha}\big]/(1-K_2)^2$. Lemma 2 yields:
$$E\sup_{t_0\le s\le t}|X^n(s)|^2\le M_2\exp\Big(\frac{6b(T-t_0+4)(T-t_0)}{(1-K_2)^2}\Big).$$
Noting that $E\sup_{-\infty<s\le t}|X^n(s)|^2\le E\|\xi\|^2+E\sup_{t_0\le s\le t}|X^n(s)|^2$, we see that:
$$E\sup_{-\infty<s\le t}|X^n(s)|^2\le E\|\xi\|^2+M_2\exp\Big(\frac{6b(T-t_0+4)(T-t_0)}{(1-K_2)^2}\Big).$$
Letting $n\to\infty$ implies the following inequality:
$$E\sup_{-\infty<t\le T}|X(t)|^2\le E\|\xi\|^2+M_2\exp\Big(\frac{6b(T-t_0+4)(T-t_0)}{(1-K_2)^2}\Big).$$
We obtain the required inequality. □
Now, we provide the uniqueness theorem for the solution of Equation (1) with initial data (2).
Theorem 1.
Assume that (4)–(6) hold and let $M_2\ge1$. Then Equation (1) with initial value (2) has at most one solution $X(t)$: any two solutions $X(t)$ and $\bar X(t)$ are indistinguishable. Moreover, $X(t)\in M^2((-\infty,T];\mathbb{R}^d)$.
Proof. 
Let $X(t)$ and $\bar X(t)$ be any two solutions of (1). By Lemma 9, $X(t),\bar X(t)\in M^2((-\infty,T];\mathbb{R}^d)$. Note that:
$$X(t)-\bar X(t)=D(X_t)-D(\bar X_t)+J(t),$$
where:
$$J(t)=\int_{t_0}^t[f(X_s,s)-f(\bar X_s,s)]\,ds+\int_{t_0}^t[g(X_s,s)-g(\bar X_s,s)]\,dB(s).$$
By Lemma 7 and condition (6), we easily see that:
$$|X(t)-\bar X(t)|^2\le K_2\|X_t-\bar X_t\|^2+\frac{1}{1-K_2}|J(t)|^2.$$
Taking the expectation on both sides:
$$E\sup_{t_0\le s\le t}|X(s)-\bar X(s)|^2\le K_2E\sup_{t_0\le s\le t}|X(s)-\bar X(s)|^2+\frac{1}{1-K_2}E\sup_{t_0\le s\le t}|J(s)|^2,$$
which implies:
$$E\sup_{t_0\le s\le t}|X(s)-\bar X(s)|^2\le\frac{1}{(1-K_2)^2}E\sup_{t_0\le s\le t}|J(s)|^2.$$
Using the elementary inequality $(y+z)^2\le2y^2+2z^2$, we have:
$$|J(t)|^2\le2\Big|\int_{t_0}^t[f(X_s,s)-f(\bar X_s,s)]\,ds\Big|^2+2\Big|\int_{t_0}^t[g(X_s,s)-g(\bar X_s,s)]\,dB(s)\Big|^2.$$
By the Hölder inequality, the moment inequality, and (4), we have:
$$E\sup_{t_0\le s\le t}|J(s)|^2\le2(T-t_0+4)E\int_{t_0}^t\kappa\big(\|X_s-\bar X_s\|^{2\alpha}\big)\,ds.$$
Since $\kappa(\cdot)$ is concave, by the Jensen inequality we have:
$$E\,\kappa\big(\|X_s-\bar X_s\|^{2\alpha}\big)\le\kappa\big(E\|X_s-\bar X_s\|^{2\alpha}\big).$$
Consequently, for any $\varepsilon>0$:
$$E\sup_{t_0\le s\le t}|X(s)-\bar X(s)|^2\le\varepsilon+\frac{2(T-t_0+4)}{(1-K_2)^2}\int_{t_0}^t\kappa\Big(E\sup_{t_0\le r\le s}|X(r)-\bar X(r)|^{2\alpha}\Big)\,ds.$$
By the Bihari inequality, we deduce that for all sufficiently small $\varepsilon>0$:
$$E\sup_{t_0\le s\le t}|X(s)-\bar X(s)|^2\le G^{-1}\Big[G(\varepsilon)+\frac{2(T-t_0+4)(T-t_0)}{(1-K_2)^2}\Big],$$
where:
$$G(r)=\int_1^r\frac{1}{\kappa_1(u)}\,du,\quad r>0,$$
with $\kappa_1(u)=\kappa(u^{\alpha})$, and $G^{-1}(\cdot)$ is the inverse function of $G(\cdot)$. By the assumption $\int_{0^+}\frac{1}{\kappa(u)}\,du=\infty$ and the definition of $\kappa(\cdot)$, we see that $\lim_{\varepsilon\to0}G(\varepsilon)=-\infty$. Then,
$$\lim_{\varepsilon\to0}G^{-1}\Big[G(\varepsilon)+\frac{2(T-t_0+4)(T-t_0)}{(1-K_2)^2}\Big]=0.$$
Therefore, letting $\varepsilon\to0$ in (8) gives:
$$E\sup_{t_0\le s\le t}|X(s)-\bar X(s)|^2=0.$$
This implies that $X(t)=\bar X(t)$ for $t_0\le t\le T$, and hence $X(t)=\bar X(t)$ for all $-\infty<t\le T$ almost surely. The uniqueness has been proved. □
To obtain the existence of solutions to neutral SFDEs, define $X^0_{t_0}=\xi$ and $X^0(t)=\xi(0)$ for $t_0\le t\le T$. For each $n=1,2,\ldots$, set $X^n_{t_0}=\xi$ and define $X^n(t)$ by the Picard iteration:
$$X^n(t)-D(X^{n-1}_t)=\xi(0)-D(\xi)+\int_{t_0}^t f(X^{n-1}_s,s)\,ds+\int_{t_0}^t g(X^{n-1}_s,s)\,dB(s)\tag{9}$$
for $t_0\le t\le T$.
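For illustration only, the Picard scheme (9) can be simulated by discretising the integrals on a time grid and reusing one fixed Brownian path for every iterate. The sketch below uses toy coefficients that are not taken from the paper — $D(\varphi)=0.2\,\varphi(0)$, $f(\varphi,t)=-\varphi(0)$, a constant diffusion $g=0.3$, and a constant initial segment $\xi\equiv1$ — and checks that successive iterates approach each other:

```python
import random

random.seed(0)

K2 = 0.2            # contraction constant of D (toy choice)
T, N = 1.0, 200     # horizon and number of grid steps
dt = T / N
xi0 = 1.0           # constant initial segment: xi(theta) = 1, so D(xi) = K2 * xi0

# one fixed Brownian increment path shared by every Picard iterate
dB = [random.gauss(0.0, dt ** 0.5) for _ in range(N)]

def picard_step(prev):
    """One Picard iterate X^n computed from X^{n-1} on the grid."""
    cur = [xi0]
    int_f, int_g = 0.0, 0.0
    for i in range(N):
        int_f += -prev[i] * dt      # f(phi, t) = -phi(0), left-endpoint rule
        int_g += 0.3 * dB[i]        # g = 0.3, additive noise
        # X^n(t) = D(X^{n-1}_t) + xi(0) - D(xi) + int f ds + int g dB
        cur.append(K2 * prev[i + 1] + xi0 - K2 * xi0 + int_f + int_g)
    return cur

iterates = [[xi0] * (N + 1)]        # X^0(t) = xi(0)
for _ in range(8):
    iterates.append(picard_step(iterates[-1]))

# sup-norm gap between consecutive iterates
gaps = [
    max(abs(u - v) for u, v in zip(iterates[n], iterates[n + 1]))
    for n in range(len(iterates) - 1)
]
print(gaps[0], gaps[-1])
```

On this linear example the gap between consecutive iterates shrinks rapidly, consistent with the Cauchy property of the Picard sequence established below.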
Since our goal is to find the conditions that guarantee the existence of the solution of Equation (1), we start with the following useful lemma:
Lemma 10.
Let the assumptions (4)–(6) hold, and let $X^n(t)$ be the Picard iteration defined by (9). Then,
$$E\sup_{t_0\le t\le T}|X^n(t)|^2\le M_3\exp\Big(\frac{6b(T-t_0+4)(T-t_0)}{(1-K_2)^2}\Big),$$
where $M_3=(2K_2-K_2^2)E\|\xi\|^2/(1-K_2)^2+\big[3E\|\xi\|^2+C_1\big]/(1-K_2)^2$ and $C_1=6(T-t_0+4)(T-t_0)\big(a+K_1+(1+b)E\|\xi\|^{2\alpha}\big)$.
Proof. 
Clearly $X^0(t)\in M^2((-\infty,T];\mathbb{R}^d)$, and it is easy to see that $X^n(t)\in M^2([t_0,T];\mathbb{R}^d)$. Note that:
$$X^n(t)=D(X^{n-1}_t)-D(\xi)+J^{n-1}(t),$$
where:
$$J^{n-1}(t)=\xi(0)+\int_{t_0}^t f(X^{n-1}_s,s)\,ds+\int_{t_0}^t g(X^{n-1}_s,s)\,dB(s).$$
It follows from Lemma 7 that:
$$|X^n(t)|^2\le K_2\|X^{n-1}_t\|^2+\frac{K_2}{1-K_2}\|\xi\|^2+\frac{1}{1-K_2}|J^{n-1}(t)|^2.$$
Taking the expectation on both sides, we have:
$$E\sup_{t_0\le s\le t}|X^n(s)|^2\le K_2E\sup_{t_0\le s\le t}\|X^{n-1}_s\|^2+\frac{K_2}{1-K_2}E\|\xi\|^2+\frac{1}{1-K_2}E\sup_{t_0\le s\le t}|J^{n-1}(s)|^2.$$
We also have:
$$E\sup_{t_0\le s\le t}\|X^n_s\|^2\le E\sup_{-\infty<s\le t}|X^n(s)|^2\le E\|\xi\|^2+E\sup_{t_0\le s\le t}|X^n(s)|^2.$$
Combining (11) and (12), we obtain:
$$E\sup_{t_0\le s\le t}|X^n(s)|^2\le\frac{K_2}{1-K_2}E\|\xi\|^2+K_2E\|\xi\|^2+K_2E\sup_{t_0\le s\le t}|X^{n-1}(s)|^2+\frac{1}{1-K_2}E\sup_{t_0\le s\le t}|J^{n-1}(s)|^2.$$
Taking the maximum on both sides:
$$\max_{1\le n\le r}E\sup_{t_0\le s\le t}|X^n(s)|^2\le\frac{2K_2-K_2^2}{(1-K_2)^2}E\|\xi\|^2+\frac{1}{(1-K_2)^2}\max_{1\le n\le r}E\sup_{t_0\le s\le t}|J^{n-1}(s)|^2.$$
Using the elementary inequality $\big(\sum_{i=1}^n y_i\big)^p\le n^{p-1}\sum_{i=1}^n y_i^p$ for $p\ge1$, we have:
$$|J^{n-1}(t)|^2\le3|\xi(0)|^2+3\Big|\int_{t_0}^t f(X^{n-1}_s,s)\,ds\Big|^2+3\Big|\int_{t_0}^t g(X^{n-1}_s,s)\,dB(s)\Big|^2.$$
By Hölder’s inequality and the moment inequality, we have:
$$E|J^{n-1}(t)|^2\le3E\|\xi\|^2+3(t-t_0)E\int_{t_0}^t|f(X^{n-1}_s,s)-f(0,s)+f(0,s)|^2\,ds+12E\int_{t_0}^t|g(X^{n-1}_s,s)-g(0,s)+g(0,s)|^2\,ds.$$
Using the elementary inequality $(y+z)^2\le2y^2+2z^2$ together with (4) and (5), we have:
$$E|J^{n-1}(t)|^2\le3E\|\xi\|^2+6(t-t_0+4)E\int_{t_0}^t\big(\kappa(\|X^{n-1}_s\|^{2\alpha})+K_1\big)\,ds.$$
Since $\kappa(\cdot)$ is concave and $\kappa(0)=0$, we can find positive constants $a$ and $b$ such that $\kappa(u)\le a+bu$ for all $u\ge0$. So, we have:
$$E\sup_{t_0\le s\le t}|J^{n-1}(s)|^2\le3E\|\xi\|^2+6(T-t_0+4)(T-t_0)(a+K_1)+6b(T-t_0+4)\int_{t_0}^t E\|X^{n-1}_s\|^{2\alpha}\,ds.$$
Combining (13) and (14), we have:
$$\max_{1\le n\le r}E\sup_{t_0\le s\le t}|X^n(s)|^2\le M_3+\frac{6b(T-t_0+4)}{(1-K_2)^2}\int_{t_0}^t\max_{1\le n\le r}E\sup_{t_0\le u\le s}|X^n(u)|^{2\alpha}\,ds,$$
where $M_3=(2K_2-K_2^2)E\|\xi\|^2/(1-K_2)^2+\big[3E\|\xi\|^2+C_1\big]/(1-K_2)^2$ and $C_1=6(T-t_0+4)(T-t_0)\big(a+K_1+(1+b)E\|\xi\|^{2\alpha}\big)$. From Lemma 2, we have:
$$\max_{1\le n\le r}E\sup_{t_0\le s\le t}|X^n(s)|^2\le M_3\exp\Big(\frac{6b(T-t_0+4)(T-t_0)}{(1-K_2)^2}\Big);$$
since $r$ is arbitrary, we must have:
$$E\sup_{t_0\le t\le T}|X^n(t)|^2\le M_3\exp\Big(\frac{6b(T-t_0+4)(T-t_0)}{(1-K_2)^2}\Big).$$
The proof is complete. □
Now we outline the existence theorem for the solution of Equation (1) with initial data (2), using approximate solutions obtained by means of the Picard sequence (9).
Theorem 2.
Assume that (4)–(6) hold. Then a solution $X(t)$ of Equation (1) with initial value (2) exists.
Proof. 
We first show that the sequence $\{X^n(t)\}_{n\ge0}$ defined by (9) is a Cauchy sequence in $BC([t_0,T];\mathbb{R}^d)$. For $n\ge1$ and $t\in[t_0,T]$, it follows from (9) that:
$$X^{n+1}(t)-X^n(t)=D(X^n_t)-D(X^{n-1}_t)+\int_{t_0}^t[f(X^n_s,s)-f(X^{n-1}_s,s)]\,ds+\int_{t_0}^t[g(X^n_s,s)-g(X^{n-1}_s,s)]\,dB(s).$$
By the elementary inequality $|y+z|^2\le2(|y|^2+|z|^2)$, Hölder’s inequality, and Lemma 7, we have:
$$\limsup_{n\to\infty}E\sup_{t_0\le s\le t}|X^{n+1}(s)-X^n(s)|^2\le K_2\limsup_{n\to\infty}E\sup_{t_0\le s\le t}|X^n(s)-X^{n-1}(s)|^2+C_2\int_{t_0}^t\kappa\Big(\limsup_{n\to\infty}E\sup_{t_0\le r\le s}|X^n(r)-X^{n-1}(r)|^{2\alpha}\Big)\,ds,$$
where $C_2=2(T-t_0+4)/(1-K_2)$. Therefore,
$$Z(t)\le\frac{C_2}{1-K_2}\int_{t_0}^t\kappa\Big(\limsup_{n\to\infty}E\sup_{t_0\le r\le s}|X^{n+1}(r)-X^n(r)|^{2\alpha}\Big)\,ds,$$
where $Z(t)=\limsup_{n\to\infty}E\sup_{t_0\le s\le t}|X^{n+1}(s)-X^n(s)|^2$. From (15), for any $\varepsilon>0$, we obtain:
$$Z(t)\le\varepsilon+\frac{2(T-t_0+4)}{(1-K_2)^2}\int_{t_0}^t\kappa_1(Z(s))\,ds,$$
where $\kappa_1(u)=\kappa(u^{\alpha})$. By the Bihari inequality, we deduce that for all sufficiently small $\varepsilon>0$:
$$Z(t)\le G^{-1}\Big[G(\varepsilon)+\frac{2(T-t_0+1)}{(1-K_2)^2}\Big],$$
where:
$$G(r)=\int_1^r\frac{1}{\kappa_1(u)}\,du$$
for $r>0$, and $G^{-1}(\cdot)$ is the inverse function of $G(\cdot)$. By assumption, we obtain $Z(t)=0$. This shows that the sequence $\{X^n(t),n\ge0\}$ is a Cauchy sequence in $L^2$. Hence, as $n\to\infty$, $X^n(t)\to X(t)$, that is, $E|X^n(t)-X(t)|^2\to0$. Letting $n\to\infty$ in Lemma 10 then yields:
$$E\sup_{t_0\le t\le T}|X(t)|^2\le M_3\exp\Big(\frac{6b(T-t_0+4)(T-t_0)}{(1-K_2)^2}\Big).$$
Therefore, $X(t)\in M^2((-\infty,T];\mathbb{R}^d)$. It remains to show that $X(t)$ satisfies Equation (1). Note that:
$$E\Big|\int_{t_0}^t[f(X^n_s,s)-f(X_s,s)]\,ds\Big|^2+E\Big|\int_{t_0}^t[g(X^n_s,s)-g(X_s,s)]\,dB(s)\Big|^2\le(T-t_0+1)\int_{t_0}^t\kappa\Big(E\sup_{t_0\le r\le s}|X^n(r)-X(r)|^2\Big)\,ds.$$
Noting that the sequence $X^n(t)$ converges uniformly on $(-\infty,T]$, we have
$$E\sup_{t_0\le r\le s}|X^n(r)-X(r)|^2\to0$$
as $n\to\infty$, and hence
$$\kappa\Big(E\sup_{t_0\le r\le s}|X^n(r)-X(r)|^2\Big)\to0$$
as $n\to\infty$. Hence, taking limits on both sides in the Picard sequence, we obtain:
$$X(t)-D(X_t)=\xi(0)-D(\xi)+\int_{t_0}^t f(X_s,s)\,ds+\int_{t_0}^t g(X_s,s)\,dB(s)$$
for $-\infty<t\le T$. This shows that $X(t)$ is a solution of Equation (1), and the existence of the solution has been proved. □
The following lemma shows that the differences between successive Picard iterates for Equation (1) are bounded under the new conditions.
Lemma 11.
Let the assumptions (4)–(6) hold, and let $X^n(t)$ be the Picard iteration defined by (9). Then, for all $n\ge1$, it follows that:
$$E\sup_{t_0\le t\le T}|X^n(t)-X^{n-1}(t)|^2\le M_4+M_5\Big[1-2b(\alpha-1)M_5^{\alpha-1}\frac{(T-t_0)(T-t_0+4)}{(1-K_2)^2}\Big]^{\frac{1}{1-\alpha}},$$
where $M_4=4K_2E\|\xi\|^2+\big[8(T-t_0+4)(T-t_0)(K_1+a+bE\|\xi\|^{2\alpha})\big]/(1-K_2)$ and $M_5=\big[(1-K_2)K_2M_4+2(a+bM_4)(T-t_0+4)(T-t_0)\big]/(1-K_2)^2$.
Proof. 
By the elementary inequality $|y+z|^2\le2(|y|^2+|z|^2)$, Hölder’s inequality, and Lemma 7, we have:
$$\begin{aligned}E\sup_{t_0\le s\le t}|X^1(s)-X^0(s)|^2&\le\frac{1}{K_2}E|D(X_{t_0})-D(\xi)|^2+\frac{1}{1-K_2}E\sup_{t_0\le r\le t}\Big|\int_{t_0}^r f(X^0_s,s)\,ds+\int_{t_0}^r g(X^0_s,s)\,dB(s)\Big|^2\\&\le4K_2E\|\xi\|^2+\frac{8(T-t_0+4)}{1-K_2}E\int_{t_0}^t\big(\kappa(\|X^0_s\|^{2\alpha})+K_1\big)\,ds\le M_4,\end{aligned}$$
where $M_4=4K_2E\|\xi\|^2+\big[8(T-t_0+4)(T-t_0)(K_1+a+bE\|\xi\|^{2\alpha})\big]/(1-K_2)$. On the other hand,
$$X^{n+1}(t)-X^n(t)=D(X^n_t)-D(X^{n-1}_t)+J^n(t),$$
where $J^n(t)=\int_{t_0}^t[f(X^n_s,s)-f(X^{n-1}_s,s)]\,ds+\int_{t_0}^t[g(X^n_s,s)-g(X^{n-1}_s,s)]\,dB(s)$. Taking the expectation on both sides and using Lemma 7 and (6), we have:
$$E\sup_{t_0\le s\le t}|X^{n+1}(s)-X^n(s)|^2\le K_2E\sup_{t_0\le s\le t}|X^n(s)-X^{n-1}(s)|^2+\frac{1}{1-K_2}E\sup_{t_0\le s\le t}|J^n(s)|^2.$$
Taking the maximum on both sides, we have:
$$\max_{1\le n\le k}E\sup_{t_0\le s\le t}|X^{n+1}(s)-X^n(s)|^2\le\frac{K_2}{1-K_2}E\sup_{t_0\le s\le t}|X^1(s)-X^0(s)|^2+\frac{1}{(1-K_2)^2}\max_{1\le n\le k}E\sup_{t_0\le s\le t}|J^n(s)|^2.$$
By the elementary inequality $|y+z|^2\le2(|y|^2+|z|^2)$, Hölder’s inequality, Lemma 5, Lemma 7, and (4), we have:
$$E\sup_{t_0\le s\le t}|J^n(s)|^2\le2(T-t_0+4)E\int_{t_0}^t\kappa\Big(\sup_{t_0\le r\le s}|X^n(r)-X^{n-1}(r)|^{2\alpha}\Big)\,ds.$$
Substituting this into (17) yields:
$$\max_{1\le n\le k}E\sup_{t_0\le s\le t}|X^{n+1}(s)-X^n(s)|^2\le M_5+\frac{2b(T-t_0+4)}{(1-K_2)^2}\int_{t_0}^t\max_{1\le n\le k}E\sup_{t_0\le r\le s}|X^{n+1}(r)-X^n(r)|^{2\alpha}\,ds,$$
where $M_5=\big[(1-K_2)K_2M_4+2(a+bM_4)(T-t_0+4)(T-t_0)\big]/(1-K_2)^2$. Therefore, by Stachurska’s inequality, we see that:
$$\max_{1\le n\le k}E\sup_{t_0\le s\le t}|X^{n+1}(s)-X^n(s)|^2\le M_5\Big[1-2b(\alpha-1)M_5^{\alpha-1}\frac{(T-t_0)(T-t_0+4)}{(1-K_2)^2}\Big]^{\frac{1}{1-\alpha}}.$$
That is:
$$\max_{1\le n\le k}E\sup_{t_0\le t\le T}|X^n(t)-X^{n-1}(t)|^2\le M_4+M_5\Big[1-2b(\alpha-1)M_5^{\alpha-1}\frac{(T-t_0)(T-t_0+4)}{(1-K_2)^2}\Big]^{\frac{1}{1-\alpha}},$$
which is the required inequality. The proof is complete. □
An estimate of the difference between the approximate solution $X^n(t)$ given by the Picard iteration and the exact solution $X(t)$ of the equation is established in the following theorem.
Theorem 3.
Let the assumptions (4)–(6) hold, let $X(t)$ be a solution of Equation (1), and let $X^n(t)$ be the Picard iteration defined by (9). Then,
$$E\sup_{t_0\le t\le T}|X^n(t)-X(t)|^2\le M_6\Big[1-4b(\alpha-1)M_6^{\alpha-1}\frac{(T-t_0+4)(T-t_0)}{(1-K_2)^2}\Big]^{\frac{1}{1-\alpha}},$$
where $M_6=\big[K_2^2C_3+4(T-t_0+4)(T-t_0)(2a+bC_3^{\alpha})\big]/(1-K_2)^2$ and $C_3$ is the right-hand side of inequality (16).
Proof. 
For $n\ge1$ and $t\in[t_0,T]$, it follows from (9) and the solution of Equation (1) that:
$$X^n(t)-X(t)=D(X_t)-D(X^{n-1}_t)+J^{n-1}(t),$$
where $J^{n-1}(t)=\int_{t_0}^t[f(X_s,s)-f(X^{n-1}_s,s)]\,ds+\int_{t_0}^t[g(X_s,s)-g(X^{n-1}_s,s)]\,dB(s)$. Taking the expectation on both sides and using Lemma 7 and (6), we have:
$$E\sup_{t_0\le s\le t}|X^n(s)-X(s)|^2\le\frac{K_2^2C_3}{(1-K_2)^2}+\frac{1}{(1-K_2)^2}E\sup_{t_0\le s\le t}|J^{n-1}(s)|^2.$$
By the elementary inequality $|y+z|^2\le2(|y|^2+|z|^2)$, Hölder’s inequality, Lemma 5, Lemma 7, and (4), we have:
$$E\sup_{t_0\le s\le t}|J^{n-1}(s)|^2\le C_4(T-t_0)\big(2a+bM_7^{\alpha}\big)+bC_4\int_{t_0}^t E\sup_{t_0\le r\le s}|X^n(r)-X(r)|^{2\alpha}\,ds,$$
where $C_4=4(T-t_0+4)$. Substituting this into (18) yields:
$$E\sup_{t_0\le s\le t}|X^n(s)-X(s)|^2\le M_6+\frac{4b(T-t_0+4)}{(1-K_2)^2}\int_{t_0}^t E\sup_{t_0\le r\le s}|X^n(r)-X(r)|^{2\alpha}\,ds.$$
By Stachurska’s inequality, we have:
$$E\sup_{t_0\le t\le T}|X^n(t)-X(t)|^2\le M_6\Big[1-4b(\alpha-1)M_6^{\alpha-1}\frac{(T-t_0+4)(T-t_0)}{(1-K_2)^2}\Big]^{\frac{1}{1-\alpha}},$$
which is the required inequality. The proof is complete. □

4. Discussion

System modeling that incorporates stochastic processes has come to play an important role in many areas of science and industry, where stochastic differential equations are increasingly encountered. The neutral stochastic functional differential equation is based on the postulate of random environmental effects, and such equations can be applied in perturbation theory when it is hard to find the exact solution for some potentials. Their solutions are not easy to obtain, but they often arise in the study of two or more simple electrodynamic or oscillating systems with some interconnection.
In this study, we sought new conditions that guarantee the existence and uniqueness of the solution of Equation (1). In Lemma 9, a weakened Hölder condition (4), a weakened linear growth condition (5), and a contractive condition (6) were used to show that the solution process is bounded. In Lemma 10, the same conditions were used to show that the Picard iteration is bounded. Therefore, in Theorems 1 and 2, we proved the existence and uniqueness of a solution to a neutral stochastic differential equation. However, the weakened Hölder condition only guarantees existence and uniqueness; in general, the solution has no explicit expression except in the linear case. In practice, we therefore often seek an approximate rather than an exact solution. The questions of continuity of the solution and of approximate solutions (for numerical methods) under weaker conditions were not addressed in this paper, and we expect it may take some time to accomplish this. We leave these improvements as an open problem.

5. Conclusions

In the present paper, we proved an existence and uniqueness theorem for the solution of the neutral stochastic differential equation under the weakened conditions (4)–(6). Our main result does not cover the more general case of existence and uniqueness for stochastic equations under still weaker conditions. Nevertheless, it is valuable in that it establishes a type of existence theorem for the solution of a stochastic differential equation that extends the corresponding theory of ordinary differential equations.

Author Contributions

Conceptualization, M.-J.B., C.-H.P., and Y.-H.K.; validation, M.-J.B.; writing—original draft preparation, M.-J.B., C.-H.P., and Y.-H.K.; writing—review and editing, Y.-H.K.; funding acquisition, Y.-H.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Changwon National University in 2019–2020.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kim, Y.-H. An existence of the solution to neutral stochastic functional differential equations under the Holder condition. Aust. J. Math. Anal. Appl. 2019, 16, 1–10. [Google Scholar]
  2. Kim, Y.-H. An existence of the solution to neutral stochastic functional differential equations under special conditions. J. Appl. Math. Inform. 2019, 37, 53–63. [Google Scholar]
  3. Kim, Y.-H. A note on the solutions of Neutral SFDEs with infinite delay. J. Inequal. Appl. 2013, 181, 1–12. [Google Scholar] [CrossRef] [Green Version]
  4. Li, X.; Fu, X. Stability analysis of stochastic functional differential equations with infinite delay and its application to recurrent neural networks. J. Comput. Appl. Math. 2010, 234, 407–417. [Google Scholar] [CrossRef] [Green Version]
  5. Mao, X. Stochastic Differential Equations and Applications; Horwood Publication Limited: West Sussex, UK, 2007; pp. 201–220. [Google Scholar]
  6. Wei, F.; Cai, Y. Existence, uniqueness and stability of the solution to neutral stochastic functional differential equations with infinite delay under non-Lipschitz conditions. Adv. Differ. Equ. 2013, 151, 1–12. [Google Scholar] [CrossRef] [Green Version]
  7. Driver, R.D. A functional differential system of neutral type arising in a two-body problem of classical electrodynamics. In International Symposium Nonlinear Differential Equation and Nonlinear Mechanics; Academic Press: New York, NY, USA, 1963; pp. 474–484. [Google Scholar]
  8. Bae, M.-J.; Park, C.-H.; Kim, Y.-H. An existence and uniqueness theorem of stochastic differential equations and the properties of their solution. J. Appl. Math. Inform. 2019, 37, 491–506. [Google Scholar]
  9. Kim, Y.-H. An exponential estimates of the solution for stochastic functional differential equations. J. Nonlinear Convex Anal. 2015, 16, 1861–1868. [Google Scholar]
  10. Kim, Y.-H. On the pth moment estimates for the solution of stochastic differential equations. J. Inequal. Appl. 2014, 395, 1–9. [Google Scholar]
  11. Park, C.-H.; Bae, M.-J.; Kim, Y.-H. Conditions to guarantee the existence and uniqueness of the solution to stochastic differential equations. Nonlinear Funct. Anal. Appl. 2020, 25, 587–603. [Google Scholar]
  12. Ren, Y.; Lu, S.; Xia, N. Remarks on the existence and uniqueness of the solutions to stochastic functional differential equations with infinite delay. J. Comput. Appl. Math. 2008, 220, 364–372. [Google Scholar] [CrossRef] [Green Version]
  13. Ren, Y.; Xia, N. Existence, uniqueness and stability of the solutions to neutral stochastic functional differential equations with infinite delay. Appl. Math. Comput. 2009, 210, 72–79. [Google Scholar] [CrossRef]
  14. Wei, F.; Wang, K. The existence and uniqueness of the solution for stochastic functional differential equations with infinite delay. J. Math. Anal. Appl. 2007, 331, 516–531. [Google Scholar] [CrossRef] [Green Version]
  15. Govindan, T.E. Stability of mild solution of stochastic evolution equations with variable delay. Stoch. Anal. Appl. 2003, 21, 1059–1077. [Google Scholar] [CrossRef]
  16. Bainov, D.; Simeonov, P. Integral Inequalities and Applications; Kluwer Academic Publisher: Dordrecht, The Netherlands; Boston, MA, USA; London, UK, 1992; pp. 40–66. [Google Scholar]
