Approximate Controllability of Semilinear Stochastic Integrodifferential System with Nonlocal Conditions

Annamalai Anguraj * and K. Ramkumar
Department of Mathematics, PSG College of Arts and Science, Coimbatore 641 046, India
* Author to whom correspondence should be addressed.
Fractal Fract. 2018, 2(4), 29; https://doi.org/10.3390/fractalfract2040029
Submission received: 8 October 2018 / Revised: 12 November 2018 / Accepted: 16 November 2018 / Published: 20 November 2018

Abstract

The objective of this paper is to analyze the approximate controllability of a semilinear stochastic integrodifferential system with nonlocal conditions in Hilbert spaces. The nonlocal initial condition is a generalization of the classical initial condition and is motivated by physical phenomena. The results are obtained by using Sadovskii's fixed point theorem. Finally, an example is given to show the effectiveness of the result.

1. Introduction

Controllability is one of the essential concepts in mathematical control theory and plays a crucial role in both deterministic and stochastic control systems. It is well documented that the controllability of deterministic systems is widely employed in many fields of science and technology. A control system is said to be controllable if every state of the process can be steered to any other state in finite time by some admissible control signal. In many practical systems, it is possible to steer the dynamical system from an arbitrary initial state to a desired final state with the help of the set of admissible controls.
Kalman [1] introduced the idea of controllability for finite-dimensional deterministic linear control systems. The fundamental ideas of control theory in finite- and infinite-dimensional spaces were introduced in [2] and [3], respectively. However, in several cases, some randomness can appear in the problem, so that the system should be modeled in stochastic form. Only a few authors have studied the extension of deterministic controllability ideas to stochastic control systems. Dauer and Mahmudov [4] studied the controllability of a semilinear stochastic system by using the Banach fixed point technique. In [5,6,7,8,9], Mahmudov et al. established results on the controllability of linear and semilinear stochastic systems in Hilbert spaces. Along these lines, Sakthivel, Balachandran, and Dauer et al. studied the approximate controllability of nonlinear stochastic systems in [4,10,11,12]. Sakthivel et al. studied existence results for fractional stochastic differential equations; see [13,14,15,16,17,18,19,20] and the references therein.
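For the finite-dimensional deterministic setting that Kalman's work addresses, controllability can be checked directly via the rank of the controllability matrix $[B, AB, \dots, A^{n-1}B]$. The following sketch is our illustration (not part of the paper) and assumes NumPy is available:

```python
import numpy as np

def kalman_controllability_matrix(A, B):
    """Stack [B, AB, A^2 B, ..., A^(n-1) B] for the pair (A, B)."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# A controllable pair: a double integrator driven through the second state.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = kalman_controllability_matrix(A, B)
assert np.linalg.matrix_rank(C) == 2   # full rank: controllable

# An uncontrollable pair: the control never reaches the second state.
A2 = np.diag([1.0, 2.0])
B2 = np.array([[1.0], [0.0]])
C2 = kalman_controllability_matrix(A2, B2)
assert np.linalg.matrix_rank(C2) == 1  # rank deficient: uncontrollable
```

In infinite dimensions no such finite rank test exists, which is why the resolvent condition (H6) below replaces it.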
On the other hand, only a few authors have investigated the controllability of neutral functional integrodifferential systems in Banach spaces by using semigroup theory. Recently, in [21,22,23], Balachandran and Karthikeyan et al. studied the controllability of stochastic integrodifferential systems in finite-dimensional spaces.
To the best of our knowledge, there are no results in the literature on the approximate controllability of semilinear stochastic integrodifferential systems with nonlocal conditions obtained via Sadovskii's fixed point theorem. Therefore, this paper is devoted to the study of the approximate controllability of semilinear stochastic integrodifferential control systems with nonlocal conditions using Sadovskii's fixed point theorem.
In this work, we shall study the approximate controllability of the following semilinear stochastic integrodifferential system:

$$ dy(t) = \Big[ Ay(t) + Bu(t) + f(t, y(t)) + \int_0^t g(t, s, y(s))\, ds \Big] dt + \sigma(t, y(t))\, dw(t), \quad t \in J, \tag{1} $$

$$ y(0) = y_0 + h(y), \tag{2} $$

where $A : D(A) \subset H \to H$ is a closed, linear, densely-defined operator on $H$, which generates a compact semigroup $\{T(t) : t \in J\}$ on $H$; $B$ is a bounded linear operator from the Hilbert space $U$ into $H$; the control $u \in L_2([0,b], U)$; $f : J \times H \to H$, $g : J \times J \times H \to H$, and $\sigma : J \times H \to L_2^0$ are suitable nonlinear functions; $y_0$ is an $\mathcal{F}_0$-measurable $H$-valued random variable independent of $w$; and $h$ is a continuous function from $C(J, H)$ into $H$. For simplicity, we generally assume that the set of admissible controls is $U_{ad} = L_2(J, U)$.
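To make the abstract system above concrete, here is a minimal Euler–Maruyama sketch of a scalar analogue (our illustration, assuming NumPy; the choices of $f$, $g$, $\sigma$, and $u$ are hypothetical, and the nonlocal initial value $y_0 + h(y)$ is frozen into a fixed number for simplicity):

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar stand-ins for A, B, and simple Lipschitz nonlinearities (hypothetical).
a, b_ = -1.0, 1.0
f = lambda t, y: 0.1 * np.sin(y)        # f(t, y)
g = lambda t, s, y: 0.05 * np.cos(y)    # g(t, s, y), memory kernel
sigma = lambda t, y: 0.2                # sigma(t, y), additive noise
u = lambda t: np.cos(t)                 # an admissible control

T, N = 1.0, 400
dt = T / N
ts = np.linspace(0.0, T, N + 1)
y = np.zeros(N + 1)
y[0] = 0.5                              # stands in for y0 + h(y)

for k in range(N):
    t = ts[k]
    # Left-endpoint quadrature for the memory term  int_0^t g(t, s, y(s)) ds.
    mem = dt * sum(g(t, ts[j], y[j]) for j in range(k))
    drift = a * y[k] + b_ * u(t) + f(t, y[k]) + mem
    y[k + 1] = y[k] + drift * dt + sigma(t, y[k]) * np.sqrt(dt) * rng.standard_normal()

assert np.all(np.isfinite(y))
assert np.max(np.abs(y)) < 10.0          # stable dissipative dynamics stay bounded
```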

2. Preliminaries

Let $(\Omega, \mathcal{F}, P)$ be a complete probability space equipped with a normal filtration $\{\mathcal{F}_t\}$, $t \in J = [0, b]$. Let $H$, $U$, and $E$ be separable Hilbert spaces and $W$ a $Q$-Wiener process on $(\Omega, \mathcal{F}_b, P)$ with covariance operator $Q$ such that $\operatorname{tr} Q < \infty$. We assume that there exist a complete orthonormal system $\{e_n\}$ in $E$, a bounded sequence of nonnegative real numbers $\{\lambda_n\}$ such that $Q e_n = \lambda_n e_n$, $n = 1, 2, 3, \dots$, and a sequence $\{\beta_n\}$ of independent Brownian motions such that:

$$ W(t) = \sum_{n=1}^{\infty} \sqrt{\lambda_n}\, \beta_n(t)\, e_n, \quad t \in J. $$
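The series representation of $W(t)$ can be simulated by truncating it to finitely many modes. A sketch (our illustration, assuming NumPy) with hypothetical eigenvalues $\lambda_n = n^{-2}$, so that $\operatorname{tr} Q = \sum_n \lambda_n < \infty$:

```python
import numpy as np

rng = np.random.default_rng(1)

n_modes, n_steps, T = 50, 200, 1.0
dt = T / n_steps
lam = 1.0 / np.arange(1, n_modes + 1) ** 2   # eigenvalues of Q (hypothetical choice)
assert lam.sum() < 1.645                      # partial sums of 1/n^2 stay below pi^2/6

# Independent scalar Brownian motions beta_n(t) on a common time grid.
dB = np.sqrt(dt) * rng.standard_normal((n_modes, n_steps))
beta = np.concatenate([np.zeros((n_modes, 1)), np.cumsum(dB, axis=1)], axis=1)

# Coordinates of W(t) in the basis {e_n}: <W(t), e_n> = sqrt(lam_n) * beta_n(t),
# so Var <W(t), e_n> = lam_n * t and E||W(t)||^2 = t * tr Q.
W = np.sqrt(lam)[:, None] * beta

assert W.shape == (n_modes, n_steps + 1)
assert np.all(W[:, 0] == 0.0)                 # W(0) = 0
```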
Let $\mathcal{F}_t = \mathcal{F}_t^w$, where $\mathcal{F}_t^w$ is the $\sigma$-algebra generated by $W$. Let $L_2^0 = L_2(Q^{1/2} E; H)$ be the space of all Hilbert–Schmidt operators from $Q^{1/2} E$ to $H$ with the norm $\|\zeta\|^2 = \operatorname{tr}[\zeta Q \zeta^*]$. Let $L_2(J, H)$ be the space of all $\mathcal{F}_t$-adapted, $H$-valued, measurable, square-integrable processes on $J \times \Omega$.
Let $C([0, b]; L_2(\mathcal{F}, H))$ be the Banach space of continuous maps from $[0, b]$ into $L_2(\mathcal{F}, H)$ satisfying the condition:

$$ \sup_{t \in J} E\|y(t)\|^2 < \infty. $$

Let $H_2 = C_2([0, b]; H)$ be the closed subspace of $C([0, b]; L_2(\mathcal{F}, H))$ consisting of measurable, $\mathcal{F}_t$-adapted, $H$-valued processes $\varphi \in C([0, b]; L_2(\mathcal{F}, H))$, endowed with the norm:

$$ \|\varphi\|_{H_2} = \Big( \sup_{t \in [0, b]} E\|\varphi(t)\|_H^2 \Big)^{1/2}. $$
Definition 1.
A stochastic process $y \in H_2$ is a mild solution of (1)–(2) if, for each $u \in L_2([0, b], U)$, it satisfies the integral equation:

$$ y(t) = T(t)[y_0 + h(y)] + \int_0^t T(t - s)\big[Bu(s) + f(s, y(s))\big]\, ds + \int_0^t T(t - s) \int_0^s g(s, r, y(r))\, dr\, ds + \int_0^t T(t - s)\, \sigma(s, y(s))\, dw(s). $$
Let us introduce the following operators and sets [24]. Define $L_b \in L(L_2(J \times \Omega, U), L_2(\Omega, \mathcal{F}_b, H))$ by:

$$ L_b u = \int_0^b T(b - s)\, B u(s)\, ds, $$

where $L(X, Y)$ denotes the set of bounded linear operators from $X$ to $Y$. Its adjoint operator $L_b^* : L_2(\Omega, \mathcal{F}_b, H) \to L_2(J \times \Omega, U)$ is given by:

$$ L_b^* z = B^* T^*(b - t)\, E\{ z \mid \mathcal{F}_t \}. $$
The set of all states reachable in time $b$ from the initial state $y(0) = y_0 \in L_2(\Omega, \mathcal{F}_0, H)$ using admissible controls is defined as:

$$ \mathcal{R}_b(U_{ad}) = \big\{ y(b; y_0, u) \in L_2(\Omega, \mathcal{F}_b, H) : u \in U_{ad} \big\}, $$

where

$$ y(b; y_0, u) = T(b)[y_0 + h(y)] + \int_0^b T(b - s) B u(s)\, ds + \int_0^b T(b - s) f(s, y(s))\, ds + \int_0^b T(b - s) \int_0^s g(s, r, y(r))\, dr\, ds + \int_0^b T(b - s)\, \sigma(s, y(s))\, dw(s). $$
Let us introduce the linear controllability operator $\Pi_0^b \in L(L_2(\Omega, \mathcal{F}_b, H), L_2(\Omega, \mathcal{F}_b, H))$ as follows:

$$ \Pi_0^b \{\cdot\} = L_b (L_b)^* \{\cdot\} = \int_0^b T(b - t) B B^* T^*(b - t)\, E\{ \cdot \mid \mathcal{F}_t \}\, dt. $$

The corresponding controllability operator for the deterministic model is:

$$ \Gamma_s^b = L_b(s) L_b^*(s) = \int_s^b T(b - t) B B^* T^*(b - t)\, dt. $$
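In the scalar case $H = U = \mathbb{R}$ with semigroup $T(t) = e^{at}$ and $B = b_{\mathrm{op}}$, the deterministic Gramian $\Gamma_s^b$ has the closed form $b_{\mathrm{op}}^2 (1 - e^{2a(b-s)})/(-2a)$, which lets us sanity-check a quadrature approximation (an illustration we add here, assuming NumPy):

```python
import numpy as np

def gramian_1d(a, b_op, s, b, n=4000):
    """Trapezoidal approximation of Gamma_s^b = int_s^b T(b-t) B B* T*(b-t) dt
    for the scalar semigroup T(t) = exp(a t) and scalar B = b_op."""
    ts = np.linspace(s, b, n + 1)
    vals = np.exp(2 * a * (b - ts)) * b_op ** 2
    h = (b - s) / n
    return h * (vals[0] / 2 + vals[1:-1].sum() + vals[-1] / 2)

a, b_op = -1.0, 2.0
G = gramian_1d(a, b_op, 0.0, 1.0)
exact = b_op ** 2 * (1 - np.exp(2 * a)) / (-2 * a)   # closed form of the integral
assert abs(G - exact) < 1e-6
```

For matrix generators one would replace the scalar exponential by a matrix exponential (e.g., `scipy.linalg.expm`); the scalar case is kept here so the check is self-contained.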
Definition 2.
The stochastic system (1)–(2) is approximately controllable on $[0, b]$ if $\overline{\mathcal{R}(b)} = L_2(\Omega, \mathcal{F}_b, H)$, where $\mathcal{R}(b) = \{ y(b; u) : u \in U_{ad} \}$ and $L_2([0, b], U)$ is the closed subspace of $L_2([0, b] \times \Omega, U)$ consisting of all $\mathcal{F}_t$-adapted, $U$-valued stochastic processes.
Lemma 1.
[25] Let $\sigma : J \times \Omega \to L_2^0$ be a strongly measurable mapping such that $\int_0^b E\|\sigma(t)\|_{L_2^0}^p\, dt < \infty$. Then:

$$ E \Big\| \int_0^t \sigma(s)\, dw(s) \Big\|^p \le L_\sigma \int_0^t E\|\sigma(s)\|_{L_2^0}^p\, ds $$

for all $t \in J$ and $p \ge 2$, where $L_\sigma$ is a constant depending on $p$ and $b$.
Lemma 2.
(Sadovskii's fixed point theorem) Suppose that $N$ is a nonempty, closed, bounded, and convex subset of a Banach space $H$ and $\Phi : N \to N$ is a condensing operator. Then, the operator $\Phi$ has a fixed point in $N$.

3. Main Result

To prove our main results, we list the following hypotheses:
Hypothesis 1 (H1).
$A$ is the infinitesimal generator of a compact semigroup $\{T(t) : t \ge 0\}$ on $H$.
Hypothesis 2 (H2).
The function $f : J \times H \to H$ satisfies linear growth and Lipschitz conditions; i.e., there exist positive constants $C_1, C_2$ such that:

$$ \|f(t, y_1) - f(t, y_2)\|^2 \le C_1 \|y_1 - y_2\|^2, \qquad \|f(t, y)\|^2 \le C_2 (1 + \|y\|^2). $$
Hypothesis 3 (H3).
The function $\sigma : J \times H \to L_2^0$ satisfies linear growth and Lipschitz conditions; i.e., there exist positive constants $N_1, N_2$ such that:

$$ \|\sigma(t, y_1) - \sigma(t, y_2)\|^2 \le N_1 \|y_1 - y_2\|^2, \qquad \|\sigma(t, y)\|^2 \le N_2 (1 + \|y\|^2). $$
Hypothesis 4 (H4).
The function $g : J \times J \times H \to H$ satisfies linear growth and Lipschitz conditions; i.e., there exist positive constants $K_1, K_2$ such that:

$$ \Big\| \int_0^t [g(t, s, y_1(s)) - g(t, s, y_2(s))]\, ds \Big\|^2 \le K_1 \|y_1 - y_2\|^2, \qquad \Big\| \int_0^t g(t, s, y(s))\, ds \Big\|^2 \le K_2 (1 + \|y\|^2). $$
Hypothesis 5 (H5).
The function $h$ is continuous, and there exists a positive constant $M_g$ such that:

$$ \|h(y_1) - h(y_2)\|^2 \le M_g \|y_1 - y_2\|^2, \qquad \|h(y)\|^2 \le M_g (1 + \|y\|^2), $$

for all $y_1, y_2 \in C(J, H)$.
Hypothesis 6 (H6).
For each $0 \le t \le b$, the operator $\alpha(\alpha I + \Gamma_t^b)^{-1} \to 0$ in the strong operator topology as $\alpha \to 0^+$, where

$$ \Gamma_t^b = \int_t^b T(b - s) B B^* T^*(b - s)\, ds $$

is the controllability Gramian.
Observe that the linear deterministic system corresponding to (1)–(2),

$$ dy(t) = [A y(t) + B u(t)]\, dt, \quad t \in J, \qquad y(0) = y_0, $$

is approximately controllable on $[t, b]$ if and only if the operator $\alpha(\alpha I + \Gamma_t^b)^{-1} \to 0$ strongly as $\alpha \to 0^+$. For simplicity, let us set $M_B = \|B\|$.
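The resolvent condition in (H6) can be observed numerically in finite dimensions: for a positive-definite Gramian, $\|\alpha(\alpha I + \Gamma)^{-1}\| \to 0$ as $\alpha \to 0^+$, while a singular Gramian (an uncontrollable direction) keeps the norm pinned at 1. A sketch with hypothetical stand-in Gramians, assuming NumPy:

```python
import numpy as np

# A positive-definite stand-in Gramian (hypothetical, for illustration).
Gamma = np.array([[2.0, 0.5], [0.5, 1.0]])
norms = []
for alpha in [1.0, 1e-2, 1e-4]:
    R = alpha * np.linalg.inv(alpha * np.eye(2) + Gamma)
    norms.append(np.linalg.norm(R, 2))
# Eigenvalues of R are alpha / (alpha + lambda_i), so the norm decays with alpha.
assert norms[0] > norms[1] > norms[2] and norms[2] < 1e-3

# A singular Gramian leaves a direction the resolvent cannot damp:
Gamma_sing = np.array([[1.0, 0.0], [0.0, 0.0]])
R = 1e-8 * np.linalg.inv(1e-8 * np.eye(2) + Gamma_sing)
assert abs(np.linalg.norm(R, 2) - 1.0) < 1e-6   # norm stays 1 along the null direction
```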
Two lemmas concerning approximate controllability will be used in what follows. The first is needed to define the control function.
Lemma 3.
[7] For any $y_b \in L_2(\Omega, \mathcal{F}_b, H)$, there exists $\phi \in L_2(J, L_2^0)$ such that:

$$ y_b = E y_b + \int_0^b \phi(s)\, dw(s). $$
Now, for any $\alpha > 0$ and $y_b \in L_2(\Omega, \mathcal{F}_b, H)$, we define the control function (writing $\Gamma_s^b$ for the deterministic controllability operator throughout):

$$ U^\alpha(t, y_1) = B^* T^*(b - t)\Big[ (\alpha I + \Gamma_0^b)^{-1}\big( E y_b - T(b)(y_0 + h(y_1)) \big) + \int_0^t (\alpha I + \Gamma_s^b)^{-1} \phi(s)\, dw(s) \Big] - B^* T^*(b - t) \int_0^t (\alpha I + \Gamma_s^b)^{-1} T(b - s) f(s, y_1(s))\, ds - B^* T^*(b - t) \int_0^t (\alpha I + \Gamma_s^b)^{-1} T(b - s) \int_0^s g(s, r, y_1(r))\, dr\, ds - B^* T^*(b - t) \int_0^t (\alpha I + \Gamma_s^b)^{-1} T(b - s)\, \sigma(s, y_1(s))\, dw(s). $$
Lemma 4.
There exists a positive constant $M_u$ such that, for all $y_1, y_2 \in H_2$, we have:

$$ E\|U^\alpha(t, y_1) - U^\alpha(t, y_2)\|^2 \le \frac{M_u}{\alpha^2} \|y_1 - y_2\|^2, $$

$$ E\|U^\alpha(t, y_1)\|^2 \le \frac{M_u}{\alpha^2} (1 + \|y_1\|^2). $$
Proof. 
Let $y_1, y_2 \in H_2$. From Hölder's inequality, Lemma 1, and the assumptions on the data, we obtain:
$$ E\|U^\alpha(t, y_1) - U^\alpha(t, y_2)\|^2 \le 4 E\big\| B^* T^*(b-t)(\alpha I + \Gamma_0^b)^{-1} T(b)[h(y_1) - h(y_2)] \big\|^2 + 4 E\Big\| B^* T^*(b-t) \int_0^t (\alpha I + \Gamma_s^b)^{-1} T(b-s)[f(s, y_1(s)) - f(s, y_2(s))]\, ds \Big\|^2 + 4 E\Big\| B^* T^*(b-t) \int_0^t (\alpha I + \Gamma_s^b)^{-1} T(b-s) \int_0^s [g(s, r, y_1(r)) - g(s, r, y_2(r))]\, dr\, ds \Big\|^2 + 4 E\Big\| B^* T^*(b-t) \int_0^t (\alpha I + \Gamma_s^b)^{-1} T(b-s)[\sigma(s, y_1(s)) - \sigma(s, y_2(s))]\, dw(s) \Big\|^2 $$

$$ \le \frac{4}{\alpha^2} M_B^2 M^4 M_g \|y_1 - y_2\|_{H_2}^2 + \frac{4}{\alpha^2} M_B^2 M^4 b \int_0^t C_1 E\|y_1(s) - y_2(s)\|_H^2\, ds + \frac{4}{\alpha^2} M_B^2 M^4 b \int_0^t K_1 E\|y_1(s) - y_2(s)\|_H^2\, ds + \frac{4}{\alpha^2} M_B^2 M^4 L_\sigma \int_0^t N_1 E\|y_1(s) - y_2(s)\|_H^2\, ds $$

$$ \le \frac{4}{\alpha^2} M_B^2 M^4 \big[ M_g + C_1 b^2 + L_\sigma N_1 b + b^2 K_1 \big] \|y_1 - y_2\|_{H_2}^2 = \frac{M_u}{\alpha^2} \|y_1 - y_2\|_{H_2}^2, $$
where $M_u = 4 M_B^2 M^4 \big[ M_g + C_1 b^2 + L_\sigma N_1 b + b^2 K_1 \big]$. The second inequality follows in the same way, using the linear growth conditions in place of the Lipschitz conditions. □
Theorem 1.
If the hypotheses (H1)–(H6) are fulfilled, then the system (1)–(2) has a mild solution on $[0, b]$, provided that:

$$ 12 M^2 M_g + 6 M^2 \Big[ 6 M_B^2 b^2 \frac{M_u}{\alpha^2} + b^2 C_2 + L_\sigma N_2 b + b K_2 \Big] < 1, \tag{5} $$

$$ 5 M^2 M_B^2 b \frac{M_u}{\alpha^2} + 5 M^2 b C_1 + 5 M^2 L_\sigma N_1 b + 5 M^2 K_1 b < 1. \tag{6} $$
Proof. 
The proof is divided into three steps. For any $\alpha > 0$, define the operator $\Phi_\alpha : H_2 \to H_2$ by:

$$ (\Phi_\alpha y)(t) = T(t)[y_0 + h(y)] + \int_0^t T(t - s)\big[B U^\alpha(s, y) + f(s, y(s))\big]\, ds + \int_0^t T(t - s) \int_0^s g(s, r, y(r))\, dr\, ds + \int_0^t T(t - s)\, \sigma(s, y(s))\, dw(s). $$
Step 1.
For any $y \in H_2$, $\Phi_\alpha(y)(t)$ is continuous on $J$ in the $L_2$-sense. Let $0 \le t_1 \le t_2 \le b$. Then, for any fixed $y \in H_2$, it follows from Hölder's inequality, Lemma 1, and the assumptions of the theorem that:
$$ E\|(\Phi_\alpha y)(t_2) - (\Phi_\alpha y)(t_1)\|^2 \le 9 \Big[ E\|(T(t_2) - T(t_1))[y_0 + h(y)]\|^2 + E\Big\| \int_0^{t_1} [T(t_2 - s) - T(t_1 - s)] f(s, y(s))\, ds \Big\|^2 + E\Big\| \int_{t_1}^{t_2} T(t_2 - s) f(s, y(s))\, ds \Big\|^2 + E\Big\| \int_0^{t_1} [T(t_2 - s) - T(t_1 - s)] \int_0^s g(s, r, y(r))\, dr\, ds \Big\|^2 + E\Big\| \int_{t_1}^{t_2} T(t_2 - s) \int_0^s g(s, r, y(r))\, dr\, ds \Big\|^2 + E\Big\| \int_0^{t_1} [T(t_2 - s) - T(t_1 - s)] \sigma(s, y(s))\, dw(s) \Big\|^2 + E\Big\| \int_{t_1}^{t_2} T(t_2 - s) \sigma(s, y(s))\, dw(s) \Big\|^2 + E\Big\| \int_0^{t_1} [T(t_2 - s) - T(t_1 - s)] B U^\alpha(s, y)\, ds \Big\|^2 + E\Big\| \int_{t_1}^{t_2} T(t_2 - s) B U^\alpha(s, y)\, ds \Big\|^2 \Big] $$

$$ \le 9 \Big[ 2 E\|(T(t_2) - T(t_1)) y_0\|^2 + 2 E\|(T(t_2) - T(t_1)) h(y)\|^2 + t_1 \int_0^{t_1} E\|[T(t_2 - s) - T(t_1 - s)] f(s, y(s))\|^2\, ds + M^2 (t_2 - t_1) \int_{t_1}^{t_2} E\|f(s, y(s))\|^2\, ds + t_1 \int_0^{t_1} E\Big\|[T(t_2 - s) - T(t_1 - s)] \int_0^s g(s, r, y(r))\, dr\Big\|^2\, ds + M^2 (t_2 - t_1) \int_{t_1}^{t_2} E\Big\|\int_0^s g(s, r, y(r))\, dr\Big\|^2\, ds + L_\sigma \int_0^{t_1} E\|[T(t_2 - s) - T(t_1 - s)] \sigma(s, y(s))\|^2\, ds + M^2 L_\sigma \int_{t_1}^{t_2} E\|\sigma(s, y(s))\|^2\, ds + t_1 \int_0^{t_1} E\|[T(t_2 - s) - T(t_1 - s)] B U^\alpha(s, y)\|^2\, ds + M_B^2 M^2 (t_2 - t_1) \int_{t_1}^{t_2} E\|U^\alpha(s, y)\|^2\, ds \Big]. $$
Thus, by the Lebesgue dominated convergence theorem, the right-hand side of the above inequality tends to zero as $t_2 - t_1 \to 0$. Accordingly, $\Phi_\alpha(y)(t)$ is continuous from the right on $[0, b)$. A similar argument shows that it is also continuous from the left on $(0, b]$. Consequently, $\Phi_\alpha(y)(t)$ is continuous on $J$ in the $L_2$-sense.
Step 2.
For each positive integer $q$, let $B_q = \{ y \in H_2 : E\|y(t)\|_H^2 \le q \}$. The set $B_q$ is clearly a bounded, closed, and convex subset of $H_2$.
From Lemma 1, Holder’s inequality, and the assumption (H1), we have:
$$ E\Big\|\int_0^t T(t-s) f(s, y(s))\,ds\Big\|_H^2 \le E\Big(\int_0^t \|T(t-s) f(s, y(s))\|_H\, ds\Big)^2 \le M^2 E\Big(\int_0^t \|f(s, y(s))\|_H\, ds\Big)^2 \le M^2 b \int_0^t C_2 \big(1 + E\|y(s)\|_H^2\big)\, ds \le M^2 b C_2 \int_0^t \Big(1 + \sup_{s \in [0, b]} E\|y(s)\|_H^2\Big)\, ds \le M^2 b^2 C_2 \big(1 + \|y\|_{H_2}^2\big), $$
which shows that $T(t - s) f(s, y(s))$ is integrable on $J$; hence, by Bochner's theorem, $\Phi_\alpha$ is well defined on $B_q$. Next, from assumption (H4), it follows that:
$$ E\Big\|\int_0^t T(t-s) \int_0^s g(s, r, y(r))\,dr\,ds\Big\|^2 \le b M^2 \int_0^t E\Big\|\int_0^s g(s, r, y(r))\,dr\Big\|^2\, ds \le b M^2 \int_0^t K_2 \big(1 + E\|y(s)\|_H^2\big)\, ds \le b M^2 K_2 \int_0^t \Big(1 + \sup_{s \in [0, b]} E\|y(s)\|_H^2\Big)\, ds \le b^2 M^2 K_2 \big(1 + \|y\|_{H_2}^2\big). $$
Similarly, from assumption (H3) and Lemma 1, we have:

$$ E\Big\|\int_0^t T(t-s)\, \sigma(s, y(s))\,dw(s)\Big\|^2 \le L_\sigma \int_0^t E\|T(t-s)\, \sigma(s, y(s))\|_{L_2^0}^2\, ds \le L_\sigma M^2 \int_0^t E\|\sigma(s, y(s))\|_{L_2^0}^2\, ds \le L_\sigma M^2 N_2 \int_0^t \Big(1 + \sup_{s \in [0, b]} E\|y(s)\|_H^2\Big)\, ds \le L_\sigma M^2 N_2 b \big(1 + \|y\|_{H_2}^2\big). $$
Now, we claim that there exists a positive number q such that Φ α ( B q ) B q .
If this were not true, then for each positive integer $q$ there would exist a function $y^q(\cdot) \in B_q$ with $\Phi_\alpha y^q \notin B_q$; that is, $E\|\Phi_\alpha y^q(t)\|_H^2 > q$ for some $t \in J$. On the other hand, from assumptions (H2), (H3), and Lemma 4, we have:
$$ q \le E\|(\Phi_\alpha y^q)(t)\|_H^2 \le 5 E\|T(t)[y_0 + h(y)]\|_H^2 + 5 E\Big\|\int_0^t T(t-s) B U^\alpha(s, y)\,ds\Big\|_H^2 + 5 E\Big\|\int_0^t T(t-s) f(s, y(s))\,ds\Big\|_H^2 + 5 E\Big\|\int_0^t T(t-s)\int_0^s g(s, r, y(r))\,dr\,ds\Big\|_H^2 + 5 E\Big\|\int_0^t T(t-s)\, \sigma(s, y(s))\,dw(s)\Big\|_H^2 $$

$$ \le 5 M^2 \big[ 2 E\|y_0\|^2 + 2 E\|h(y)\|^2 \big] + 5 M^2 M_B^2 b^2 \frac{M_u}{\alpha^2} \big(1 + \|y\|_{H_2}^2\big) + 5 M^2 b^2 C_2 \big(1 + \|y\|_{H_2}^2\big) + 5 M^2 b^2 K_2 \big(1 + \|y\|_{H_2}^2\big) + 5 L_\sigma M^2 N_2 b \big(1 + \|y\|_{H_2}^2\big) $$

$$ \le \Big( 10 M^2 E\|y_0\|^2 + 10 M^2 M_g + 5 M^2 M_B^2 b^2 \frac{M_u}{\alpha^2} + 5 M^2 b^2 C_2 + 5 L_\sigma M^2 N_2 b + 5 M^2 b^2 K_2 \Big) (1 + q). $$
Dividing both sides by $q$ and letting $q \to \infty$, we get:

$$ 10 M^2 M_g + 5 M^2 \Big[ M_B^2 b^2 \frac{M_u}{\alpha^2} + b^2 C_2 + L_\sigma N_2 b + b^2 K_2 \Big] \ge 1. $$

This contradicts condition (5). Hence, for some positive integer $q$, $\Phi_\alpha B_q \subseteq B_q$.
Step 3.
Define the operators $\Phi_\alpha^1$ and $\Phi_\alpha^2$ as:

$$ (\Phi_\alpha^1 y)(t) = T(t)[y_0 + h(y)], $$

$$ (\Phi_\alpha^2 y)(t) = \int_0^t T(t-s)\big[B U^\alpha(s, y) + f(s, y(s))\big]\,ds + \int_0^t T(t-s)\int_0^s g(s, r, y(r))\,dr\,ds + \int_0^t T(t-s)\, \sigma(s, y(s))\,dw(s), \quad t \in J. $$
Now, we prove that $\Phi_\alpha^1$ is completely continuous, while $\Phi_\alpha^2$ is a contraction operator.
The complete continuity of $\Phi_\alpha^1$ follows from assumption (H5) and the compactness of the semigroup $T(t)$. To prove that $\Phi_\alpha^2$ is a contraction, take $y_1, y_2 \in B_q$. Then, from assumptions (H2)–(H4) and for each $t \in J$, we have:
$$ E\|(\Phi_\alpha^2 y_1)(t) - (\Phi_\alpha^2 y_2)(t)\|_H^2 \le 4 E\Big\|\int_0^t T(t-s) B [U^\alpha(s, y_1) - U^\alpha(s, y_2)]\,ds\Big\|_H^2 + 4 E\Big\|\int_0^t T(t-s)[f(s, y_1(s)) - f(s, y_2(s))]\,ds\Big\|_H^2 + 4 E\Big\|\int_0^t T(t-s)\int_0^s [g(s, r, y_1(r)) - g(s, r, y_2(r))]\,dr\,ds\Big\|_H^2 + 4 E\Big\|\int_0^t T(t-s)[\sigma(s, y_1(s)) - \sigma(s, y_2(s))]\,dw(s)\Big\|_H^2 \le \Big( 4 M^2 M_B^2 b \frac{M_u}{\alpha^2} + 4 M^2 b C_1 + 4 M^2 L_\sigma N_1 b + 4 M^2 K_1 b \Big) \|y_1 - y_2\|_{H_2}^2. $$
Therefore,

$$ E\|(\Phi_\alpha^2 y_1)(t) - (\Phi_\alpha^2 y_2)(t)\|_H^2 \le K_0 \|y_1 - y_2\|_{H_2}^2, $$

where:

$$ K_0 = 4 M^2 M_B^2 b \frac{M_u}{\alpha^2} + 4 M^2 b C_1 + 4 M^2 L_\sigma N_1 b + 4 M^2 K_1 b < 1 $$

by condition (6). Thus, $\Phi_\alpha^2$ is a contraction mapping.
Now, $\Phi_\alpha = \Phi_\alpha^1 + \Phi_\alpha^2$ is a condensing map on $B_q$, so Sadovskii's fixed point theorem applies. Hence, we conclude that there exists a fixed point $y(\cdot)$ of $\Phi_\alpha$ on $B_q$, which is a mild solution of (1)–(2). □
Theorem 2.
If the assumptions (H1)–(H6) are fulfilled and $f$, $\sigma$, and $g$ are uniformly bounded, then the system (1)–(2) is approximately controllable on $[0, b]$.
Proof. 
Let $y^\alpha$ be a fixed point of $\Phi_\alpha$ in $H_2$. Using the stochastic Fubini theorem, it is easy to see that:

$$ y^\alpha(b) = y_b - \alpha(\alpha I + \Gamma_0^b)^{-1}\big( E y_b - T(b)(y_0 + h(y^\alpha)) \big) + \alpha \int_0^b (\alpha I + \Gamma_s^b)^{-1} T(b-s) f(s, y^\alpha(s))\,ds + \alpha \int_0^b (\alpha I + \Gamma_s^b)^{-1} T(b-s) \int_0^s g(s, r, y^\alpha(r))\,dr\,ds + \alpha \int_0^b (\alpha I + \Gamma_s^b)^{-1} \big[ T(b-s)\, \sigma(s, y^\alpha(s)) - \phi(s) \big]\,dw(s). $$
By the assumption that $f$, $\sigma$, and $g$ are uniformly bounded, there exists $C > 0$ such that:

$$ \|f(s, y^\alpha(s))\|^2 + \|\sigma(s, y^\alpha(s))\|^2 + \Big\|\int_0^s g(s, r, y^\alpha(r))\,dr\Big\|^2 \le C $$

on $[0, b] \times \Omega$.
Then, there is a subsequence, still denoted by

$$ \Big\{ f(s, y^\alpha(s)),\ \sigma(s, y^\alpha(s)),\ \int_0^s g(s, r, y^\alpha(r))\,dr \Big\}, $$

weakly converging to, say, $(f(s), \sigma(s), g(s))$ in $H \times L_2^0 \times H$. The compactness of $T(t)$ then implies that, in $J \times \Omega$:

$$ T(b-s) f(s, y^\alpha(s)) \to T(b-s) f(s), \qquad T(b-s)\, \sigma(s, y^\alpha(s)) \to T(b-s)\, \sigma(s), \qquad T(b-s)\int_0^s g(s, r, y^\alpha(r))\,dr \to T(b-s)\, g(s). $$
On the other hand, by assumption (H6), for all $0 \le s \le b$, the operator

$$ \alpha(\alpha I + \Gamma_s^b)^{-1} \to 0 \quad \text{strongly as } \alpha \to 0^+, $$

and moreover:

$$ \|\alpha(\alpha I + \Gamma_s^b)^{-1}\| \le 1. $$
Hence, by the Lebesgue dominated convergence theorem, we obtain:

$$ E\|y^\alpha(b) - y_b\|^2 \le 8 \big\|\alpha(\alpha I + \Gamma_0^b)^{-1}\big( E y_b - T(b)[y_0 + h(y^\alpha)] \big)\big\|^2 + 8 E\int_0^b \big\|\alpha(\alpha I + \Gamma_s^b)^{-1} \phi(s)\big\|_{L_2^0}^2\,ds + 8 E\Big(\int_0^b \big\|\alpha(\alpha I + \Gamma_s^b)^{-1} T(b-s)[f(s, y^\alpha(s)) - f(s)]\big\|\,ds\Big)^2 + 8 E\Big(\int_0^b \big\|\alpha(\alpha I + \Gamma_s^b)^{-1} T(b-s) f(s)\big\|\,ds\Big)^2 + 8 E\int_0^b \big\|\alpha(\alpha I + \Gamma_s^b)^{-1} T(b-s)[\sigma(s, y^\alpha(s)) - \sigma(s)]\big\|_{L_2^0}^2\,ds + 8 E\int_0^b \big\|\alpha(\alpha I + \Gamma_s^b)^{-1} T(b-s)\, \sigma(s)\big\|_{L_2^0}^2\,ds + 8 E\Big(\int_0^b \Big\|\alpha(\alpha I + \Gamma_s^b)^{-1} T(b-s)\int_0^s [g(s, r, y^\alpha(r)) - g(s, r)]\,dr\Big\|\,ds\Big)^2 + 8 E\Big(\int_0^b \Big\|\alpha(\alpha I + \Gamma_s^b)^{-1} T(b-s)\int_0^s g(s, r)\,dr\Big\|\,ds\Big)^2 \longrightarrow 0 \quad \text{as } \alpha \to 0^+. $$
This results in the approximate controllability. □

4. Example

Consider the stochastic control system:
$$ dy(t, \theta) = \Big[ \frac{\partial^2 y(t, \theta)}{\partial \theta^2} + B u(t, \theta) + p(t, y(t, \theta)) + \int_0^t q(t, s, y(s, \theta))\,ds \Big] dt + k(t, y(t, \theta))\,dw(t), \tag{7} $$

$$ y(t, 0) = y(t, \pi) = 0, \quad t \in [0, T], \quad 0 < \theta < \pi, \tag{8} $$

$$ y(0, \theta) + \sum_{i=1}^n \alpha_i y(t_i, \theta) = y_0(\theta). \tag{9} $$
Let $X = L_2[0, \pi]$. Here, $B$ is a bounded linear operator from a Hilbert space $U$ into $X$; $p$, $k$, and $q$ are continuous and uniformly bounded; $u(t)$ is a feedback control; and $w$ is a $Q$-Wiener process.
Let $A : X \to X$ be the operator defined by:

$$ A y = \frac{\partial^2 y}{\partial \theta^2} $$

with domain:

$$ D(A) = \big\{ y \in X : y, y_\theta \text{ absolutely continuous}, \ y_{\theta\theta} \in X, \ y(0) = y(\pi) = 0 \big\}. $$
Define $f : J \times X \to X$ by $f(t, y)(\theta) = p(t, y(\theta))$ for $(t, y) \in J \times X$ and $\theta \in [0, \pi]$; $\sigma : J \times X \to L_2^0$ by $\sigma(t, y)(\theta) = k(t, y(\theta))$; $g : J \times J \times X \to X$ by $g(t, s, y)(\theta) = q(t, s, y(\theta))$; and the function $s : C(J, X) \to X$ (playing the role of $h$ in (2)) by:

$$ s(y)(\theta) = \sum_{i=1}^n \alpha_i y(t_i, \theta), $$

for $0 < t_i < T$ and $\theta \in [0, \pi]$.
With these choices of $A$, $B$, $f$, $\sigma$, $g$, and $s$, the system (1)–(2) is the abstract formulation of (7)–(9), and the conditions in (H1) and (H2) are fulfilled. The semigroup generated by $A$ is:

$$ T(t) y = \sum_{n=1}^\infty e^{-n^2 t} (y, e_n)\, e_n(\theta), \quad y \in X, $$

where $e_n$ are the normalized eigenfunctions of $A$.
Now, define an infinite-dimensional control space:

$$ U = \Big\{ u : u = \sum_{n=2}^\infty u_n e_n(\theta), \ \sum_{n=2}^\infty u_n^2 < \infty \Big\} $$

with the norm

$$ \|u\|_U = \Big( \sum_{n=2}^\infty u_n^2 \Big)^{1/2} $$

and a linear continuous mapping $B$ from $U$ into $X$ as follows:

$$ B u = 2 u_2 e_1(\theta) + \sum_{n=2}^\infty u_n e_n(\theta). $$

It is well known that for $u(t, \theta, \omega) = \sum_{n=2}^\infty u_n(t, \omega) e_n(\theta) \in L_2(J, U)$:

$$ B u(t) = 2 u_2(t) e_1(\theta) + \sum_{n=2}^\infty u_n(t) e_n(\theta) \in L_2(J, X). $$
Moreover,

$$ B^* \nu = (2\nu_1 + \nu_2) e_2(\theta) + \sum_{n=3}^\infty \nu_n e_n(\theta), \qquad B^* T^*(t) z = \big(2 z_1 e^{-t} + z_2 e^{-4t}\big) e_2(\theta) + \sum_{n=3}^\infty z_n e^{-n^2 t} e_n(\theta), $$

for $\nu = \sum_{n=1}^\infty \nu_n e_n(\theta)$ and $z = \sum_{n=1}^\infty z_n e_n(\theta)$.
Now, suppose $B^* T^*(t) z = 0$ for all $t \in [0, T]$. It follows that:

$$ \big\|2 z_1 e^{-t} + z_2 e^{-4t}\big\|^2 + \sum_{n=3}^\infty \big\|z_n e^{-n^2 t}\big\|^2 = 0, \quad t \in [0, T] \ \Longrightarrow\ z_n = 0, \ n = 1, 2, 3, \dots \ \Longrightarrow\ z = 0. $$
Consequently, by Theorem 4.1.7 of [3], the deterministic linear system corresponding to (7)–(9) is approximately controllable on $[0, T]$. Hence, the system (7)–(9) is approximately controllable, since $f$, $\sigma$, $g$, and $s$ satisfy the required assumptions.
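The conclusion can be checked numerically on a finite truncation of this example. With $A$'s eigenvalues $-n^2$ and $B$ as defined above, the deterministic Gramian has the closed-form entries $\Gamma_{ij} = (BB^\top)_{ij}\,(1 - e^{-(i^2 + j^2)T})/(i^2 + j^2)$, and its positive definiteness confirms controllability of the truncated system. A sketch we add for illustration, assuming NumPy:

```python
import numpy as np

N, T = 6, 1.0
n_vals = np.arange(1, N + 1)            # mode indices; eigenvalues of A are -n^2
Bm = np.zeros((N, N - 1))               # columns correspond to u_2, ..., u_N
Bm[0, 0] = 2.0                          # (Bu)_1 = 2 u_2
for i in range(1, N):
    Bm[i, i - 1] = 1.0                  # (Bu)_n = u_n for n >= 2

BBt = Bm @ Bm.T
# Gamma_{ij} = (B B^T)_{ij} (1 - exp(-(i^2 + j^2) T)) / (i^2 + j^2), the exact
# value of  int_0^T e^{-i^2 tau} (B B^T)_{ij} e^{-j^2 tau} d tau.
ii = n_vals[:, None] ** 2 + n_vals[None, :] ** 2
Gamma = BBt * (1 - np.exp(-ii * T)) / ii

# Positive definiteness: no z with B* T*(t) z = 0, matching the argument above.
assert np.linalg.eigvalsh(Gamma).min() > 1e-3
```

The smallest eigenvalue comes from the modes $e_1, e_2$ reached through the single channel $u_2$: they are distinguishable only because $e^{-t}$ and $e^{-4t}$ decay at different rates, which is exactly the point of the construction.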

5. Conclusions

In this paper, we study the approximate controllability of a semilinear stochastic integrodifferential system with nonlocal conditions in Hilbert spaces. The nonlocal initial condition is a generalization of the classical initial condition and is motivated by physical phenomena. The results are obtained by using Sadovskii’s fixed point theorem.
In future work, we intend to extend these results to a new class of stochastic differential equations driven by fractional Brownian motion.

Author Contributions

A.A. and K.R. conceived and designed the control problem, studied the approximate controllability of the semilinear stochastic integrodifferential system with nonlocal conditions in Hilbert spaces, analyzed the results, and contributed materials/analysis tools.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kalman, R.E. Controllability of linear systems. Contrib. Differ. Equ. 1963, 1, 190–213. [Google Scholar]
  2. Barnett, S. Introduction to Mathematical Control Theory; Clarendon Press: Oxford, UK, 1975. [Google Scholar]
  3. Curtain, R.F.; Zwart, H. Introduction to infinite dimensional linear systems theory. In Texts in Applied Mathematics; Springer: New York, NY, USA, 1995. [Google Scholar]
  4. Dauer, J.P.; Mahmudov, N.I. Approximate Controllability of semilinear function equations in Hilbert spaces. J. Math. Anal. Appl. 2002, 273, 310–327. [Google Scholar] [CrossRef]
  5. Mahmudov, N.I. Controllability of linear stochastic systems. IEEE Trans. Autom. Control 2001, 46, 724–731. [Google Scholar] [CrossRef]
  6. Mahmudov, N.I. Approximate controllability of semilinear deterministic and stochastic evolution equations in abstract spaces. SIAM J. Control Optim. 2003, 45, 1604–1622. [Google Scholar] [CrossRef]
  7. Mahmudov, N.I.; Zorlu, S. Controllability of nonlinear stochastic systems. Int. J. Control 2003, 76, 95–104. [Google Scholar] [CrossRef]
  8. Mahmudov, N.I. Controllability of linear stochastic systems in Hilbert spaces. J. Math. Anal. Appl. 2001, 259, 64–82. [Google Scholar] [CrossRef]
  9. Mahmudov, N.I. Controllability of semilinear stochastic systems in Hilbert spaces. J. Math. Anal. Appl. 2003, 288, 197–211. [Google Scholar] [CrossRef]
  10. Bashirov, A.E.; Kerimov, K.R. On controllability conception for stochastic systems. SIAM J. Control Optim. 1997, 35, 384–398. [Google Scholar] [CrossRef]
  11. Balachandran, K.; Dauer, J.P. Controllability of nonlinear systems in Banach spaces. J. Optim. Theory Appl. 2002, 115, 7–28. [Google Scholar] [CrossRef]
  12. Sakthivel, R.; Kim, J.H.; Mahmudov, N.I. On Controllability of nonlinear stochastic systems. Rep. Math. Phys 2006, 58, 433–443. [Google Scholar] [CrossRef]
  13. Chang, Y.K.; Nieto, J.J.; Li, W.S. Controllability of semilinear differential systems with nonlocal initial conditions in Banach spaces. J. Optim. Theory Appl. 2009, 142, 267–273. [Google Scholar] [CrossRef]
  14. Shukla, A.; Sukavanam, N.; Pandey, D.N. Approximate controllability of fractional semilinear stochastic system of order α ∈ (1,2]. Nonlinear Stud. 2015, 22, 131–138. [Google Scholar] [CrossRef]
  15. Kumar, S.; Sukavanam, N. On the Approximate Controllability of a Functional Order Control Systems with Delay. Nonlinear Dyn. Syst. Theory 2013, 13, 69–78. [Google Scholar]
  16. Ito, K. On stochastic differential equations. Mem. Am. Math. Soc. 1951, 4, 1–51. [Google Scholar] [CrossRef]
  17. Sakthivel, R.; Revathi, P.; Ren, Y. Existence of solutions for nonlinear fractional stochastic differential equations. Nonlinear Anal. TMA 2013, 81, 70–86. [Google Scholar] [CrossRef]
  18. Sakthivel, R.; Anandhi, E.R. Approximate controllability of impulsive differential equations with state-dependent delay. Int. J. Control 2010, 83, 387–393. [Google Scholar] [CrossRef]
  19. Shukla, A.; Sukavanam, N.; Pandey, D.N. Controllability of semilinear stochastic system with multiple delays in control. Adv. Control Optim. Dyn. Syst. 2014, 3, 306–312. [Google Scholar] [CrossRef]
  20. Sakthivel, R.; Nieto, J.J.; Mahmudov, N.I. Approximate controllability of nonlinear deterministic and stochastic systems with unbounded delay. Taiwan. J. Math 2010, 14, 1777–1797. [Google Scholar] [CrossRef]
  21. Balachandran, K.; Karthikeyan, S. Controllability of stochastic integrodifferential systems. Int. J. Control 2007, 80, 486–491. [Google Scholar] [CrossRef]
  22. Balachandran, K.; Sakthivel, R. Controllability of integrodifferential systems in Banach spaces. Appl. Math. Comput. 2001, 118, 63–71. [Google Scholar] [CrossRef]
  23. Balachandran, K.; Kim, J.H.; Karthikeyan, S. Controllability of semilinear stochastic integrodifferential systems. Kybernetika 2007, 43, 31–44. [Google Scholar]
  24. Luo, J.; Liu, K. Stability of infinite dimensional stochastic evolution equations with memory and Markovian jumps. Stoch. Process. Appl. 2008, 118, 864–895. [Google Scholar] [CrossRef]
  25. Da Prato, G.; Zabczyk, J. Stochastic Equations in Infinite Dimensions. In Encyclopedia of Mathematics and Its Applications; Cambridge University Press: Cambridge, UK, 1992. [Google Scholar]

Share and Cite

Anguraj, A.; Ramkumar, K. Approximate Controllability of Semilinear Stochastic Integrodifferential System with Nonlocal Conditions. Fractal Fract. 2018, 2, 29. https://doi.org/10.3390/fractalfract2040029
