Some Results of Stochastic Differential Equations

Abstract: This paper has two aims. The first is to establish Schauder and Sobolev estimates for the one-dimensional heat equation; the second is the stabilization of differential equations by stochastic feedback control based on discrete-time state observations. A nonhomogeneous Poisson process is used to show how the Schauder and Sobolev estimates for the one-dimensional heat equation yield their multidimensional analogs; the properties of the jump process play a key role. For the stabilization problem, the stability of an auxiliary system is established first; then, by comparison with the auxiliary system and a continuity argument, the stabilization of the original system is obtained. Both parts highlight the power of probabilistic methods.


Introduction
For the classical theory of partial differential equations, the Schauder and Sobolev estimates are important issues; see the book [1]. The regularity of partial differential equations has been intensively studied; there is a great deal of research on this topic, so we do not review it here. The regularity of stochastic partial differential equations has also been studied by many authors, e.g., stochastic evolution equations [2,3], stochastic parabolic equations [4,5], stochastic kinetic equations [6], and so on. For more information on the regularity of stochastic processes and random attractors, we refer the reader to [7-10]. In [11], Krylov and Priola used a Poisson process to obtain the Schauder and Sobolev estimates for the multidimensional heat equation from the one-dimensional case. More precisely, they first obtained the Schauder and Sobolev estimates for the following equation:

∂_t u(t, x) = a(t) ∂²_x u(t, x) + f(t, x), t ∈ (0, T), x ∈ R, u(0, x) = 0, x ∈ R, (1)

then they derived the Schauder and Sobolev estimates for the multidimensional case, among other results. Following [11], in this paper a probabilistic method is used to study the regularity of parabolic equations; see [12] for a similar method. For more information on the study of jump processes, see [13,14].
On the other hand, noise exists in the real world and can be advantageous. In the last century, some authors realized this point and conducted further research in this field; see [15]. Recently, Mao [16,17] obtained stabilization by discrete-time observation from the viewpoint of control theory. Many authors have considered similar questions. For example, You et al. [18] obtained the stabilization of hybrid systems by feedback control based on discrete-time state observations and considered many kinds of stability, including H∞ stability and asymptotic stability; Dong et al. [19] obtained almost sure exponential stabilization by stochastic feedback control based on discrete-time observations; Li and Mao [20] obtained the stabilization of highly nonlinear hybrid stochastic differential delay equations by delay feedback control; Fei et al. [21] considered the stabilization of highly nonlinear hybrid systems by feedback control based on discrete-time state observations; Liu and Wu [22] obtained intermittent stochastic stabilization based on discrete-time observations with time delay; Shen et al. [23,24] obtained stabilization of hybrid stochastic systems by aperiodically intermittent control and stabilization of stochastic differential equations driven by a G-Lévy process with discrete-time feedback control; and Mao et al. [25] obtained stabilization by intermittent control for hybrid stochastic differential delay equations. Guo et al. [26] generalized the results of [16,17] to the polynomial case, similar to [27]. Recently, Lu et al.
[28] obtained the stabilization of differently structured hybrid neutral stochastic systems by delay feedback control under highly nonlinear conditions. Global stabilization via output feedback of stochastic nonlinear time-delay systems with time-varying measurement error was established in [29]. Very recently, Zhao and Zhu [30,31] considered the stabilization of highly nonlinear neutral stochastic systems. The stabilization of stochastic systems remains a hot topic.
However, some cases have not been considered. For example, in the papers [17-19], the authors all assume that every term of the stochastic system keeps a uniform form; in other words, consider the stochastic system: Moreover, Mao assumed that there exist positive constants ρ_1, ρ_2 such that: It follows from these assumptions that every term in (2) must be uniform; that is, for the term X^i_t, the drift term f_i and the diffusion term are almost surely linear in X^i_t. If there exists a term X^i_t whose diffusion term vanishes, the results of [17] no longer hold; if the drift term does not satisfy the linear assumption, then few results concern this issue. Fei et al. [21] obtained the stabilization of highly nonlinear hybrid systems by feedback control based on discrete-time state observations, and we remark that they did not exploit the advantage of the noise. Moreover, from the point of view of saving cost, is it possible to place the observations on only part of the system?
In Ref. [17], Mao used two important properties of the stochastic system: one is the positivity of the solution, which ensures that one can apply the Itô formula for small-order moments (0 < p < 1); the other is the Markov property, which ensures that one only needs to find a point k such that Lemma 3.6 of [17] holds. In this paper, we use a similar trick to deal with a simpler question. In addition, Li et al. [32] solved the problem of the drift term not satisfying the linear growth condition. In the present paper, the first aim is to use a nonhomogeneous Poisson process to obtain some new results. The main difference between this paper and [11] is that a nonhomogeneous Poisson process is used here, whereas Krylov and Priola used a homogeneous Poisson process. The method used in [11] is probabilistic, and the results are interesting.
Motivated by [17,21], the second aim of this paper is to compare the observations of the system. A coupled system with different feedback controls is considered. Furthermore, the two components have different stability properties, which differs from earlier results. Throughout this paper, T is a fixed positive number, R^d denotes Euclidean space, and C^γ(R^d), γ ∈ (0, 1), is the space of all real-valued functions f on R^d with the following norm: As usual, we denote by C^{2+γ}(R^d) the space of real-valued twice continuously differentiable functions f on R^d with the following norm: where Df is the gradient of f and D²f is its Hessian.
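For reference, these norms follow the usual Hölder-space conventions (cf. [1]); up to minor differences in normalization, they read:

```latex
\|f\|_{C^{\gamma}(\mathbb{R}^d)}
  = \sup_{x \in \mathbb{R}^d} |f(x)|
  + \sup_{x \neq y} \frac{|f(x) - f(y)|}{|x - y|^{\gamma}},
\qquad
\|f\|_{C^{2+\gamma}(\mathbb{R}^d)}
  = \sup_{x} |f(x)| + \sup_{x} |Df(x)| + \sup_{x} |D^{2} f(x)|
  + \sup_{x \neq y} \frac{|D^{2} f(x) - D^{2} f(y)|}{|x - y|^{\gamma}}.
```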
The rest of this paper is arranged as follows. In Section 2, some preliminaries and the main results are presented. Sections 3 and 4 focus on the proofs of the main results.

Preliminaries
Consider the following Cauchy problem: where f belongs to B_c((0, T), C^∞_0(R)), the space of functions φ such that φ(t, •) and its derivatives are bounded on (0, T) and the supports of φ(t, •) belong to the same ball.
It follows from [33] that if f belongs to B_c((0, T), C^∞_0(R)), then (3) has a solution u(t, x) satisfying: (i) u is a continuous function on [0, T] × R; (ii) for any fixed t ∈ [0, T], u belongs to C^{2+α}(R) and satisfies the following estimate: Moreover, there exists only one solution u satisfying the following properties: sup where the L^p-space is defined as usual. Now, we recall some facts about the Poisson process. A nonhomogeneous Poisson process π(t, ω) (π_t for short) is a Poisson process whose rate parameter λ(t) is a function of time. The significant difference between the homogeneous and nonhomogeneous Poisson processes is that the latter is not a stationary process. Thus, the nonhomogeneous Poisson process cannot be written as the sum of a sequence of i.i.d. (independent and identically distributed) random variables.
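A nonhomogeneous Poisson process with bounded rate λ(t) can be simulated by thinning a homogeneous process with a dominating rate. The sketch below is illustrative (the rate function is a made-up example, not one from the paper); the counting process π_t then has mean m(t) = ∫₀ᵗ λ(s) ds:

```python
import numpy as np

# Thinning (Lewis-Shedler) sampler for a nonhomogeneous Poisson process.
# Candidates come from a homogeneous process with rate rate_max >= lambda(t);
# each candidate at time t is accepted with probability lambda(t)/rate_max.
def sample_nhpp(rate, rate_max, T, rng):
    """Return the jump times of a nonhomogeneous Poisson process on [0, T]."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_max)    # next candidate jump
        if t > T:
            return np.array(times)
        if rng.uniform() < rate(t) / rate_max:  # thinning step
            times.append(t)

rng = np.random.default_rng(1)
rate = lambda t: 1.0 + np.sin(t) ** 2           # example rate, bounded by 2
jumps = sample_nhpp(rate, 2.0, 10.0, rng)
# pi_t = number of entries of `jumps` that are <= t
```

Averaging the terminal count over many runs approximates m(T) = ∫₀ᵀ λ(s) ds, confirming that the process is not stationary (its increments depend on where in time they sit).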
As usual, π_t is a counting process with the following properties: For simplicity, in this paper only the two-dimensional heat equation is considered; the finite-dimensional case can be dealt with similarly. For x, y ∈ R, we write ∂_{ij} = ∂²/(∂x_i ∂x_j), where i, j = 1, 2 and x_1 = x, x_2 = y. We obtain the following result.

Main Results
In this section, we prove the main results.

Regularity of Parabolic Equations by Using a Probabilistic Method
Similar to [11], we consider the following equation: where a(t) > 0 is a bounded Borel measurable function and h ∈ R is a parameter. As usual in probability theory, we do not indicate the dependence on ω in the sequel. From the one-dimensional result, there exists a unique solution u(t, x, y), depending on y and ω as parameters. Furthermore, estimates (4)-(7) hold for each ω ∈ Ω and y ∈ R if we replace u(t, x) and f(t, x) with u(t, x, y) and f(t, x, y − hπ_t), respectively. The solution of (8) can be written as: where the displayed term is the jump of the process u(t, x, y + hπ_t), as a function of t, at the moment s at which π_t jumps. Here, π_{s−} denotes the left limit of π at s.
In order to prove the main result, the function g should be studied.
Notice that:

Eξ_n =

Since the nonhomogeneous Poisson process has independent increments, the expectations of the products on the right-hand side of (11) are equal to the products of the expectations, and since Eπ_t = m(t), we arrive at: Noting that, for any s > 0, π_s = π_{s−} almost surely, we conclude: The proof is complete.
Taking expectations on both sides of (9), we obtain the following result.
Then, there exists a unique continuous function v(t, x, y), t ∈ [0, T], x, y ∈ R, satisfying the equation: for t ∈ (0, T), x, y ∈ R, with zero initial condition and such that v(t, •, y) ∈ C^{2+α}(R) for any t ∈ (0, T), y ∈ R, and: Furthermore: The proof of this lemma is similar to [11] (Lemma 2.2), and the details are omitted here. Next, similar to the treatment of (3), Equation (12) is studied. More precisely, we consider v(t, x, y), depending on ω, as the unique solution of: with zero initial condition. It then follows from the above computations that the function w(t, x, y) = Ev(t, x, y − hπ_t) satisfies: Furthermore, w(t, x, y) satisfies the same estimates as in Lemma 2.
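Since π_t is marginally Poisson with mean m(t) = ∫₀ᵗ λ(s) ds, the quantity w(t, x, y) = Ev(t, x, y − hπ_t) can be approximated by sampling only the terminal count. A minimal Monte Carlo sketch (the function v and the value of m(t) below are placeholders, not quantities from the paper):

```python
import numpy as np

# Monte Carlo sketch of w(t, x, y) = E v(t, x, y - h*pi_t): pi_t has a
# Poisson distribution with mean m(t), so sampling the marginal count suffices.
def w_estimate(v, t, x, y, h, m_t, n_samples, rng):
    counts = rng.poisson(m_t, size=n_samples)   # pi_t ~ Poisson(m(t))
    return np.mean(v(t, x, y - h * counts))     # average v over the shifted y

rng = np.random.default_rng(2)
v = lambda t, x, y: np.exp(-y**2)               # placeholder for the 1D solution
est = w_estimate(v, 1.0, 0.0, 0.5, 0.1, m_t=3.0, n_samples=100_000, rng=rng)
```

With h = 0 the estimator reduces to v(t, x, y) exactly, which gives a quick sanity check.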
Proof of Theorem 1. Taking λ(t) = a(t)/h² in (13) and letting h → 0, the solution w = w_h of (13) converges to a function v(t, x, y) which satisfies Equation (14). Furthermore, v is continuous on [0, T] × R², is infinitely differentiable w.r.t. (x, y) for any t ∈ (0, T), and all the estimates in Lemma 2 hold. Therefore, the following estimate obviously holds: Next, the rotation invariance of the Laplacian and the estimates of Lemma 2 are used to derive the desired results. To this end, define S as an orthogonal transformation of R²: Se_i = l_i, i = 1, 2, where e_i is the standard basis of R², l_i is a unit vector in R², and l_2 is orthogonal to l_1. Set g(t, x, y) = f(t, S(x, y)) and u(t, x, y) = v(t, S(x, y)); then u satisfies ∂_t u(t, x, y) = a(t)∆u(t, x, y) + g(t, x, y), where the rotation invariance of the Laplacian is used. It follows from Lemma 2 that: Notice that: Using the fact that the solution v of (14) has continuous second-order derivatives w.r.t. (x, y), we have, for any unit vector l ∈ R²: That is to say: In particular, choosing z = 0, we get the estimate: Since the Jacobian of S(x, y) equals 1, we have, for any unit vector l ∈ R²: The proof is complete.
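The rotation invariance used in the proof, ∆(v ∘ S) = (∆v) ∘ S for orthogonal S, is easy to check numerically. The sketch below verifies it by finite differences for a made-up smooth test function (not one from the paper):

```python
import numpy as np

# Numerical check of the rotation invariance of the Laplacian: for an
# orthogonal S, the Laplacian of f(S(x, y)) at p equals (Laplacian f)(S p).
def laplacian(g, p, eps=1e-4):
    """Five-point finite-difference Laplacian of g: R^2 -> R at the point p."""
    x, y = p
    return (g(x + eps, y) + g(x - eps, y) + g(x, y + eps) + g(x, y - eps)
            - 4.0 * g(x, y)) / eps**2

theta = 0.7                                   # any rotation matrix is orthogonal
S = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

f = lambda x, y: np.exp(-x**2 - 2.0 * y**2)   # smooth test function
fS = lambda x, y: f(*(S @ np.array([x, y])))  # f composed with the rotation

p = (0.3, -0.2)
lhs = laplacian(fS, p)                        # Laplacian of f ∘ S at p
rhs = laplacian(f, tuple(S @ np.array(p)))    # Laplacian of f at S p
```

The two values agree up to the finite-difference discretization error.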
Remark 1. The results in this section are slightly different from those in [11]. If a(t) ≡ 1, that is, λ(t) ≡ λ, then Theorem 2 reduces exactly to the second part of [11]. The main difference is that we can take λ(t) = a(t)/h², so that the equation keeps the same form as in the one-dimensional case. Of course, in [11] (Section 3), Krylov and Priola used a suitable transformation to treat problem (3).
Here, we emphasize that another stochastic process can be used to deal with problem (3). One can use a renewal process to study the regularity of parabolic equations. The difference is that, in Lemma 1 below, E[π_{(k+1)2^{−n}} − π_{k2^{−n}}] will be different. However, for the parabolic equation, the Poisson process is the best choice.

Stabilization of Differential Equations Based on Discrete-Time Observation
In this section, a special system is studied, which can be regarded as a one-sidedly coupled system. Our motivation is the question of whether it is possible to place the discrete-time observation on only part of the system. More precisely, consider the deterministic differential system: where α, β_i > 0, i = 1, 2. It is easy to see that the solution of (15) is: where, obviously, (X(t), Y(t)) → (∞, ∞) as t → ∞. Now, we want to achieve (X(t), Y(t)) → (0, 0) as t → ∞ based on discrete-time observations. Firstly, if system (15) is treated as in [17], it is easy to see that the term X(t)(β_2 Y(t) − β_1) does not satisfy the assumptions of [17]; therefore, the results of [17] cannot be used directly. However, the first result of [17] can be used for the second equation. To this end, we recall the first result of [17]. Consider the scalar linear stochastic equation: on t ≥ 0 with initial value x(0) = x_0 ∈ R, where τ is a positive constant. In fact, Equation (16) can be regarded as a stochastic differential delay equation if one defines δ : [0, ∞) → [0, τ) by δ(t) = t − kτ for t ∈ [kτ, (k + 1)τ), k = 0, 1, 2, . . . For more information on geometric Brownian motion, see [34] for details.
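Following Mao [17], we assume Equation (16) has the form dx(t) = αx(t)dt + σx([t/τ]τ)dB(t), where x([t/τ]τ) is the most recent discrete-time observation. A minimal Euler-Maruyama sketch under that assumption, with illustrative parameters satisfying the stability condition α − σ²/2 < 0:

```python
import numpy as np

# Euler-Maruyama sketch of the discretely observed scalar SDE (Mao's form):
#     dx(t) = alpha*x(t) dt + sigma*x([t/tau]tau) dB(t).
# Parameters are illustrative; here alpha - sigma^2/2 = -1.5 < 0.
rng = np.random.default_rng(0)
alpha, sigma = 0.5, 2.0
tau, dt, T = 0.01, 0.001, 20.0
n_paths = 200
steps_per_obs = int(round(tau / dt))

x = np.full(n_paths, 1.0)          # initial value x_0 = 1 for every path
x_obs = x.copy()                   # last observed states x([t/tau]tau)
for k in range(int(round(T / dt))):
    if k % steps_per_obs == 0:     # refresh the observation every tau time units
        x_obs = x.copy()
    dB = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    x = x + alpha * x * dt + sigma * x_obs * dB

# Without the noise term, x' = alpha*x explodes; with it, the paths decay.
print(float(np.median(np.abs(x))))
```

For small τ the discretely observed diffusion behaves like its continuously observed counterpart, whose a.s. Lyapunov exponent α − σ²/2 is negative here, so the simulated paths collapse toward zero.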
Theorem 2. If α − σ²/2 < 0, then there is a positive number τ* such that, for any initial value x_0 ∈ R, the solution of (15) satisfies: Proof. Firstly, it follows from Proposition 1 that: That is to say, there exists a positive constant λ such that |Y(t)| ≤ Ce^{−λt} for some positive constant C. Substituting this into the first equation of (15), we get: which implies: The proof is complete. □ Remark 2. System (17) is often called a nonstrict system because the variable Y does not depend on X. Similarly, one can deal with a nonautonomous differential system by using the result of [26]: Obviously, the solution of (19) is (X(t), Y(t)) = (x_0 e^{(2y_0−1)t}, y_0(1 + t)), which goes to (∞, ∞) as t → ∞. It follows from the results of [26] that the solution of the following: will decay polynomially provided that τ is sufficiently small. Thus, the controlled solution goes to zero as time tends to infinity.
Next, consider the following stochastic system: where f_1 and f_2 are continuous functions. It is hard to obtain the exact form of the solution, but one may assume that the solutions of (20) do not decay. The aim here is to design a feedback control (u(X([t/τ_1]τ_1)), v(Y([t/τ_2]τ_2))) so that the controlled system: becomes asymptotically stable, where τ_1, τ_2 > 0. For simplicity, we add the linear feedback control: where B(t) and B̃(t) are independent Brownian motions and a, b ∈ R. Note that this feedback control is different from [17,32]; in earlier results, the authors often assume that the system has a uniform form. The methods used in [17,32] are not suitable for system (21). We need the following assumptions.
(A1) Assume that f is globally Lipschitz continuous with respect to any fixed x ∈ R: where α > 0. We also assume that f (x, 0) = 0 for any fixed x ∈ R.
For the second equation of (21), the following result holds.
Lemma 3. Let Assumption (A1) hold and α < b²/2. Then, there is a positive number τ*_2 such that, for any initial value y_0 ∈ R, the solution of the second equation of (21). In practice, we can choose a pair of constants p, ε ∈ (0, 1) such that τ*_2 is the unique root of the following equation: where Proof. The proof of this lemma is similar to that of [17] (Theorem 3.3). To this end, consider the auxiliary equation: Under Assumption (A1), it follows from [35] (Lemma 5.1) that P(Ỹ(t; y_0) ≠ 0) = 1.