
A Mollification Regularization Method for the Inverse Source Problem for a Time Fractional Diffusion Equation

1
Faculty of Natural Sciences, Thu Dau Mot University, Thu Dau Mot City 820000, Binh Duong Province, Vietnam
2
Faculty of Information Technology, Macau University of Science and Technology, Macau 999078, China
3
Faculty of Mathematics and Computational Science, Xiangtan University, Xiangtan 411105, China
4
Institute of Research and Development, Duy Tan University, Da Nang 550000, Vietnam
5
Applied Analysis Research Group, Faculty of Mathematics and Statistics, Ton Duc Thang University, Ho Chi Minh City 700000, Vietnam
*
Author to whom correspondence should be addressed.
Mathematics 2019, 7(11), 1048; https://doi.org/10.3390/math7111048
Received: 7 September 2019 / Revised: 15 October 2019 / Accepted: 29 October 2019 / Published: 4 November 2019

Abstract

We consider an inverse problem for a time-fractional diffusion equation, namely to determine an unknown source term from input data measured at a fixed time. In general, such inverse problems are ill-posed in the sense of Hadamard. Therefore, in this study, we propose a mollification regularization method to solve this problem. In the theoretical results, error estimates between the exact and regularized solutions are given under both a priori and a posteriori parameter choice rules. In addition, the proposed regularization method is verified by a numerical experiment.
Keywords: time-fractional diffusion equation; inverse problem; ill-posed problem; convergence estimates

1. Introduction

In this work, we study an inverse source problem for the time-fractional diffusion equation in an infinite domain as follows:
$$\begin{cases} \dfrac{\partial^\beta u(x,t)}{\partial t^\beta} = u_{xx}(x,t) + \phi(t)f(x), & (x,t)\in\mathbb{R}\times(0,T],\\ u(x,0)=0, & x\in\mathbb{R},\\ u(x,T)=g(x), & x\in\mathbb{R}, \end{cases}$$
where the fractional derivative ∂^β u/∂t^β is the Caputo derivative of order β (0 < β < 1), defined by
$$\frac{d^\beta f(t)}{dt^\beta} = \frac{1}{\Gamma(1-\beta)} \int_0^t \frac{df(s)}{ds}\, \frac{ds}{(t-s)^\beta},$$
and Γ ( · ) denotes the standard Gamma function.
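For readers who want to check the definition numerically, the Caputo derivative can be discretized by the classical L1 finite-difference scheme. The following Python sketch is an illustrative assumption (the paper itself works only with the continuous definition, and the function name `caputo_l1` is ours):

```python
import math

def caputo_l1(f_vals, beta, dt):
    """L1 finite-difference approximation of the Caputo derivative of
    order beta (0 < beta < 1) at the last grid point t_n = n*dt.

    f_vals holds the samples f(0), f(dt), ..., f(n*dt).  The scheme
    replaces df/ds by first differences and integrates the kernel
    (t - s)^(-beta) exactly on each subinterval.
    """
    n = len(f_vals) - 1
    coef = dt ** (-beta) / math.gamma(2.0 - beta)
    total = 0.0
    for j in range(n):
        # weight of the increment f(t_{n-j}) - f(t_{n-j-1})
        b_j = (j + 1) ** (1.0 - beta) - j ** (1.0 - beta)
        total += b_j * (f_vals[n - j] - f_vals[n - j - 1])
    return coef * total
```

For f(t) = t the scheme is exact and returns t^{1−β}/Γ(2−β), which matches the closed-form Caputo derivative of a linear function.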
The main motivation for studying problem (1) comes from inverse problems for the heat equation, in which an unknown source function is recovered under various smoothness assumptions on the input data, as proposed by Igor Malyshev in Reference [1]. Inverse problems of restoring a source function in the heat equation with the classical derivative have been studied by many researchers, for example, Geng [2] and Shidfar [3].
The mathematical model (1) arises in control theory, physics, generalized voltage dividers, elasticity and models of neurons in biology; see References [4,5,6].
According to our search, the fractional inverse source problem (1) has been the subject of very few works. For example, Sakamoto et al. [7] used the data u(x₀, t) (x₀ ∈ ℝ) to determine ϕ(t) once f(x) was given, and obtained a Lipschitz stability estimate for ϕ(t). For a one-dimensional version of Problem (1) with special coefficients, Wei et al. [8] used the Fourier truncation method to solve an inverse source problem with ϕ(t) = 1. In Reference [9], using the mollification regularization method, Yang and Fu treated the inverse spatially dependent heat source problem. In Reference [10], Wei and Wang considered a modified quasi-boundary value regularization method for identifying this problem. In Reference [11], using the quasi-reversibility regularization method, Yang and his group identified the unknown source for a time-fractional diffusion equation. In Reference [12], with the quasi-reversibility regularization method, Wei and her group investigated a space-dependent source for the time-fractional diffusion equation. To our knowledge, in the case of ϕ(t) dependent on time, results on the inverse source problem for the time-fractional diffusion equation are still limited; for ϕ(t) ≢ 0, Huy and his group investigated this problem by the Tikhonov regularization method, see Reference [13]. In these regularization methods, the a priori parameter choice rule depends on the noise level and the a priori bound, which in practice is very difficult to know exactly. In the above research, by using Morozov's discrepancy principle to choose the regularization parameter in Tikhonov regularization, the authors derived error estimates for both the a priori and the a posteriori parameter choice rules.
In this paper, we use the mollification method to solve the inverse source problem. Instead of the exact input data, we only receive approximate measurements. We assume that the measured data, a couple of functions ( g^ε(x) ∈ L²(ℝ), ϕ^ε(t) ∈ C[0,T] ), satisfies
$$\|g-g^\varepsilon\|_{L^2(\mathbb{R})}\le\varepsilon, \qquad \|\phi-\phi^\varepsilon\|_{C[0,T]}\le\varepsilon,$$
where the constant ε > 0 represents the noise level. It is known that the inverse source problem above is ill-posed in the sense of Hadamard: a solution of problem (1) does not always exist, and when it does exist it does not depend continuously on the given data, so that a small error in the input data may cause a large error in the solution. This causes difficulties for numerical computation, and a regularization is required. The Fourier transform of a function F is defined by
$$\hat F(\xi) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{-i\xi x} F(x)\,dx.$$
We impose an a priori bound on the exact source, that is,
$$\|F\|_{H^k(\mathbb{R})} \le M, \qquad k>0,$$
where M > 0 is a constant and ‖·‖_{H^k(ℝ)} denotes the norm of the Sobolev space
$$H^k(\mathbb{R}) = \Big\{ F\in L^2(\mathbb{R}) : \|F\|_{H^k(\mathbb{R})} < \infty \Big\}, \qquad \|F\|_{H^k(\mathbb{R})} = \bigg( \int_{\mathbb{R}} \big| (1+\xi^2)^{k/2}\,\hat F(\xi) \big|^2\, d\xi \bigg)^{\frac12}.$$
The outline of this paper is as follows. Section 2 gives some auxiliary results. In Section 3, using the a priori bound on the exact solution and the a priori parameter choice rule, we present the convergence rate. In Section 4, we show the convergence rate between the exact and regularized solutions under the a posteriori parameter choice rule. Next, a numerical example illustrating the theoretical results is presented in Section 5. Finally, a conclusion is given in Section 6.

2. Some Auxiliary Results

Before presenting the lemmas, we recall the Mittag-Leffler function, defined by
$$E_{\beta,\kappa}(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\beta k+\kappa)}, \qquad z\in\mathbb{C},$$
where β > 0 and κ ∈ ℝ are arbitrary constants. The properties of the Mittag-Leffler function are discussed in Reference [14]. Here we present the following lemmas on the Mittag-Leffler function, which can be found in Reference ([14], Chapter 1).
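A direct way to evaluate this series numerically is by truncation, as in the following Python sketch (an illustrative assumption; the experiments in Section 5 use Podlubny's MATLAB routine [18], which is far more robust for large |z|):

```python
import math

def mittag_leffler(z, beta, kappa=1.0, n_terms=80):
    """Truncated power series for E_{beta,kappa}(z).

    Reliable only for moderate |z|: the terms z^k / Gamma(beta*k + kappa)
    eventually decay because the Gamma function dominates any fixed power,
    but cancellation makes the raw series useless for large negative z.
    """
    total = 0.0
    for k in range(n_terms):
        total += z ** k / math.gamma(beta * k + kappa)
    return total
```

For β = κ = 1 the series reduces to the exponential, E_{1,1}(z) = e^z, which gives a quick sanity check.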
Lemma 1.
Let 0 < β₀ < β₁ < 1. Then there exist constants B̄₁, B̄₂, B̄₃, depending only on β₀, β₁, such that for all β ∈ [β₀, β₁],
$$\frac{\bar B_1}{\Gamma(1-\beta)}\,\frac{1}{1-x} \le E_{\beta,1}(x) \le \frac{\bar B_2}{\Gamma(1-\beta)}\,\frac{1}{1-x}, \qquad \big|E_{\beta,\alpha}(x)\big| \le \frac{\bar B_3}{1-x}, \qquad x\le 0,\ \alpha\in\mathbb{R}.$$
These estimates are uniform for all β ∈ [β₀, β₁].
Lemma 2.
(see Reference [7]) For 0 < β < 1 , we have:
$$E_{\beta,\beta}(-\zeta)\ge 0, \qquad \zeta\ge 0.$$
Proof. 
As for the proof, see Samko et al. [15]. □
Lemma 3.
(see Reference [7]) For ξ > 0 and a positive integer n ∈ ℕ, we have:
$$\frac{d^n}{dt^n} E_{\beta,1}(-\xi^2 t^\beta) = -\xi^2\, t^{\beta-n}\, E_{\beta,\beta-n+1}(-\xi^2 t^\beta), \quad t>0, \qquad \frac{d}{dt}\Big( t\, E_{\beta,2}(-\xi^2 t^\beta) \Big) = E_{\beta,1}(-\xi^2 t^\beta), \quad t\ge 0.$$
Lemma 4.
(see Reference [7]) By Lemma 2 and Lemma 3, we have
$$\int_0^\varrho \big| t^{\gamma-1} E_{\beta,\beta}(-\xi^2 t^\gamma) \big|\,dt = \int_0^\varrho t^{\gamma-1} E_{\beta,\beta}(-\xi^2 t^\gamma)\,dt = -\frac{1}{\xi^2} \int_0^\varrho \frac{d}{dt} E_{\beta,1}(-\xi^2 t^\gamma)\,dt = \frac{1}{\xi^2}\Big( 1 - E_{\beta,1}(-\xi^2 \varrho^\gamma) \Big), \qquad \varrho>0.$$
Lemma 5.
(see Reference [16]) For 0 < α < 1 and k > 0, the following inequality holds:
$$\sup_{\xi\in\mathbb{R}} \Big| (1+\xi^2)^{-k} \Big( 1-e^{-\frac{\alpha^2\xi^2}{4}} \Big) \Big| \le \max\big\{ \alpha^{2k},\ \alpha^2 \big\}.$$
Proof. 
The proof can be found in Reference [9]. □
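The bound in Lemma 5 can also be checked numerically on a grid. The following Python sketch (our own illustration, not part of the original proof; a finite grid is an assumption standing in for the supremum over all real ξ) evaluates the left-hand side approximately:

```python
import numpy as np

def lemma5_gap(alpha, k, xi_max=200.0, n=200001):
    """Numerically evaluate sup_xi (1+xi^2)^(-k) * (1 - exp(-alpha^2 xi^2/4))
    on a finite grid, for comparison with the bound max(alpha^(2k), alpha^2).

    The integrand is even in xi, so only xi >= 0 is sampled.
    """
    xi = np.linspace(0.0, xi_max, n)
    vals = (1.0 + xi ** 2) ** (-k) * (1.0 - np.exp(-alpha ** 2 * xi ** 2 / 4.0))
    return vals.max()
```

In every tested case the grid maximum stays well below max{α^{2k}, α²}, consistent with the lemma.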
Lemma 6.
Let β ∈ (0,1) and ξ ∈ ℝ. The following estimate holds:
$$\bigg( \int_0^T s^{\beta-1} E_{\beta,\beta}(-\xi^2 s^\beta)\,ds \bigg)^{-1} = \frac{\xi^2}{1-E_{\beta,1}(-\xi^2 T^\beta)} \le \begin{cases} \dfrac{\xi^2}{1-E_{\beta,1}(-T^\beta)}, & |\xi|\ge 1,\\[2mm] \dfrac{1}{1-E_{\beta,1}(-T^\beta)}, & |\xi|<1. \end{cases}$$
Proof. 
If |ξ| ≥ 1 then, since E_{β,1}(−y) with 0 < β < 1 is a decreasing function of y > 0, we get E_{β,1}(−ξ²T^β) ≤ E_{β,1}(−T^β). Whereupon
$$\bigg( \int_0^T s^{\beta-1} E_{\beta,\beta}(-\xi^2 s^\beta)\,ds \bigg)^{-1} = \frac{\xi^2}{1-E_{\beta,1}(-\xi^2 T^\beta)} \le \frac{\xi^2}{1-E_{\beta,1}(-T^\beta)}, \qquad |\xi|\ge 1.$$
If |ξ| < 1 then, since E_{β,β}(−y) with 0 < β < 1 is a decreasing function of y > 0, we get E_{β,β}(−ξ²s^β) ≥ E_{β,β}(−s^β), so
$$\bigg( \int_0^T s^{\beta-1} E_{\beta,\beta}(-\xi^2 s^\beta)\,ds \bigg)^{-1} \le \bigg( \int_0^T s^{\beta-1} E_{\beta,\beta}(-s^\beta)\,ds \bigg)^{-1} = \frac{1}{1-E_{\beta,1}(-T^\beta)}, \qquad |\xi|<1. \quad\square$$
Lemma 7.
For α ∈ (0,1) and ξ ∈ ℝ, from Lemma 6 one has:
$$\frac{e^{-\frac{\alpha^2\xi^2}{4}}}{\int_0^T s^{\beta-1} E_{\beta,\beta}(-\xi^2 s^\beta)\,ds} = \frac{\xi^2\, e^{-\frac{\alpha^2\xi^2}{4}}}{1-E_{\beta,1}(-\xi^2 T^\beta)} \le \begin{cases} \dfrac{\xi^2\, e^{-\frac{\alpha^2\xi^2}{4}}}{1-E_{\beta,1}(-T^\beta)} \le \dfrac{4}{\alpha^2}\,\dfrac{1}{1-E_{\beta,1}(-T^\beta)}, & |\xi|\ge 1,\\[2mm] \dfrac{e^{-\frac{\alpha^2\xi^2}{4}}}{1-E_{\beta,1}(-T^\beta)} \le \dfrac{4}{\alpha^2}\,\dfrac{1}{1-E_{\beta,1}(-T^\beta)}, & |\xi|<1. \end{cases}$$
This gives
$$\frac{e^{-\frac{\alpha^2\xi^2}{4}}}{\int_0^T s^{\beta-1} E_{\beta,\beta}(-\xi^2 s^\beta)\,ds} \le \frac{4}{\alpha^2}\,\frac{1}{1-E_{\beta,1}(-T^\beta)}.$$

3. The Priori Parameter Choice

Next, the error estimate of the mollification regularization method is derived under the a priori parameter choice rule. We consider the Gaussian function
$$\rho_\alpha(x) := \frac{1}{\alpha\sqrt{\pi}}\, e^{-\frac{x^2}{\alpha^2}},$$
as the mollifier kernel, where α is a positive constant.
We define the operator K_α by
$$K_\alpha f(x) := (\rho_\alpha * f)(x) := \int_{\mathbb{R}} \rho_\alpha(t)\, f(x-t)\,dt = \int_{\mathbb{R}} \rho_\alpha(x-t)\, f(t)\,dt,$$
for f ∈ L²(ℝ). The original ill-posed problem is replaced by a new problem of finding its approximation f^{ε,α}(x), which is defined by
$$f^{\varepsilon,\alpha}(x) := \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{ix\xi}\, \widehat{\rho_\alpha * f^\varepsilon}(\xi)\,d\xi.$$
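Since the Fourier transform of ρ_α is the Gaussian multiplier e^{−α²ξ²/4}, the mollification K_α can be applied entirely in frequency space. The following Python sketch is an illustrative assumption that uses the discrete FFT in place of the continuous transform:

```python
import numpy as np

def mollify(f_vals, x, alpha):
    """Apply the Gaussian mollifier K_alpha by multiplying the discrete
    Fourier transform of f by exp(-alpha^2 xi^2 / 4), then inverting.

    x must be a uniform grid; periodic boundary effects are ignored,
    which is only adequate when f decays near the ends of the interval.
    """
    n = x.size
    dx = x[1] - x[0]
    xi = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)  # discrete frequency grid
    f_hat = np.fft.fft(f_vals)
    return np.real(np.fft.ifft(np.exp(-alpha ** 2 * xi ** 2 / 4.0) * f_hat))
```

As α → 0 the multiplier tends to 1 and K_α f → f; larger α damps high frequencies more strongly, which is exactly the regularizing effect exploited below.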

The Inverse Source Problem

By applying the Fourier transform, problem (1) is reformulated in the frequency domain as follows:
$$\begin{cases} \dfrac{\partial^\beta \hat u(\xi,t)}{\partial t^\beta} + \xi^2 \hat u(\xi,t) = \phi(t)\hat f(\xi), & (\xi,t)\in\mathbb{R}\times(0,T],\\ \hat u(\xi,0)=0, & \xi\in\mathbb{R},\\ \hat u(\xi,T)=\hat g(\xi), & \xi\in\mathbb{R}. \end{cases}$$
From the equation and the initial value in (20), we obtain
$$\hat u(\xi,t) = \int_0^t (t-s)^{\beta-1} E_{\beta,\beta}\big(-\xi^2(t-s)^\beta\big)\, \phi(s)\, \hat f(\xi)\,ds.$$
Or equivalently,
$$u(x,t) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{i\xi x} \bigg( \int_0^t (t-s)^{\beta-1} E_{\beta,\beta}\big(-\xi^2(t-s)^\beta\big)\, \phi(s)\,ds \bigg)\, \hat f(\xi)\,d\xi.$$
Set
$$D_\beta(\xi, t-s) = (t-s)^{\beta-1}\, E_{\beta,\beta}\big(-\xi^2(t-s)^\beta\big).$$
Using û(ξ,T) = ĝ(ξ) in (20), one has
$$\hat f(\xi) = \frac{\hat g(\xi)}{\int_0^T D_\beta(\xi,T-s)\,\phi(s)\,ds}.$$
Using the inverse Fourier transform, then we obtain the formula of the source function f
$$f(x) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{i\xi x}\, \frac{\hat g(\xi)}{\int_0^T D_\beta(\xi,T-s)\,\phi(s)\,ds}\,d\xi.$$
On the other hand, if ϕ(t) satisfies 0 < inf_{t∈[0,T]} |ϕ(t)| ≤ |ϕ(t)| ≤ sup_{t∈[0,T]} |ϕ(t)| = ‖ϕ‖_{C[0,T]}, then the factor ( ∫₀ᵀ D_β(ξ,T−s)ϕ(s)ds )^{−1} can be bounded by (inf_{t∈[0,T]} |ϕ(t)|)^{−1} ξ²/(1−E_{β,1}(−ξ²T^β)). The unbounded function ξ²/(1−E_{β,1}(−ξ²T^β)) acts as an amplification factor of the measurement error as ξ → ∞. From now on, we put inf_{t∈[0,T]} |ϕ(t)| = A₀, inf_{t∈[0,T]} |ϕ^ε(t)| = A₁ and sup_{t∈[0,T]} |ϕ(t)| = ‖ϕ‖_{C[0,T]} = Φ. From (19), with α a regularization parameter depending on ε, we obtain the regularized solution
$$\hat f^{\varepsilon,\alpha}(\xi) = \frac{\hat g^\varepsilon(\xi)}{\int_0^T D_\beta(\xi,T-s)\,\phi^\varepsilon(s)\,ds}\, e^{-\frac{\alpha^2\xi^2}{4}}.$$
Using the inverse Fourier transform, we get
$$f^{\varepsilon,\alpha}(x) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} \frac{\hat g^\varepsilon(\xi)}{\int_0^T D_\beta(\xi,T-s)\,\phi^\varepsilon(s)\,ds}\, e^{-\frac{\alpha^2\xi^2}{4}}\, e^{i\xi x}\,d\xi.$$
The main conclusion of this section is given below.
Theorem 1.
Let f(x), given by (24), be the exact solution of (1) with exact data g ∈ L²(ℝ), and let f^{ε,α}(x) be the approximation of f(x) constructed from the measured data g^ε ∈ L²(ℝ). Then we obtain the following.
a. If 0 < k < 1, choosing α(ε) = (ε/M)^{1/(2(k+1))}, we have the convergence estimate
$$\|f(\cdot)-f^{\varepsilon,\alpha}(\cdot)\|_{L^2(\mathbb{R})} \le \varepsilon^{\frac{k}{k+1}}\, M^{\frac{1}{k+1}} \bigg( \max\Big\{ 1,\ \Big(\frac{\varepsilon}{M}\Big)^{\frac{1-k}{k+1}} \Big\} + R(A_0,A_1,\hat g) \bigg).$$
b. If k > 1, choosing α(ε) = (ε/M)^{1/4}, we have the convergence estimate
$$\|f(\cdot)-f^{\varepsilon,\alpha}(\cdot)\|_{L^2(\mathbb{R})} \le \varepsilon^{\frac12}\, M^{\frac12} \big( 1 + R(A_0,A_1,\hat g) \big),$$
in which
$$R(A_0,A_1,\hat g) = \frac{4}{1-E_{\beta,1}(-T^\beta)} \bigg( \frac{1}{A_1} + \frac{\|\hat g\|_{L^2(\mathbb{R})}}{A_1 A_0} \bigg).$$
Proof. 
From (24) and (26), by the Parseval formula and the triangle inequality, we obtain
$$\|f(\cdot)-f^{\varepsilon,\alpha}(\cdot)\|_{L^2(\mathbb{R})} = \|\hat f(\cdot)-\hat f^{\varepsilon,\alpha}(\cdot)\|_{L^2(\mathbb{R})} = \bigg\| \frac{\hat g(\xi)}{\int_0^T D_\beta(\xi,T-s)\,\phi(s)\,ds} - \frac{\hat g^\varepsilon(\xi)}{\int_0^T D_\beta(\xi,T-s)\,\phi^\varepsilon(s)\,ds}\, e^{-\frac{\alpha^2\xi^2}{4}} \bigg\|_{L^2(\mathbb{R})} \le \|I_1\|_{L^2(\mathbb{R})} + \|I_2\|_{L^2(\mathbb{R})} + \|I_3\|_{L^2(\mathbb{R})},$$
in which
$$\begin{aligned} I_1 &= \frac{\hat g(\xi)}{\int_0^T D_\beta(\xi,T-s)\,\phi(s)\,ds} - \frac{\hat g(\xi)}{\int_0^T D_\beta(\xi,T-s)\,\phi(s)\,ds}\, e^{-\frac{\alpha^2\xi^2}{4}},\\ I_2 &= \frac{\hat g(\xi)}{\int_0^T D_\beta(\xi,T-s)\,\phi^\varepsilon(s)\,ds}\, e^{-\frac{\alpha^2\xi^2}{4}} - \frac{\hat g^\varepsilon(\xi)}{\int_0^T D_\beta(\xi,T-s)\,\phi^\varepsilon(s)\,ds}\, e^{-\frac{\alpha^2\xi^2}{4}},\\ I_3 &= \frac{\hat g(\xi)}{\int_0^T D_\beta(\xi,T-s)\,\phi(s)\,ds}\, e^{-\frac{\alpha^2\xi^2}{4}} - \frac{\hat g(\xi)}{\int_0^T D_\beta(\xi,T-s)\,\phi^\varepsilon(s)\,ds}\, e^{-\frac{\alpha^2\xi^2}{4}}. \end{aligned}$$
Next, we estimate the error by dividing it into three steps as follows.
Step 1: Estimate for ‖I₁‖²_{L²(ℝ)}. We have
$$\begin{aligned} \|I_1\|^2_{L^2(\mathbb{R})} &= \bigg\| \frac{\hat g(\xi)}{\int_0^T D_\beta(\xi,T-s)\,\phi(s)\,ds}\, \Big( 1-e^{-\frac{\alpha^2\xi^2}{4}} \Big) \bigg\|^2_{L^2(\mathbb{R})} = \Big\| (1+\xi^2)^{-k}\Big( 1-e^{-\frac{\alpha^2\xi^2}{4}} \Big)(1+\xi^2)^{k}\,\hat f(\xi) \Big\|^2_{L^2(\mathbb{R})}\\ &\le \sup_{\xi\in\mathbb{R}}\Big| (1+\xi^2)^{-k}\Big( 1-e^{-\frac{\alpha^2\xi^2}{4}} \Big) \Big|^2\, \|f\|^2_{H^k(\mathbb{R})} \le M^2 \max\{\alpha^{4k},\ \alpha^{4}\}. \end{aligned}$$
Hence,
$$\|I_1\|_{L^2(\mathbb{R})} \le M \max\{\alpha^{2k},\ \alpha^{2}\}.$$
Step 2: Estimate for ‖I₂‖²_{L²(ℝ)}. We get
$$\begin{aligned} \|I_2\|^2_{L^2(\mathbb{R})} &= \bigg\| \frac{\hat g(\xi)-\hat g^\varepsilon(\xi)}{\int_0^T D_\beta(\xi,T-s)\,\phi^\varepsilon(s)\,ds}\, e^{-\frac{\alpha^2\xi^2}{4}} \bigg\|^2_{L^2(\mathbb{R})} \le A_1^{-2}\, \|\hat g(\xi)-\hat g^\varepsilon(\xi)\|^2_{L^2(\mathbb{R})}\, \sup_{\xi\in\mathbb{R}}\bigg| \frac{e^{-\frac{\alpha^2\xi^2}{4}}}{\int_0^T (T-s)^{\beta-1} E_{\beta,\beta}(-\xi^2(T-s)^\beta)\,ds} \bigg|^2\\ &= A_1^{-2}\, \|\hat g(\xi)-\hat g^\varepsilon(\xi)\|^2_{L^2(\mathbb{R})}\, \sup_{\xi\in\mathbb{R}}\bigg| \frac{\xi^2}{1-E_{\beta,1}(-\xi^2 T^\beta)}\, e^{-\frac{\alpha^2\xi^2}{4}} \bigg|^2 \le \frac{16\,\varepsilon^2}{\alpha^4}\, \Big( A_1\big(1-E_{\beta,1}(-T^\beta)\big) \Big)^{-2}. \end{aligned}$$
Hence, we conclude that
$$\|I_2\|_{L^2(\mathbb{R})} \le \frac{4\,\varepsilon}{\alpha^2}\, \Big( A_1\big(1-E_{\beta,1}(-T^\beta)\big) \Big)^{-1}.$$
Step 3: Estimate for ‖I₃‖²_{L²(ℝ)}. We have
$$\begin{aligned} \|I_3\|^2_{L^2(\mathbb{R})} &= \bigg\| \frac{\hat g(\xi)}{\int_0^T D_\beta(\xi,T-s)\,\phi(s)\,ds}\, e^{-\frac{\alpha^2\xi^2}{4}} - \frac{\hat g(\xi)}{\int_0^T D_\beta(\xi,T-s)\,\phi^\varepsilon(s)\,ds}\, e^{-\frac{\alpha^2\xi^2}{4}} \bigg\|^2_{L^2(\mathbb{R})}\\ &= \bigg\| \hat g(\xi)\, e^{-\frac{\alpha^2\xi^2}{4}}\, \frac{\int_0^T D_\beta(\xi,T-s)\big(\phi^\varepsilon(s)-\phi(s)\big)\,ds}{\int_0^T D_\beta(\xi,T-s)\,\phi^\varepsilon(s)\,ds \int_0^T D_\beta(\xi,T-s)\,\phi(s)\,ds} \bigg\|^2_{L^2(\mathbb{R})}. \end{aligned}$$
From (35), we get
$$\begin{aligned} \|I_3\|^2_{L^2(\mathbb{R})} &\le A_1^{-2}\, \|\phi^\varepsilon-\phi\|^2_{C[0,T]}\, \bigg\| \frac{\hat g(\xi)}{\int_0^T D_\beta(\xi,T-s)\,\phi(s)\,ds}\, e^{-\frac{\alpha^2\xi^2}{4}} \bigg\|^2_{L^2(\mathbb{R})}\\ &\le (A_0 A_1)^{-2}\, \|\phi^\varepsilon-\phi\|^2_{C[0,T]}\, \sup_{\xi\in\mathbb{R}}\bigg| \frac{\xi^2}{1-E_{\beta,1}(-\xi^2 T^\beta)}\, e^{-\frac{\alpha^2\xi^2}{4}} \bigg|^2 \int_{\mathbb{R}} |\hat g(\xi)|^2\, d\xi\\ &\le \frac{16}{\alpha^4}\, \Big( A_0 A_1\big(1-E_{\beta,1}(-T^\beta)\big) \Big)^{-2}\, \|\phi^\varepsilon-\phi\|^2_{C[0,T]}\, \int_{\mathbb{R}} |\hat g(\xi)|^2\, d\xi. \end{aligned}$$
Hence,
$$\|I_3\|_{L^2(\mathbb{R})} \le \frac{4\,\varepsilon}{\alpha^2}\, \Big( A_0 A_1\big(1-E_{\beta,1}(-T^\beta)\big) \Big)^{-1}\, \|\hat g\|_{L^2(\mathbb{R})}.$$
Combining (32), (34) and (36), we obtain:
(a) If 0 < k < 1, choosing α(ε) = (ε/M)^{1/(2(k+1))}, the error ‖f(·)−f^{ε,α}(·)‖_{L²(ℝ)} is of order ε^{k/(k+1)}.
(b) If k > 1, choosing α(ε) = (ε/M)^{1/4}, the error ‖f(·)−f^{ε,α}(·)‖_{L²(ℝ)} is of order ε^{1/2}. □

4. The Discrepancy Principle

Now we present the a posteriori regularization parameter choice rule. The most general of the a posteriori rules is the Morozov discrepancy principle [17]. We choose the regularization parameter α as the solution of the equation
$$l(\alpha) = \Big\| e^{-\frac{\alpha^2\xi^2}{4}}\, \hat g^\varepsilon(\xi) - \hat g^\varepsilon(\xi) \Big\|_{L^2(\mathbb{R})} = \varepsilon + \eta\, \Big( \log\log\frac{T}{\varepsilon} \Big)^{-1},$$
where η > 1 is a constant.
Remark 1.
To ensure the existence and uniqueness, we can choose η such that
$$0 < \varepsilon + \eta\, \Big( \log\log\frac{T}{\varepsilon} \Big)^{-1} < \|\hat g^\varepsilon\|_{L^2(\mathbb{R})}.$$
To establish the existence and uniqueness of the solution of Equation (40), we consider the following lemmas.
Lemma 8.
If ε > 0, then the following hold:
(a) l(α) is a continuous function;
(b) lim_{α→0⁺} l(α) = 0;
(c) lim_{α→+∞} l(α) = ‖ĝ^ε‖_{L²(ℝ)};
(d) l(α) is a strictly increasing function.
The proof is straightforward and we omit it here.
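Properties (a)–(d) make Equation (40) numerically solvable by bisection. The following Python sketch is an illustrative assumption (a discrete l² norm over a frequency grid stands in for the L²(ℝ) norm, and the function name is ours):

```python
import numpy as np

def choose_alpha(g_eps_hat, xi, target, lo=1e-8, hi=10.0, tol=1e-10):
    """Solve l(alpha) = target by bisection, where
    l(alpha) = || (1 - exp(-alpha^2 xi^2 / 4)) * g_eps_hat ||_2.

    Bisection is justified because l is continuous and strictly
    increasing in alpha (Lemma 8), with l(0+) = 0.
    """
    def l(alpha):
        w = 1.0 - np.exp(-alpha ** 2 * xi ** 2 / 4.0)
        return np.linalg.norm(w * g_eps_hat)

    assert l(lo) < target < l(hi), "target must lie in the range of l"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if l(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The right-hand side of Equation (40), ε + η(log log(T/ε))⁻¹, plays the role of `target`; Remark 1 guarantees that it lies strictly between the two limits of l.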
Lemma 9.
The following inequality holds:
$$\Big\| e^{-\frac{\alpha^2\xi^2}{4}}\, \hat g^\varepsilon(\xi) - \hat g(\xi) \Big\|_{L^2(\mathbb{R})} \le 2\varepsilon + \eta\, \Big( \log\log\frac{T}{\varepsilon} \Big)^{-1}.$$
Proof. 
Applying the triangle inequality and (40), we have
$$\Big\| e^{-\frac{\alpha^2\xi^2}{4}}\, \hat g^\varepsilon(\xi) - \hat g(\xi) \Big\|_{L^2(\mathbb{R})} \le \Big\| e^{-\frac{\alpha^2\xi^2}{4}}\, \hat g^\varepsilon(\xi) - \hat g^\varepsilon(\xi) \Big\|_{L^2(\mathbb{R})} + \big\| \hat g^\varepsilon(\xi) - \hat g(\xi) \big\|_{L^2(\mathbb{R})} \le 2\varepsilon + \eta\, \Big( \log\log\frac{T}{\varepsilon} \Big)^{-1}. \quad\square$$
Lemma 10.
For any ξ ∈ ℝ and 0 ≤ s ≤ T, using the inequality s^{β−1} E_{β,β}(−ξ²s^β) ≤ B̄₃ s^{β−1}/(1+ξ²s^β) ≤ B̄₃ s^{β−1} from Lemma 1, we have the following estimate:
$$\int_0^T D_\beta(\xi, T-s)\,ds = \int_0^T (T-s)^{\beta-1}\, E_{\beta,\beta}\big(-\xi^2(T-s)^\beta\big)\,ds \le \frac{\bar B_3\, T^\beta}{\beta}.$$
Lemma 11.
If α is the solution of Equation (40), then the following inequality holds:
$$\frac{4}{\alpha^2} \le \frac{H_\beta(\bar B_3, T, \Phi, M)}{\eta}\, \log\log\frac{T}{\varepsilon},$$
whereby M ≥ ‖f‖_{H^k(ℝ)}.
Proof. 
Due to (40), we receive
$$\begin{aligned} \varepsilon + \eta\, \Big( \log\log\frac{T}{\varepsilon} \Big)^{-1} &= \Big\| \Big( 1 - e^{-\frac{\alpha^2\xi^2}{4}} \Big)\, \hat g^\varepsilon(\xi) \Big\|_{L^2(\mathbb{R})} \le \Big\| \Big( 1 - e^{-\frac{\alpha^2\xi^2}{4}} \Big) \big( \hat g^\varepsilon(\xi) - \hat g(\xi) \big) \Big\|_{L^2(\mathbb{R})} + \Big\| \Big( 1 - e^{-\frac{\alpha^2\xi^2}{4}} \Big)\, \hat g(\xi) \Big\|_{L^2(\mathbb{R})}\\ &\le \varepsilon + \Big\| \Big( 1 - e^{-\frac{\alpha^2\xi^2}{4}} \Big)\, (1+\xi^2)^{-k} \int_0^T D_\beta(\xi,T-s)\,\phi(s)\,ds\, (1+\xi^2)^{k}\, \hat f(\xi) \Big\|_{L^2(\mathbb{R})}\\ &\le \varepsilon + \sup_{\xi\in\mathbb{R}} \Big| \Big( 1 - e^{-\frac{\alpha^2\xi^2}{4}} \Big)\, (1+\xi^2)^{-k} \int_0^T D_\beta(\xi,T-s)\,\phi(s)\,ds \Big|\, M \le \varepsilon + \frac{\alpha^2}{4}\, H_\beta(\bar B_3, T, \Phi, M), \end{aligned}$$
whereby
$$H_\beta(\bar B_3, T, \Phi, M) = \beta^{-1}\, \Phi\, \bar B_3\, T^\beta\, M.$$
So
$$\frac{4}{\alpha^2} \le \frac{H_\beta(\bar B_3, T, \Phi, M)}{\eta}\, \log\log\frac{T}{\varepsilon}. \quad\square$$
Lemma 12.
For 0 < α < 1, using Lemma 7, the following inequality holds:
$$\sup_{\xi\in\mathbb{R}} \bigg| \bigg( \frac{\xi^2}{1-E_{\beta,1}(-\xi^2 T^\beta)} \bigg)^{k+1} e^{-\frac{\alpha^2\xi^2}{4}} \bigg| \le \bigg( \frac{k+1}{1-E_{\beta,1}(-T^\beta)} \bigg)^{k+1} \bigg( \frac{4}{\alpha^2} \bigg)^{k+1}.$$
The proof is similar to Lemma 7 and we omit it here.
Next, the main result of this section is stated in the following theorem.
Theorem 2.
Assume the condition ‖g^ε − g‖ ≤ ε, where ‖·‖ denotes the L²(ℝ)-norm and ε > 0 is the noise level, and assume that condition (5) holds. Then the following error estimate holds:
$$\begin{aligned} \|f(\cdot)-f^{\varepsilon,\alpha}(\cdot)\|_{L^2(\mathbb{R})} = \|\hat f(\cdot)-\hat f^{\varepsilon,\alpha}(\cdot)\|_{L^2(\mathbb{R})} \le{}& \bigg( \varepsilon \Big(\log\log\frac{T}{\varepsilon}\Big)^{k+1} \Big(\frac{L_\beta(k,T)\, H_\beta(\bar B_3,T,\Phi)}{A_0\,\eta}\Big)^{k+1} M^{k} + \Big(\frac{\Phi}{A_0}\Big)^{k+1}\Big(\frac{1}{1-E_{\beta,1}(-T^\beta)}\Big)^{k} \bigg)^{\frac{1}{k+1}} M^{\frac{1}{k+1}}\\ &\times \bigg( 2\varepsilon + \eta\Big(\log\log\frac{T}{\varepsilon}\Big)^{-1} \bigg)^{\frac{k}{k+1}}\\ &+ \varepsilon\,\log\log\frac{T}{\varepsilon}\, \frac{H_\beta(\bar B_3,T,\Phi,M)}{\eta\, A_0\,\big(1-E_{\beta,1}(-T^\beta)\big)} + \varepsilon\,\log\log\frac{T}{\varepsilon}\, \frac{H_\beta(\bar B_3,T,\Phi,M)\,\|\hat g\|_{L^2(\mathbb{R})}}{\eta\,\big(1-E_{\beta,1}(-T^\beta)\big)\, A_0 A_1}, \end{aligned}$$
where H_β(B̄₃,T,Φ) := H_β(B̄₃,T,Φ,M)/M = β^{−1}ΦB̄₃T^β and L_β(k,T) = (k+1)/(1−E_{β,1}(−T^β)).
Proof. 
By the Parseval formula and the triangle inequality, we get
$$\begin{aligned} \|f(\cdot)-f^{\varepsilon,\alpha}(\cdot)\|_{L^2(\mathbb{R})} = \|\hat f(\cdot)-\hat f^{\varepsilon,\alpha}(\cdot)\|_{L^2(\mathbb{R})} \le{}& \bigg\| \frac{e^{-\frac{\alpha^2\xi^2}{4}}\, \hat g^\varepsilon(\xi) - \hat g(\xi)}{\int_0^T D_\beta(\xi,T-s)\,\phi(s)\,ds} \bigg\|_{L^2(\mathbb{R})} + \bigg\| \frac{\hat g^\varepsilon(\xi) - \hat g(\xi)}{\int_0^T D_\beta(\xi,T-s)\,\phi(s)\,ds}\, e^{-\frac{\alpha^2\xi^2}{4}} \bigg\|_{L^2(\mathbb{R})}\\ &+ \bigg\| \frac{\hat g(\xi)}{\int_0^T D_\beta(\xi,T-s)\,\phi(s)\,ds}\, e^{-\frac{\alpha^2\xi^2}{4}} - \frac{\hat g(\xi)}{\int_0^T D_\beta(\xi,T-s)\,\phi^\varepsilon(s)\,ds}\, e^{-\frac{\alpha^2\xi^2}{4}} \bigg\|_{L^2(\mathbb{R})}\\ =:{}& \|J_1\|_{L^2(\mathbb{R})} + \|J_2\|_{L^2(\mathbb{R})} + \|J_3\|_{L^2(\mathbb{R})}. \end{aligned}$$
We can divide the proof into three steps as follows:
Step 1: Estimate for ‖J₁‖²_{L²(ℝ)}. Using the Hölder inequality, we obtain
$$\|J_1\|^2_{L^2(\mathbb{R})} = \bigg\| \frac{e^{-\frac{\alpha^2\xi^2}{4}}\, \hat g^\varepsilon(\xi) - \hat g(\xi)}{\int_0^T D_\beta(\xi,T-s)\,\phi(s)\,ds} \bigg\|^2_{L^2(\mathbb{R})} \le \bigg\| \frac{\xi^2}{A_0\big(1-E_{\beta,1}(-\xi^2 T^\beta)\big)} \Big( e^{-\frac{\alpha^2\xi^2}{4}}\, \hat g^\varepsilon(\xi) - \hat g(\xi) \Big) \bigg\|^2_{L^2(\mathbb{R})} \le \big( C_1^2 \big)^{\frac{1}{k+1}} \big( C_2^2 \big)^{\frac{k}{k+1}},$$
whereby
$$C_1^2 = \int_{\mathbb{R}} \bigg( \frac{\xi^2}{A_0\big(1-E_{\beta,1}(-\xi^2 T^\beta)\big)} \bigg)^{2(k+1)} \Big| e^{-\frac{\alpha^2\xi^2}{4}}\, \hat g^\varepsilon(\xi) - \hat g(\xi) \Big|^2\, d\xi, \qquad C_2^2 = \int_{\mathbb{R}} \Big| e^{-\frac{\alpha^2\xi^2}{4}}\, \hat g^\varepsilon(\xi) - \hat g(\xi) \Big|^2\, d\xi.$$
From (52), we can estimate (C₂²)^{k/(k+1)} as follows:
$$\big( C_2^2 \big)^{\frac{k}{k+1}} = \bigg( \int_{\mathbb{R}} \Big| e^{-\frac{\alpha^2\xi^2}{4}}\, \hat g^\varepsilon(\xi) - \hat g(\xi) \Big|^2\, d\xi \bigg)^{\frac{k}{k+1}} = \Big\| e^{-\frac{\alpha^2\xi^2}{4}}\, \hat g^\varepsilon(\xi) - \hat g(\xi) \Big\|^{\frac{2k}{k+1}}_{L^2(\mathbb{R})} \le \bigg( 2\varepsilon + \eta\, \Big( \log\log\frac{T}{\varepsilon} \Big)^{-1} \bigg)^{\frac{2k}{k+1}}.$$
On the other hand, we deduce
$$\begin{aligned} \big( C_1^2 \big)^{\frac{1}{k+1}} &= \bigg( \int_{-\infty}^{+\infty} \bigg( \frac{\xi^2}{A_0\big(1-E_{\beta,1}(-\xi^2 T^\beta)\big)} \bigg)^{2(k+1)} \Big| e^{-\frac{\alpha^2\xi^2}{4}}\, \hat g^\varepsilon(\xi) - \hat g(\xi) \Big|^2\, d\xi \bigg)^{\frac{1}{k+1}} = \bigg\| \bigg( \frac{\xi^2}{A_0\big(1-E_{\beta,1}(-\xi^2 T^\beta)\big)} \bigg)^{k+1} \Big( e^{-\frac{\alpha^2\xi^2}{4}}\, \hat g^\varepsilon(\xi) - \hat g(\xi) \Big) \bigg\|^{\frac{2}{k+1}}_{L^2(\mathbb{R})}\\ &\le \bigg( \bigg\| \bigg( \frac{\xi^2}{A_0\big(1-E_{\beta,1}(-\xi^2 T^\beta)\big)} \bigg)^{k+1} e^{-\frac{\alpha^2\xi^2}{4}} \big( \hat g^\varepsilon(\xi) - \hat g(\xi) \big) \bigg\|_{L^2(\mathbb{R})} + \bigg\| \bigg( \frac{\xi^2}{A_0\big(1-E_{\beta,1}(-\xi^2 T^\beta)\big)} \bigg)^{k+1} \Big( e^{-\frac{\alpha^2\xi^2}{4}}\, \hat g(\xi) - \hat g(\xi) \Big) \bigg\|_{L^2(\mathbb{R})} \bigg)^{\frac{2}{k+1}}. \end{aligned}$$
To estimate C₁, we give two lemmas as follows.
Lemma 13.
Assume that the condition ‖ĝ^ε(ξ) − ĝ(ξ)‖_{L²(ℝ)} ≤ ε holds. Then we have the following estimate:
$$\bigg\| \bigg( \frac{\xi^2}{A_0\big(1-E_{\beta,1}(-\xi^2 T^\beta)\big)} \bigg)^{k+1} e^{-\frac{\alpha^2\xi^2}{4}} \big( \hat g^\varepsilon(\xi) - \hat g(\xi) \big) \bigg\|_{L^2(\mathbb{R})} \le \varepsilon\, \Big( \log\log\frac{T}{\varepsilon} \Big)^{k+1} \bigg( \frac{L_\beta(k,T)\, H_\beta(\bar B_3,T,\Phi,M)}{A_0\,\eta} \bigg)^{k+1}.$$
Proof. 
Using Lemma 12 and setting L_β(k,T) = (k+1)/(1−E_{β,1}(−T^β)), we get
$$\bigg\| \bigg( \frac{\xi^2}{A_0\big(1-E_{\beta,1}(-\xi^2 T^\beta)\big)} \bigg)^{k+1} e^{-\frac{\alpha^2\xi^2}{4}} \big( \hat g^\varepsilon(\xi) - \hat g(\xi) \big) \bigg\|_{L^2(\mathbb{R})} \le \varepsilon\, \Big( \frac{4}{\alpha^2} \Big)^{k+1} \bigg( \frac{k+1}{A_0\big(1-E_{\beta,1}(-T^\beta)\big)} \bigg)^{k+1} \le \varepsilon\, \Big( \log\log\frac{T}{\varepsilon} \Big)^{k+1} \bigg( \frac{L_\beta(k,T)\, H_\beta(\bar B_3,T,\Phi,M)}{A_0\,\eta} \bigg)^{k+1},$$
in which H β ( B ¯ 3 , T , Φ , M ) is defined in Lemma 11. □
Lemma 14.
Let ξ ∈ ℝ and let M be a positive constant such that M ≥ ‖f‖_{H^k(ℝ)}. Then
$$\bigg\| \bigg( \frac{\xi^2}{A_0\big(1-E_{\beta,1}(-\xi^2 T^\beta)\big)} \bigg)^{k+1} \Big( e^{-\frac{\alpha^2\xi^2}{4}}\, \hat g(\xi) - \hat g(\xi) \Big) \bigg\|_{L^2(\mathbb{R})} \le \Big( \frac{\Phi}{A_0} \Big)^{k+1} \bigg( \frac{1}{1-E_{\beta,1}(-T^\beta)} \bigg)^{k} M.$$
Proof. 
Applying Lemma 4, we receive
$$\begin{aligned} &\bigg\| \bigg( \frac{\xi^2}{A_0\big(1-E_{\beta,1}(-\xi^2 T^\beta)\big)} \bigg)^{k+1} \Big( e^{-\frac{\alpha^2\xi^2}{4}}\, \hat g(\xi) - \hat g(\xi) \Big) \bigg\|_{L^2(\mathbb{R})}\\ &\qquad= \bigg\| \bigg( \frac{\xi^2}{A_0\big(1-E_{\beta,1}(-\xi^2 T^\beta)\big)} \bigg)^{k+1} \Big( 1 - e^{-\frac{\alpha^2\xi^2}{4}} \Big) (1+\xi^2)^{-k} (1+\xi^2)^{k}\, \hat f(\xi) \int_0^T D_\beta(\xi,T-s)\,\phi(s)\,ds \bigg\|_{L^2(\mathbb{R})}\\ &\qquad\le \frac{\Phi}{A_0^{k+1}}\, \sup_{\xi\in\mathbb{R}} \bigg| \bigg( \frac{\xi^2}{1-E_{\beta,1}(-\xi^2 T^\beta)} \bigg)^{k} \frac{1 - e^{-\frac{\alpha^2\xi^2}{4}}}{(1+\xi^2)^{k}} \bigg|\, M \le \Big( \frac{\Phi}{A_0} \Big)^{k+1} \bigg( \frac{1}{1-E_{\beta,1}(-T^\beta)} \bigg)^{k} M. \quad\square \end{aligned}$$
Combining (54), (56) and (58), we estimate (C₁²)^{1/(k+1)} as follows:
$$\big( C_1^2 \big)^{\frac{1}{k+1}} \le \bigg( \varepsilon\, \Big( \log\log\frac{T}{\varepsilon} \Big)^{k+1} \bigg( \frac{L_\beta(k,T)\, H_\beta(\bar B_3,T,\Phi)}{A_0\,\eta} \bigg)^{k+1} M^{k} + \Big( \frac{\Phi}{A_0} \Big)^{k+1} \bigg( \frac{1}{1-E_{\beta,1}(-T^\beta)} \bigg)^{k} \bigg)^{\frac{2}{k+1}} M^{\frac{2}{k+1}}.$$
From (51)–(59), it follows that
$$\|J_1\|_{L^2(\mathbb{R})} \le \bigg( \varepsilon\, \Big( \log\log\frac{T}{\varepsilon} \Big)^{k+1} \bigg( \frac{L_\beta(k,T)\, H_\beta(\bar B_3,T,\Phi)}{A_0\,\eta} \bigg)^{k+1} M^{k} + \Big( \frac{\Phi}{A_0} \Big)^{k+1} \bigg( \frac{1}{1-E_{\beta,1}(-T^\beta)} \bigg)^{k} \bigg)^{\frac{1}{k+1}} M^{\frac{1}{k+1}} \bigg( 2\varepsilon + \eta\, \Big( \log\log\frac{T}{\varepsilon} \Big)^{-1} \bigg)^{\frac{k}{k+1}}.$$
Step 2: Estimate for ‖J₂‖²_{L²(ℝ)}. We have
$$\|J_2\|^2_{L^2(\mathbb{R})} \le \bigg\| \frac{\hat g^\varepsilon(\xi) - \hat g(\xi)}{\int_0^T D_\beta(\xi,T-s)\,\phi(s)\,ds}\, e^{-\frac{\alpha^2\xi^2}{4}} \bigg\|^2_{L^2(\mathbb{R})} \le \bigg\| \frac{\xi^2}{A_0\big(1-E_{\beta,1}(-\xi^2 T^\beta)\big)}\, e^{-\frac{\alpha^2\xi^2}{4}} \big( \hat g^\varepsilon(\xi) - \hat g(\xi) \big) \bigg\|^2_{L^2(\mathbb{R})} \le \frac{16}{\alpha^4}\, \frac{\|\hat g^\varepsilon - \hat g\|^2_{L^2(\mathbb{R})}}{A_0^2}\, \bigg( \frac{1}{1-E_{\beta,1}(-T^\beta)} \bigg)^2.$$
Applying Lemmas 11 and 12 with k = 0, we know that
$$\|J_2\|^2_{L^2(\mathbb{R})} \le \Big( \varepsilon\, \log\log\frac{T}{\varepsilon} \Big)^2 \bigg( \frac{H_\beta(\bar B_3,T,\Phi,M)}{\eta\, A_0\, \big(1-E_{\beta,1}(-T^\beta)\big)} \bigg)^2.$$
Hence, we conclude that
$$\|J_2\|_{L^2(\mathbb{R})} \le \varepsilon\, \log\log\frac{T}{\varepsilon}\, \frac{H_\beta(\bar B_3,T,\Phi,M)}{\eta\, A_0\, \big(1-E_{\beta,1}(-T^\beta)\big)}.$$
Step 3: Estimate for ‖J₃‖²_{L²(ℝ)}. We have:
$$\begin{aligned} \|J_3\|^2_{L^2(\mathbb{R})} &\le \bigg\| \frac{\hat g(\xi)}{\int_0^T D_\beta(\xi,T-s)\,\phi(s)\,ds}\, e^{-\frac{\alpha^2\xi^2}{4}} - \frac{\hat g(\xi)}{\int_0^T D_\beta(\xi,T-s)\,\phi^\varepsilon(s)\,ds}\, e^{-\frac{\alpha^2\xi^2}{4}} \bigg\|^2_{L^2(\mathbb{R})}\\ &= \bigg\| e^{-\frac{\alpha^2\xi^2}{4}}\, \hat g(\xi)\, \frac{\int_0^T D_\beta(\xi,T-s)\big(\phi^\varepsilon(s)-\phi(s)\big)\,ds}{\int_0^T D_\beta(\xi,T-s)\,\phi(s)\,ds \int_0^T D_\beta(\xi,T-s)\,\phi^\varepsilon(s)\,ds} \bigg\|^2_{L^2(\mathbb{R})}. \end{aligned}$$
From (64), it follows that
$$\|J_3\|^2_{L^2(\mathbb{R})} \le \bigg\| \frac{\|\phi^\varepsilon-\phi\|_{C[0,T]} \int_0^T D_\beta(\xi,T-s)\,ds}{A_0 A_1 \Big( \int_0^T D_\beta(\xi,T-s)\,ds \Big)^2}\, e^{-\frac{\alpha^2\xi^2}{4}}\, \hat g(\xi) \bigg\|^2_{L^2(\mathbb{R})} \le \frac{\|\phi-\phi^\varepsilon\|^2_{C[0,T]}}{A_0^2 A_1^2}\, \bigg\| \frac{\xi^2}{1-E_{\beta,1}(-\xi^2 T^\beta)}\, e^{-\frac{\alpha^2\xi^2}{4}}\, \hat g(\xi) \bigg\|^2_{L^2(\mathbb{R})}.$$
Applying Lemma 12 with k = 0 and Lemma 11, we know that
$$\|J_3\|^2_{L^2(\mathbb{R})} \le \frac{16}{\alpha^4}\, \bigg( \frac{1}{1-E_{\beta,1}(-T^\beta)} \bigg)^2 \frac{\|\phi-\phi^\varepsilon\|^2_{C[0,T]}}{A_0^2 A_1^2}\, \|\hat g\|^2_{L^2(\mathbb{R})} \le \Big( \varepsilon\, \log\log\frac{T}{\varepsilon} \Big)^2 \bigg( \frac{H_\beta(\bar B_3,T,\Phi,M)\, \|\hat g\|_{L^2(\mathbb{R})}}{\eta\, \big(1-E_{\beta,1}(-T^\beta)\big)\, A_0 A_1} \bigg)^2.$$
Therefore,
$$\|J_3\|_{L^2(\mathbb{R})} \le \varepsilon\, \log\log\frac{T}{\varepsilon}\, \frac{H_\beta(\bar B_3,T,\Phi,M)\, \|\hat g\|_{L^2(\mathbb{R})}}{\eta\, \big(1-E_{\beta,1}(-T^\beta)\big)\, A_0 A_1}.$$
Combining (60), (63) and (67), we get:
$$\begin{aligned} \|f(\cdot)-f^{\varepsilon,\alpha}(\cdot)\|_{L^2(\mathbb{R})} = \|\hat f(\cdot)-\hat f^{\varepsilon,\alpha}(\cdot)\|_{L^2(\mathbb{R})} \le{}& \bigg( \varepsilon \Big(\log\log\frac{T}{\varepsilon}\Big)^{k+1} \Big(\frac{L_\beta(k,T)\, H_\beta(\bar B_3,T,\Phi)}{A_0\,\eta}\Big)^{k+1} M^{k} + \Big(\frac{\Phi}{A_0}\Big)^{k+1}\Big(\frac{1}{1-E_{\beta,1}(-T^\beta)}\Big)^{k} \bigg)^{\frac{1}{k+1}} M^{\frac{1}{k+1}}\\ &\times \bigg( 2\varepsilon + \eta\Big(\log\log\frac{T}{\varepsilon}\Big)^{-1} \bigg)^{\frac{k}{k+1}}\\ &+ \varepsilon\,\log\log\frac{T}{\varepsilon}\, \frac{H_\beta(\bar B_3,T,\Phi,M)}{\eta\, A_0\,\big(1-E_{\beta,1}(-T^\beta)\big)} + \varepsilon\,\log\log\frac{T}{\varepsilon}\, \frac{H_\beta(\bar B_3,T,\Phi,M)\,\|\hat g\|_{L^2(\mathbb{R})}}{\eta\,\big(1-E_{\beta,1}(-T^\beta)\big)\, A_0 A_1}. \end{aligned}$$
Noting that
$$\lim_{\varepsilon\to 0}\, \varepsilon\, \log\log\frac{T}{\varepsilon} = 0, \qquad \lim_{\varepsilon\to 0}\, \varepsilon\, \Big( \log\log\frac{T}{\varepsilon} \Big)^{k+1} = 0.$$
Combining (68) and (69), we conclude that
$$\|f(\cdot)-f^{\varepsilon,\alpha}(\cdot)\|_{L^2(\mathbb{R})} = \|\hat f(\cdot)-\hat f^{\varepsilon,\alpha}(\cdot)\|_{L^2(\mathbb{R})} \to 0 \quad \text{as } \varepsilon\to 0.$$
The proof of Theorem 2 is completed. □

5. Numerical Experiments

In this section, in order to illustrate the usefulness of the proposed method, we carry out the above regularization method numerically. The numerical examples are shown with T = 1 and β = 0.4, β = 0.95, respectively. In the following, we give an example that has an exact expression for the solution pair (u(x,t), f(x)). The computations use the Matlab code given by Podlubny [18] for the generalized Mittag-Leffler function, with a computational accuracy of 10⁻¹⁰. We run the numerical tests with x ∈ [−7, 7] and η = 1.1. The couple (ϕ^ε, g^ε), determined below, plays the role of measured data with random noise:
$$\phi^\varepsilon(\cdot) = \phi(\cdot) + \varepsilon\big( 2\,\mathrm{rand}(\cdot) - 1 \big), \qquad g^\varepsilon(\cdot) = g(\cdot) + \varepsilon\big( 2\,\mathrm{rand}(\cdot) - 1 \big).$$
Following Reference [9], the function rand(·) generates arrays of random numbers whose elements are uniformly distributed on (0,1); rand(size(g)) and rand(size(ϕ)) return arrays of random entries of the same size as g and ϕ, respectively. We can then easily verify the validity of the inequalities:
$$\|\phi^\varepsilon - \phi\|_{C[0,T]} \le \varepsilon, \qquad \|g^\varepsilon - g\|_{L^2(\mathbb{R})} \le \varepsilon.$$
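In Python, the same bounded perturbation can be generated as follows (an illustrative assumption; the paper's experiments use MATLAB's rand). Since |2·rand − 1| ≤ 1, the inequalities above hold by construction:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility (an assumption)

def add_noise(values, eps):
    """Perturb data as in the experiment: data + eps*(2*rand - 1), where
    rand is uniform on (0,1), so each entry is perturbed by at most eps
    in absolute value.
    """
    return values + eps * (2.0 * rng.random(values.shape) - 1.0)
```

Applying this to samples of g and ϕ yields measured data satisfying the noise-level assumption exactly, not just in expectation.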
In this example, we consider a one-dimensional case of problem (1), where f is the exact source function:
$$\begin{cases} \dfrac{\partial^\beta u(x,t)}{\partial t^\beta} = u_{xx}(x,t) + \phi(t)f(x), & (x,t)\in\mathbb{R}\times(0,T],\\ u(x,0)=0, & x\in\mathbb{R},\\ u(x,1)=g(x), & x\in\mathbb{R}. \end{cases}$$
In this example, we choose the following exact solution:
$$u(x,t) = \Big( E_{\beta,1}(t^\beta) - E_{\beta,1}(-t^\beta) \Big)\, \sin\frac{x}{2}.$$
Then a simple computation yields
$$\phi(t) = \frac{5}{4}\, E_{\beta,1}(t^\beta) + \frac{3}{4}\, E_{\beta,1}(-t^\beta),$$
and f(x) = sin(x/2). Moreover, we have u(x,0) = u₀(x) = 0 and
$$u(x,1) = u_1(x) = g(x) = \Big( E_{\beta,1}(1) - E_{\beta,1}(-1) \Big)\, \sin\frac{x}{2}.$$
Next, to compute the integral in the latter equality (see Reference [19]), we use the fact that
$$\int_0^x u^{\kappa-1}\, E_{\beta,\kappa}(y u^\beta)\, (x-u)^{\beta-1}\, E_{\beta,\beta}\big(z(x-u)^\beta\big)\,du = \frac{y\, E_{\beta,\kappa+\beta}(y x^\beta) - z\, E_{\beta,\beta+\kappa}(z x^\beta)}{y - z}\, x^{\beta+\kappa-1}.$$
From ϕ^ε(·) = ϕ(·) + ε(2 rand(·) − 1), we have
$$\int_0^1 s^{\beta-1}\, E_{\beta,\beta}(-\xi^2 s^\beta)\, \phi^\varepsilon(1-s)\,ds = \int_0^1 s^{\beta-1}\, E_{\beta,\beta}(-\xi^2 s^\beta)\, \phi(1-s)\,ds + \varepsilon\big( 2\,\mathrm{rand}(\cdot) - 1 \big) \int_0^1 s^{\beta-1}\, E_{\beta,\beta}(-\xi^2 s^\beta)\,ds.$$
Combining (72), (75) and (78), we have
$$\int_0^1 s^{\beta-1}\, E_{\beta,\beta}(-\xi^2 s^\beta)\, \phi^\varepsilon(1-s)\,ds = \frac{5}{4}\, \frac{E_{\beta,\beta+1}(1) + \xi^2 E_{\beta,\beta+1}(-\xi^2)}{1+\xi^2} - \frac{3}{4}\, \frac{E_{\beta,\beta+1}(-1) - \xi^2 E_{\beta,\beta+1}(-\xi^2)}{-1+\xi^2} + \varepsilon\big( 2\,\mathrm{rand}(\cdot) - 1 \big)\, \frac{1 - E_{\beta,1}(-\xi^2)}{\xi^2}.$$
In general, the numerical method, following References [20,21], is summarized in three steps as follows.
Step 1: Choose N to generate the spatial discretization:
$$x_i = i\,\Delta x, \qquad \Delta x = \frac{\pi}{N}, \qquad i = \overline{0,N}.$$
Obviously, a higher value of N provides numerical results that are more accurate and stable. Here the choice N = 100 is sufficient.
Step 2: Setting f^{ε,α}(x_i) = f^i_{ε,α} and f(x_i) = f_i, we construct two vectors containing all discrete values of f^{ε,α} and f, denoted by Λ^{ε,α} and Ψ, respectively:
$$\Lambda^{\varepsilon,\alpha} = \big[ f^{0}_{\varepsilon,\alpha}\ \ f^{1}_{\varepsilon,\alpha}\ \ \dots\ \ f^{Q}_{\varepsilon,\alpha} \big] \in \mathbb{R}^{Q+1}, \qquad \Psi = \big[ f_0\ \ f_1\ \ \dots\ \ f_{Q-1}\ \ f_Q \big] \in \mathbb{R}^{Q+1}.$$
Step 3: The error estimates are defined as follows.
Relative error estimation:
$$E_1 = \frac{\sqrt{\sum_{i=1}^{N} \big| f^{\varepsilon,\alpha}(x_i) - f(x_i) \big|^2}}{\sqrt{\sum_{i=1}^{N} \big| f(x_i) \big|^2}}.$$
Absolute error estimation:
$$E_2 = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \big| f^{\varepsilon,\alpha}(x_i) - f(x_i) \big|^2 }.$$
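The two error estimates translate directly into code. The following Python sketch (function names are ours, an illustrative assumption) computes E₁ and E₂ from the discrete values:

```python
import numpy as np

def relative_error(f_reg, f_exact):
    """Relative error E1: discrete l2 norm of the difference divided by
    the discrete l2 norm of the exact values."""
    return np.sqrt(np.sum((f_reg - f_exact) ** 2) / np.sum(f_exact ** 2))

def absolute_error(f_reg, f_exact):
    """Absolute (root-mean-square) error E2 over the N grid points."""
    return np.sqrt(np.sum((f_reg - f_exact) ** 2) / f_exact.size)
```

These are the quantities reported in Tables 1 and 2 below.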
Figure 1 shows 2D plots of the exact and regularized solutions for N = 100 and β = 0.95, with ε = 0.1, ε = 0.01 and ε = 0.001, respectively.
Tables 1 and 2 report the error estimates for both the a priori and the a posteriori parameter choice rules in the case N = 100. Table 1 shows the errors for both rules at β = 0.95 with ε ∈ {0.1, 0.01, 0.001}. Table 2 shows the relative and absolute errors for both rules with ε = 0.01 and different values of β ∈ {0.2, 0.4, 0.6, 0.8}. From the results in the tables and figures above, we conclude that as ε tends to zero, the regularized solution approaches the exact solution.

6. Conclusions

In this study, using the mollification regularization method, we solved the inverse problem of recovering the source term for a time-fractional diffusion equation with a time-dependent coefficient. In the theoretical results, we obtained error estimates under both the a priori and the a posteriori parameter choice rules, based on an a priori condition. In addition, the numerical results show that the regularized solutions converge to the exact solution, and that the smaller the error of the input data, the better the convergence.

Author Contributions

Project administration, Y.Z.; Resources, T.T.B.; Methodology, L.D.L.; Writing-review, editing and software, N.C.

Funding

The work was supported by the Fundo para o Desenvolvimento das Ciências e da Tecnologia (FDCT) of Macau under Grant 0074/2019/A2 and NNSF of China (11671339).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Malyshev, I. An inverse source problem for heat equation. J. Math. Anal. Appl. 1989, 142, 206–218. [Google Scholar] [CrossRef]
  2. Geng, F.; Lin, Y. Application of the variational iteration method to inverse heat source problems. Comput. Math. Appl. 2009, 58, 2098–2102. [Google Scholar] [CrossRef]
  3. Shidfar, A.; Babaei, A.; Molabahrami, A. Solving the inverse problem of identifying an unknown source term in a parabolic equation. Comput. Math. Appl. 2010, 60, 1209–1213. [Google Scholar] [CrossRef]
  4. Barkai, E.; Metzler, R.; Klafter, J. From continuous time random walks to the fractional Fokker-Planck equation. Phys. Rev. E 2000, 61, 132–138. [Google Scholar] [CrossRef] [PubMed]
  5. Chaves, A. Fractional diffusion equation to describe Lévy flights. Phys. Lett. A 1998, 239, 13–16. [Google Scholar] [CrossRef]
  6. Gorenflo, R.; Mainardi, F.; Scalas, E.; Raberto, M. Fractional calculus and continuous-time finance: the diffusion limit. In Mathematical Finance. Trends in Mathematics; Birkhäuser: Basel, Switzerland, 2001; pp. 171–180. [Google Scholar]
  7. Sakamoto, K.; Yamamoto, M. Initial value/boundary value problems for fractional diffusion-wave equations and applications to some inverse problems. J. Math. Anal. Appl. 2011, 382, 426–447. [Google Scholar] [CrossRef]
  8. Zhang, Z.Q.; Wei, T. Identifying an unknown source in time-fractional diffusion equation by a truncation method. Appl. Math. Comput. 2013, 219, 5972–5983. [Google Scholar] [CrossRef]
  9. Yang, F.; Fu, C.L. The quasi-reversibility regularization method for identifying the unknown source for time fractional diffusion equation. Appl. Math. Modell. 2015, 39, 1500–1512. [Google Scholar] [CrossRef]
  10. Wei, T.; Wang, J.G. A modified Quasi-Boundary Value method for an inverse source problem of the time fractional diffusion equation. Appl. Numer. Math. 2014, 78, 95–111. [Google Scholar] [CrossRef]
  11. Yang, F.; Fu, C.L.; Li, X.X. A mollification regularization method for identifying the time-dependent heat source problem. J. Eng. Math. 2016, 100, 67–80. [Google Scholar] [CrossRef]
  12. Wei, T.; Wang, J.G. Quasi-reversibility method to identify a space-dependent source for the time-fractional diffusion equation. Appl. Math. Modell. 2015, 39, 6139–6149. [Google Scholar] [CrossRef]
  13. Nguyen, H.T.; Le, D.L.; Nguyen, V.T. Regularized solution of an inverse source problem for the time fractional diffusion equation. Appl. Math. Modell. 2016, 40, 8244–8264. [Google Scholar] [CrossRef]
  14. Podlubny, I. Fractional Differential Equations; Academic Press: San Diego, CA, USA, 1999. [Google Scholar]
  15. Samko, S.G.; Kilbas, A.A.; Marichev, O.I. Fractional Integrals and Derivatives: Theory and Application; Gordon and Breach: New York, NY, USA, 1993. [Google Scholar]
  16. Yang, F.; Fu, C.L. A mollification regularization method for the inverse spatial-dependent heat source problem. J. Comput. Appl. Math. 2014, 255, 555–567. [Google Scholar] [CrossRef]
  17. Kirsch, A. An Introduction to the Mathematical Theory of Inverse Problems; Applied Mathematical Sciences; Springer Science+Business Media: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  18. Podlubny, I.; Kacenak, M. Mittag-Leffler Function. The MATLAB Routine. 2006. Available online: http://www.mathworks.com/matlabcentral/fileexchange (accessed on 19 September 2019).
  19. Mathai, A.M.; Haubold, H.J. Mittag Leffler function and Fractional Calculus. In Special Functions for Applied Scientists; Springer: New York, NY, USA, 2008; Chapter 2. [Google Scholar]
  20. Meerschaert, M.; Tadjeran, C. Finite difference approximations for two-sided space-fractional partial differential equations. Appl. Numer. Math. 2006, 56, 80–90. [Google Scholar] [CrossRef]
  21. Bu, W.; Liu, X.; Tang, Y.; Jiang, Y. Finite element multigrid method for multiterm time fractional advection-diffusion equations. Int. J. Model. Simul. Sci. Comput. 2015, 6, 1540001. [Google Scholar] [CrossRef]
Figure 1. A comparison between the exact and regularized solutions for k = 1 , β = 0.95 with N = 100 . (a) ε = 0.1 . (b) ε = 0.01 . (c) ε = 0.001 .
Mathematics 07 01048 g001
Table 1. The error estimation between the exact and regularized solutions of this example at β = 0.95 with N = 100 .
| ε | E₁ (a priori) | E₁ (a posteriori) | E₂ (a priori) | E₂ (a posteriori) |
|---|---|---|---|---|
| 0.1 | 0.279660141830880 | 0.163452531664322 | 0.188256991900635 | 0.110030273632189 |
| 0.01 | 0.167130513450332 | 0.146077554813055 | 0.112506156619184 | 0.098334073898654 |
| 0.001 | 0.144054212078375 | 0.144599158066180 | 0.096972033479447 | 0.097338871212350 |
Table 2. The error estimation between the exact and regularized solutions with the different values of β , ε = 0.01 and N = 100 .
| β | E₁ (a priori) | E₁ (a posteriori) | E₂ (a priori) | E₂ (a posteriori) |
|---|---|---|---|---|
| 0.2 | 0.156401672575436 | 0.176079016470940 | 0.078962919638416 | 0.092189970426402 |
| 0.4 | 0.146364358305196 | 0.165153671589525 | 0.073895354649786 | 0.086469770247512 |
| 0.6 | 0.136338164832119 | 0.153413164488168 | 0.068833404246973 | 0.080322774289912 |
| 0.8 | 0.124692172130227 | 0.140316883268202 | 0.062953661590221 | 0.073465933522836 |