Correction published on 24 September 2019, see Mathematics 2019, 7(10), 888.
Article

Generalized Tikhonov Method and Convergence Estimate for the Cauchy Problem of Modified Helmholtz Equation with Nonhomogeneous Dirichlet and Neumann Datum

1 School of Mathematics and Information Science, North Minzu University, Yinchuan 750021, China
2 Development Center of Teachers' Teaching, North Minzu University, Yinchuan 750021, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2019, 7(8), 667; https://doi.org/10.3390/math7080667
Submission received: 18 June 2019 / Revised: 16 July 2019 / Accepted: 23 July 2019 / Published: 25 July 2019
(This article belongs to the Special Issue Numerical Analysis: Inverse Problems – Theory and Applications)

Abstract: We investigate a Cauchy problem of the modified Helmholtz equation with nonhomogeneous Dirichlet and Neumann data. This problem is ill-posed, so regularization techniques are required to stabilize numerical computation. We establish a conditional stability result under an a priori assumption on the exact solution. A generalized Tikhonov method is proposed to solve this problem; we select the regularization parameter by a priori and a posteriori rules and derive convergence estimates of sharp type for this method. Numerical experiments are carried out to verify that our regularization method is practical and effective.

1. Introduction

The modified Helmholtz equation has many important applications in practical and theoretical fields, such as the Debye–Hückel theory, implicit marching schemes for the heat equation, and the linearization of the Poisson–Boltzmann equation (please see [1,2,3,4]). For this reason, the forward problem for this equation has received extensive attention and has been studied deeply over the past century. However, in some scientific research there arise inverse problems for this equation: for instance, the data on the entire boundary are often unknown, and only data on part of the boundary, or at certain interior points of the domain, can be measured. The Cauchy problem of the modified Helmholtz equation belongs to this kind of inverse problem. In the present article, we study the Cauchy problem, outlined in Equation (1), of the modified Helmholtz equation:
$$
\begin{cases}
\Delta w(y,x)-k^{2}w(y,x)=0, & x\in(0,\pi),\; y\in(0,T),\\
w(0,x)=\varphi(x), & x\in[0,\pi],\\
w_{y}(0,x)=\psi(x), & x\in[0,\pi],\\
w(y,0)=w(y,\pi)=0, & y\in[0,T],
\end{cases}\tag{1}
$$
where k is a positive real number. Since (1) is a linear problem, we can divide it into two problems, i.e., the Cauchy problem with nonhomogeneous Dirichlet data
$$
\begin{cases}
\Delta u(y,x)-k^{2}u(y,x)=0, & x\in(0,\pi),\; y\in(0,T),\\
u(0,x)=\varphi(x), & x\in[0,\pi],\\
u_{y}(0,x)=0, & x\in[0,\pi],\\
u(y,0)=u(y,\pi)=0, & y\in[0,T],
\end{cases}\tag{2}
$$
and the Cauchy problem with inhomogeneous Neumann data
$$
\begin{cases}
\Delta v(y,x)-k^{2}v(y,x)=0, & x\in(0,\pi),\; y\in(0,T),\\
v(0,x)=0, & x\in[0,\pi],\\
v_{y}(0,x)=\psi(x), & x\in[0,\pi],\\
v(y,0)=v(y,\pi)=0, & y\in[0,T].
\end{cases}\tag{3}
$$
From the principle of linear superposition, the solution of (1) can be expressed as $w=u+v$. Based on this, we only need to investigate (2) and (3), respectively.
Problems (2) and (3) are both ill-posed: a small disturbance in the given data can produce a considerable error in the solution [5,6,7], so regularization techniques are required to overcome the ill-posedness and stabilize numerical computations; see [8,9] for some regularization strategies. In past years, many papers have studied the Cauchy problem of the modified Helmholtz equation and designed meaningful regularization methods and numerical techniques, such as quasi-reversibility-type methods [10,11,12,13,14], the filtering method [15], iterative methods [16], the mollification method [17,18], the spectral method [19,20], alternating iterative algorithms [21,22], the modified Tikhonov method [20,23], the Fourier truncation method [12,24], a novel Trefftz method [25], the weighted generalized Tikhonov method [26], and so on.
This paper establishes the conditional stability of problems (2) and (3) and constructs a generalized Tikhonov regularization method to solve these two problems (see Section 3). Our work is not only an extension of the boundary (or revised) Tikhonov method [20] but also a supplement to the one in [27]. In [27], the author presented a generalized Tikhonov method to solve an abstract Cauchy problem with inhomogeneous Dirichlet and Neumann data in a bounded domain and derived a priori convergence results for the regularized solutions, but did not establish a posteriori convergence estimates. In this work, we derive both a priori and a posteriori sharp convergence results for our regularized solutions and give an a posteriori selection rule for the regularization parameter, which is relatively rare in solving the Cauchy problem of the modified Helmholtz equation.
The paper is organized as follows: Section 2 derives the conditional stability of (2) and (3). Section 3 constructs the regularization methods, and Section 4 states some preparatory knowledge. In Section 5, the a priori and a posteriori convergence estimates of sharp type are established. Numerical experiments verifying the computational effect of the regularized solution are given in Section 6. Section 7 presents some conclusions and the corresponding discussion.

2. Conditional Stability

We know that (2) and (3) are both ill-posed in the sense of Hadamard: their solutions do not depend continuously on the given Cauchy data. However, in the research of inverse problems, by assuming a certain a priori condition on the solution, we can often obtain the stability of the considered problem, i.e., conditional stability (see [28,29,30]). Below, we state and prove the conditional stability of problems (2) and (3). Define
$$
D_{\gamma}^{\xi}=\Big\{\xi\in L^{2}(0,\pi);\ \sum_{n=1}^{\infty}(n^{2}+k^{2})^{2\gamma}e^{4T\sqrt{n^{2}+k^{2}}}\,\langle \xi,X_{n}\rangle^{2}<+\infty\Big\},\quad \gamma\ge1,\tag{4}
$$
where $\langle\cdot,\cdot\rangle$ denotes the inner product in $L^{2}(0,\pi)$ and $X_{n}:=X_{n}(x)=\sqrt{2/\pi}\,\sin(nx)$ ($n\ge1$) are the orthonormal eigenfunctions forming a basis of $L^{2}(0,\pi)$. According to (4), we define the norm for the space $D_{\gamma}^{\xi}$ as
$$
\|\xi\|_{D_{\gamma}^{\xi}}=\Big(\sum_{n=1}^{\infty}(n^{2}+k^{2})^{2\gamma}e^{4T\sqrt{n^{2}+k^{2}}}\,\langle \xi,X_{n}\rangle^{2}\Big)^{1/2},\quad \gamma\ge1.\tag{5}
$$
Using the separation of variables, the solutions of (2) and (3) can be expressed as
$$
u(y,x)=\sum_{n=1}^{\infty}\cosh\big(\sqrt{n^{2}+k^{2}}\,y\big)\,\varphi_{n}X_{n},\qquad \varphi_{n}=\langle\varphi,X_{n}\rangle,\tag{6}
$$
$$
v(y,x)=\sum_{n=1}^{\infty}\frac{\sinh\big(\sqrt{n^{2}+k^{2}}\,y\big)}{\sqrt{n^{2}+k^{2}}}\,\psi_{n}X_{n},\qquad \psi_{n}=\langle\psi,X_{n}\rangle.\tag{7}
$$
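As a quick illustration (ours, not from the paper), the series representation (6) can be evaluated numerically by truncating the sum; the hyperbolic growth of the coefficients in $y$ is precisely what makes the problem ill-posed. The sketch below assumes the data $\varphi(x)=\sin(x)$, whose only nonzero coefficient against $X_{n}$ is $\varphi_{1}=\sqrt{\pi/2}$; function and variable names are our own.

```python
import numpy as np

def u_series(y, x, phi_coeffs, k=1.0):
    """Evaluate the truncated series solution (6) of problem (2):
    u(y,x) = sum_n cosh(sqrt(n^2+k^2)*y) * phi_n * X_n(x),
    with X_n(x) = sqrt(2/pi)*sin(n*x)."""
    n = np.arange(1, len(phi_coeffs) + 1)
    s = np.sqrt(n**2 + k**2)
    X = np.sqrt(2 / np.pi) * np.sin(np.outer(x, n))   # shape (len(x), N)
    return X @ (np.cosh(s * y) * phi_coeffs)

# phi(x) = sin(x): only the first coefficient is nonzero, phi_1 = sqrt(pi/2)
N = 20
phi = np.zeros(N)
phi[0] = np.sqrt(np.pi / 2)

x = np.linspace(0, np.pi, 101)
u = u_series(0.5, x, phi, k=1.0)
# exact solution for this datum: u(y,x) = sin(x)*cosh(sqrt(1+k^2)*y)
exact = np.sin(x) * np.cosh(np.sqrt(2.0) * 0.5)
print(np.max(np.abs(u - exact)))   # machine-precision agreement
```

A perturbation in a high-frequency coefficient $\varphi_{n}$ is multiplied by $\cosh(\sqrt{n^{2}+k^{2}}\,y)\sim e^{\sqrt{n^{2}+k^{2}}\,y}/2$, which illustrates the instability discussed below.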
Theorem 1.
Let $E>0$, $K=\sqrt{1+k^{2}}$, and let $u(T,x)$ satisfy the a priori bound condition
$$\|u(T,x)\|_{D_{\gamma}^{u}}\le E;\tag{8}$$
then for each fixed $0<y\le T$ it holds that
$$\|u(y,x)\|_{L^{2}(0,\pi)}\le\big(2K^{-\gamma}e^{-KT}\big)^{\frac{y}{2T}}E^{\frac{y}{2T}}\|\varphi\|_{L^{2}(0,\pi)}^{1-\frac{y}{2T}}.\tag{9}$$
Proof of Theorem 1.
Note that, for $0<y\le T$ and $n\ge1$, $e^{\sqrt{n^{2}+k^{2}}\,y}/2\le\cosh(\sqrt{n^{2}+k^{2}}\,y)\le e^{\sqrt{n^{2}+k^{2}}\,y}$ and $\sqrt{n^{2}+k^{2}}\ge\sqrt{1+k^{2}}=K$; then from (6), (8) and the Hölder inequality, we have
$$
\begin{aligned}
\|u(y,\cdot)\|_{L^{2}(0,\pi)}
&=\Big\|\sum_{n=1}^{\infty}\cosh\big(\sqrt{n^{2}+k^{2}}\,y\big)\varphi_{n}X_{n}\Big\|_{L^{2}(0,\pi)}
\le\Big(\sum_{n=1}^{\infty}\cosh^{2}\big(\sqrt{n^{2}+k^{2}}\,y\big)\varphi_{n}^{\frac{y}{T}}\varphi_{n}^{2-\frac{y}{T}}\Big)^{\frac12}\\
&\le\Big(\sum_{n=1}^{\infty}\cosh^{\frac{4T}{y}}\big(\sqrt{n^{2}+k^{2}}\,y\big)\varphi_{n}^{2}\Big)^{\frac{y}{4T}}\Big(\sum_{n=1}^{\infty}\varphi_{n}^{2}\Big)^{\frac12-\frac{y}{4T}}
\le\Big(\sum_{n=1}^{\infty}e^{4T\sqrt{n^{2}+k^{2}}}\varphi_{n}^{2}\Big)^{\frac{y}{4T}}\|\varphi\|_{L^{2}(0,\pi)}^{1-\frac{y}{2T}}\\
&=\Big(\sum_{n=1}^{\infty}\frac{(n^{2}+k^{2})^{2\gamma}e^{4T\sqrt{n^{2}+k^{2}}}\cosh^{2}(\sqrt{n^{2}+k^{2}}\,T)\,\varphi_{n}^{2}}{(n^{2}+k^{2})^{2\gamma}\cosh^{2}(\sqrt{n^{2}+k^{2}}\,T)}\Big)^{\frac{y}{4T}}\|\varphi\|_{L^{2}(0,\pi)}^{1-\frac{y}{2T}}\\
&\le\big(4K^{-2\gamma}e^{-2KT}\big)^{\frac{y}{4T}}\Big(\sum_{n=1}^{\infty}(n^{2}+k^{2})^{2\gamma}e^{4T\sqrt{n^{2}+k^{2}}}\,|\langle u(T,\cdot),X_{n}\rangle|^{2}\Big)^{\frac{y}{4T}}\|\varphi\|_{L^{2}(0,\pi)}^{1-\frac{y}{2T}}\\
&\le\big(2K^{-\gamma}e^{-KT}\big)^{\frac{y}{2T}}E^{\frac{y}{2T}}\|\varphi\|_{L^{2}(0,\pi)}^{1-\frac{y}{2T}}. \qquad\square
\end{aligned}
$$
Theorem 2.
Suppose that $v(T,x)$ satisfies the a priori condition
$$\|v(T,x)\|_{D_{\gamma}^{v}}\le E;\tag{10}$$
then for each fixed $0<y\le T$ we have
$$\|v(y,x)\|_{L^{2}(0,\pi)}\le\Big(\frac{2K^{\,1-\gamma-\frac{2T}{y}}}{e^{KT}\big(1-e^{-2KT}\big)}\Big)^{\frac{y}{2T}}E^{\frac{y}{2T}}\|\psi\|_{L^{2}(0,\pi)}^{1-\frac{y}{2T}}.\tag{11}$$
Proof of Theorem 2.
For $n\ge1$, we notice that $\sinh(\sqrt{n^{2}+k^{2}}\,y)\le e^{\sqrt{n^{2}+k^{2}}\,y}$, $\sqrt{n^{2}+k^{2}}\ge\sqrt{1+k^{2}}:=K$, and $\sinh(\sqrt{n^{2}+k^{2}}\,y)\ge e^{\sqrt{n^{2}+k^{2}}\,y}(1-e^{-2Ky})/2$; then from (7), (10) and the Hölder inequality, we have
$$
\begin{aligned}
\|v(y,\cdot)\|_{L^{2}(0,\pi)}
&\le\Big(\sum_{n=1}^{\infty}\frac{\sinh^{2}(\sqrt{n^{2}+k^{2}}\,y)}{n^{2}+k^{2}}\,\psi_{n}^{\frac{y}{T}}\psi_{n}^{2-\frac{y}{T}}\Big)^{\frac12}
\le\Big(\sum_{n=1}^{\infty}\Big(\frac{\sinh(\sqrt{n^{2}+k^{2}}\,y)}{\sqrt{n^{2}+k^{2}}}\Big)^{\frac{4T}{y}}\psi_{n}^{2}\Big)^{\frac{y}{4T}}\|\psi\|_{L^{2}(0,\pi)}^{1-\frac{y}{2T}}\\
&\le\Big(\sum_{n=1}^{\infty}e^{4T\sqrt{n^{2}+k^{2}}}\big(\sqrt{n^{2}+k^{2}}\big)^{-\frac{4T}{y}}\psi_{n}^{2}\Big)^{\frac{y}{4T}}\|\psi\|_{L^{2}(0,\pi)}^{1-\frac{y}{2T}}\\
&=\Big(\sum_{n=1}^{\infty}\frac{\big(\sqrt{n^{2}+k^{2}}\big)^{2-\frac{4T}{y}}}{(n^{2}+k^{2})^{2\gamma}\sinh^{2}(\sqrt{n^{2}+k^{2}}\,T)}\,(n^{2}+k^{2})^{2\gamma}e^{4T\sqrt{n^{2}+k^{2}}}\,\frac{\sinh^{2}(\sqrt{n^{2}+k^{2}}\,T)}{n^{2}+k^{2}}\,\psi_{n}^{2}\Big)^{\frac{y}{4T}}\|\psi\|_{L^{2}(0,\pi)}^{1-\frac{y}{2T}}\\
&\le\Big(4K^{2-\frac{4T}{y}-2\gamma}e^{-2KT}\big(1-e^{-2KT}\big)^{-2}\Big)^{\frac{y}{4T}}\Big(\sum_{n=1}^{\infty}(n^{2}+k^{2})^{2\gamma}e^{4T\sqrt{n^{2}+k^{2}}}\,|\langle v(T,\cdot),X_{n}\rangle|^{2}\Big)^{\frac{y}{4T}}\|\psi\|_{L^{2}(0,\pi)}^{1-\frac{y}{2T}}\\
&\le\Big(\frac{2K^{\,1-\gamma-\frac{2T}{y}}}{e^{KT}\big(1-e^{-2KT}\big)}\Big)^{\frac{y}{2T}}E^{\frac{y}{2T}}\|\psi\|_{L^{2}(0,\pi)}^{1-\frac{y}{2T}}.
\end{aligned}
$$
From the inequality above, we can derive the conditional stability result (11). □
In considering an inverse problem, it is very important to study conditional stability, which has theoretical significance of its own. For instance, from a stability result one can often deduce the uniqueness of the solution and the convergence estimate of a regularization method. A typical stability result has the form $\|f\|\le\omega(\|g\|)$, where the stability function $\omega$ is nonnegative and monotonically increasing and satisfies $\omega(\delta)\to0$ as $\delta\to0$. Stability results mainly come in two forms: (1) Hölder type, $\omega(\delta)=\delta^{\theta}$ with $\theta\in(0,1)$; (2) logarithmic type, $\omega(\delta)=(\ln(1/\delta))^{-1}$. A Hölder-type bound tends to zero quickly as $\delta\to0$, whereas a logarithmic-type bound decays relatively slowly.
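The gap between the two stability types can be made concrete with a small numerical comparison (our illustration; $\theta=1/2$ is an arbitrary sample value):

```python
import numpy as np

# Hölder-type stability omega(d) = d**theta decays much faster than
# logarithmic-type omega(d) = 1/ln(1/d) as the noise level d -> 0.
for d in [1e-2, 1e-4, 1e-8]:
    holder = d**0.5                      # theta = 1/2
    logarithmic = 1.0 / np.log(1.0 / d)
    print(f"delta={d:.0e}: Holder {holder:.2e}, logarithmic {logarithmic:.2e}")
```

Already at $\delta=10^{-8}$ the Hölder bound is several hundred times smaller than the logarithmic one.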
Now we interpret the conditional stability results of Theorems 1 and 2 in detail. We point out that, in establishing a conditional stability result, the a priori assumption should be imposed appropriately: if the a priori condition is too strong, the derived result depends excessively on a priori information about the solution; if it is too weak, the conditional stability estimate cannot easily be derived. From (9) we notice that, under the a priori assumption (8), the solution $u$ depends continuously on the Cauchy data $\varphi$; (11) indicates that the solution $v$ depends continuously on the data $\psi$ under the a priori condition (10); meanwhile, the constants that appear depend on $\gamma$, $y$, $K$, $T$. According to the description in the preceding paragraph, the stability results (9) and (11) both belong to the Hölder type, and based on these two estimates we will derive the a posteriori convergence estimates for the regularization methods in Section 5.

3. Regularization Method

There is a large number of recent papers on conditional stability estimates in combination with variational regularization methods, based more or less on reference [29]. In recent years, new works and results in this field have appeared in Hilbert spaces, Hilbert scales, and Banach space settings; please see [31,32,33,34,35], etc.
From (6) and (7), we know that $\cosh(\sqrt{n^{2}+k^{2}}\,y)$ and $\sinh(\sqrt{n^{2}+k^{2}}\,y)/\sqrt{n^{2}+k^{2}}$ are unbounded as $n\to\infty$, so they amplify the errors in the measured data; hence problems (2) and (3) are both ill-posed. In the following, we design regularization methods to restore the stability of the solutions given by (6) and (7). Our method is a generalized Tikhonov regularization built on the conditional stability estimates.

3.1. Regularization Method for Problem (2)

For all $k>0$, following the approach of [27], we can transform (2) equivalently into the operator equation
$$A_{1}(y)u(y,x)=\varphi(x),\tag{12}$$
here $A_{1}(y)=\cosh^{-1}\big(y\sqrt{L_{x}}\big):L^{2}(0,\pi)\to L^{2}(0,\pi)$ is a linear, self-adjoint, bounded, compact operator with eigenvalues $1/\cosh(\sqrt{n^{2}+k^{2}}\,y)$ and eigenfunctions $X_{n}$, and $L_{x}:L^{2}(0,\pi)\to L^{2}(0,\pi)$ is a linear self-adjoint positive definite operator with eigenvalues $n^{2}+k^{2}$ and eigenfunctions $X_{n}$.
Let $u^{\delta}(0,x)=\varphi^{\delta}(x)$ denote the noisy data and fix $\gamma\ge1$. We construct a generalized Tikhonov regularized solution $u_{\alpha}^{\delta}(y,x)$ by solving the minimization problem
$$\min_{u\in L^{2}(0,\pi)}J_{\alpha}(u),\qquad J_{\alpha}(u)=\big\|A_{1}(y)u-\varphi^{\delta}(x)\big\|_{L^{2}(0,\pi)}^{2}+\alpha\Big\|L_{x}^{\frac{\gamma}{2}}\,\frac{\cosh\big(T\sqrt{L_{x}}\big)}{\cosh\big(y\sqrt{L_{x}}\big)}\,u\Big\|_{L^{2}(0,\pi)}^{2};\tag{13}$$
hence $u_{\alpha}^{\delta}(y,x)$ is the solution of the Euler equation
$$\Big(\cosh^{-2}\big(y\sqrt{L_{x}}\big)+\alpha L_{x}^{\gamma}\,\frac{\cosh^{2}\big(T\sqrt{L_{x}}\big)}{\cosh^{2}\big(y\sqrt{L_{x}}\big)}\Big)u_{\alpha}^{\delta}(y,x)=\cosh^{-1}\big(y\sqrt{L_{x}}\big)\varphi^{\delta}(x).\tag{14}$$
From (14), the regularized solution of (2) can be written as
$$u_{\alpha}^{\delta}(y,x)=\sum_{n=1}^{\infty}\frac{\cosh\big(\sqrt{n^{2}+k^{2}}\,y\big)\,\varphi_{n}^{\delta}\,X_{n}(x)}{1+\alpha(n^{2}+k^{2})^{\gamma}\cosh^{2}\big(\sqrt{n^{2}+k^{2}}\,T\big)},\tag{15}$$
where $\varphi_{n}^{\delta}=\langle\varphi^{\delta},X_{n}\rangle_{L^{2}(0,\pi)}$ and the noisy data $\varphi^{\delta}$ satisfy
$$\|\varphi^{\delta}-\varphi\|_{L^{2}(0,\pi)}\le\delta;\tag{16}$$
$\delta$ denotes the bound on the measurement error and $\alpha$ is the regularization parameter. Note that for $\gamma=0$, (15) reduces to the boundary (or revised) Tikhonov solution (see [20], etc.), so our work extends these earlier ones.
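A direct numerical transcription of the regularized solution (15) might look as follows (our own sketch and naming; $k$, $\gamma$, $T$ are sample values):

```python
import numpy as np

def tikhonov_solution(y, x, phi_delta_coeffs, alpha, k=1.0, gamma=2, T=1.0):
    """Generalized Tikhonov regularized solution (15) of problem (2):
    u_alpha^delta(y,x) = sum_n cosh(s_n*y)*phi_n^delta*X_n(x)
                         / (1 + alpha*(n^2+k^2)^gamma * cosh(s_n*T)^2),
    where s_n = sqrt(n^2+k^2) and X_n(x) = sqrt(2/pi)*sin(n*x)."""
    n = np.arange(1, len(phi_delta_coeffs) + 1)
    s = np.sqrt(n**2 + k**2)
    damping = 1.0 + alpha * (n**2 + k**2)**gamma * np.cosh(s * T)**2
    X = np.sqrt(2 / np.pi) * np.sin(np.outer(x, n))
    return X @ (np.cosh(s * y) * phi_delta_coeffs / damping)

# exact data phi(x) = sin(x): the only nonzero coefficient is phi_1 = sqrt(pi/2)
coeffs = np.zeros(30)
coeffs[0] = np.sqrt(np.pi / 2)
x = np.linspace(0, np.pi, 101)
u_reg = tikhonov_solution(0.5, x, coeffs, alpha=1e-12)
```

For noise-free data and a very small $\alpha$, the output is close to the exact solution $\sin(x)\cosh(\sqrt{1+k^{2}}\,y)$; the factor $1+\alpha(n^{2}+k^{2})^{\gamma}\cosh^{2}(\sqrt{n^{2}+k^{2}}\,T)$ is what suppresses the exponentially amplified high-frequency modes.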

3.2. Regularization Method for Problem (3)

As in Section 3.1, for all $k>0$, we can convert (3) into the operator equation
$$A_{2}(y)v(y,x)=v_{y}(0,x)=\psi(x),\tag{17}$$
where $A_{2}(y)=\sqrt{L_{x}}\,/\sinh\big(y\sqrt{L_{x}}\big):L^{2}(0,\pi)\to L^{2}(0,\pi)$ is a linear, self-adjoint, bounded, compact operator with eigenvalues $\sqrt{n^{2}+k^{2}}/\sinh(\sqrt{n^{2}+k^{2}}\,y)$ and eigenfunctions $X_{n}$.
Now let $v_{y}^{\delta}(0,x)=\psi^{\delta}(x)$ be the noisy data and $\gamma\ge1$. We design a generalized Tikhonov regularized solution of (3) by solving the minimization problem
$$\min_{v\in L^{2}(0,\pi)}J_{\beta}(v),\qquad J_{\beta}(v)=\big\|A_{2}(y)v-\psi^{\delta}(x)\big\|_{L^{2}(0,\pi)}^{2}+\beta\Big\|L_{x}^{\frac{\gamma}{2}}\,\frac{\sinh\big(T\sqrt{L_{x}}\big)}{\sinh\big(y\sqrt{L_{x}}\big)}\,v\Big\|_{L^{2}(0,\pi)}^{2}.\tag{18}$$
Using the first-order optimality condition, we obtain that the regularized solution $v_{\beta}^{\delta}(y,x)$ satisfies the Euler equation
$$\Big(\frac{L_{x}}{\sinh^{2}\big(y\sqrt{L_{x}}\big)}+\beta L_{x}^{\gamma}\,\frac{\sinh^{2}\big(T\sqrt{L_{x}}\big)}{\sinh^{2}\big(y\sqrt{L_{x}}\big)}\Big)v_{\beta}^{\delta}(y,x)=\frac{\sqrt{L_{x}}}{\sinh\big(y\sqrt{L_{x}}\big)}\psi^{\delta}(x);\tag{19}$$
from (19), we can define the regularized solution of (3) as
$$v_{\beta}^{\delta}(y,x)=\sum_{n=1}^{\infty}\frac{\sinh\big(\sqrt{n^{2}+k^{2}}\,y\big)\,\psi_{n}^{\delta}\,X_{n}(x)}{\sqrt{n^{2}+k^{2}}\,\Big(1+\beta(n^{2}+k^{2})^{\gamma-1}\sinh^{2}\big(\sqrt{n^{2}+k^{2}}\,T\big)\Big)},\tag{20}$$
here $\psi_{n}^{\delta}=\langle\psi^{\delta},X_{n}\rangle_{L^{2}(0,\pi)}$ and the noisy data $\psi^{\delta}$ satisfy
$$\|\psi^{\delta}-\psi\|_{L^{2}(0,\pi)}\le\delta,\tag{21}$$
where $\delta$ denotes the bound on the measurement error and $\beta$ is the regularization parameter.

4. Preparation Knowledge

Let $\alpha,\beta,k>0$, $\gamma\ge1$, $K=\sqrt{1+k^{2}}$, $n\ge1$; for each fixed $0<y\le T$, we define
$$H_{1}(n)=\frac{e^{-(2T-y)\sqrt{n^{2}+k^{2}}}}{\frac{\alpha}{4}(n^{2}+k^{2})^{\gamma}+e^{-2T\sqrt{n^{2}+k^{2}}}},\tag{22}$$
$$H_{2}(n)=\frac{e^{-(2T-y)\sqrt{n^{2}+k^{2}}}}{K\Big[\beta(n^{2}+k^{2})^{\gamma-1}\Big(\frac{1-e^{-2KT}}{2}\Big)^{2}+e^{-2T\sqrt{n^{2}+k^{2}}}\Big]}.\tag{23}$$
We need the following function, given in [36]:
$$H(\zeta)=\begin{cases}\zeta^{\zeta}(1-\zeta)^{1-\zeta}, & \zeta\in(0,1),\\ 1, & \zeta=0,1;\end{cases}\tag{24}$$
it can easily be verified that $H(\zeta)\le1$.
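Both the bound $H(\zeta)\le1$ and the inequality of Lemma 1 (as we read it from (25) below) can be spot-checked numerically; the following is our own sketch, not part of the paper:

```python
import numpy as np

def H(zeta):
    """H(zeta) = zeta^zeta * (1-zeta)^(1-zeta), with H(0) = H(1) = 1."""
    if zeta in (0.0, 1.0):
        return 1.0
    return zeta**zeta * (1 - zeta)**(1 - zeta)

# check H(zeta) <= 1 on a grid of [0, 1]
zs = np.linspace(0, 1, 101)
assert all(H(z) <= 1.0 + 1e-12 for z in zs)

# spot-check the inequality of Lemma 1:
# nu*exp(-r)/(nu + exp(-s)) <= H(r/s) * nu^(r/s) for 0 <= r <= s, nu > 0
rng = np.random.default_rng(0)
for _ in range(1000):
    s = rng.uniform(0.1, 20.0)
    r = rng.uniform(0.0, s)
    nu = rng.uniform(1e-6, 1e3)
    lhs = nu * np.exp(-r) / (nu + np.exp(-s))
    rhs = H(r / s) * nu**(r / s)
    assert lhs <= rhs * (1 + 1e-10)
print("Lemma 1 inequality verified on random samples")
```

The minimum of $H$ is $H(1/2)=1/2$, and equality in the lemma is attained at $\nu=(1-\zeta)e^{-s}/\zeta$ with $\zeta=r/s$, so the random check passes with only a rounding tolerance.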
Lemma 1.
([36]) Suppose that $0\le r\le s<\infty$, $s\ne0$, and $\nu>0$; then it holds that
$$\frac{\nu e^{-r}}{\nu+e^{-s}}\le H\Big(\frac{r}{s}\Big)\,\nu^{\frac{r}{s}}.\tag{25}$$
Theorem 3.
Let $\alpha>0$ and let $H_{1}(n)$ be defined in (22); then for each fixed $0<y\le T$, we can get that
$$H_{1}(n)\le2\,\alpha^{-\frac{y}{2T}}.\tag{26}$$
Proof of Theorem 3.
Applying Lemma 1 with $\nu=\frac{\alpha}{4}(n^{2}+k^{2})^{\gamma}$, $r=(2T-y)\sqrt{n^{2}+k^{2}}$, $s=2T\sqrt{n^{2}+k^{2}}$, and using $H(\zeta)\le1$, we have
$$
\begin{aligned}
H_{1}(n)&=\Big(\frac{\alpha(n^{2}+k^{2})^{\gamma}}{4}\Big)^{-1}\cdot\frac{\frac{\alpha}{4}(n^{2}+k^{2})^{\gamma}\,e^{-(2T-y)\sqrt{n^{2}+k^{2}}}}{\frac{\alpha}{4}(n^{2}+k^{2})^{\gamma}+e^{-2T\sqrt{n^{2}+k^{2}}}}
\le\Big(\frac{\alpha(n^{2}+k^{2})^{\gamma}}{4}\Big)^{-1}H\Big(\frac{2T-y}{2T}\Big)\Big(\frac{\alpha(n^{2}+k^{2})^{\gamma}}{4}\Big)^{\frac{2T-y}{2T}}\\
&\le\Big(\frac{\alpha(n^{2}+k^{2})^{\gamma}}{4}\Big)^{-\frac{y}{2T}}
=2^{\frac{y}{T}}\big((n^{2}+k^{2})^{\gamma}\big)^{-\frac{y}{2T}}\alpha^{-\frac{y}{2T}}
\le2\big((n^{2}+k^{2})^{\gamma}\big)^{-\frac{y}{2T}}\alpha^{-\frac{y}{2T}}.
\end{aligned}
$$
Note that $\big((n^{2}+k^{2})^{\gamma}\big)^{-\frac{y}{2T}}\le(K^{\gamma})^{-\frac{y}{2T}}<1$ since $K=\sqrt{1+k^{2}}>1$; thus $H_{1}(n)\le2\alpha^{-\frac{y}{2T}}$. □
Theorem 4.
Let $\beta>0$ and let $H_{2}(n)$ be defined by (23); then for each fixed $0<y\le T$, we can obtain that
$$H_{2}(n)\le2C_{1}\,\beta^{-\frac{y}{2T}},\qquad C_{1}=K^{\frac{y}{2T}-1}\Big(\frac{2}{1-e^{-2KT}}\Big)^{\frac{y}{T}}.\tag{27}$$
Proof of Theorem 4.
We take $\nu=\beta(n^{2}+k^{2})^{\gamma-1}\big(\frac{1-e^{-2KT}}{2}\big)^{2}$, $r=(2T-y)\sqrt{n^{2}+k^{2}}$, $s=2T\sqrt{n^{2}+k^{2}}$ in Lemma 1; using $H(\zeta)\le1$, the inequality (27) can be derived. □

5. Convergence Estimate

This section selects the regularization parameter by a priori and a posteriori rules, respectively, and derives convergence estimates of sharp type for our method.

5.1. Convergence Estimate for the Method of Problem (2)

5.1.1. A Priori Convergence Estimate

Theorem 5.
Let the exact solution of (2) be given by (6), let the regularized solution $u_{\alpha}^{\delta}$ be defined by Equation (15), and let the measured data $\varphi^{\delta}$ satisfy (16). We assume that
$$\|u(T,\cdot)\|_{D_{\gamma}^{u}}^{2}=\sum_{n=1}^{\infty}(n^{2}+k^{2})^{2\gamma}e^{4T\sqrt{n^{2}+k^{2}}}\,|\langle u(T,\cdot),X_{n}\rangle|^{2}\le E^{2},\tag{28}$$
and that the regularization parameter $\alpha$ is selected as
$$\alpha=\delta/E;\tag{29}$$
then the following convergence result holds:
$$\|u_{\alpha}^{\delta}(y,\cdot)-u(y,\cdot)\|\le4E^{\frac{y}{2T}}\delta^{1-\frac{y}{2T}}.\tag{30}$$
Proof of Theorem 5.
Using the triangle inequality, we get
$$\|u_{\alpha}^{\delta}-u\|\le\|u_{\alpha}^{\delta}-u_{\alpha}\|+\|u_{\alpha}-u\|,\tag{31}$$
where $u_{\alpha}$ is the solution of (15) for the exact data $\varphi$. For $0<y\le T$ and $n\ge1$, $e^{\sqrt{n^{2}+k^{2}}\,y}/2\le\cosh(\sqrt{n^{2}+k^{2}}\,y)\le e^{\sqrt{n^{2}+k^{2}}\,y}$; from (15), (16) and (26), we note that
$$
\begin{aligned}
\|u_{\alpha}^{\delta}(y,\cdot)-u_{\alpha}(y,\cdot)\|
&\le\Big(\sum_{n=1}^{\infty}\Big(\frac{\cosh(\sqrt{n^{2}+k^{2}}\,y)}{1+\alpha(n^{2}+k^{2})^{\gamma}\cosh^{2}(\sqrt{n^{2}+k^{2}}\,T)}\Big)^{2}\big(\varphi_{n}^{\delta}-\varphi_{n}\big)^{2}\Big)^{\frac12}\\
&\le\Big(\sum_{n=1}^{\infty}H_{1}^{2}(n)\big(\varphi_{n}^{\delta}-\varphi_{n}\big)^{2}\Big)^{\frac12}
\le2\,\delta\,\alpha^{-\frac{y}{2T}}.\tag{32}
\end{aligned}
$$
On the other hand, by (6), (15), (26) and (28), we have
$$
\begin{aligned}
\|u_{\alpha}(y,\cdot)-u(y,\cdot)\|
&=\Big\|\sum_{n=1}^{\infty}\frac{\alpha(n^{2}+k^{2})^{\gamma}\cosh^{2}(\sqrt{n^{2}+k^{2}}\,T)}{1+\alpha(n^{2}+k^{2})^{\gamma}\cosh^{2}(\sqrt{n^{2}+k^{2}}\,T)}\cosh(\sqrt{n^{2}+k^{2}}\,y)\,\varphi_{n}X_{n}\Big\|\\
&\le\alpha\Big(\sum_{n=1}^{\infty}H_{1}^{2}(n)\,(n^{2}+k^{2})^{2\gamma}e^{4T\sqrt{n^{2}+k^{2}}}\,|\langle u(T,\cdot),X_{n}\rangle|^{2}\Big)^{\frac12}
\le2\,\alpha^{1-\frac{y}{2T}}E.\tag{33}
\end{aligned}
$$
Finally, the proof is completed by combining (29) and (31)–(33). □

5.1.2. A Posteriori Convergence Estimate

In Theorem 5, the regularization parameter $\alpha$ is selected by (29); this is an a priori selection rule that requires knowledge of a bound $E$ on the exact solution. In practice, such an a priori bound is usually not available, so the rule is unrealistic. Below, we adopt an a posteriori rule to select $\alpha$; this method does not need a bound on the solution, and the parameter $\alpha$ depends only on the measured data $\varphi^{\delta}$ and the measurement error bound $\delta$. The reference [37] describes this a posteriori rule for selecting the regularization parameter.
We select the regularization parameter $\alpha$ as the solution of the equation
$$\|u_{\alpha}^{\delta}(0,x)-\varphi^{\delta}(x)\|=\tau\delta,\tag{34}$$
where the constant $\tau>1$. We state and prove two lemmas that are needed to establish the a posteriori convergence results.
Lemma 2.
Define $\rho(\alpha)=\|u_{\alpha}^{\delta}(0,x)-\varphi^{\delta}(x)\|$; then the following conclusions hold:
(a) the function $\rho(\alpha)$ is continuous;
(b) $\lim_{\alpha\to0}\rho(\alpha)=0$;
(c) $\lim_{\alpha\to+\infty}\rho(\alpha)=\|\varphi^{\delta}\|$;
(d) for $\alpha\in(0,+\infty)$, the function $\rho(\alpha)$ is strictly monotonically increasing.
Proof of Lemma 2.
The conclusions can easily be proven from the expression
$$\rho(\alpha)=\Big(\sum_{n=1}^{\infty}\Big(\frac{\alpha(n^{2}+k^{2})^{\gamma}\cosh^{2}(\sqrt{n^{2}+k^{2}}\,T)}{1+\alpha(n^{2}+k^{2})^{\gamma}\cosh^{2}(\sqrt{n^{2}+k^{2}}\,T)}\Big)^{2}\big(\varphi_{n}^{\delta}\big)^{2}\Big)^{1/2}.\tag{35}$$
By the intermediate value theorem for continuous functions, we know that Equation (34) has a unique solution whenever $\|\varphi^{\delta}\|>\tau\delta>0$. □
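The paper later uses Matlab's fzero to solve Equation (34); since $\rho(\alpha)$ is continuous and strictly increasing (Lemma 2), a simple bisection also works. The following sketch (our own code and names; $k$, $\gamma$, $T$, and the sample coefficients are our choices) computes $\rho(\alpha)$ from (35) and solves $\rho(\alpha)=\tau\delta$:

```python
import numpy as np

def discrepancy(alpha, coeffs, k=1.0, gamma=2, T=1.0):
    """rho(alpha) from (35): rho^2 = sum_n (a_n/(1+a_n))^2 * (phi_n^delta)^2,
    with a_n = alpha*(n^2+k^2)^gamma * cosh(sqrt(n^2+k^2)*T)^2."""
    n = np.arange(1, len(coeffs) + 1)
    a = alpha * (n**2 + k**2)**gamma * np.cosh(np.sqrt(n**2 + k**2) * T)**2
    return np.sqrt(np.sum((a / (1 + a))**2 * np.asarray(coeffs)**2))

def choose_alpha(coeffs, tau, delta, lo=1e-16, hi=1e6, iters=200):
    """Solve rho(alpha) = tau*delta by bisection on a logarithmic scale;
    rho is continuous and strictly increasing, so the root is unique."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        if discrepancy(mid, coeffs) < tau * delta:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)

# toy data: a dominant first mode plus one noisy high mode
coeffs = np.zeros(30)
coeffs[0] = np.sqrt(np.pi / 2)
coeffs[5] = 0.02
alpha = choose_alpha(coeffs, tau=1.1, delta=0.01)
print(alpha, discrepancy(alpha, coeffs))
```

The returned $\alpha$ makes the residual of the regularized boundary value match the noise level up to the safety factor $\tau$, as (34) requires.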
Lemma 3.
For $\tau>1$, the regularized solution (15) together with the a posteriori rule (34) determines a regularization parameter $\alpha=\alpha(\delta,\varphi^{\delta})$ satisfying
$$\alpha\ge\frac{(\tau-1)e^{KT}}{2}\cdot\frac{\delta}{E}.$$
Proof. 
From (34), there holds
$$
\begin{aligned}
\tau\delta&=\Big\|\sum_{n=1}^{\infty}\frac{\alpha(n^{2}+k^{2})^{\gamma}\cosh^{2}(\sqrt{n^{2}+k^{2}}\,T)}{1+\alpha(n^{2}+k^{2})^{\gamma}\cosh^{2}(\sqrt{n^{2}+k^{2}}\,T)}\,\varphi_{n}^{\delta}X_{n}(x)\Big\|\\
&\le\Big\|\sum_{n=1}^{\infty}\frac{\alpha(n^{2}+k^{2})^{\gamma}\cosh^{2}(\sqrt{n^{2}+k^{2}}\,T)}{1+\alpha(n^{2}+k^{2})^{\gamma}\cosh^{2}(\sqrt{n^{2}+k^{2}}\,T)}\,(\varphi_{n}^{\delta}-\varphi_{n})X_{n}(x)\Big\|
+\Big\|\sum_{n=1}^{\infty}\frac{\alpha(n^{2}+k^{2})^{\gamma}\cosh^{2}(\sqrt{n^{2}+k^{2}}\,T)}{1+\alpha(n^{2}+k^{2})^{\gamma}\cosh^{2}(\sqrt{n^{2}+k^{2}}\,T)}\,\varphi_{n}X_{n}(x)\Big\|\\
&\le\delta+\Big\|\sum_{n=1}^{\infty}\frac{\alpha(n^{2}+k^{2})^{\gamma}\cosh^{2}(\sqrt{n^{2}+k^{2}}\,T)}{1+\alpha(n^{2}+k^{2})^{\gamma}\cosh^{2}(\sqrt{n^{2}+k^{2}}\,T)}\,\varphi_{n}X_{n}(x)\Big\|,\tag{36}
\end{aligned}
$$
and
$$
\begin{aligned}
\Big\|\sum_{n=1}^{\infty}\frac{\alpha(n^{2}+k^{2})^{\gamma}\cosh^{2}(\sqrt{n^{2}+k^{2}}\,T)}{1+\alpha(n^{2}+k^{2})^{\gamma}\cosh^{2}(\sqrt{n^{2}+k^{2}}\,T)}\,\varphi_{n}X_{n}(x)\Big\|
&\le\Big(\sum_{n=1}^{\infty}\alpha^{2}(n^{2}+k^{2})^{2\gamma}\cosh^{4}(\sqrt{n^{2}+k^{2}}\,T)\,\varphi_{n}^{2}\Big)^{1/2}\\
&\le\Big(\sum_{n=1}^{\infty}4\alpha^{2}e^{-2T\sqrt{n^{2}+k^{2}}}\,(n^{2}+k^{2})^{2\gamma}e^{4T\sqrt{n^{2}+k^{2}}}\,|\langle u(T,\cdot),X_{n}\rangle|^{2}\Big)^{1/2}
\le\big(2/e^{KT}\big)\,\alpha E;\tag{37}
\end{aligned}
$$
from (36) and (37), we get that $(\tau-1)\delta\le(2/e^{KT})\alpha E$. The proof is completed. □
Theorem 6.
Let the exact solution $u$ of (2) be given by (6), let the regularized solution $u_{\alpha}^{\delta}$ be defined by (15), and let the noisy data $\varphi^{\delta}$ satisfy (16). Suppose that $u$ satisfies the a priori bound (28) and that the parameter $\alpha$ is chosen by (34); then we can obtain the following convergence result:
$$\|u_{\alpha}^{\delta}(y,\cdot)-u(y,\cdot)\|\le C\,E^{\frac{y}{2T}}\delta^{1-\frac{y}{2T}},\tag{38}$$
where $C=\max\Big\{2\Big(\frac{(\tau-1)e^{KT}}{2}\Big)^{-\frac{y}{2T}},\ \big(2K^{-\gamma}e^{-KT}\big)^{\frac{y}{2T}}(\tau+1)^{1-\frac{y}{2T}}\Big\}$.
Proof of Theorem 6.
Similar to (31), we have
$$\|u_{\alpha}^{\delta}(y,\cdot)-u(y,\cdot)\|\le\|u_{\alpha}^{\delta}(y,\cdot)-u_{\alpha}(y,\cdot)\|+\|u_{\alpha}(y,\cdot)-u(y,\cdot)\|.\tag{39}$$
By (32) and Lemma 3, we get
$$\|u_{\alpha}^{\delta}(y,\cdot)-u_{\alpha}(y,\cdot)\|\le2\delta\alpha^{-\frac{y}{2T}}\le2\Big(\frac{(\tau-1)e^{KT}}{2}\Big)^{-\frac{y}{2T}}E^{\frac{y}{2T}}\delta^{1-\frac{y}{2T}}.\tag{40}$$
On the other hand, for fixed $0<y\le T$, note that
$$
\begin{aligned}
A_{1}(y)\big(u_{\alpha}(y,\cdot)-u(y,\cdot)\big)
&=-\sum_{n=1}^{\infty}\frac{\alpha(n^{2}+k^{2})^{\gamma}\cosh^{2}(\sqrt{n^{2}+k^{2}}\,T)}{1+\alpha(n^{2}+k^{2})^{\gamma}\cosh^{2}(\sqrt{n^{2}+k^{2}}\,T)}\,\varphi_{n}X_{n}(x)\\
&=\sum_{n=1}^{\infty}\frac{\alpha(n^{2}+k^{2})^{\gamma}\cosh^{2}(\sqrt{n^{2}+k^{2}}\,T)}{1+\alpha(n^{2}+k^{2})^{\gamma}\cosh^{2}(\sqrt{n^{2}+k^{2}}\,T)}\,(\varphi_{n}^{\delta}-\varphi_{n})X_{n}(x)
-\sum_{n=1}^{\infty}\frac{\alpha(n^{2}+k^{2})^{\gamma}\cosh^{2}(\sqrt{n^{2}+k^{2}}\,T)}{1+\alpha(n^{2}+k^{2})^{\gamma}\cosh^{2}(\sqrt{n^{2}+k^{2}}\,T)}\,\varphi_{n}^{\delta}X_{n}(x);\tag{41}
\end{aligned}
$$
using (16), (34) and (41), we can obtain that
$$\|A_{1}(y)\big(u_{\alpha}(y,\cdot)-u(y,\cdot)\big)\|\le\delta+\tau\delta=(\tau+1)\delta.\tag{42}$$
Meanwhile, according to the definition in (5) and the a priori condition (28), we have
$$
\begin{aligned}
\|u_{\alpha}(y,\cdot)-u(y,\cdot)\|_{D_{\gamma}^{u_{\alpha}-u}}
&=\Big(\sum_{n=1}^{\infty}(n^{2}+k^{2})^{2\gamma}e^{4T\sqrt{n^{2}+k^{2}}}\Big(\frac{\alpha(n^{2}+k^{2})^{\gamma}\cosh^{2}(\sqrt{n^{2}+k^{2}}\,T)}{1+\alpha(n^{2}+k^{2})^{\gamma}\cosh^{2}(\sqrt{n^{2}+k^{2}}\,T)}\Big)^{2}\cosh^{2}(\sqrt{n^{2}+k^{2}}\,y)\,\varphi_{n}^{2}\Big)^{\frac12}\\
&\le\Big(\sum_{n=1}^{\infty}(n^{2}+k^{2})^{2\gamma}e^{4T\sqrt{n^{2}+k^{2}}}\cosh^{2}(\sqrt{n^{2}+k^{2}}\,T)\,\varphi_{n}^{2}\Big)^{\frac12}\le E;\tag{43}
\end{aligned}
$$
then, using the conditional stability result (9), we can derive that
$$\|u_{\alpha}(y,\cdot)-u(y,\cdot)\|\le\big(2K^{-\gamma}e^{-KT}\big)^{\frac{y}{2T}}(\tau+1)^{1-\frac{y}{2T}}E^{\frac{y}{2T}}\delta^{1-\frac{y}{2T}}.\tag{44}$$
Finally, combining (40) with (44), we obtain the convergence estimate (38). □

5.2. Convergence Estimate for the Method of Problem (3)

5.2.1. A Priori Convergence Estimate

Theorem 7.
Let the exact solution of (3) be given by (7), let the regularized solution $v_{\beta}^{\delta}$ be defined by (20), and let the noisy data $\psi^{\delta}$ satisfy (21). We suppose that $v$ satisfies
$$\|v(T,\cdot)\|_{D_{\gamma}^{v}}^{2}=\sum_{n=1}^{\infty}(n^{2}+k^{2})^{2\gamma}e^{4T\sqrt{n^{2}+k^{2}}}\,|\langle v(T,\cdot),X_{n}\rangle|^{2}\le E^{2},\tag{45}$$
and that $\beta$ is taken as
$$\beta=\delta/E;\tag{46}$$
then, for $0<y\le T$, we can establish the error estimate
$$\|v_{\beta}^{\delta}(y,\cdot)-v(y,\cdot)\|\le2C_{1}\big(1+1/(Ke^{Ky})\big)E^{\frac{y}{2T}}\delta^{1-\frac{y}{2T}},\tag{47}$$
where $C_{1}$ is given in Theorem 4.
Proof of Theorem 7.
We know that
$$\|v_{\beta}^{\delta}-v\|\le\|v_{\beta}^{\delta}-v_{\beta}\|+\|v_{\beta}-v\|.\tag{48}$$
For $0<y\le T$ and $n\ge1$, $\sinh(\sqrt{n^{2}+k^{2}}\,y)\le e^{\sqrt{n^{2}+k^{2}}\,y}$ and $\sinh(\sqrt{n^{2}+k^{2}}\,y)\ge e^{\sqrt{n^{2}+k^{2}}\,y}(1-e^{-2Ky})/2$; from (20), (21) and (27), we note that
$$
\begin{aligned}
\|v_{\beta}^{\delta}(y,\cdot)-v_{\beta}(y,\cdot)\|
&\le\Big(\sum_{n=1}^{\infty}\Big(\frac{\sinh(\sqrt{n^{2}+k^{2}}\,y)}{\sqrt{n^{2}+k^{2}}\,\big(1+\beta(n^{2}+k^{2})^{\gamma-1}\sinh^{2}(\sqrt{n^{2}+k^{2}}\,T)\big)}\Big)^{2}\big(\psi_{n}^{\delta}-\psi_{n}\big)^{2}\Big)^{\frac12}\\
&\le\Big(\sum_{n=1}^{\infty}H_{2}^{2}(n)\big(\psi_{n}^{\delta}-\psi_{n}\big)^{2}\Big)^{\frac12}
\le2C_{1}\,\delta\,\beta^{-\frac{y}{2T}}.\tag{49}
\end{aligned}
$$
On the other hand, by (7), (20), (27) and (45), we have
$$
\begin{aligned}
\|v_{\beta}(y,\cdot)-v(y,\cdot)\|
&\le\Big(\sum_{n=1}^{\infty}\Big(\frac{\beta(n^{2}+k^{2})^{\gamma-1}\sinh^{2}(\sqrt{n^{2}+k^{2}}\,T)}{1+\beta(n^{2}+k^{2})^{\gamma-1}\sinh^{2}(\sqrt{n^{2}+k^{2}}\,T)}\Big)^{2}\frac{\sinh^{2}(\sqrt{n^{2}+k^{2}}\,y)}{n^{2}+k^{2}}\,\psi_{n}^{2}\Big)^{\frac12}\\
&\le\frac{\beta}{Ke^{Ky}}\Big(\sum_{n=1}^{\infty}H_{2}^{2}(n)\,(n^{2}+k^{2})^{2\gamma}e^{4T\sqrt{n^{2}+k^{2}}}\,|\langle v(T,\cdot),X_{n}\rangle|^{2}\Big)^{\frac12}
\le\frac{2C_{1}}{Ke^{Ky}}\,\beta^{1-\frac{y}{2T}}E.\tag{50}
\end{aligned}
$$
From (46) and (48)–(50), the convergence result (47) can be derived. □

5.2.2. A Posteriori Convergence Estimate

We find $\beta$ such that
$$\|(v_{\beta}^{\delta})_{y}(0,x)-\psi^{\delta}(x)\|=\tau\delta,\tag{51}$$
where $\tau>1$ is a constant.
Lemma 4.
Let $\varrho(\beta)=\|(v_{\beta}^{\delta})_{y}(0,x)-\psi^{\delta}(x)\|$; then:
(a) the function $\varrho(\beta)$ is continuous;
(b) $\lim_{\beta\to0}\varrho(\beta)=0$;
(c) $\lim_{\beta\to+\infty}\varrho(\beta)=\|\psi^{\delta}\|$;
(d) for $\beta\in(0,+\infty)$, the function $\varrho(\beta)$ is strictly monotonically increasing.
Proof of Lemma 4.
We can easily prove this lemma from the expression
$$\varrho(\beta)=\Big(\sum_{n=1}^{\infty}\Big(\frac{\beta(n^{2}+k^{2})^{\gamma-1}\sinh^{2}(\sqrt{n^{2}+k^{2}}\,T)}{1+\beta(n^{2}+k^{2})^{\gamma-1}\sinh^{2}(\sqrt{n^{2}+k^{2}}\,T)}\Big)^{2}\big(\psi_{n}^{\delta}\big)^{2}\Big)^{1/2}.\tag{52}$$
By the intermediate value theorem for continuous functions, we know that Equation (51) has a unique solution whenever $\|\psi^{\delta}\|>\tau\delta>0$. □
Lemma 5.
For $\tau>1$, the regularized solution (20) together with the a posteriori rule (51) determines a regularization parameter $\beta=\beta(\delta,\psi^{\delta})$ satisfying
$$\beta\ge K\sinh(KT)(\tau-1)\frac{\delta}{E}.$$
Proof of Lemma 5.
From (51), there holds
$$
\begin{aligned}
\tau\delta&=\Big\|\sum_{n=1}^{\infty}\frac{\beta(n^{2}+k^{2})^{\gamma-1}\sinh^{2}(\sqrt{n^{2}+k^{2}}\,T)}{1+\beta(n^{2}+k^{2})^{\gamma-1}\sinh^{2}(\sqrt{n^{2}+k^{2}}\,T)}\,\psi_{n}^{\delta}X_{n}(x)\Big\|\\
&\le\delta+\Big\|\sum_{n=1}^{\infty}\frac{\beta(n^{2}+k^{2})^{\gamma-1}\sinh^{2}(\sqrt{n^{2}+k^{2}}\,T)}{1+\beta(n^{2}+k^{2})^{\gamma-1}\sinh^{2}(\sqrt{n^{2}+k^{2}}\,T)}\,\psi_{n}X_{n}(x)\Big\|,\tag{53}
\end{aligned}
$$
and
$$
\begin{aligned}
\Big\|\sum_{n=1}^{\infty}\frac{\beta(n^{2}+k^{2})^{\gamma-1}\sinh^{2}(\sqrt{n^{2}+k^{2}}\,T)}{1+\beta(n^{2}+k^{2})^{\gamma-1}\sinh^{2}(\sqrt{n^{2}+k^{2}}\,T)}\,\psi_{n}X_{n}(x)\Big\|
&\le\Big(\sum_{n=1}^{\infty}\beta^{2}(n^{2}+k^{2})^{2\gamma-2}\sinh^{4}(\sqrt{n^{2}+k^{2}}\,T)\,\psi_{n}^{2}\Big)^{1/2}\\
&\le\Big(\sum_{n=1}^{\infty}\frac{\beta^{2}}{K^{2}\sinh^{2}(KT)}\,(n^{2}+k^{2})^{2\gamma}e^{4T\sqrt{n^{2}+k^{2}}}\,|\langle v(T,\cdot),X_{n}\rangle|^{2}\Big)^{1/2}
\le\frac{\beta E}{K\sinh(KT)};
\end{aligned}
$$
combining (53) with the estimate above, we obtain that $(\tau-1)\delta\le\frac{1}{K\sinh(KT)}\beta E$. □
Theorem 8.
Let the exact solution of (3) be given by (7), let the regularized solution $v_{\beta}^{\delta}$ be defined by (20), and let the noisy data $\psi^{\delta}$ satisfy (21). We assume that $v$ satisfies the a priori bound (45) and that the regularization parameter is chosen by the a posteriori rule (51); then we have
$$\|v_{\beta}^{\delta}(y,\cdot)-v(y,\cdot)\|\le C_{2}\,E^{\frac{y}{2T}}\delta^{1-\frac{y}{2T}},\tag{54}$$
where
$$C_{2}=\max\Big\{2C_{1}\big(K\sinh(KT)(\tau-1)\big)^{-\frac{y}{2T}},\ \Big(\frac{2K^{\,1-\gamma-\frac{2T}{y}}}{e^{KT}\big(1-e^{-2KT}\big)}\Big)^{\frac{y}{2T}}(\tau+1)^{1-\frac{y}{2T}}\Big\},$$
and $C_{1}$ is given in Theorem 4.
Proof of Theorem 8.
Notice that
$$\|v_{\beta}^{\delta}(y,\cdot)-v(y,\cdot)\|\le\|v_{\beta}^{\delta}(y,\cdot)-v_{\beta}(y,\cdot)\|+\|v_{\beta}(y,\cdot)-v(y,\cdot)\|.\tag{55}$$
By (49) and Lemma 5, we get
$$\|v_{\beta}^{\delta}(y,\cdot)-v_{\beta}(y,\cdot)\|\le2C_{1}\delta\beta^{-\frac{y}{2T}}\le2C_{1}\big(K\sinh(KT)(\tau-1)\big)^{-\frac{y}{2T}}E^{\frac{y}{2T}}\delta^{1-\frac{y}{2T}}.\tag{56}$$
Below, we estimate the second term of (55). For fixed $0<y\le T$, we have
$$
\begin{aligned}
A_{2}(y)\big(v_{\beta}(y,\cdot)-v(y,\cdot)\big)
&=-\sum_{n=1}^{\infty}\frac{\beta(n^{2}+k^{2})^{\gamma-1}\sinh^{2}(\sqrt{n^{2}+k^{2}}\,T)}{1+\beta(n^{2}+k^{2})^{\gamma-1}\sinh^{2}(\sqrt{n^{2}+k^{2}}\,T)}\,\psi_{n}X_{n}(x)\\
&=\sum_{n=1}^{\infty}\frac{\beta(n^{2}+k^{2})^{\gamma-1}\sinh^{2}(\sqrt{n^{2}+k^{2}}\,T)}{1+\beta(n^{2}+k^{2})^{\gamma-1}\sinh^{2}(\sqrt{n^{2}+k^{2}}\,T)}\,(\psi_{n}^{\delta}-\psi_{n})X_{n}(x)
-\sum_{n=1}^{\infty}\frac{\beta(n^{2}+k^{2})^{\gamma-1}\sinh^{2}(\sqrt{n^{2}+k^{2}}\,T)}{1+\beta(n^{2}+k^{2})^{\gamma-1}\sinh^{2}(\sqrt{n^{2}+k^{2}}\,T)}\,\psi_{n}^{\delta}X_{n}(x);\tag{57}
\end{aligned}
$$
using (21), (51) and (57), we can obtain
$$\|A_{2}(y)\big(v_{\beta}(y,\cdot)-v(y,\cdot)\big)\|\le\delta+\tau\delta=(\tau+1)\delta.\tag{58}$$
Meanwhile, according to the definition in (5) and the a priori bound condition (45), we have
$$
\begin{aligned}
\|v_{\beta}(y,\cdot)-v(y,\cdot)\|_{D_{\gamma}^{v_{\beta}-v}}
&=\Big(\sum_{n=1}^{\infty}(n^{2}+k^{2})^{2\gamma}e^{4T\sqrt{n^{2}+k^{2}}}\Big(\frac{\beta(n^{2}+k^{2})^{\gamma-1}\sinh^{2}(\sqrt{n^{2}+k^{2}}\,T)}{1+\beta(n^{2}+k^{2})^{\gamma-1}\sinh^{2}(\sqrt{n^{2}+k^{2}}\,T)}\Big)^{2}\frac{\sinh^{2}(\sqrt{n^{2}+k^{2}}\,y)}{n^{2}+k^{2}}\,\psi_{n}^{2}\Big)^{\frac12}\\
&\le\Big(\sum_{n=1}^{\infty}(n^{2}+k^{2})^{2\gamma}e^{4T\sqrt{n^{2}+k^{2}}}\,\frac{\sinh^{2}(\sqrt{n^{2}+k^{2}}\,T)}{n^{2}+k^{2}}\,\psi_{n}^{2}\Big)^{\frac12}\le E;\tag{59}
\end{aligned}
$$
then, by the conditional stability result (11), we can get that
$$\|v_{\beta}(y,\cdot)-v(y,\cdot)\|\le\Big(\frac{2K^{\,1-\gamma-\frac{2T}{y}}}{e^{KT}\big(1-e^{-2KT}\big)}\Big)^{\frac{y}{2T}}(\tau+1)^{1-\frac{y}{2T}}E^{\frac{y}{2T}}\delta^{1-\frac{y}{2T}}.\tag{60}$$
Finally, combining (56) with (60), we can derive the inequality in (54). □
Remark 1.
In order to derive error estimates of sharp type for our method, we impose the stronger a priori assumptions (28) and (45) and apply the inequalities in Theorems 3 and 4 to prove Theorems 5–8. It can be verified that there exist functions satisfying these two assumptions. For instance, we verify the feasibility of condition (28). Take $u(y,x)=\sin(x)\cosh(\sqrt{1+k^{2}}\,y)$; it can be found that
$$
\begin{aligned}
\|u(T,\cdot)\|_{D_{\gamma}^{u}}^{2}
&=\sum_{n=1}^{\infty}(n^{2}+k^{2})^{2\gamma}e^{4T\sqrt{n^{2}+k^{2}}}\,\big|\langle u(T,\cdot),X_{n}\rangle_{L^{2}(0,\pi)}\big|^{2}\\
&=\sum_{n=1}^{\infty}(n^{2}+k^{2})^{2\gamma}e^{4T\sqrt{n^{2}+k^{2}}}\cosh^{2}\big(\sqrt{1+k^{2}}\,T\big)\Big|\Big\langle\sin(x),\sqrt{\tfrac{2}{\pi}}\sin(nx)\Big\rangle_{L^{2}(0,\pi)}\Big|^{2}\\
&=(\pi/2)(1+k^{2})^{2\gamma}e^{4T\sqrt{1+k^{2}}}\cosh^{2}\big(\sqrt{1+k^{2}}\,T\big).\tag{61}
\end{aligned}
$$
For each fixed $k$ and $\gamma\ge1$, we can always find positive numbers $l$ and $\mu$ with $l>k$ and $\mu>\gamma$ such that
$$\|u(T,\cdot)\|_{D_{\gamma}^{u}}^{2}=(\pi/2)(1+k^{2})^{2\gamma}e^{4T\sqrt{1+k^{2}}}\cosh^{2}\big(\sqrt{1+k^{2}}\,T\big)\le(\pi/2)(1+l^{2})^{2\mu}e^{4T\sqrt{1+l^{2}}}\cosh^{2}\big(\sqrt{1+l^{2}}\,T\big),\tag{62}$$
i.e., there exists a positive number $E=E(l,\mu)=\big[(\pi/2)(1+l^{2})^{2\mu}e^{4T\sqrt{1+l^{2}}}\cosh^{2}(\sqrt{1+l^{2}}\,T)\big]^{1/2}$ such that (28) holds. This shows that the assumption (28) is practicable, and the function $u(y,x)=\sin(x)\cosh(\sqrt{1+k^{2}}\,y)$ satisfies (28). In fact, one can verify that the functions $u(y,x)=\sin(mx)\cosh(\sqrt{m^{2}+k^{2}}\,y)$ ($m\ge1$ a positive integer) all satisfy condition (28). The explanation for the rationality of assumption (45) is similar to the one above, so we omit it.

6. Numerical Experiments

This section verifies the computational performance of the regularized method by several experiments. For simplicity, we only investigate the numerical efficiency of the regularization method for (2); the case of inhomogeneous Neumann data (3) is similar.
Example 1.
We take $T=1$ and $u(y,x)=\sin(x)\cosh(\sqrt{1+k^{2}}\,y)$ ($k>0$) as the exact solution of (2), so $\varphi(x)=u(0,x)=\sin(x)$. Denote $\Delta x=\pi/N$ and $x_{\imath}=\imath\Delta x$ ($\imath=0,1,2,\ldots,N$); the noisy data are generated as $\varphi^{\delta}=\varphi+\varepsilon\cdot\mathrm{randn}(\mathrm{size}(\varphi))$, where $\varepsilon$ is the noise level and $\mathrm{randn}(\mathrm{size}(\varphi))$ returns a random array of the same size as $\varphi$. The noise bound $\delta$ is calculated by
$$\delta:=\|\varphi^{\delta}-\varphi\|_{l^{2}}=\Big(\frac{1}{N+1}\sum_{\imath=0}^{N}\big(\varphi^{\delta}(x_{\imath})-\varphi(x_{\imath})\big)^{2}\Big)^{1/2}.\tag{63}$$
For each $0<y\le1$, the regularized solution $u_{\alpha}^{\delta}(y,x)$ is computed by (15) for $n=1,2,\ldots,M$, and the computational error is defined by
$$\epsilon(u)=\frac{\Big(\frac{1}{N+1}\sum_{\imath=0}^{N}\big(u(y,x_{\imath})-u_{\alpha}^{\delta}(y,x_{\imath})\big)^{2}\Big)^{1/2}}{\Big(\frac{1}{N+1}\sum_{\imath=0}^{N}u^{2}(y,x_{\imath})\Big)^{1/2}}.\tag{64}$$
In practice, the a priori bound $E$ of the solution is generally unknown, so we only carry out numerical experiments in which the regularization parameter is chosen by the a posteriori rule (34). The Matlab (R2015a, MathWorks, Natick, MA, USA) command "fzero" was used to find $\alpha$, and the constant $\tau$ was taken as $1.1$.
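For readers without Matlab, the whole experiment can be sketched in Python as follows (our reproduction sketch, not the authors' code; the random seed and the truncation level $M$ are our choices, so the numbers will not match the paper's tables exactly):

```python
import numpy as np

# Example 1 in Python: noisy data, delta via (63), a posteriori alpha via
# (34) solved by log-scale bisection, relative error via (64).
rng = np.random.default_rng(1)
k, gamma, T, tau, eps = 0.5, 2, 1.0, 1.1, 0.01
N, M = 100, 20
x = np.linspace(0, np.pi, N + 1)
phi = np.sin(x)
phi_delta = phi + eps * rng.standard_normal(phi.shape)
delta = np.sqrt(np.mean((phi_delta - phi)**2))          # discrete l2 norm (63)

n = np.arange(1, M + 1)
s = np.sqrt(n**2 + k**2)
X = np.sqrt(2 / np.pi) * np.sin(np.outer(x, n))
coeffs = (np.pi / N) * (X.T @ phi_delta)                 # phi_n^delta by quadrature

def rho(alpha):
    a = alpha * (n**2 + k**2)**gamma * np.cosh(s * T)**2
    return np.sqrt(np.sum((a / (1 + a))**2 * coeffs**2))

lo, hi = 1e-16, 1e6                                      # solve rho(alpha) = tau*delta
for _ in range(200):
    mid = np.sqrt(lo * hi)
    lo, hi = (mid, hi) if rho(mid) < tau * delta else (lo, mid)
alpha = np.sqrt(lo * hi)

y = 0.8
damp = 1 + alpha * (n**2 + k**2)**gamma * np.cosh(s * T)**2
u_reg = X @ (np.cosh(s * y) * coeffs / damp)             # regularized solution (15)
u_exact = np.sin(x) * np.cosh(np.sqrt(1 + k**2) * y)
err = np.linalg.norm(u_reg - u_exact) / np.linalg.norm(u_exact)   # error (64)
print(f"alpha = {alpha:.3e}, relative error = {err:.3f}")
```

The bisection replaces fzero; since $\rho(\alpha)$ is strictly increasing (Lemma 2), any standard root finder on (34) gives the same parameter.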
For k = 0.5, 1.5 with ε = 0.01 and γ = 2, the exact solution u(y, x) and the regularized solution u_α^δ(y, x) at y = 0.4, 0.6, 0.8, 1 are shown in Figure 1 and Figure 2, respectively. For k = 0.5, 1.5 and γ = 3, the computed errors for various noise levels ε are listed in Table 1 and Table 2. For k = 0.5, 1.5, taking ε = 0.01, we also compute the corresponding errors for various γ; the results are presented in Table 3 and Table 4.
Figure 1 and Figure 2 and Table 1, Table 2, Table 3 and Table 4 indicate that our method is stable and feasible. Table 1 and Table 2 show that the numerical results improve as ε goes to zero, which verifies the convergence of our method in practice. Table 3 and Table 4 show that, for the same ε, the error decreases as γ increases. Hence, to obtain a good computational result, γ should be chosen as a relatively large positive number; this can also be seen from the expressions of the regularization solutions (15) and (20).

7. Conclusions and Discussion

This paper establishes conditional stability estimates for (2) and (3) under an a priori bound assumption on the exact solution. We use a generalized Tikhonov regularization method to overcome the ill-posedness of the two problems. Under both a priori and a posteriori choices of the regularization parameter, we derive error estimates of sharp type for this method. We also verify the feasibility of the method through the corresponding numerical experiments.
We point out that the expression of the solution is written using the method of separation of variables, so this regularization technique can also be applied to similar problems on cylindrical regions. However, the method cannot handle problems posed on more general domains, which is a limitation of this work.

Author Contributions

Investigation, writing—original draft preparation, H.Z.; investigation, writing—review and editing, X.Z.

Funding

This research was funded by the Key Scientific Research Projects (grant number 2017KJ33) at North Minzu University.

Acknowledgments

The authors would like to thank the reviewers for their constructive comments and valuable suggestions, which improved the quality of our paper.

Conflicts of Interest

The authors declare that they have no conflict of interest.

Figure 1. k = 0.5, ε = 0.01, γ = 2, the exact and regularized solutions: (a) y = 0.4; (b) y = 0.6; (c) y = 0.8; (d) y = 1.
Figure 2. k = 1.5, ε = 0.01, γ = 2, the exact and regularized solutions: (a) y = 0.4; (b) y = 0.6; (c) y = 0.8; (d) y = 1.
Table 1. k = 0.5, γ = 3, the relative root mean square errors for various noise levels ε at y = 0.6, 1.

ε        | 0.001         | 0.005         | 0.01          | 0.05   | 0.1
α        | 5.0507 × 10⁻⁵ | 2.4655 × 10⁻⁴ | 4.7276 × 10⁻⁴ | 0.0019 | 0.0032
ϵ_0.6(u) | 6.4027 × 10⁻⁴ | 0.0030        | 0.0055        | 0.0219 | 0.0396
ϵ_1(u)   | 8.0136 × 10⁻⁴ | 0.0035        | 0.0064        | 0.0233 | 0.0409
Table 2. k = 1.5, γ = 3, the relative root mean square errors for various noise levels ε at y = 0.6, 1.

ε        | 0.001         | 0.005         | 0.01          | 0.05          | 0.1
α        | 8.5042 × 10⁻⁷ | 4.2552 × 10⁻⁶ | 8.5144 × 10⁻⁶ | 4.1820 × 10⁻⁵ | 8.0668 × 10⁻⁵
ϵ_0.6(u) | 6.3978 × 10⁻⁴ | 0.0032        | 0.0062        | 0.0284        | 0.0525
ϵ_1(u)   | 7.4456 × 10⁻⁴ | 0.0037        | 0.0072        | 0.0316        | 0.0569
Table 3. k = 0.5, ε = 0.01, the relative root mean square errors for various γ at y = 0.6, 1.

γ        | 1             | 2             | 3             | 4             | 5             | 6
α        | 7.9344 × 10⁻⁴ | 6.2776 × 10⁻⁴ | 4.7276 × 10⁻⁴ | 3.1785 × 10⁻⁴ | 1.9122 × 10⁻⁴ | 1.0780 × 10⁻⁴
ϵ_0.6(u) | 0.0064        | 0.0061        | 0.0055        | 0.0046        | 0.0039        | 0.0033
ϵ_1(u)   | 0.0081        | 0.0075        | 0.0064        | 0.0050        | 0.0040        | 0.0034
Table 4. k = 1.5, ε = 0.01, the relative root mean square errors for various γ at y = 0.6, 1.

γ        | 1             | 2             | 3             | 4             | 5             | 6
α        | 9.0299 × 10⁻⁵ | 2.7734 × 10⁻⁵ | 8.5144 × 10⁻⁶ | 2.6042 × 10⁻⁶ | 7.8824 × 10⁻⁷ | 2.3350 × 10⁻⁷
ϵ_0.6(u) | 0.0064        | 0.0063        | 0.0062        | 0.0061        | 0.0058        | 0.0055
ϵ_1(u)   | 0.0074        | 0.0073        | 0.0072        | 0.0069        | 0.0066        | 0.0060
